Improving Crawl Efficiency
-
Hi
I'm reading about crawl efficiency and have looked in WMT at the current crawl rate - I'm letting Google optimise this, as recommended.
It's currently set to 0.5 requests every 2 seconds, which works out to 15 URLs per minute.
To me that doesn't sound very good, especially for a site with over 20,000 pages.
I'm reading up on how to improve this, but if anyone has advice that would be great.
-
Great, thank you for this! I'll take them on board.
Becky
-
You may be overthinking this, Becky. Once the bot has crawled a page, there's no reason (or benefit to you) for it to crawl the page again unless its content has changed. The usual way for it to detect this is through your XML sitemap. If it's properly coded, it will have a <lastmod> date for Googlebot to reference.
Googlebot does continue to recrawl pages it already knows about "just in case", but your biggest focus should be on ensuring that your most recently added content is crawled quickly upon publishing. This is where making sure your sitemap updates quickly and accurately, making sure it pings the search engines on update, and making sure you have links from solid existing pages to the new content will all help. If you have blog content, many folks don't know that you can submit the blog's RSS feed as an additional sitemap! That's one of the quickest ways to get new posts noticed.
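To illustrate, here's a bare-bones sitemap entry with a <lastmod> date - the URL and date are made up, and in practice your CMS or sitemap plugin should generate this for you:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page; <lastmod> tells Googlebot when it last changed -->
  <url>
    <loc>https://www.example.com/blog/new-post/</loc>
    <lastmod>2016-05-10</lastmod>
  </url>
</urlset>
```

As for the pinging, most sitemap plugins handle it automatically, but you can also hit Google's ping endpoint yourself with your own sitemap URL swapped in: http://www.google.com/ping?sitemap=https://www.example.com/sitemap.xml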
The other thing you can do to improve crawl effectiveness is to make certain you're not forcing the crawler to waste its time on superfluous, duplicate, thin, or otherwise useless URLs.
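For example - and these paths and parameters are purely hypothetical, so check which of your own URL patterns actually carry traffic before blocking anything - a few lines in robots.txt can keep crawlers off internal search and sort/filter URLs:

```
# Hypothetical rules to stop crawlers wasting time on low-value URLs
User-agent: *
Disallow: /search/
Disallow: /*?sort=
Disallow: /*?sessionid=
```

Googlebot understands the * wildcard here, so the last two rules block any URL containing those query parameters.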
Hope that helps!
Paul
-
There are actually several aspects to your question.
1. Google will make its own decision as to how important each page is, and therefore how often it should be crawled
2. Site speed is a ranking factor
3. Most SEOs believe that Google has a maximum timeframe in which to crawl each page/site. However, I have seen some chronically slow sites which still get crawled and indexed.
I forgot to mention that using an XML sitemap can help search engines find pages.
Again, be very careful not to confuse crawling and indexing. Crawling only updates the index; once a page is indexed, if it doesn't rank you have another SEO problem, not a technical crawling problem.
Anything a user can access, a crawler should be able to find without a problem; however, if you have hidden pages the crawler may not find them.
-
Hi
Yes, working on that.
I just read something which said - A "scheduler" directs Googlebot to crawl the URLs in priority order, under the constraints of the crawl budget. URLs are being added to the list and prioritized.
So, if you have pages which haven't been crawled/indexed because they're seen as a low priority for crawling - how can I improve or change this if need be?
Can I even impact it at all? Can I help crawlers be more efficient at finding/crawling pages I want to rank or not?
Does any of this even help SEO?
-
As a general rule pages will be indexed unless there is a technical issue or a penalty involved.
What you need to be more concerned with is the position of those pages within the index. That obviously comes back to the whole SEO game.
You can use the site: operator followed by a search term that is present on the page you want to check, to make sure the page is indexed, like: site:domain.com "page name"
-
OK, thank you - so there must be ways to improve the number of pages Google indexes?
-
You can obviously do a fetch and submit through Search Console, but that is designed for one-off changes. Even if you submit pages and send all sorts of signals, Google will still make up its own mind about what it's going to do and when.
If your content isn't changing much, it is probably a disadvantage to have the Google crawler coming back too often, as it will slow the site down. If a page is changing regularly, Googlebot will normally gobble it up pretty quickly.
If it were me, I would let it make its own decisions, unless it is causing you a problem.
Also keep in mind that crawling and indexing are two separate kettles of fish: the Google crawler will crawl every site and every page it can find, but it doesn't necessarily index everything.
-
Hi - yes it's the default.
I know we can't figure out exactly what Google is doing, but we can improve crawl efficiency.
If those pages aren't being crawled for weeks, isn't there a way to improve this? And how have you found out they haven't been crawled for weeks?
-
P.S. I think the crawl rate setting you are referring to is the Google default if you move the radio button to manual.
-
Google is very clever at working out how often it needs to crawl your site; pages that get updated more often will get crawled more often. There is no way of influencing exactly what Googlebot does - mostly it will make its own decisions.
If you are talking about other web crawlers, you may need to put guidelines in place via robots.txt or settings in that crawler's own control panel.
20,000 pages is not a problem for Google! Yes, it may take time. You say it is crawling at '0.5 requests every 2 seconds' - if I've got my calculation right, in theory Google will have crawled 20,000 URLs in less than a day!
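Working that rate through:

```
0.5 requests / 2 seconds = 0.25 requests per second
0.25 req/s  x 60         = 15 URLs per minute
15 URLs/min x 60 x 24    = 21,600 URLs per day  (> 20,000)
```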
On my site I have a page which I updated about 2 hours ago, and the change has already replicated to Google, and yet other pages I know for a fact haven't been crawled for weeks.