Improving Crawl Efficiency
-
Hi
I'm reading about crawl efficiency & have looked in WMT at the current crawl rate - letting Google optimise this as recommended.
What it's set to is 0.5 requests every 2 seconds, which is 15 URLs every minute.
To me this doesn't sound like much, especially for a site with over 20,000 pages?
I'm reading about improving this but if anyone has advice that would be great
-
Great thank you for this! I'll take them on board
Becky
-
You may be overthinking this, Becky. Once the bot has crawled a page, there's no reason (or benefit to you) for it to crawl the page again unless its content has changed. The usual way for it to detect this is through your XML sitemap. If it's properly coded, it will have a <lastmod> date for Googlebot to reference.
Googlebot does continue to recrawl pages it already knows about "just in case", but your biggest focus should be on ensuring that your most recently added content is crawled quickly upon publishing. This is where making sure your sitemap is updating quickly and accurately, making sure it is pinging search engines on update, and making sure you have links from solid existing pages to the new content will help. If you have blog content, many folks don't know that you can submit the blog's RSS feed as an additional sitemap! That's one of the quickest ways to get it noticed.
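For reference, a minimal sitemap entry with a <lastmod> field looks like this (the URL and date are just placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/some-page/</loc>
    <!-- Update this whenever the page's content changes -->
    <lastmod>2016-11-01</lastmod>
  </url>
</urlset>
```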
The other thing you can do to assist crawling effectiveness is to make certain you're not forcing the crawler to waste its time crawling superfluous, duplicate, thin, or otherwise useless URLs.
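For example, faceted or parameterised URLs can often be kept out of the crawl with a few robots.txt rules. These patterns are illustrative only - check them against your own URL structure before deploying anything:

```
# Block common "useless URL" patterns from all well-behaved crawlers
User-agent: *
Disallow: /*?sort=
Disallow: /*?sessionid=
Disallow: /internal-search/
```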
Hope that helps?
Paul
-
There are actually several aspects to your question.
1. Google will make its own decision as to how important each page is, and therefore how often it should be crawled
2. Site speed is a ranking factor
3. Most SEOs believe that Google has a maximum timeframe in which to crawl each page/site. However, I have seen some chronically slow sites which have still been crawled and indexed.
I forgot to mention that using an XML sitemap can help search engines find pages.
Again, be very careful not to confuse crawling and indexing. Crawling only updates the index; once a page is indexed, if it doesn't rank you have an SEO problem, not a technical crawling problem.
Anything a user can access, a crawler should be able to find without a problem; however, if you have hidden pages the crawler may not find them.
-
Hi
Yes working on that
I just read something which said - A “scheduler” directs Googlebot to crawl the URLs in the priority order, under the constraints of the crawl budget. URLs are being added to the list and prioritized.
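Roughly, the idea in that quote can be sketched as a priority queue drained under a budget. To be clear, the priorities and budget below are made-up toy numbers - Google's real scheduler isn't public:

```python
# Toy sketch of a crawl scheduler: URLs wait in a priority queue and are
# crawled highest-priority-first until the crawl budget runs out.
import heapq

def schedule_crawl(urls_with_priority, crawl_budget):
    """Return the URLs that fit inside the crawl budget, highest priority first."""
    # heapq is a min-heap, so negate the priority to pop the highest first
    heap = [(-priority, url) for url, priority in urls_with_priority]
    heapq.heapify(heap)
    crawled = []
    while heap and len(crawled) < crawl_budget:
        _, url = heapq.heappop(heap)
        crawled.append(url)
    return crawled

pages = [("/old-page", 1), ("/homepage", 10), ("/new-blog-post", 8), ("/faceted-url", 0)]
print(schedule_crawl(pages, crawl_budget=2))  # ['/homepage', '/new-blog-post']
```

Low-priority URLs simply never reach the front of the queue before the budget is spent, which matches what the article describes.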
So, if you have pages which haven't been crawled/indexed because they're seen as a low priority for crawling - how can I improve or change this if need be?
Can I even impact it at all? Can I help crawlers be more efficient at finding/crawling pages I want to rank or not?
Does any of this even help SEO?
-
As a general rule pages will be indexed unless there is a technical issue or a penalty involved.
What you need to be more concerned with is the position of those pages within the index. That obviously comes back to the whole SEO game.
You can use the site: operator followed by a search term that is present on the page you want to check, to make sure the page is indexed, like: site:domain.com "page name"
-
Ok thank you, so there must be ways to improve on the number of pages Google indexes?
-
You can obviously do a fetch and submit through Search Console, but that is designed for one-off changes. Even if you submit pages and send all sorts of signals, Google will still make up its own mind about what it's going to do and when.
If your content isn't changing much, it is probably a disadvantage to have the Google crawler coming back too often, as it will slow the site down. If a page is changing regularly, Googlebot will normally gobble it up pretty quickly.
If it were me, I would let it make its own decision, unless it is causing you a problem.
Also keep in mind that crawling and indexing are two separate kettles of fish: the Google crawler will crawl every site and every page that it can find, but it doesn't necessarily index them all.
-
Hi - yes it's the default.
I know we can't figure out exactly what Google is doing, but we can improve crawl efficiency.
If those pages aren't being crawled for weeks, isn't there a way to improve this? How have you found out they haven't been crawled for weeks?
-
P.S. I think the crawl rate setting you are referring to is the Google default if you move the radio button to manual
-
Google is very clever at working out how often it needs to crawl your site; pages that get updated more often will get crawled more often. There is no way of influencing exactly what Googlebot does - mostly it will make its own decisions.
If you are talking about other web crawlers, you may need to put guidelines in place in terms of robots.txt or settings on the specific control panel.
20,000 pages isn't a problem for Google! Yes, it may take time. You say it is crawling at '0.5 requests every 2 seconds' - if I've got my calculation right, in theory Google will have crawled 20,000 URLs in less than a day!
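Here's the arithmetic, for anyone who wants to check it (assuming the rate stays constant, which in practice it won't):

```python
# Back-of-the-envelope check of the quoted crawl rate:
# 0.5 requests every 2 seconds = 0.25 requests per second.
requests_per_second = 0.5 / 2
urls_per_minute = requests_per_second * 60        # 15, matching the figure above
urls_per_day = urls_per_minute * 60 * 24          # 21,600 per day
print(urls_per_minute, urls_per_day)
```

So at that rate, 20,000 URLs fit comfortably inside 24 hours.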
On my site I have a page which I updated about 2 hours ago, and the change has already replicated to Google, and yet other pages I know for a fact haven't been crawled for weeks.