Is there a reason to set a crawl-delay in the robots.txt?
-
I've recently encountered a site that has a crawl-delay directive set in its robots.txt file. I've never seen a need for this, since you can control Googlebot's crawl rate in Google Webmaster Tools. They have the directive set for all crawlers, which seems odd to me. What are some reasons someone would want to set it like that? I can't find any good information on it when researching.
-
Google does not support the crawl-delay directive directly, but you can lower your crawl rate inside Google Webmaster Central.
So you are right: for Googlebot, a crawl-delay line in robots.txt does nothing, and the webmaster console will even flag it as unsupported. Other crawlers, such as Bingbot, do honor the directive, which is the usual reason site owners set it for all user agents.
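A quick way to see how a standards-following parser treats the directive is Python's stdlib `urllib.robotparser` (the robots.txt content below is a made-up example for illustration):

```python
import urllib.robotparser

# A minimal robots.txt applying a crawl delay to all user agents.
robots_txt = """\
User-agent: *
Crawl-delay: 10
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The directive is exposed to any crawler that chooses to read it;
# Bingbot honors it, Googlebot ignores it.
print(rp.crawl_delay("*"))                 # 10
print(rp.can_fetch("*", "/private/page"))  # False
print(rp.can_fetch("*", "/public/page"))   # True
```

A polite crawler would sleep for `crawl_delay()` seconds between requests; Googlebot simply skips the rule and relies on the rate set in Webmaster Tools instead.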
Related Questions
-
Strange Crawl Report
Hey Moz Squad, I have kind of a strange case. My website locksmithplusinc.com has been around for a couple of years. I have had all sorts of pages and blogs that may have ranked for a certain location a long time ago and got deleted so I could speed up the site and consolidate my efforts. I mention that because I think it might be part of the problem. When I ran a crawl report on my site just three weeks ago on Moz, I had over 23 crawl report issues: duplicate pages, missing meta tags, the regular stuff. But now, all of a sudden, the Moz crawl report comes up with zero issues. So I did another crawl in Google Analytics, and none of the URLs it reported are even URLs on my site. Maybe people are searching for this stuff, clicking on broken links that are still indexed, and getting a 404 error? What do you guys think? Thank you so much for taking a shot at this one.
Technical SEO | Meier -
Google crawl rate dropped after we activated CloudFront
Hello! Previously we've been using Amazon CloudFront for our static content (js, css etc). But to reduce load on our origin servers and to give our international users a good user experience, we decided to deliver a couple of our sites through CloudFront. We noticed very nice drops in page load time, but when checking Google Webmaster Tools we noticed that all CloudFront-activated sites got a huge drop in pages crawled per day (from avg ~3500 to ~150). Also, one of the sites has issues with its Google sitemap (just marked as "Pending" in GWT), and no new or updated pages seem to be appearing in the Google SERPs. The rest of the sites get some updates in the Google SERPs, but very few compared to before CloudFront activation. Does anybody here have experience with full site delivery through CloudFront (or other CDNs) and its effects on SEO/Google? I would be very glad for any insights or suggestions. The risk is that we need to remove CloudFront if this continues.
Technical SEO | Ludde -
Blocked jquery in Robots.txt, Any SEO impact?
I've heard that Google is now indexing links and content delivered through JavaScript and jQuery. My Webmaster Tools is showing that some jQuery links are blocked in robots.txt. Sorry, I'm not a developer or designer. I want to know: does this have any impact on my SEO? And also, how can I unblock it for the robots? Check this screenshot: http://i.imgur.com/3VDWikC.png
Technical SEO | hammadrafique -
Google is indexing blocked content in robots.txt
Hi, Google is indexing some URLs that I don't want to be indexed, and it is also indexing the same URLs with https. These URLs are blocked in the robots.txt file. I've tried to block these URLs through Google Webmaster Tools, but Google doesn't let me do it because the URLs are https. The robots.txt file is correct, so what can I do to keep this content from being indexed?
Technical SEO | elisainteractive -
Ajax Optimization in Mobile Site - Ajax Crawling
I'm working on a mobile site that has links embedded in JavaScript/Ajax on the homepage. This is preventing crawlers from accessing the links to mobile-specific URLs. We're using an m. sub-domain. This is just an object on the homepage with an expandable list of links. I was wondering if the following solution provided by Google would be a good way to help with this situation: https://developers.google.com/webmasters/ajax-crawling/ Thanks!
Technical SEO | burnseo -
When Should I Ignore the Error Crawl Report
I have a handful of pages listed in the Error Crawl Report, but the report isn't actually showing anything wrong with these pages. I have double-checked the code on the site and also can't find anything. Should I just move on and ignore the Error Crawl Report for these few pages?
Technical SEO | ChristinaRadisic -
Rel Canonical errors after seomoz crawling
Hi to all, I cannot find the errors in my web pages with the rel canonical tag. I have too many errors (over 500) after SEOmoz crawled my domain and I don't know how to fix them. I'll share the URL of my root page: http://www.vour.gr. My rel canonical tag for this page is: <link rel="canonical" href="http://www.vour.gr"/>. Can anyone help me understand why I get an error for this page? Many thanks.
Technical SEO | edreamis -
Meta description tag missing in crawl diagnostics
Each week I've been looking at my crawl diagnostics, and SEOmoz still flags a few pages as missing meta descriptions even though they are definitely in there. Any ideas on why this would be happening?
Technical SEO | British_Hardwoods