Is there a reason to set a crawl-delay in the robots.txt?
-
I've recently encountered a site that has a Crawl-delay directive set in its robots.txt file. I've never seen a need for this, since you can set the crawl rate for Googlebot in Google Webmaster Tools. They have the directive set for all crawlers, which seems odd to me. What are some reasons someone would want to set it like that? I can't find any good information on it when researching.
-
Google does not support the Crawl-delay directive directly, but you can lower your crawl priority inside Google Webmaster Central.
So your approach of setting the crawl rate in Webmaster Tools is the right one for Google. If Crawl-delay is in the robots.txt, it does nothing for Googlebot, and the webmaster console will flag the directive as unsupported. Other crawlers, such as Bingbot, do honor Crawl-delay, which is one common reason site owners set it for all user agents.
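To make that concrete, here is a minimal sketch of the kind of file the asker describes; the 10-second value is an arbitrary example:

# Applies to every crawler that reads this file.
# Googlebot ignores Crawl-delay entirely; crawlers that honor it,
# such as Bingbot, treat it as a minimum wait in seconds between requests.
User-agent: *
Crawl-delay: 10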
Related Questions
-
Role of Robots.txt and Search Console parameters settings
Hi, wondering if anyone can point me to resources or explain the difference between these two. If a site has URL parameters disallowed in robots.txt, is it redundant to edit settings in Search Console parameters to anything other than "Let Googlebot Decide"?
Technical SEO | LivDetrick
-
Robots.txt Syntax for Dynamic URLs
I want to disallow certain dynamic pages in robots.txt and am unsure of the proper syntax. The pages I want to disallow all include the string ?Page= Which is the proper syntax?
Disallow: ?Page=
Disallow: ?Page=*
Or something else?
Technical SEO | btreloar
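For reference, a sketch of the wildcard form that Googlebot and Bingbot document; the * wildcard is an extension to the original robots.txt standard, so not every crawler supports it:

User-agent: *
# Matches any URL whose path plus query string contains the literal text ?Page=
Disallow: /*?Page=
-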
404 crawl errors ending with your domain name??
Hello, I have a crawl test with numerous 404 errors ending with my domain name. Not sure what the cause is. Plugins? Ecommerce? I use WordPress, if that could lead to an answer. Thanks for your time. K
Technical SEO | Hydraulicgirl
-
Is there any value in having a blank robots.txt file?
I've read an audit where the writer recommended creating and uploading a blank robots.txt file; there was no current file in place. Is there any merit in having a blank robots.txt file? What is the minimum you would include in a basic robots.txt file?
Technical SEO | NicDale
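For context, the conventional minimal robots.txt is permissive rather than blank, and having the file present also stops the server returning a 404 for /robots.txt on every crawl:

# An empty Disallow value allows crawling of the entire site
User-agent: *
Disallow:
-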
What may be the reason a sitemap is not indexed in Webmaster Tools?
Hi,
I have a problem with a client's website. I searched many related questions here about the same problem but couldn't figure out a solution. Their website is in 2 languages and they submitted 2 sitemaps to Webmaster Tools. One got 100% indexed. From the second one, of over 800 URLs only 32 are indexed. I checked the following hypotheses for why the second sitemap may not be indexed:
sitemap is wrongly formatted - False
sitemap contains URLs that don't return 200 status - False, there are no URLs that return 404, 301 or 302 status codes
sitemap contains URLs that are blocked by robots.txt - False
internal duplicate content problems - False
issues with meta canonical tags - False
For clarification, URLs from the sitemap that is not indexed completely also don't show up in Google's index. Can someone tell me what else I can check to fix this issue?
Technical SEO | SorinaDascalu
-
Robots.txt issue - site resubmission needed?
We recently had an issue when a load of new files were transferred from our dev server to the live site, which unfortunately included the dev site's robots.txt file with a Disallow: / instruction. Bad! Luckily I spotted it quickly and the file has been replaced. The extent of the damage seems to be that some descriptions aren't displaying, and we're getting a message about robots.txt in the SERPs for a few keywords. I've done a site: search and generally it seems to be OK for 99% of our pages. Our positions don't seem to be affected right now, but obviously it's not great for the CTRs on the keywords affected. My question is whether there is anything I can do to bring the updated robots.txt file to Google's attention, or should we just wait and sit it out? Thanks in advance for your answers!
Technical SEO | GBC
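For anyone hitting the same problem, the dev file described presumably looked something like this sketch; a bare slash tells every compliant crawler to stay out of the whole site:

# Blocks the entire site for all crawlers - fine on a dev server,
# damaging if it ships to production
User-agent: *
Disallow: /
-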
Robots.txt
We have a development site that we want Google and other bots to stay out of, but we want Roger to have access. Currently our robots.txt looks like this:
User-agent: *
Disallow: /cgi-bin/
Disallow: /development/
What would I need to add or change to let him through? Thank you.
Technical SEO | LadyApollo
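A sketch of one common approach, assuming Moz's crawler still identifies itself as rogerbot: give it its own group with nothing disallowed, since compliant crawlers obey only the group that most specifically matches their user agent:

# Moz's crawler gets its own group; the empty Disallow allows everything
User-agent: rogerbot
Disallow:

# Everyone else keeps the existing restrictions
User-agent: *
Disallow: /cgi-bin/
Disallow: /development/
-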
Understanding the actions needed from a Crawl Report
I've just joined SEOmoz last week and have not even received my first full crawl yet, but as you know, I do get the re-crawl report. It shows I have 50 301s and 20 rel canonicals. I'm still very confused as to what I'm supposed to fix. And, since all the rel canonicals are my site's main pages, I am equally confused as to what the canonical is doing and how to properly set up my site. I'm a technical person and can grasp most things fairly quickly, but on this the light bulb is taking a little while longer to fire up 🙂 If my question wasn't total gibberish and you can help shed some light, I would be forever grateful. Thank you.
Technical SEO | apmgsmith
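For what it's worth, a rel canonical entry in a crawl report is usually informational rather than an error: it just means the page declares a preferred URL in its head, which is normal for a site's main pages. A sketch of the tag, with example.com as a placeholder:

<link rel="canonical" href="https://www.example.com/preferred-page/" />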