Is there a reason to set a crawl-delay in the robots.txt?
-
I recently came across a site with a Crawl-delay directive set in its robots.txt file. I'd never seen a need for this, since you can control Googlebot's crawl rate in Google Webmaster Tools. They have the directive set for all crawlers, which seems odd to me. What are some reasons someone would set it that way? I haven't been able to find any good information on it when researching.
-
Google does not support the Crawl-delay directive; instead, you can lower Googlebot's crawl rate inside Google Webmaster Central.
So you are right about how you're handling it. For Google, the directive in robots.txt does nothing, and the webmaster console will even report that GWT does not support it. Other major crawlers, such as Bingbot and Yandex, do honor Crawl-delay, which is the usual reason site owners set it for all user agents.
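Google ignores the directive, but most robots.txt parsers still expose it. Python's standard-library urllib.robotparser shows how a crawler that honors Crawl-delay would read it (the robots.txt content below is a made-up example for illustration):

```python
from urllib import robotparser

# Hypothetical robots.txt content, for illustration only.
robots_txt = """\
User-agent: *
Crawl-delay: 10
Disallow: /private/
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# A crawler that honors Crawl-delay would wait this many seconds
# between requests; Googlebot simply ignores the value.
print(parser.crawl_delay("*"))              # 10
print(parser.can_fetch("*", "/private/x"))  # False
print(parser.can_fetch("*", "/public/x"))   # True
```

If no Crawl-delay line applies to the user agent, crawl_delay() returns None, which is exactly what a polite bot sees on a site that omits the directive.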
Related Questions
-
Dropdown content on page being crawled
Hi, will the content within a dropdown on a page be crawled? I.e., if the page visitor has to click to reveal the content in a dropdown, will it be crawled by bots? Thanks
Technical SEO | BillSCC
-
Robots.txt Syntax for Dynamic URLs
I want to Disallow certain dynamic pages in robots.txt and am unsure of the proper syntax. The pages I want to disallow all include the string ?Page=, so which is the proper syntax?
Disallow: ?Page=
Disallow: ?Page=*
Or something else?
Technical SEO | btreloar
-
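For crawlers that support wildcards in robots.txt (Googlebot and Bingbot do, although the original robots.txt standard does not), the usual form anchors the rule to the start of the path, since Disallow values are matched as path prefixes. A sketch of the commonly used pattern:

```text
User-agent: *
Disallow: /*?Page=
```

A bare "?Page=" value would never match, because every URL path a crawler tests against the rules begins with "/".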
Showing duplicate content when I have canonical url set, why?
I was just inspecting my site's report and I see that I have a lot of duplicate content issues. I'm not sure why these two pages,
http://www.thecheapplace.com/wholesale-products/Are-you-into-casual-sex-patch
http://www.thecheapplace.com/wholesale-products/small-wholesale-patches-1/Are-you-into-casual-sex-patch
are showing as duplicate content when both pages have a clearly defined canonical url of http://www.thecheapplace.com/Are-you-into-casual-sex-patch. Any answer would be appreciated, thank you.
Technical SEO | erhansimavi
-
Crawling a subfolder with a dev site
I am trying to set up a campaign that crawls a subfolder of our main site where I have a dev version of the new site. Even though the new site resolves and I have included the full resolving URL, the crawl results come back saying that only one page has been crawled. The site had a protected block on it for a period of time, but this has now been removed. Any ideas? Thanks, Nick
Technical SEO | Total_Displays
-
Application/x-msdownload in crawl report?
In a crawl report for my company's site, the "content_type_header" column usually contains "text/html", but there are some random pages with "application/x-msdownload". What does "application/x-msdownload" mean?
Technical SEO | JimLynch
-
What is the sense of robots.txt?
Using robots.txt to prevent search engines from indexing a page is not a good idea, so what is the point of robots.txt? Is it just there to point crawlers at the sitemap?
Technical SEO | jallenyang
-
Crawl report showing only 1 crawled page
Hi, I'm really new to this and have just set up some Campaigns. I set up a Campaign for the root domain portaldeldiablo.com.uy, which returned only 2 crawled pages. As this page had a 301 redirect from the non-www to the www version, I deleted this Campaign and set up a new one for www.portaldeldiablo.com.uy, which returned only 1 crawled page. I really don't know why my website is not being crawled. Thanks in advance for your help.
Technical SEO | ceci2710
-
Is robots.txt a must-have for 150 page well-structured site?
By looking in my logs I see dozens of 404 errors each day from different bots trying to load robots.txt. I have a small site (150 pages) with clean navigation that allows the bots to index the whole site (which they are doing). There are no secret areas I don't want the bots to find (the secret areas are behind a Login so the bots won't see them). I have used rel=nofollow for internal links that point to my Login page. Is there any reason to include a generic robots.txt file that contains "user-agent: *"? I have a minor reason: to stop getting 404 errors and clean up my error logs so I can find other issues that may exist. But I'm wondering if not having a robots.txt file is the same as some default blank file (or 1-line file giving all bots all access)?
Technical SEO | scanlin
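If the goal is simply to stop the robots.txt 404s rather than to block anything, a minimal allow-all file is enough; an empty Disallow value grants all user agents full access:

```text
User-agent: *
Disallow:
```

This is functionally equivalent to having no robots.txt at all, but the file returns a 200 and keeps the error logs clean.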