Robots.txt Blocking - Best Practices
-
Hi All,
We have a web provider who's not willing to remove the wildcard line of code blocking all agents from crawling our client's site (user-agent: *, Disallow: /). They have other lines allowing certain bots to crawl the site but we're wondering if they're missing out on organic traffic by having this main blocking line. It's also a pain because we're unable to set up Moz Pro, potentially because of this first line.
We've researched and haven't found a ton of best practices regarding blocking all bots, then allowing certain ones. What do you think is a best practice for these files?
Thanks!
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:
Crawl-delay: 5

User-agent: Yahoo-slurp
Disallow:

User-agent: bingbot
Disallow:

User-agent: rogerbot
Disallow:

User-agent: *
Crawl-delay: 5
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp
-
Thanks for taking the time to respond in depth, GreenStone. We appreciate the advice and have passed your response along to the web hosting company (along with a frustrated email explaining that they're not adhering to anyone's best practices). Hopefully this will convince them!
-
Thanks, Dmitrii for your response! From our research we've seen similar recommendations and it helps to have more evidence to back it up. Hopefully these guys will give in a bit!
-
Completely agree. I really wouldn't want to host my stuff with a company that can't figure out what the best practices actually are ;-). It's been laid out very well here why you shouldn't set up your robots.txt the way it is right now.
-
In general, I definitely wouldn't recommend the way the web-provider is handling this.
- Disallowing everything and then carving out exceptions should never be the norm. The general best practice is the opposite: allow everyone to crawl, and add exceptions (crawl delays, specific disallows) for crawlers other than Google.
- It makes a lot more sense to allow crawlers full access, add crawl delays for the non-Google crawlers, and disallow only those specific paths:
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp
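For reference, a cleaner file along those lines might look like the sketch below (the Crawl-delay value is carried over from the current file as an example). Note that a crawler obeys only the single most specific User-agent group that matches it, so the path blocks have to be repeated in the Googlebot group if Google should skip those pages too:

```text
# Google: no Crawl-delay (it ignores the directive anyway),
# but repeat the path blocks so Googlebot obeys them as well
User-agent: Googlebot
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp

# Everyone else: same path blocks, plus a crawl delay
User-agent: *
Crawl-delay: 5
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp
```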
- The Crawl-delay: 5 under User-agent: Googlebot does you no good, as Google does not obey the Crawl-delay directive. Googlebot's crawl rate can only be adjusted through Search Console.
- You can test what is visible to Googlebot with Search Console's robots.txt Tester, to verify exactly what it can access.
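If you can't get Search Console access (as with this hosting setup), you can also sanity-check a robots.txt locally with Python's standard urllib.robotparser. A quick sketch against a trimmed-down version of the file quoted in the question (example.com is a placeholder):

```python
import urllib.robotparser

# A trimmed-down version of the provider's robots.txt from the question
robots_txt = """\
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:
Crawl-delay: 5

User-agent: rogerbot
Disallow:
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot matches its own group (an empty Disallow allows everything),
# so it is NOT shut out by the leading wildcard rule...
print(rp.can_fetch("Googlebot", "https://example.com/some-page"))    # True

# ...but any crawler without its own group falls back to
# "User-agent: * / Disallow: /" and is blocked from the whole site.
print(rp.can_fetch("SomeOtherBot", "https://example.com/some-page")) # False
```

This also illustrates why the site still ranks despite the wildcard block: the listed bots never see it, while everything unlisted is locked out.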
-
Here is another video from Matt - https://www.youtube.com/watch?v=I2giR-WKUfY
Lots of good points there too.
-
Hi.
Super weird setup - that's for sure.
User-agent: *
Disallow: /
Every bot without its own User-agent group gets blocked outright! (Crawlers that do have their own group, like Googlebot here, follow those rules instead of the wildcard - which is probably the only reason the site still ranks.)
Watch that video - there are good ideas in it about controlling bots and crawlers, and you can treat those as best practices. And yes, what they have now is ridiculous.
https://moz.com/community/q/should-we-use-google-s-crawl-delay-setting
Here is a Q&A about crawl delays. As far as I know, Google ignores Crawl-delay anyway, and there is little to gain from it in any case.
Hope this helps.