Robots.txt Blocking - Best Practices
-
Hi All,
We have a web provider who isn't willing to remove the wildcard rule blocking all agents from crawling our client's site (User-agent: * followed by Disallow: /). They have other lines allowing certain bots to crawl the site, but we're wondering if they're missing out on organic traffic by having this main blocking rule. It's also a pain because we're unable to set up Moz Pro, potentially because of this first rule.
We've researched and haven't found a ton of best practices regarding blocking all bots, then allowing certain ones. What do you think is a best practice for these files?
Thanks!
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:
Crawl-delay: 5

User-agent: Yahoo-slurp
Disallow:

User-agent: bingbot
Disallow:

User-agent: rogerbot
Disallow:

User-agent: *
Crawl-delay: 5
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp
-
Thanks for taking the time to respond in depth, GreenStone. We appreciate the advice and have passed your response along to the web hosting company (along with a frustrated email explaining that they're not adhering to anyone's best practices). Hopefully this will convince them!
-
Thanks, Dmitrii, for your response! From our research we've seen similar recommendations, and it helps to have more evidence to back it up. Hopefully these guys will give in a bit!
-
Completely agree, I really wouldn't want to host my stuff with a company that can't figure out what the best practices really are ;-). The reply above lays out very well why you shouldn't want to set up your robots.txt the way it is right now.
-
In general, I definitely wouldn't recommend the way the web provider is handling this.
- Disallowing everything and then whitelisting a few bots should never be the norm. Allowing everyone to crawl, and adding exceptions for crawlers other than Google, is generally the better practice.
- It makes a lot more sense to give crawlers full access, add crawl delays for non-Google crawlers, and keep the Disallow rules for those specific paths: /new_vehicle_detail.asp, /new_vehicle_compare.asp, /news_article.asp, /new_model_detail_print.asp, /used_bikes/, /default.asp?page=xCompareModels and /fiche_section_detail.asp (see the sketch at the end of this reply).
- The Crawl-delay: 5 in the Googlebot group does you no good, as Google does not obey that directive; its crawl rate can only be adjusted in Search Console.
- You can test what is visible to Googlebot in Search Console's robots.txt Tester to verify exactly what it can access.
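A minimal sketch of that layout, reusing the paths already quoted in the question (treat it as an illustration to adapt rather than a drop-in file; Google ignores the Crawl-delay line, so it only slows the other crawlers):

User-agent: *
Crawl-delay: 5
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp

Everything not listed stays crawlable for every bot, which is the opposite of the current block-everything-then-whitelist setup.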
-
Here is another video from Matt - https://www.youtube.com/watch?v=I2giR-WKUfY
Lots of good points there too.
-
Hi.
Super weird client - that's for sure.
User-agent: *
Disallow: /
Any bot that doesn't get its own group further down is blocked off entirely! How in the world are they ranking?
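If you want to double-check which crawlers are actually locked out, here is a quick sketch using Python's standard urllib.robotparser against a trimmed copy of the file from the question ("SomeOtherBot" is just a stand-in for any crawler without its own group):

from urllib import robotparser

# Trimmed copy of the robots.txt quoted in the question
ROBOTS_TXT = """
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:

User-agent: bingbot
Disallow:

User-agent: rogerbot
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Bots with their own group are allowed; everything else falls back to "*" and is blocked
for agent in ("Googlebot", "bingbot", "rogerbot", "SomeOtherBot"):
    print(agent, "can fetch /:", rp.can_fetch(agent, "/"))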
Watch that video; it has good ideas about controlling bots and crawlers, and you can consider those best practices. And yes, what they have now is ridiculous.
https://moz.com/community/q/should-we-use-google-s-crawl-delay-setting
Here is a Q&A about crawl delays. As far as I know, Google ignores crawl-delay anyway, and there isn't much benefit to it in any case.
Hope this helps.
Related Questions
-
Best SEO Practices for Displaying FAQs throughout site?
I've got an FAQ plugin (Ultimate FAQ) for a WordPress site with tons of content (like 30 questions, each with a page full of multi-paragraph answers with good info; stuff Google LOVES). Right now, I have a main FAQ page that has 3 categories and about 10 questions under each category, and each question is collapsed by default. You click an arrow to expand it to reveal the answer. I then have a single category's questions also displayed at the bottom of an appropriate related page. So the questions appear in two places on the site, always collapsed by default. Each question has a permalink that links to an individual page with only that question and answer. I know Google discounts (doesn't ignore) content that is hidden by default and requires a click (via a JS function) to reveal it. So what I'm wondering is whether the way I have it set up is optimal for SEO. How is Google going to handle the questions being in essentially three places: its own standalone page, a list on a category page, and a list on a page showing all questions for all categories? Should I make the questions not collapsed by default (which will make the master FAQ page SUPER long!)? Does Google not mind the duplicate content within the site? What's the best strategy?
Intermediate & Advanced SEO | | SeoJaz0 -
What are the best page titles for sub-domain pages?
Hi Moz community, Let's say a website has multiple sub-domains with hundreds or thousands of pages. Generally we will mention the "primary keyword" and "brand name" on every page of the website. Can we do the same on all pages of the sub-domains to increase the authority of the website for this primary keyword in Google? Or will it end up having a negative impact if Google considers it duplicate content, with the same keyword and brand name mentioned on every page of the website and on all pages of the sub-domains? Thanks
Intermediate & Advanced SEO | | vtmoz0 -
If robots.txt has blocked an image (image URL) but another page which can be indexed uses this image, how is the image treated?
Hi MOZers, This probably is a dumb question, but I have a case where robots.txt has an image URL blocked, yet this image is used on a page (let's call it Page A) which can be indexed. If the image on Page A has an alt tag, how is this information digested by crawlers? A) Would Google totally ignore the image and the alt tag information? Or B) would Google consider the alt tag information? I am asking because all the images on the website are blocked by robots.txt at the moment, but I would really like crawlers to pick up the alt tag information. Chances are that I will ask the webmaster to allow indexing of images too, but I would like to understand what's happening currently. Looking forward to all your responses 🙂 Malika
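To make the scenario concrete, here is a minimal sketch of the setup being described, with a hypothetical /images/ path and filename:

User-agent: *
Disallow: /images/

And on Page A, which is crawlable:

<img src="/images/mountain-bike.jpg" alt="Red mountain bike on a dirt trail">

The alt text lives in Page A's HTML, which crawlers can still fetch; it is only the image file itself that robots.txt tells them not to request.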
Intermediate & Advanced SEO | | Malika11 -
Question about Syntax in Robots.txt
So if I want to block any URL from being indexed that contains a particular parameter, what is the best way to put this in the robots.txt file? Currently I have:
Disallow: /attachment_id
where "attachment_id" is the parameter. The problem is I still see these URLs indexed, and this has been in the robots.txt now for over a month. I am wondering if I should just do Disallow: attachment_id or Disallow: attachment_id= but figured I would ask you guys first. Thanks!
Intermediate & Advanced SEO | | DRSearchEngOpt0 -
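On the syntax question above: Google and Bing support the * wildcard in robots.txt, so a pattern along these lines (assuming the parameter always appears as a query-string key) would match those URLs; note that robots.txt only stops crawling and won't by itself drop URLs that are already indexed:

User-agent: *
Disallow: /*attachment_id=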
Best practice for retiring old product pages
We’re a software company. Would someone be able to help me with a basic process for retiring old product pages and re-directing the SEO value to new pages. We are retiring some old products to focus on new products. The new software has much similar functionality to the old software, but has more features. How can we ensure that the new pages get the best start in life? Also, what is the best way of doing this for users? Our plan currently is to: Leave the old pages up initially with a message to the user that the old software has been retired. There will also be a message explaining that the user might be interested in one of our new products and a link to the new pages. When traffic to these pages reduces, then we will delete these pages and re-direct them to the homepage. Has anyone got any recommendations for how we could approach this differently? One idea that I’m considering is to immediately re-direct the old product pages to the new pages. I was wondering if we could then provide a message to the user explaining that the old product has been retired but that the new improved product is available. I’d also be interested in pointing the re-directs to the new product pages that are most relevant rather than the homepage, so that they get the value of the old links. I’ve found in the past that old retirement pages for products can outrank the new pages as until you 301 them then all the links and authority flow to these pages. Any help would be very much appreciated 🙂
Intermediate & Advanced SEO | | RG_SEO0 -
Link-building best SEO practice (one-off VS periodic blogging)
Hi all, Generally, what would be best when building a website's ranking through link building? Having the same links from the same bloggers, or receiving new links from different bloggers every time? A lot of SEO services offer 4-8 blog backlinks per month. Would it be best if these links came from different sources every time, or mostly from the same sources each month? I know there are a lot of factors, but I hope this question is clear. Happy holidays and thank you for your insightful feedback. Carlos
Intermediate & Advanced SEO | | 90miLLA0 -
What is the best way to handle special characters in URLs
What is the best way to handle special characters? We have some URLs that use special characters, and when a sitemap is generated using Xenu it changes the characters to something different. Do we need to physically change the URL back to display the correct character? Example: URL: http://petstreetmall.com/Feeding-&-Watering/361.html Sitemap Link: http://www.petstreetmall.com/Feeding-%26-Watering/361.html
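For what it's worth, %26 is just the percent-encoded form of &, and crawlers generally treat the two spellings as the same URL once decoded. A quick way to see the round trip (a small sketch using Python's standard urllib.parse):

from urllib.parse import quote, unquote

path = "/Feeding-&-Watering/361.html"
encoded = quote(path)            # "/" stays as-is; "&" becomes "%26"
print(encoded)                   # /Feeding-%26-Watering/361.html
print(unquote(encoded) == path)  # True: decoding restores the original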
Intermediate & Advanced SEO | | WebRiverGroup0 -
C Block IP Links Strategy
Hi guys, I run a web design company and have around 50 sites that I have designed; most don't have links back to us. I was considering adding a footer link on each that points to a blog page within that site; each post on each site will have unique content about the project and about us as a design company. As you can see, most of my IP addresses share C blocks. Any advice here please, thanks in advance. Example IP list:
abc.32.230.1
def.20.252.37
ghi.48.68.82
zz.32.229.131
zz.32.231.208
zz.32.253.87
xx.170.40.170
xx.170.40.172
xx.170.40.232
xx.170.40.247
xx.170.40.32
xx.170.43.200
xx.170.44.103
xx.170.44.105
xx.170.44.108
xx.170.44.111
xx.170.44.127
xx.170.44.137
xx.170.44.146
xx.170.44.157
xx.170.44.77
xx.170.44.81
xx.170.44.86
xx.170.44.95
xx.170.44.96
[question edited by staff to remove full IP addresses]
Intermediate & Advanced SEO | | Will_Craig0
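On the C block question above: a "C block" here just means the first three octets of the address (the /24 range), so two sites share a C block when those octets match. A tiny sketch for counting how many sites sit in each block (the sample addresses below are made-up documentation IPs, since the real ones were redacted):

from collections import Counter

# Hypothetical example IPs; substitute the real hosting IPs
ips = ["203.0.113.10", "203.0.113.77", "198.51.100.5"]

# The C block is the first three octets of each address
c_blocks = Counter(".".join(ip.split(".")[:3]) for ip in ips)
for block, count in c_blocks.most_common():
    print(f"{block}.x: {count} site(s)")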