Can URLs blocked with robots.txt hurt your site?
-
We have about 20 testing environments blocked by robots.txt, and these environments contain duplicates of our indexed content. They are all appearing in Google's index as "blocked by robots.txt." Can they still count against us or hurt us?
I know the best practice for permanently removing these would be to use the noindex tag, but I'm wondering whether, if we leave them the way they are, they can still hurt us.
-
Most likely not. First of all, check whether Google has actually indexed them; if not, your robots.txt should be doing its job. However, I would reinforce that by making sure those URLs are out of your sitemap file, and by making sure your robots.txt disallows apply to all user agents (*), not just Googlebot, for example.
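As a minimal sketch of that last point, a robots.txt served from a test environment could block every crawler from the whole site like this (the hostname in the comment is a placeholder, not taken from the original question):

```
# Hypothetical robots.txt for a test environment, e.g. staging.example.com
# "User-agent: *" applies the rule to ALL crawlers, not just Googlebot.
User-agent: *
Disallow: /
```

A common mistake is writing the rule under `User-agent: Googlebot` only, which leaves every other crawler free to fetch the duplicate pages.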
Google's duplicate-content policies are tough, but Google will always respect simple directives such as robots.txt.
I had a case in the past where a customer had a dedicated IP and Google somehow found it, so you could see both the domain's pages and the IP's pages, with identical content. We simply added an .htaccess rule to redirect IP requests to the domain, and even though the situation had existed for a long time, it doesn't seem to have affected them. In theory Google penalizes duplicate content, but not in cases like this; it's a matter of behavior.
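A hedged sketch of the kind of .htaccess rule described above, using mod_rewrite (the IP address and domain are placeholders, not the actual customer's values):

```apache
# Hypothetical .htaccess sketch: 301-redirect requests that arrive via the
# bare IP (203.0.113.10) to the canonical domain (www.example.com).
RewriteEngine On
RewriteCond %{HTTP_HOST} ^203\.0\.113\.10$
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

The 301 status tells search engines the IP-based URLs are permanently consolidated into the domain, so the duplicate set drops out of the index over time.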
Regards!
-
I've seen people say that in "rare" cases, links blocked by robots.txt will be shown as search results, but there's no way I can imagine that happening when they're duplicates of your content.
Robots.txt tells a search engine not to crawl a directory, but if another resource links to it, the engine may know the URL exists without knowing its content. They won't know whether it's noindex or not, because they don't crawl it; but if they know it exists, they could occasionally return it. The duplicate content on your main site would be the better result, so that better result will be returned, and your test sites should not be.
As far as hurting your site: no way. Unless a page WAS allowed, is a duplicate, is now NOT allowed, and hasn't been recrawled. Even in that case, I can't imagine it would hurt you much. I wouldn't worry about it.
(Also, noindex doesn't matter on these pages, at least to Google. Google will see the robots.txt block first and will not crawl the page. Until they crawl the page, it doesn't matter whether it has one word or 300 directives; they'll never see them. So noindex really wouldn't help unless a page had already slipped through.)
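For completeness: if you ever did want Google to honor a noindex, the pages would have to be crawlable first, i.e. remove the robots.txt block and serve the directive. A minimal sketch of the two equivalent forms (assuming nothing about the actual test environments):

```html
<!-- Option 1: meta tag in the <head> of each test-environment page -->
<meta name="robots" content="noindex">
```

Option 2 is the `X-Robots-Tag: noindex` HTTP response header, which works for non-HTML files such as PDFs as well. Either way, the crawler must be able to fetch the page to see it.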
-
I don't believe they are going to hurt you. The message is more of a warning that, if you were trying to have these pages indexed, they can't currently be accessed. When you don't want them indexed, as in this case, I don't believe you suffer because of it.