Can URLs blocked with robots.txt hurt your site?
-
We have about 20 testing environments that contain duplicates of our indexed content. These environments are all blocked by robots.txt and appear in Google's index as "blocked by robots.txt". Can they still count against us or hurt us?
I know the best practice for permanently removing these would be to use the noindex tag, but I'm wondering whether they can still hurt us if we leave them the way they are.
-
90% no. First of all, check whether Google has indexed them; if not, your robots.txt should do the job. However, I would reinforce that by making sure those URLs are out of your sitemap file, and make sure your robots.txt disallows are set to ALL user agents (*), not just Google, for example; a minimal example is below.
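As a rough sketch of that last point, and assuming each testing environment is served from its own subdomain with its own robots.txt (the original poster didn't describe their setup), a file like this blocks every crawler rather than just Googlebot:

# robots.txt on each testing environment: the wildcard applies to all user agents
User-agent: *
Disallow: /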
Google's duplicate content policies are tough, but it will always respect simple directives such as robots.txt.
I had a case in the past where a customer had a dedicated IP and Google somehow found it, so you could see both the domain's pages and the IP's pages, all identical. We simply added an .htaccess rule to point the IP requests to the domain, and even though the situation had been like that for a long time, it doesn't seem to have affected them. In theory Google penalizes duplication, but not in this particular case; it is a matter of behavior. A sketch of that kind of rule is below.
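For illustration only, here is one possible shape of such a rule; the IP 192.0.2.10 and the domain www.example.com are placeholders, not the actual site involved:

# .htaccess: redirect requests that arrive via the bare IP to the canonical domain
RewriteEngine On
RewriteCond %{HTTP_HOST} ^192\.0\.2\.10$
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]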
Regards!
-
I've seen people say that in "rare" cases, links blocked by robots.txt will be shown as search results, but I can't imagine that happening if they're duplicates of your content.
Robots.txt tells a search engine not to crawl a directory, but if another resource links to it, the engine may know the URL exists without knowing its content. It won't know whether the page is noindexed, because it never crawls it; still, if it knows the URL exists, it could, on rare occasions, return it. For duplicate content there is always a better result available, so that better result will be returned, and your test sites should not be.
As far as hurting your site: no way, unless a page WAS allowed, is a duplicate, is now NOT allowed, and hasn't been recrawled. Even in that case, I can't imagine it would hurt you much. I wouldn't worry about it.
(Also, noindex doesn't matter on these pages, at least to Google. Google will see the robots.txt block first and will not crawl the page. Until it crawls the page, it doesn't matter whether the page has one word or 300 directives; Google will never see them. So noindex really wouldn't help unless a page had already slipped through.)
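To make that parenthetical concrete, here is a minimal sketch of the tag in question; it only takes effect once the robots.txt block is lifted so that Google can actually crawl the page and read it:

<!-- in the <head> of each testing-environment page; ignored until the URL is crawlable -->
<meta name="robots" content="noindex">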
-
I don't believe they are going to hurt you; it is more of a warning that, if you were trying to have these pages indexed, they can't currently be accessed. When you don't want them indexed, as in this case, I don't believe you are suffering because of it.
Related Questions
-
Can my affiliate subdomain hurt in any way?
Hello everyone, my main website is http://www.virtualsheetmusic.com, and the site's related "affiliate" website is located on the subdomain http://affiliates.virtualsheetmusic.com. I was wondering whether having that "affiliate section" on a subdomain could affect the main website negatively in some way, or whether it would be better to put it in a sub-folder on the main website, or even on a totally different domain. Thanks in advance for any advice!
Intermediate & Advanced SEO
-
Can multiple geotargeting hreflang tags be set in one URL? International SEO question
Hi all, I have a question please. If I target www.onedirect.co.nl/en/ in English for Holland, Belgium and Luxembourg, are the tags below correct? English for Holland, Belgium and Luxembourg:
<link rel="alternate" href="http://www.example.co.nl/en/" hreflang="en-nl" />
<link rel="alternate" href="http://www.example.co.nl/en/" hreflang="en-be" />
<link rel="alternate" href="http://www.example.co.nl/en/" hreflang="en-lu" />
AND, targeting Holland and Belgium in Dutch, for the page www.onedirect.co.nl we can include these tags:
<link rel="alternate" href="http://www.example.co.nl" hreflang="nl-nl" />
<link rel="alternate" href="http://www.example.co.nl" hreflang="nl-be" />
Thanks a lot for your help!
Intermediate & Advanced SEO
-
Robots.txt - Googlebot - Allow... what's it for?
Hello - I just came across this in robots.txt for the first time and was wondering why it is used. Why would you have to proactively tell Googlebot to crawl JS/CSS, and why would you want it to? Any help would be much appreciated - thanks, Luke
User-Agent: Googlebot
Allow: /*.js
Allow: /*.css
Intermediate & Advanced SEO
-
I have a general site for my insurance agency. Should I create niche sites too?
I work with several insurance agencies and I get this question several times each month. Most agencies offer personal and business insurance in a certain geographic location. I recommend creating a quality general agency site, but would they have more success creating other niche sites as well? For example, a niche site about home insurance and one about auto insurance. What would your recommendation be?
Intermediate & Advanced SEO
-
PDF on financial site that duplicates ~50% of site content
I have a financial advisor client who has a downloadable PDF on his site that contains about 9 pages of good info. The problem is that much of the content can also be found on individual pages of his site. Is it best to noindex/follow the PDF? It would be great to let the few pages of original content be crawlable, but I'm concerned about the duplicate content aspect. Thanks --
Intermediate & Advanced SEO
-
Our Site's Content on a Third Party Site--Best Practices?
One of our clients wants to use about 200 of our articles on their site, and they're hoping to get some SEO benefit from using this content. I know the standard best practice is to canonicalize their pages to our pages, but then they wouldn't get any benefit, since a canonical tag will effectively de-index the content from their site. Our thoughts so far:
add a paragraph of original content to our content
link to our site as the original source (to help mitigate the risk of our site getting hit by any penalties)
What are your thoughts on this? Do you think adding a paragraph of original content will matter much? Do you think our site will be free of penalty since we were the first place to publish the content and there will be a link back to our site? They are really pushing for not using a canonical, so this isn't an option. What would you do?
Intermediate & Advanced SEO
-
Could you use a robots.txt file to disallow a duplicate content page from being crawled?
A website has duplicate content pages to make it easier for users to find the information from a couple of spots in the site navigation. The site owner would like to keep it this way without hurting SEO. I've thought of using the robots.txt file to disallow search engines from crawling one of the pages. Do you think this is a workable/acceptable solution?
Intermediate & Advanced SEO
-
Redirecting my new Website URL to my old Website URL
Hi! OK, I am semi-new to SEO Moz but have been self-teaching for 3 years. However, I am stuck. I have been operating my e-commerce site from www.shopadornonline.com for the past 3 years, and I just purchased www.shopadorn.com. Right now shopadorn.com redirects to www.shopadornonline.com because all my products and links go to shopadornonline.com/productblahblahblah. I guess I am stuck and not sure what to tell my web designer to do. Do I give up on having shopadorn.com, or do I start redirecting customers and doing 301 redirects? I think, from what I have read, that it is bad to have traffic going to both shopadorn and shopadornonline as they compete for rankings. Where should I start?
Intermediate & Advanced SEO