Block an entire subdomain with robots.txt?
-
Is it possible to block an entire subdomain with robots.txt?
I write for a blog that has its root domain as well as a subdomain pointing to the exact same IP. Getting rid of the subdomain is not an option, so I'd like to explore other options to avoid duplicate content. Any ideas?
-
Awesome! That did the trick -- thanks for your help. The site is no longer listed
-
Fact is, the robots.txt file alone will never work (the link has a good explanation why - in short, all it does is stop the bots from crawling the pages again; it doesn't remove what's already indexed).
Best to request removal, then wait a few days.
-
Yeah. So far, the site has not been de-indexed. We placed the conditional rule in .htaccess and are getting different robots.txt files for the domain and subdomain -- so that works. But I've never done this before, so I don't know how long it's supposed to take.
I'll try verifying via Webmaster Tools to speed up the process. Thanks.
-
You should submit a removal request in Google Webmaster Tools. You have to verify the sub-domain first, then request the removal.
See this post on why the robots.txt file alone won't work...
http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
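For reference, the main alternative that post covers (if I'm remembering it right) is a meta robots noindex tag in the head of every page you want dropped - purely an illustration, not specific to your setup:
<meta name="robots" content="noindex, follow">
Bots can still crawl a page carrying that tag but are asked to keep it out of the index, which is also why it can't be combined with a robots.txt block on the same pages - blocked bots never get to see the tag.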
-
Awesome. We went with your second suggestion, and so far it looks like it is working exactly how we want. Thanks for the idea.
Will report back to confirm that the subdomain has been de-indexed.
-
Option 1 could come with a small performance hit if you have a lot of .txt files being served on the server.
There shouldn't be any negative side effects to option 2 as long as the rewrite is clean (i.e. not accidentally a redirect) and the content of the two files is robots.txt-compliant.
Good luck
-
Thanks for the suggestion. I'll definitely have to do a bit more research into this one to make sure that it doesn't have any negative side effects before implementing it.
-
We have a plugin right now that places canonical tags, but unfortunately, the canonical for the subdomain points to the subdomain. I'll look around to see if I can tweak the settings.
-
Sounds like (from other discussions) you may be stuck requiring a dynamic robots.txt file that detects which domain the bot is requesting and changes the content accordingly. This means the server has to run all .txt files as (I presume) PHP.
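If you went that route, here's a rough, untested sketch of the idea - the hostnames and the robots.php filename below are just placeholders:
# .htaccess: hand robots.txt requests to a script
RewriteEngine on
RewriteRule ^robots\.txt$ robots.php [L]
<?php
// robots.php: serve different rules depending on which host was requested
header('Content-Type: text/plain');
if ($_SERVER['HTTP_HOST'] === 'subdomain.website.com') {
    // Block everything on the subdomain
    echo "User-agent: *\nDisallow: /\n";
} else {
    // Allow normal crawling on the main domain
    echo "User-agent: *\nDisallow:\n";
}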
Or, you could conditionally rewrite the /robots.txt URL to a different file according to the sub-domain:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^subdomain\.website\.com$
RewriteRule ^robots\.txt$ robots-subdomain.txt [L]

Then add:

User-agent: *
Disallow: /

to the robots-subdomain.txt file
(untested)
-
Placing canonical tags isn't an option? Detect that the page is being viewed through the subdomain, and if so, write the canonical tag on the page back to the root domain?
Or, just place a canonical tag on every page pointing back to the root domain (so the subdomain and root domain pages would both have them). Apparently, it's ok to have a canonical tag on a page pointing to itself. I haven't tried this, but if Matt Cutts says it's ok...
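Just to illustrate (with example.com standing in for the real root domain), both the root-domain and subdomain copies of a page would carry the same tag in their head:
<link rel="canonical" href="http://www.example.com/sample-post/" />
That way the subdomain version points back to the root-domain version, and the root-domain version simply points to itself.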
-
Hey Ryan,
I wasn't directly involved with the decision to create the subdomain, but I'm told that it was necessary in order to bypass certain elements that were affecting the root domain.
Nevertheless, it is a blog, and the users now need to log in to the subdomain in order to access the WordPress backend to bypass those elements. Traffic for the site still goes to the root domain.
-
They both point to the same location on the server? So there's not a different folder for the subdomain?
If that's the case, then I suggest adding a rule to your .htaccess file to 301 the subdomain back to the main domain, in exactly the same way people redirect from non-www to www or vice-versa. However, you should ask why the server is configured to have a duplicate subdomain in the first place. You might just be able to edit your Apache settings to get rid of that subdomain (usually done through a cPanel interface).
Here is what your htaccess might look like:
<IfModule mod_rewrite.c>
RewriteEngine on
# Redirect non-www to www
RewriteCond %{HTTP_HOST} !^www\.mydomain\.org [NC]
RewriteRule ^(.*)$ http://www.mydomain.org/$1 [R=301,L]
</IfModule>
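And if you only want to catch that one duplicate subdomain rather than every non-www host, the same idea would look something like this (hostnames are placeholders, untested):
RewriteEngine on
# Redirect the duplicate subdomain to the main domain
RewriteCond %{HTTP_HOST} ^subdomain\.mydomain\.org$ [NC]
RewriteRule ^(.*)$ http://www.mydomain.org/$1 [R=301,L]
-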
Not to me, LOL. I think you'll need someone with a bit more expertise in this area than me to assist in this case. Kyle, I'm sorry I couldn't offer more assistance... but I don't want to tell you something if I'm not 100% sure. I suspect one of the many bright SEOmozers will quickly come to the rescue on this one.
Andy
-
Hey Andy,
Herein lies the problem. Since the domain and subdomain point to the exact same place, they both utilize the same robots.txt file.
Does that make sense?
-
Hi Kyle,
Yes, you can block an entire subdomain via robots.txt; however, you'll need to create a robots.txt file, place it in the root of the subdomain, then add the code to direct the bots to stay away from the entire subdomain's content.
User-agent: *
Disallow: /

Hope this helps.
Related Questions
-
Internal search pages (and faceted navigation) solutions for 2018! Canonical or meta robots "noindex,follow"?
There seems to be conflicting information on how best to handle internal search results pages. To recap - they are problematic because these pages generally result in lots of query parameters being appended to the URL string for every kind of search, whilst the title, meta description and general framework of the page remain the same - which is flagged in Moz Pro Site Crawl as duplicate meta descriptions/h1s etc. The general advice these days is NOT to disallow these pages in robots.txt anymore, because there is still value in their being crawled for all the links that appear on the page. But in order to handle the duplicate issues, the advice falls into two camps on what to do:
1. Add a meta robots tag with "noindex,follow" to the page. This means the page will not be indexed with all its myriad queries and parameters, and so takes care of any duplicate meta/markup issues - but any other links from the page can still be crawled and indexed = better crawling and indexing of the site; however, you lose any value the page itself might bring. This is the advice Yoast recommends in 2017: https://yoast.com/blocking-your-sites-search-results/ - who are adamant that Google just doesn't like or want to serve this kind of page anyway...
2. Just add a canonical link tag - this will ensure that the search results page is still indexed as well. All the different query string URLs, and the array of results they serve, are 'canonicalised' as the same. However, this seems a bit duplicitous as the results in the page body could all be very different. Also, all the paginated results pages would be 'canonicalised' to the main search page, which we know Google states is not correct implementation of the canonical tag: https://webmasters.googleblog.com/2013/04/5-common-mistakes-with-relcanonical.html
This picks up on an older discussion here from 2012: https://moz.com/community/q/internal-search-rel-canonical-vs-noindex-vs-robots-txt where the advice was leaning towards using canonicals because the user was seeing a percentage of inbound traffic into these search result pages - but I wonder if that is still the case? As the older discussion is now 6 years old, I'm just wondering if there is any new approach, or how others have chosen to handle internal search. I think a lot of the same issues occur with faceted navigation, as discussed here in 2017: https://moz.com/blog/large-site-seo-basics-faceted-navigation
Intermediate & Advanced SEO | | SWEMII
-
Robots.txt wildcards - the devs had a disagreement - which is correct?
Hi – the lead website developer was assuming that this wildcard: Disallow: /shirts/?* would block URLs including a ? within this directory, and all the subdirectories of this directory that included a "?". The second developer suggested that this wildcard would only block URLs featuring a ? that comes immediately after /shirts/ - for example: /shirts?minprice=10&maxprice=20 - BUT argued that this robots.txt directive would not block URLs featuring a ? in subdirectories - e.g. /shirts/blue?mprice=100&maxp=20. So which of the developers is correct? Beyond that, I assumed that the ? should feature a * on each side of it - for example /*?* - to work as intended above. Am I correct in assuming that?
Intermediate & Advanced SEO | | McTaggart
-
Process to move blog from subdomain on Wordpress, to subfolder on BigCommerce store
Hi,
Having weighed up all the angles, it's time to bite the bullet and move our blog from a subdomain to a subfolder on our ecommerce store. But as someone new to SEO, I am struggling to find the correct process for doing this properly for our situation. Can anyone help? I have outlined what I have learned so far in 10 steps below to hopefully help you understand my situation, where I am at and what I am struggling with. Advice, tips and suggested further reading on all/any of the 10 points would be great.
Some quick background: The blog is on WordPress, and on a subdomain of our store (blog.store.com). It is four years old, with 80 original posts we want to move to a subfolder of the store (store.com/blog). The store has been built using BigCommerce, and has also been active for four years. Both the blog and the store exist as properties within our Google Search Console.
The 10 steps required for the move, based on research so far, and the associated questions:
1. Prepare new site: which I am guessing means reproducing all of the content over at the new subfolder location (store.com/blog)?
2. Set up errors for any pages not being transferred: I have no idea how to do this!
3. Make sure analytics is working for the new pages: it should be, as the site the pages are moving to a subfolder of is already running with analytics and has been for years - is this a safe assumption?
4. Map all URLs being moved to their new counterparts: is this just record keeping? In a spreadsheet? Or is it a process I don't yet understand?
5. Add rel='canonical' tags: while I understand the concept of these, I have no idea how to implement them properly here!
6. Create and save new sitemaps: as both blog.store.com and store.com exist in Google Search Console already, can I just refresh the sitemap for store.com/blog once the subfolder is created to achieve this?
7. Set up and test 301 redirects: these can be created in BigCommerce for the new pages in the store.com/blog subfolder, and will refer back to the blog.store.com URLs the pages came from - is this the right way to do this? I am still learning here and know enough to know how much this can matter, but not enough to fully grasp the intricacies of the process.
8. Move URLs simultaneously: I have no idea what this means or how to achieve it! Is this just for big site moves? Does it still apply to 80 blog posts shifting from a subdomain to a subfolder on the same root? If so, how?
9. Submit a change of address in Google Search Console: this looks simple enough, although Google ominously warn: 'Don't use this tool unless you are moving your primary website presence to a new address', which makes me wonder how simple it really is - my primary website in this case is the store, which is not moving. But does 'primary' here simply mean the individual property within Search Console? I am going in circles on this one!
10. Configure the old blog on the subdomain to redirect people and engines to the new pages: I thought the 301 redirects and rel='canonical' stuff did that already? What did I miss?
For anyone still here, thanks for making it this far, and if you still have the energy left, any advice would be great! Thanks
Intermediate & Advanced SEO | | Warren_33
-
Google robots.txt test - not picking up syntax errors?
I just ran a robots.txt file through "Google robots.txt Tester" as there was some unusual syntax in the file that didn't make any sense to me... e.g.
/url/?*
/url/?
/url/*
and so on. I would use ? and not ? for example and what is ? for! - etc. Yet "Google robots.txt Tester" did not highlight the issues... I then fed the sitemap through http://www.searchenginepromotionhelp.com/m/robots-text-tester/robots-checker.php and that tool actually picked up my concerns. Can anybody explain why Google didn't - or perhaps it isn't supposed to pick up such errors? Thanks, Luke
Intermediate & Advanced SEO | | McTaggart
-
What is better for web ranking? A domain or subdomain?
I realise that it is often better to put content in a subfolder rather than a subdomain, but I have another question that I cannot seem to find the answer to. Is there any ranking benefit to having a site on a .co.uk or .com domain rather than on a subdomain? I'm guessing that the subdomain might benefit from other content on the domain it's hosted on, but are subdomains weighted down in any way in the search results?
Intermediate & Advanced SEO | | RG_SEO
-
Block Level Link Juice
I need a better understanding of how links in different parts of the page pass juice. Much has been written about how footer links pass less juice than other parts of the page. The question I have is: if a page has a hypothetical 1000 points of link juice and can pass on +/- 800 points via links, and I have one and only one link in the footer to another page, does it pass the full 800 points? Or, since footers only pass a small fraction of link juice, does it pass, let's say, 80 points, and the other 720 points stay locked up on the page? This question is hypothetical - I'm just trying to understand the relationships. I don't know if I've explained the question all that well, but if someone could answer it, or point me in the right direction, I would appreciate it.
Intermediate & Advanced SEO | | CsmBill
-
Should Site Search results be blocked from search engines?
What are the advantages & disadvantages of letting Google crawl site search results? We currently have them blocked via robots.txt, so I'm not sure if we're missing out on potential traffic. Thanks!
Intermediate & Advanced SEO | | pbhatt
-
Negative impact on crawling after uploading robots.txt file on HTTPS pages
I experienced a negative impact on crawling after uploading a robots.txt file for the HTTPS pages. You can find both URLs as follows:
Robots.txt file for HTTP: http://www.vistastores.com/robots.txt
Robots.txt file for HTTPS: https://www.vistastores.com/robots.txt
I have disallowed all crawlers for the HTTPS pages with the following syntax:
User-agent: *
Disallow: /
Does that matter? If I have done anything wrong, please give me more ideas on how to fix this issue.
Intermediate & Advanced SEO | | CommercePundit