Robots.txt blocking Addon Domains
-
I have this site as my primary domain: http://www.libertyresourcedirectory.com/
I don't want to give spiders access to the site at all, so I tried a simple Disallow: / in the robots.txt. As a test, I crawled it with Screaming Frog afterwards and it didn't pick up anything. (Excellent.)
However, there's a problem. In GWT, I got an alert that Google couldn't crawl ANY of my sites because of robots.txt issues. Changing the robots.txt on my primary domain changed it for ALL my addon domains (e.g. http://ethanglover.biz/ ). From a directory point of view this makes sense; from a spider point of view, it doesn't.
As a solution, I changed the robots.txt file back and added a robots meta tag (noindex, nofollow) to the primary domain. But this doesn't seem to be having any effect. As I understand it, the robots.txt takes priority.
How can I separate all this out so each domain can have its own rules? I've tried uploading a separate robots.txt to the addon domain folders, but it's completely ignored. Even going to ethanglover.biz/robots.txt gave me the primary domain's version of the file. (SERIOUSLY! I've tested this 100 times in many ways.)
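For reference, the blanket block I'm talking about is nothing more than the standard two lines:

    User-agent: *
    Disallow: /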
Has anyone experienced this? Am I in the twilight zone? Any known fixes? Thanks.
Proof I'm not crazy in attached video.
-
Sort of resolved; this is maybe the wrong place to ask any further. The fix I posted works for what seems like a legit bug; I'll update if the WordPress forums say anything.
-
No, I don't like to waste memory and bandwidth. If you can do it yourself, you should probably do it yourself. I'm moving this question to WordPress.
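For what it's worth, one do-it-yourself option would be a per-host rewrite in the primary docroot's .htaccess. This is only a sketch, untested, and it assumes an Apache host where every domain's requests are being answered out of the primary docroot (which is what the /robots.txt behavior suggests); the target filenames are placeholders for files you'd create yourself.

    # Sketch only: hand each hostname its own physical robots file.
    # Place above the WordPress rewrite block in the primary docroot's .htaccess.
    RewriteEngine On

    RewriteCond %{HTTP_HOST} ^(www\.)?ethanglover\.biz$ [NC]
    RewriteRule ^robots\.txt$ robots-ethanglover.txt [L]

    RewriteCond %{HTTP_HOST} ^(www\.)?libertyresourcedirectory\.com$ [NC]
    RewriteRule ^robots\.txt$ robots-primary.txt [L]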
-
Hi Ethan
One thing I have heard of people trying is a plugin that serves dynamic robots.txt files. I don't use addon domains, so you will probably have to test the behavior. Here is an example of one such plugin:
https://wordpress.org/plugins/wp-robots-txt/
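Plugins like that generally work by filtering the virtual robots.txt that WordPress generates. If you'd rather see the moving parts yourself, the core hook is the robots_txt filter. Below is a rough, untested sketch that varies the rules by hostname; it assumes the addon domains really are being answered by the same WordPress install (which is what your /robots.txt test suggests) and that no physical robots.txt file is sitting in the docroot.

    <?php
    // Rough, untested sketch: vary WordPress's virtual robots.txt by hostname.
    // Could live in a small mu-plugin or the active theme's functions.php.
    add_filter( 'robots_txt', function ( $output, $public ) {
        $host = isset( $_SERVER['HTTP_HOST'] ) ? strtolower( $_SERVER['HTTP_HOST'] ) : '';

        if ( false !== strpos( $host, 'libertyresourcedirectory.com' ) ) {
            // Primary domain: keep every crawler out.
            return "User-agent: *\nDisallow: /\n";
        }

        // Addon domains (ethanglover.biz etc.): leave everything crawlable.
        return "User-agent: *\nDisallow:\n";
    }, 10, 2 );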
Hope this helps,
Anthony -
Ethan
It sounds like the issue has been resolved. I'm not too familiar with addon domains, but if you have any more trouble, let us know and I'll make sure another Moz Associate takes a look.
-Dan (Moz Associate)
-
Hi Ethan
Sorry, I wasn't clear. I was thinking you could drop the use of robots.txt altogether and just use the meta tag approach, since it seems that the robots.txt is having a global impact on your sites. Search engines will still crawl the pages, but it should exclude them from the index.
Hope this helps,
Anthony -
Anthony, based on your response it's obvious you haven't read the question or follow-up.
-
Hi Ethan
One approach may be to try the robots meta tag. You can use noindex to tell Google not to index a page. This won't prevent crawling, but Google should respect the request not to index your site. I've included a good guide below to get you started:
https://developers.google.com/webmasters/control-crawl-index/docs/robots_meta_tag
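In practice it's a single tag in the head of each page you want kept out of the index, along these lines:

    <!-- goes inside the <head> of every page that should stay out of the index -->
    <meta name="robots" content="noindex, nofollow">

The one catch is that a crawler has to be able to fetch the page to see the tag, so a robots.txt block on those same pages will keep this from working.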
Hope this helps,
Anthony B
Biondo Creative
biondocreative.com -
I've found a quick fix for now: http://ethanglover.biz/using-robots-txt-with-addon-domains/
This is still an issue, and it may be exclusive to WordPress.