Disallow statement - is this tiny anomaly enough to render Disallow invalid?
-
Google site search (site:'hbn.hoovers.com') indicates 171,000 results for this subdomain. That is not a desired result - this site has 100% duplicate content. We don't want SEs spending any time here.
Robots.txt is set up mostly right to disallow all search engines from indexing this site. That asterisk at the end of the disallow statement looks pretty harmless - but could that be why the site has been indexed?
User-agent: *
Disallow: /*
-
Interesting. I'd never heard that before.
We've never had GA or GWT on these mirror sites before, so it's hard to say what Google is doing these days.
But the goal is definitely to make them and their contents invisible to SEs. We'll get GWT on there and start removing URLs.
Thanks!
-
The additional asterisk shouldn't do you any harm with Google, which supports wildcards in robots.txt (Disallow: /* matches everything, just like Disallow: /), although standard practice is to use just "/".
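For reference, the conventional blanket block is just:

User-agent: *
Disallow: /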
Does it seem like Google is still crawling this subdomain when you look at the Webmaster Tools crawl stats? While a Disallow in robots.txt will usually stop bots from crawling, it doesn't prevent them from indexing pages, or from keeping pages indexed that were crawled before the Disallow was put in place. If you want these pages removed from the index, you can request removal through Webmaster Tools and also use a meta robots noindex tag instead of the robots.txt block - note that bots can only see a noindex tag on pages they are allowed to crawl, so the two approaches conflict if combined. Moz has a good article about it here: http://moz.com/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
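For reference, the noindex directive is a single tag in the head of each page you want dropped from the index:

<!-- tells compliant crawlers to drop this page from their index;
     the page must remain crawlable for the tag to be seen -->
<meta name="robots" content="noindex">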
If you're just worried about bots crawling the subdomain, it's possible they've already stopped crawling it but continue to index it because of its history or other signals (such as external links pointing at the pages) suggesting it should stay indexed.
Related Questions
-
Invalid Microdata - How much of an impact does invalid microdata have on SERPS
Invalid Microdata - how much of an impact does invalid microdata have on SERPs? The low-down: we are located in Australia and run our business on the BigCommerce platform. The problem is that Google is crawling our BigCommerce store in USD and displaying our microdata price in USD instead of AUD. How much of a problem is this in terms of SEO issues? We have seen a steady decline, with many of our top 3 rankings shifting down a few pegs to the mid-to-bottom of the top 10. We're also getting Google Shopping microdata warnings. Solutions: does anyone have a fix they can help me with to resolve this microdata (price) issue on the BigCommerce platform (Stencil Cornerstone-based template)? And are there any other technical elements on our website that, at first glance, could be a potential cause of the SERP decline from top 3 to top 10? URL: https://wwww.fishingtackleshop.com.au
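For context, a minimal sketch of price microdata that pins the currency explicitly - the product name and price here are placeholder values:

<div itemscope itemtype="https://schema.org/Product">
  <span itemprop="name">Example Lure</span>
  <div itemprop="offers" itemscope itemtype="https://schema.org/Offer">
    <!-- priceCurrency fixes the listing to AUD even if the storefront switches display currency -->
    <span itemprop="price" content="19.95">$19.95</span>
    <meta itemprop="priceCurrency" content="AUD" />
  </div>
</div>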
Technical SEO | oceanstorm
-
Implications of Disallowing A LOT of Pages
Hey everyone, I just started working on a website and there are A LOT of pages that should not be crawled - probably in the thousands. Are there any SEO risks of disallowing them all at once, or should I go through systematically and take a few dozen down at a time?
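For what it's worth, if the pages cluster under a few directories, one robots.txt rule per directory covers thousands of URLs at once - the paths below are hypothetical:

User-agent: *
Disallow: /print/
Disallow: /filters/
Disallow: /tmp/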
Technical SEO | rachelmeyer
-
Should I disallow crawl of my Job board?
The Moz crawler is telling me we have loads of duplicate content issues. We use a job board plugin on our WordPress site and we have a lot of duplicate or very similar jobs (usually just a different location), but the plugin doesn't allow us to add rel canonical tags to the individual jobs. Should I disallow the /jobs/ URL in the robots.txt file? This would solve the duplicate content issue, but then Google won't be able to crawl any of the individual job listings. Has anyone had experience with a job board plugin on WordPress and a similar issue, or can you advise on how best to solve our duplicate content? Thanks 🙂
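For reference, the tag the plugin would need to emit on each location-specific listing is a one-liner in the head - the URL below is a placeholder:

<!-- points duplicate location variants at one preferred listing -->
<link rel="canonical" href="https://example.com/jobs/warehouse-assistant/" />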
Technical SEO | O2C
-
How to activate Page Fetching and Rendering
We have a coupons and deals website. Coupons are added and removed from the website on a daily basis, but the crawler isn't crawling it that often. Lately we have been fetching and rendering pages manually, but that is a time-consuming task as we have more than 500 stores with coupons. So I was looking for an API or some method that would get the crawler to visit the website on a defined schedule. Suppose store "x" should be crawled every other day because we update its coupons daily, whereas store "y" coupons are updated fortnightly, so they can be crawled weekly. Can somebody suggest something?
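There is no API for scheduling Googlebot directly, but an XML sitemap can at least hint at how often each page changes - changefreq is only a hint that crawlers may ignore, and the URLs below are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- store updated daily: suggest frequent recrawls -->
  <url>
    <loc>https://example.com/stores/x/</loc>
    <changefreq>daily</changefreq>
  </url>
  <!-- store updated fortnightly: weekly recrawls are enough -->
  <url>
    <loc>https://example.com/stores/y/</loc>
    <changefreq>weekly</changefreq>
  </url>
</urlset>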
Technical SEO | jaintechnosoft
-
Can I disallow my subdomain for Penguin recovery?
Hi, I have a site, BannerBuzz.com. Before the last Penguin update, all of my site's keywords held good positions in Google, but after Penguin hit the website, the keywords have been going down day by day. I have made some changes to the website to improve things, but I'm unsure about one of them. I have a subdomain (http://reviews.bannerbuzz.com/) which displays user reviews for all of my site's keywords, and every category's 15 reviews are also displayed on the main website, http://www.bannerbuzz.com. So are those user reviews considered duplicate content between the subdomain and the main website? Can I disallow the subdomain for all search engines? Currently the subdomain is open to all search engines - would blocking it help? Thanks
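One thing to keep in mind if you do block it: robots.txt works per hostname, so the subdomain needs its own file at http://reviews.bannerbuzz.com/robots.txt, e.g.:

User-agent: *
Disallow: /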
Technical SEO | CommercePundit
-
W3C html5 meta tags invalid?
Dear Mozers, we get errors when validating meta tags in HTML5. I know it's experimental and not all metas are valid, but how do you handle this? Leave the tags out? Here is an example: <meta name="DC.title" content="my content...xyc.." /> I tried to find some information but couldn't. What would you do? Thanks a lot, Barbara
Technical SEO | barbara-f
-
Spider Indexed Disallowed URLs
Hi there, In order to reduce the huge amount of duplicate content and titles for a client, we disallowed all spiders from some areas of the site in August via the robots.txt file. This was followed by a huge decrease in errors in our SEOmoz crawl report, which, of course, made us satisfied. In the meantime, we haven't changed anything in the back end, robots.txt file, FTP, website or anything else. But our crawl report came in this November and all of a sudden all the errors were back. We've checked the errors and noticed URLs that are definitely disallowed. The disallowing of these URLs is also verified by Google Webmaster Tools and other robots.txt checkers, and when we search for a disallowed URL in Google, it says that it's blocked for spiders. Where did these errors come from? Did the SEOmoz spider ignore our disallow rules, or something else? You can see the drop and the increase in errors in the attached image. Thanks in advance.
Technical SEO | ooseoo
-
Differences between Lynx Viewer, Fetch as Googlebot and SEOMoz Googlebot Rendering
Three tools to render a site as Googlebot would see it:
SEOmoz toolbar
Lynx Viewer (http://www.yellowpipe.com/yis/tools/lynx/lynx_viewer.php)
Fetch as Googlebot
I have a website where I can see dropdown menus in regular browser rendering, Lynx Viewer and Fetch as Googlebot. However, in the SEOmoz toolbar's 'render as Googlebot' tool, I am unable to see these dropdown menus when I have JavaScript disabled. Does this matter? Which of these tools is a better way to see how Googlebot views your site?
Technical SEO | qlkasdjfw