Disallow statement - is this tiny anomaly enough to render Disallow invalid?
-
Google site search (site:'hbn.hoovers.com') indicates 171,000 results for this subdomain. That is not a desired result - this site has 100% duplicate content. We don't want SEs spending any time here.
Robots.txt is set up mostly right to disallow all search engines from indexing this site. That asterisk at the end of the disallow statement looks pretty harmless - but could that be why the site has been indexed?
User-agent: *
Disallow: /*
-
Interesting. I'd never heard that before.
We've never had GA or GWT on these mirror sites before, so it's hard to say what Google is doing these days.
But the goal is definitely to make them and their contents invisible to SEs. We'll get GWT on there and start removing URLs.
Thanks!
-
The additional asterisk shouldn't do you any harm, although standard practice seems to be just putting the "/".
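For reference, here's roughly how the two forms compare (the second is the conventional "block everything" rule). For crawlers that support wildcard matching, which Google and Bing both do, a trailing * is effectively ignored, so the two should behave the same; a parser that only follows the original robots.txt spec would treat /* as a literal prefix, which is one more reason to stick with the plain / form:

# Current directive on the subdomain
User-agent: *
Disallow: /*

# Conventional way to block all compliant crawlers
User-agent: *
Disallow: /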
Does it seem like Google is still crawling this subdomain when you look at the crawl stats in Webmaster Tools? While a disallow in robots.txt will usually stop bots from crawling, it doesn't prevent them from indexing URLs, or from keeping pages indexed that were crawled before the disallow was put in place. If you want these pages removed from the index, you can request removal through Webmaster Tools and also use a meta robots noindex tag instead of relying on the robots.txt file. Moz has a good article about it here: http://moz.com/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
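As a rough sketch of the noindex route (the header example assumes an Apache-style setup with mod_headers; adjust for your server), it's either a meta tag in each page's head or an X-Robots-Tag response header. Either way, Google has to be able to crawl the pages to see the noindex, so the robots.txt block would need to be lifted until the pages drop out:

<!-- In the <head> of every page on the mirror subdomain -->
<meta name="robots" content="noindex">

# Or site-wide via an HTTP header (Apache with mod_headers assumed here)
Header set X-Robots-Tag "noindex"

Once the URLs have dropped out of the index, you can put the Disallow back if you want to keep crawlers off the subdomain entirely.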
If you're just worried about bots crawling the subdomain, it's possible they've already stopped crawling it but are keeping it indexed because of its history or other signals suggesting it belongs in the index.
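If you have access to the server logs for that subdomain, another quick way to see whether Google is still crawling it is to grep for Googlebot hits. Something along these lines (the log path is just an example; adjust for your server, and keep in mind user agents can be spoofed, so a reverse DNS lookup is the way to confirm a hit is really Google):

# Count Googlebot requests in the current access log for the subdomain
grep -i "googlebot" /var/log/apache2/access.log | wc -l

# Or eyeball the most recent ones
grep -i "googlebot" /var/log/apache2/access.log | tail -n 20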
Related Questions
-
Fetch and Render misses middle chunk of page
Hey folks, I was checking out a site with the Search Console's "Fetch and Render" function and found something potentially worrisome. A big chunk of the middle of the page (the homepage) shows up as empty space in the preview render window. The site isn't doing so hot in terms of rankings, and I'm wondering if this issue is causing it, since it could indicate that 80% of the copy on the homepage is invisible to Google. A few other details: The specific content isn't showing in either view; both the "What Google sees" and "What the visitor sees" previews are missing this chunk of the page. The content IS visible in cached versions of the page. The html for the content seems to be in the page source. The "Fetch" part returns "Complete" as opposed to "Partial", so I don't THINK it's a matter of javascript stuff getting blocked by robots.txt. This website was built using the Wordpress theme "Suco", and the parts of the page that aren't rendering are all built with the Themify Builder tool. Not ALL of the Themify Builder elements are showing up as blank; there's a slider element that's rendering just fine. Any ideas on what could cause whole portions of a page not to show up in Fetch and Render? Thanks!
Technical SEO | BrianAlpert780 -
Bing indexing at a tiny fraction of Google
I've read through other posts about this but I can't find a solution that works for us. My site is porch.com, 1M+ pages indexed on Google, ~10k on Bing. I've submitted the same sitemaps, and there's nothing different for each bot in our robots file. It looks like Bing is more concerned with our 500 errors than Google, but not sure if that might be causing the issue. Can anyone point me to the right things to be researching/investigating? Fixing errors, sitemap crawling issues, etc. I'm not sure what to spend my time looking into...
Technical SEO | Porch0 -
Is Noindex Enough To Solve My Duplicate Content Issue?
Hello SEO Gurus! I have a client who runs 7 web properties. 6 of them are satellite websites, and the 7th is his company's main website. For a long while, my company has, among other things, blogged on a hosted blog at www.hismainwebsite.com/blog, and when we were optimizing for one of the other satellite websites, we would simply link to it in the article. Now, however, the client has gone ahead and set up separate blogs on every one of the satellite websites as well, and he has a nifty plug-in set up on the main website's blog that pipes the articles we write into the corresponding satellite blog as well. My concern is duplicate content. In a sense, this is like autoblogging -- the only thing that doesn't make it heinous is that the client is autoblogging himself. He thinks it will be a great feature for giving users of his satellite websites some great fresh content to read -- which I agree with, as I think the combination of publishing and e-commerce is a thing of the future -- but I really want to avoid the duplicate content issue and a possible SEO/SERP hit. I am thinking that noindexing each of the satellite websites' blog pages might suffice, but I'd like to hear from all of you whether you think even this may not be a foolproof solution. Thanks in advance! Kind Regards, Mike
Technical SEO | RCNOnlineMarketing0 -
Will invalid HTML code generated by WordPress affect SEO efforts?
Hi all, I'm new to SEOmoz and SEO in general really. I run a small but well regarded freelance website and graphic design business, and until very recently had an employee who handled the SEO side of things. I'm now looking to step into this role myself and hopefully learn the ins and outs of SEO. I've no doubt there will be much to learn, but the SEOmoz tools and its community seem excellent and helpful. My question, then, is basically whether WordPress-generated HTML code can have an effect on SEO when it's reported as invalid by tools such as the W3C HTML validator. I'm used to hand coding the majority of my websites for clients, where creating valid HTML and CSS code is something I can do with relative ease. A new client however wants to use WordPress - for ease of updating the site content themselves. The client does however consider any potential SEO implications to be a very important factor in choosing a hand coded vs. WordPress based website. I am aware that WordPress itself is just a means of generating HTML code, and that to the search engines there is no difference between this and the hand coded websites I usually produce. However, if WordPress is generating HTML that is being reported as invalid, would this make the search engines penalise the site? On a second note, will the search engines look negatively on a WordPress site where it is being used as a standard website, and the content may not be updated as frequently as, say, a blog? Thanks for your time, and I look forward to hearing your suggestions.
Technical SEO | SavilleWolf0 -
Allow or Disallow First in Robots.txt
If I want to override a Disallow directive in robots.txt with an Allow command, do I have the Allow command before or after the Disallow command? Example:
Allow: /models/ford/*/*/page*
Disallow: /models/*/*/*/page*
Technical SEO | irvingw0 -
Can I Disallow Faceted Nav URLs - Robots.txt
I have been disallowing /*? So I know that works without affecting crawling. I am wondering if I can disallow the faceted nav urls. So disallow: /category.html/*? /category2.html/*? /category3.html/*? To prevent the price faceted url from being cached: /category.html?price=1%2C1000 and /category.html?price=1%2C1000&product_material=88 Thanks!
Technical SEO | tylerfraser0 -
Disallowing https URLs
Is there a problem disallowing all https URLs to be indexed in order to avoid duplication? This is the article recommending this practice - http://blog.leonardchallis.com/seo/serve-a-different-robots-txt-for-https/ Thanks!
Technical SEO | theLotter0 -
How do you disallow HTTPS?
I currently have a site (startuploans.org) that runs everything as http; recently we decided to start an online application to process loan apps. Now, for one certain section we configured ssl to work (https://www.startuploans.org/secure/). If I go to the HTTPS url for any of my other pages they show up... I was going to just 301 everything from https, but because it is in a subdirectory I can't... Also, canonical URLs won't work either because it's a totally different system and the pages are generated in an odd manner. It's really just 1 page that needs to be disallowed. Is there any way to disallow all HTTPS requests from robots.txt while keeping all the HTTP requests working as normal?
Technical SEO | WebsiteConsultants0