Block Domain in robots.txt
-
Hi.
We had some URLs from a www1 subdomain indexed in Google. We have now disabled those URLs (they return a 404; for other reasons we cannot redirect from www1 to www) and blocked the subdomain via robots.txt. But the number of indexed pages has kept increasing for two weeks now. Unfortunately, I cannot set up Webmaster Tools for this subdomain to tell Google to back off...
Any ideas why this could be and whether it's normal?
I can send you more domain info by private message if you want to have a look.
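For reference, the subdomain-wide block described above would look something like this minimal sketch (www1.example.com is a placeholder for the real host). Note that the file has to be served from the www1 host itself, i.e. at www1.example.com/robots.txt:

# robots.txt at the root of the www1 subdomain - blocks all compliant crawlers
User-agent: *
Disallow: /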
-
Hi Philipp,
I have not heard of Google going rogue like this before; however, I have seen it with other search engines (Baidu).
I would first verify that the robots.txt is configured correctly, and that there are no links anywhere to the subdomain. The reason I mention this is this official note from Google: https://support.google.com/webmasters/answer/156449?rd=1
"While Google won't crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web. As a result, the URL of the page and, potentially, other publicly available information such as anchor text in links to the site, or the title from the Open Directory Project (www.dmoz.org), can appear in Google search results."
My next thought would be: did Google start crawling the site before the robots.txt blocked it from doing so? This may have caused Google to start the indexing process, which is not instantaneous, so new URLs keep appearing after the robots.txt went into effect. The solution is to add a noindex meta tag, or to put an explicit block on the server as I mention above.
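A minimal sketch of the noindex route (keep in mind Google can only see the tag on pages it is allowed to fetch, so the pages must not also be blocked in robots.txt):

<!-- in the <head> of every www1 page that should drop out of the index -->
<meta name="robots" content="noindex">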
If you are worried about duplicate content issues, you may at least be able to add canonical tags pointing the subdomain URLs to the correct URLs.
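For example, a sketch with www1.example.com, www.example.com and /some-page as placeholders:

<!-- on http://www1.example.com/some-page, pointing at the www equivalent -->
<link rel="canonical" href="http://www.example.com/some-page">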
Hope that helps and good luck
-
Hi Don
Thanks for the hint. It doesn't look like there are any links to the www1 subdomain. Also, since we made the www1 subdomain return 404s and blocked it with robots.txt, the number of indexed pages has increased from 39,300 to 45,100, which is more than anybody would link to... It's really strange that Google just ignores robots.txt and keeps indexing...
-
Hi Phil,
Is it possible that Google is finding the links on another site (i.e. somebody else has your links on their site)? Depending on your situation, a good catch-all block is to secure the www1 subdomain with .htaccess/.htpasswd authentication; this would force anybody (even bots) to provide credentials to see or explore the site. Of course, everybody who needs access to the site would have the credentials. So in theory you shouldn't see any more URLs getting indexed.
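A minimal sketch of that lock-down for Apache (the AuthUserFile path is a placeholder; it assumes mod_auth_basic is enabled and the .htpasswd file was created with the htpasswd utility):

# .htaccess at the root of the www1 subdomain - require credentials for every request
AuthType Basic
AuthName "Restricted"
AuthUserFile /path/to/.htpasswd
Require valid-user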
Hope that helps, Don
-
Thanks for the resource, Chris! The strange thing is that Google keeps indexing new URLs even though the subdomain is clearly blocked via robots.txt...
But I guess I'll just wait for these 90 days to pass then...
-
Philipp,
If you've deleted the URLs, there's not much else for you to do. You're experiencing the lag between when Google crawls and indexes new pages and when it finds a 404 and removes that URL from its index.
You should think of 90 days as an approximate time frame for the page count in the index to start dropping. Here's more from Google:
https://support.google.com/webmasters/answer/1663419
Related Questions
-
Crawl solutions for landing pages that don't contain a robots.txt file?
My site (www.nomader.com) is currently built on Instapage, which does not offer the ability to add a robots.txt file. I plan to migrate to a Shopify site in the coming months, but for now the Instapage site is my primary website. In the interim, would you suggest that I manually request a Google crawl through the Search Console tool? If so, how often? Any other suggestions for countering this meta noindex issue?
Technical SEO | Nomader
-
Disallow wildcard match in Robots.txt
This is in my robots.txt file; does anyone know what it is supposed to accomplish? It doesn't appear to be blocking URLs with question marks:
Disallow: /?crawler=1
Disallow: /?mobile=1
Thank you
Technical SEO | AmandaBridge
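For reference: Disallow rules are prefix matches, so Disallow: /?crawler=1 only blocks URLs that begin with /?crawler=1 at the site root. To catch the parameter on any page, Google and Bing support the * wildcard, along the lines of this sketch:

# matches any URL containing "?crawler=1" (though not "&crawler=1" as a second parameter)
Disallow: /*?crawler=1
Disallow: /*?mobile=1
-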
Old domain to new domain
Hi, A website on server A is no longer required. The owner has redirected some URLs of this website (via a plugin) to his new website on server B, but not all URLs. So when I use the site: command on website A, I see a mixture of redirected and non-redirected URLs; therefore two websites are still being indexed in some form, causing duplication. Weirdly, though, when I crawl with Screaming Frog I only see one URL, which is 301 redirected to the new website. I would have thought I'd see lots of URLs which hadn't been redirected. How come it is different from using the site: command? Anyway, how do I move to the new website completely without the old one being indexed anymore? I thought I knew this but have read so many blogs I've confused myself! Should I:
1) Redirect all URLs via the .htaccess file of the old website on server A? There are lots of pages indexed, so a lot of URLs. What if I miss some? Or
2) Point the old domain via DNS to server B and do the redirects in website B's .htaccess file? This seems more sensible, but does this method still retain the website rankings?
Thanks for any help
Technical SEO | AL123al
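For reference, the catch-all redirect in option 1 can usually be done with a single rewrite rule rather than one rule per URL. A sketch for Apache, with new-domain.com as a placeholder and assuming mod_rewrite is enabled:

# .htaccess on the old website (server A): send every request to the same path on the new site
RewriteEngine On
RewriteRule ^(.*)$ https://new-domain.com/$1 [R=301,L]
-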
Robots.txt on pages with a 301 redirect
We currently have a series of help pages that we would like to disallow in our robots.txt. The thing is that these help pages are located on our old website, which now 301 redirects to the current site. Which is the proper way to go about it?
1) Add the pages we want to disallow to the robots.txt of the new website?
2) Break the redirect momentarily and add the pages to the robots.txt of the old one?
Thanks
Technical SEO | Kilgray
-
Are robots.txt wildcards still valid? If so, what is the proper syntax for setting this up?
I've got several URLs that I need to disallow in my robots.txt file. For example, I've got several documents that I don't want indexed, and filters that are getting flagged as duplicate content. Rather than typing in thousands of URLs, I was hoping that wildcards are still valid.
Technical SEO | mkhGT
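For reference, wildcards are still honored by the major engines; Google and Bing support both * and the $ end-anchor. A sketch of the syntax, with the patterns as illustrative placeholders:

User-agent: *
# block any PDF document; $ anchors the match at the end of the URL
Disallow: /*.pdf$
# block any URL whose query string starts with a filter parameter
Disallow: /*?filter=
-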
Is there any value in having a blank robots.txt file?
I've read an audit where the writer recommended creating and uploading a blank robots.txt file; there was no current file in place. Is there any merit in having a blank robots.txt file? What is the minimum you would include in a basic robots.txt file?
Technical SEO | NicDale
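For reference, the conventional minimal robots.txt is an allow-all file; an empty Disallow value blocks nothing, which behaves the same as having no file at all but avoids 404 errors on /robots.txt in the server logs:

User-agent: *
Disallow:
-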
Googlebot does not obey robots.txt disallow
Hi Mozzers! We are trying to get Googlebot to steer away from our internal search results pages by adding a parameter "nocrawl=1" to facet/filter links and then disallowing all URLs containing that parameter in robots.txt. We implemented this in late August and, since then, the GWMT message "Googlebot found an extremely high number of URLs on your site" stopped coming. But today we received yet another one. The weird thing is that Google gives many of our now robots.txt-disallowed URLs as examples of URLs that may cause us problems. What could be the reason? Best regards, Martin
Technical SEO | TalkInThePark