Robots.txt & meta noindex--site still shows up on Google Search
-
I have set up my robots.txt like this:
User-agent: *
Disallow: /
and I have this meta tag in the <head> of my WordPress site, set up with Yoast SEO:
<meta name="robots" content="noindex,follow" />
I did "Fetch as Google" on my Google Search Console
My website is still showing up in the search results and it says this:
"A description for this result is not available because of this site's robots.txt"
This site has not shown up for years, and now it is ranking above the site I actually want to rank for this keyword. How do I get Google to ignore this site? This seems really weird, and I'm confused how a site with little content that hasn't been updated in years can rank higher than a site that is constantly updated and improved.
-
CleverPhd,
Really nice to see a detailed yet to-the-point answer.
Thanks for contributing, and being in the Moz community.
Regards,
Vijay
-
Thanks for that clarification, CleverPhD. I forgot to mention that.
-
This one has my vote. You have to allow them access in order for them to see that you don't want the pages indexed. If you block them from seeing that rule... well, they won't be able to see it.
-
Just to be clear on what Logan said: you have to allow Google to crawl your site by opening up your robots.txt, so that it can see the noindex directive that is on each of the pages. Otherwise Google will never "see" the noindex directive on your pages.
Likewise with your sitemap.xml. If Google is not allowed to crawl your site (because you are blocking it with robots.txt), then it will never read the sitemap, find all of the pages that carry the noindex directive, and remove those pages from the index.
A great article on this is here: https://support.google.com/webmasters/answer/93710?hl=en&ref_topic=4598466
From the mouth of Google: "Important! For the noindex meta tag to be effective, the page must not be blocked by a robots.txt file. If the page is blocked by a robots.txt file, the crawler will never see the noindex tag, and the page can still appear in search results, for example if other pages link to it."
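To make that concrete, here is a minimal sketch (treat it as an example, the exact setup depends on your site): the robots.txt stops blocking crawling, and every page you want removed keeps the noindex tag that Yoast outputs.
robots.txt:
User-agent: *
Disallow:
In the <head> of each page:
<meta name="robots" content="noindex,follow" />
Once Google has recrawled and dropped the pages from the index, you can add Disallow: / back in if you also want to stop crawling going forward.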
The other point that Logan makes is that Google might list your site if there are enough sites linking to it. The steps above should take care of this, as you are deindexing the pages, but here is what I think he is referencing:
https://www.youtube.com/watch?v=KBdEwpRQRD0
Google will include a site that is blocked in robots.txt if enough pages link to it, even if Google has never crawled the URL.
You can go into Search Console and find all the links that Google reports pointing to your site. You can also use tools like CognitiveSEO, Ahrefs, Majestic, or Moz to gather up the rest of the sites linking to yours, then put them all in a disavow file that you upload to Search Console to tell Google to ignore those links.
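If you do build a disavow file, it is just a plain text (UTF-8) file with one entry per line and optional # comments. The domains below are made-up placeholders, not a recommendation of what to disavow:
# low quality directories linking to the old site
domain:spammy-directory.example.com
domain:another-low-quality.example.net
# or a single linking page instead of the whole domain
http://blog.example.org/old-post-with-link/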
Secret bonus method: putting a noindex directive in your robots.txt.
https://www.deepcrawl.com/knowledge/best-practice/robots-txt-noindex-the-best-kept-secret-in-seo/
This lets you manage your noindex directives from your robots.txt. It makes things easier because you can control all of your noindex directives from one central location and cover whole folders at a time. It would stop Google from crawling AND indexing pages all in one place, and you can leave the rest of the site alone without worrying about whether a noindex tag should or should not be on a particular page.
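As a rough sketch of what that article describes (keep in mind the Noindex: line has never been part of the official robots.txt standard, so test it before relying on it), it would look something like this:
User-agent: *
Disallow: /old-folder/
Noindex: /old-folder/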
Good luck!
-
As mentioned by Logan, the noindex meta tag is the most effective way to remove indexed pages. It sometimes takes time, and you have to submit the right sitemap.xml, one that covers the pages/posts you wish to have removed from Google's index.
-
I did read that about the robots.txt and that is why I added the noindex.
I use Yoast SEO for my sitemap.xml, so shouldn't all my pages be in there? I believe they are, because I looked at it just a couple of days ago.
So are you saying I should look through my backlink profile (WMT) and try to remove any backlinks?
Would 'Fetch as Google' not ping Google to tell them to recrawl?
Thanks for your help.
-
Hi,
First things first, it's a common misconception that robots.txt Disallow: / will prevent indexing. It's only intended to prevent crawling, which is why you don't get a meta description pulled into the result snippet. If you have links pointing to that page and a Disallow: / in your robots.txt, it's still eligible for indexation.
Second, it's pretty weird that the noindex tag isn't effective, as that's the only sure-fire way to get de-indexed intentionally. I would recommend creating an XML sitemap for all URLs on that domain that are noindex'd and resubmit that in Search Console. If Google hasn't crawled your site since adding the noindex, they don't know it's there. In my experience, forcing them to recrawl via XML submission has been effective at getting noindex noticed quicker.
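For reference, the sitemap for that purpose can be as bare-bones as a urlset with one <loc> entry per noindexed URL (example.com below is a placeholder for your own domain):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
  </url>
  <url>
    <loc>https://www.example.com/some-noindexed-page/</loc>
  </url>
</urlset>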
I would also recommend taking a look at the link profile and removing any links you can that point to your noindexed pages; this will help keep those pages from being indexed again.