Robots
-
I have just noticed this in my code:
<meta name="robots" content="noindex">
And some of my keywords have dropped. Could this be the reason?
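For anyone wanting to confirm which pages carry this tag, it can be detected by scanning the page HTML. Here is a minimal Python sketch using only the standard library; the `page` string below is just an illustrative example, not actual site markup:

```python
from html.parser import HTMLParser


class RobotsMetaFinder(HTMLParser):
    """Collects the content attribute of any <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            self.directives.append((attrs.get("content") or "").lower())


def has_noindex(html: str) -> bool:
    """Return True if the HTML contains a robots meta tag with noindex."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return any("noindex" in d for d in finder.directives)


page = '<html><head><meta name="robots" content="noindex"></head><body></body></html>'
print(has_noindex(page))  # True
```

Run against the HTML of each page (however you fetch it) to see exactly where the tag was injected.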
-
It was on every page of the site.
I also noticed that the pages that are no longer indexed have no PR. Is that expected?
-
Was the homepage one of the pages that included the noindex meta tag?
Even if it was, pages will not all be crawled at the same time or in a particular order. The homepage may have already been crawled before the change was made on your site; it may not have been crawled at all today if it was visited yesterday, for example.
Crawling results can vary hugely based on a number of factors.
-
The only thing that does not make sense to me is this: if the sitemap was processed today, why is the homepage still indexed?
-
Yes, because that is what caused Google to take notice of the meta noindex and drop your pages from their search results.
Best of luck with it, feel free to send me a PM if your pages haven't reappeared in Google's search engine over the next few days.
-
Oh! I also noticed in Webmaster Tools that the sitemap was processed today. Does that mean Googlebot has visited the website today?
-
Thanks Geoff, will do what you recommended.
I noticed this in Google Webmaster Tools:
Blocked URLs - 193
Downloaded - 13 hours ago
Status - 200 (success)
-
Hi Gary,
If the pages dropped from Google's index that quickly, then chances are they will be back again almost as quickly. If your website has an XML sitemap, you could try pinging it to the search engines to alert them to revisit your site as soon as possible.
It's bad luck that the meta tag was inserted and caused immediate negative effects, but it is recoverable, and your pages will likely re-enter the index at the same positions they held prior to today.
The key is to bring Googlebot back to your website to recrawl as soon as possible. Publishing a blog post, or creating a backlink from a high-traffic site (a forum is a good example), are some methods of encouraging this.
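For reference, a sitemap ping is just an HTTP GET to a search engine's ping endpoint with your sitemap URL passed as a query parameter. A minimal Python sketch below builds the URL for Google's ping endpoint as it existed at the time; the sitemap address is a placeholder, so swap in your own:

```python
from urllib.parse import urlencode

# Placeholder sitemap URL - replace with your site's actual sitemap.
SITEMAP_URL = "http://www.example.com/sitemap.xml"


def google_ping_url(sitemap_url: str) -> str:
    """Build the Google sitemap ping URL; fetching it asks
    Googlebot to re-read the sitemap at its next opportunity."""
    return "http://www.google.com/ping?" + urlencode({"sitemap": sitemap_url})


print(google_ping_url(SITEMAP_URL))
# http://www.google.com/ping?sitemap=http%3A%2F%2Fwww.example.com%2Fsitemap.xml
```

Requesting that URL (in a browser or with any HTTP client) is all the "ping" consists of; resubmitting the sitemap in Webmaster Tools achieves the same thing.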
Hope that helps.
-
Hi Geoff,
The developer said it got added this morning when we rolled out a discount feature on our website; I think the CMS added it automatically. However, a lot of the keywords that were ranking in the top 3 are no longer indexed. Is it just bad luck? Will Google come back?
-
If you are using a content management system, these additional meta tags can often be controlled from within your administration panel.
If the meta tag is hard-coded into your website header, it will appear on every page of your website and will subsequently result in none of your pages being indexed in search engines.
As Ben points out, the noindex directive instructs search engine robots not to index that particular page. I would recommend addressing this issue as quickly as possible, especially if you have a high-traffic website that is being crawled frequently.
-
Thanks for your quick reply Ben.
It does not seem to be all my pages that have fallen off, just some. The developer said it only got added this morning by mistake.
I actually typed the full URL into Google and it no longer appears. I was ranked no. 2 for that particular keyword, receiving about 150 clicks per day. Not happy!
-
Actually, on second thoughts: YES. Yes, it probably is the reason your terms are dropping.
-
Could be.
That's a directive that tells search engines not to include that page in their indexes.