Meta-Robots noFollow and Blocked by Meta-Robots
-
On my most recent campaign report, I have two notices that we can't find any cause for:

Meta-Robots nofollow
http://www.fateyes.com/the-effect-of-social-media-on-the-serps-social-signals-seo/?replytocom=92
"noindex nofollow" for the page: http://www.fateyes.com/the-effect-of-social-media-on-the-serps-social-signals-seo/

Blocked by Meta-Robots
http://www.fateyes.com/the-effect-of-social-media-on-the-serps-social-signals-seo/?replytocom=92
"noindex nofollow" for the page: http://www.fateyes.com/the-effect-of-social-media-on-the-serps-social-signals-seo/

We are unable to locate any code whatsoever that may explain this. Any ideas, anyone?
-
Thank you, Mike and James. This makes a lot of sense since this is the only blog page that has a comment. (embarrassing to say that publicly ;o) And I haven't gotten this notice for any other blog pages.
-
I agree with Mike: you do have a meta noindex nofollow tag on those pages, but it is likely autogenerated by a WordPress setting that excludes the duplicate pages created by URL parameters.
-
A meta robots tag set to noindex means the page is blocked by meta robots, which is not really an error to worry about. Because WordPress creates duplicate content through the ?replytocom= parameter, those pages were most likely set to noindex in the back end.
As far as I can see, the actual page http://www.fateyes.com/the-effect-of-social-media-on-the-serps-social-signals-seo/ has no robots meta tag and will therefore technically be indexable, while the ?replytocom= URL created by the comment is correctly set to noindex.
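If you want to confirm which of the two URLs actually carries the tag, something like the standard-library-only Python sketch below will print any meta robots directives it finds in the raw HTML of both URLs from the report. It is only a quick check and assumes the pages are publicly reachable; it is not a Moz tool and has nothing WordPress-specific in it.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class RobotsMetaFinder(HTMLParser):
    """Collect the content of any <meta name="robots"> tags on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "robots":
                self.directives.append(attrs.get("content") or "")

# Both URLs are taken from the campaign report quoted above.
urls = [
    "http://www.fateyes.com/the-effect-of-social-media-on-the-serps-social-signals-seo/",
    "http://www.fateyes.com/the-effect-of-social-media-on-the-serps-social-signals-seo/?replytocom=92",
]

for url in urls:
    html = urlopen(url).read().decode("utf-8", errors="replace")
    finder = RobotsMetaFinder()
    finder.feed(html)
    print(url)
    print("  robots meta:", ", ".join(finder.directives) or "none found")
```

If the answers above are right, the ?replytocom=92 URL should report something like noindex, nofollow while the base post reports none found.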
Related Questions
-
Duplicate Content/Missing Meta Description | Pages DO NOT EXIST!
Hello all, for the last few months Moz has been showing us that our site has roughly 2,000 duplicate content errors. Pages that were actually duplicate content I took care of using best practice (301 redirects, canonicalization, etc.). Still remaining after these fixes were errors for pages that we have never created. Our homepage is www.primepay.com. An example of a page being shown as duplicate content is http://primepay.com/blog/%5BLink%20to%20-%20http:/www.primepay.com/en/payrollservices/payroll/payroll/payroll/online-payroll with a referring page of http://primepay.com/blog/%5BLink%20to%20-%20http:/www.primepay.com/en/payrollservices/payroll/payroll/online-payroll. Some of these are even now showing up as 403 and 404 errors. The only real pages on our site within that URL strand are primepay.com/payroll and primepay.com/payroll/online-payroll, so I am not sure where Moz is getting these pages from.

Another issue we are having in relation to duplicate content is that Moz is showing old campaign URLs tacked on to our blog page, i.e. http://primepay.com/blog?title=&page=2&utm_source=blog&utm_medium=blogCTA&utm_campaign=IRSblogpost&qt-blog_tabs=1. As of this morning, our duplicate content count went from 2,000 to 18,000. I exported all of our crawl diagnostics data and looked at what the referring pages were, and even they are not pages that we have created. When you click on these links, they take you to a random point in time on the homepage of our blog, some dating back to 2010.

I checked our crawl stats in both Google's and Bing's webmaster tools, and there are no duplicate content or 400-level errors being reported from their crawls. My team is truly at a loss trying to resolve this issue, and any help with this matter would be greatly appreciated.
Moz Pro | PrimePay
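On the tracking-parameter side of that question (the utm_source/utm_medium/utm_campaign URLs tacked onto the blog), a rough, hedged way to triage a crawl export is to strip the known tracking parameters and see which URLs collapse to the same base address; the sketch below does that with Python's standard library only. The parameter names come from the example URL in the question, and the two-item input list is purely illustrative. Separately, the %5BLink%20to%20-%20 prefix in the other URLs decodes to "[Link to - ", which looks like a literal link placeholder that ended up inside an href somewhere in the blog templates, though only the site's own markup can confirm that.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameter names seen in the example URL above; extend as needed.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "qt-blog_tabs"}

def normalize(url):
    """Drop known tracking parameters so near-duplicate URLs compare equal."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

# Illustrative input; in practice this would be the URL column of the crawl export.
crawled = [
    "http://primepay.com/blog?title=&page=2&utm_source=blog&utm_medium=blogCTA&utm_campaign=IRSblogpost&qt-blog_tabs=1",
    "http://primepay.com/blog?title=&page=2",
]

groups = {}
for url in crawled:
    groups.setdefault(normalize(url), []).append(url)

for base, variants in groups.items():
    if len(variants) > 1:
        print(f"{base} has {len(variants)} crawled variants")
```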
Website blocked by Robots.txt in OSE
When viewing my client's website in OSE under the Top Pages tab, it shows that ALL pages are blocked by Robots.txt. This is extremely concerning because Google Webmaster Tools is showing me that all pages are indexed and OK. No crawl errors, no messages, no nothing. I did a "site:website.com" in Google and all of the pages of the website returned. Any thoughts? Where is OSE picking up this signal? I cannot find a blocked robots tag in the code or anything.
Moz Pro | ConnellyPartners
Blocked by Meta Robots.
Hi, I get this warning in my reporting: "Blocked by Meta Robots - This page is being kept out of the search engine indexes by meta-robots." What does that mean, and how do I solve it if I am using WordPress as my website engine? Also, about rel=canonical: on which page should I put this tag, the original page or the copy page? Thanks for all of your answers; it will mean a lot.
Moz Pro | theconversion
Robots.txt
I have a page used as a reference that lists 150 links to blog articles. I use it in a training area of my website. I now get warnings from Moz that it has too many links, so I decided to disallow this page in robots.txt. Below is what appears in the file:
Robots.txt file for http://www.boxtheorygold.com
User-agent: *
Disallow: /blog-links/
My understanding is that this simply has Google bypass the page and not crawl it. However, in Webmaster Tools I used the Fetch tool to check a couple of my blog articles. One returned the expected result; the other returned "access denied" due to robots.txt. Both blog article links are listed on the /blog-links/ reference page. Question: why does Google refuse to crawl the one article (using the Fetch tool) when it is not referenced at all in the robots.txt file? Why is access denied? Should I have used a noindex on this page instead of robots.txt? I am fearful that robots.txt may be blocking many of my blog articles. Please advise. Thanks, Ron
Moz Pro | Rong
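A minimal sketch, assuming the quoted rules are the whole file: Python's standard robotparser (used here only as a rough approximation of crawler behaviour, with a made-up article path) shows that those two lines block /blog-links/ and nothing else. If Fetch reports "access denied" for an article outside that folder, the live robots.txt may contain more than what is quoted, or the fetched URL may actually have been the blocked reference page; Google's own robots.txt tester is the authoritative check.

```python
from urllib.robotparser import RobotFileParser

# The two directives quoted in the question; the blog-article path below is a
# made-up placeholder, not a real URL from the site.
rules = """\
User-agent: *
Disallow: /blog-links/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

for path in ("/blog-links/", "/blog/some-article/"):
    url = "http://www.boxtheorygold.com" + path
    verdict = "allowed" if parser.can_fetch("*", url) else "blocked"
    print(url, "->", verdict)
```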
Does Rogerbot respect the robots.txt file for wildcards?
Hi All, Our robots.txt file has wildcards in it, which Googlebot recognizes. Can anyone tell me whether or not Rogerbot recognizes wildcards in the robots.txt file? We've done a Rogerbot site crawl since updating the robots.txt file and the pages that are set to disallow using the wildcards are still showing. BTW, Googlebot is not crawling these pages according to Webmaster Tools. Thanks in advance, Robert
Moz Pro | AC_Pro
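Whether Rogerbot honours Googlebot-style wildcards is something only Moz support can confirm, but while waiting for an answer it can help to check what the wildcard rules are meant to cover. The sketch below is a rough translation of Google's documented handling (* matches any run of characters, $ anchors the end of the URL) into a regex; the rule and the test paths are hypothetical, and Rogerbot's actual matching may differ.

```python
import re

def wildcard_rule_to_regex(rule):
    """Translate a robots.txt path pattern using * and $ into an anchored regex."""
    anchored_end = rule.endswith("$")
    if anchored_end:
        rule = rule[:-1]
    pattern = ".*".join(re.escape(part) for part in rule.split("*"))
    return re.compile("^" + pattern + ("$" if anchored_end else ""))

# Hypothetical rule and paths, purely for illustration.
rule = wildcard_rule_to_regex("/*?sort=")
for path in ("/shoes?sort=price", "/shoes/red", "/blog/?sort=date"):
    print(path, "->", "covered by rule" if rule.search(path) else "not covered")
```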
Sites Blocking Open Site Explorer? Penguin related.
Last week I was looking at a competitor's site that has a link scheme going on, and I could actually check the links for each anchor text. This week they don't work at all. Do you think they're blocking rogerbot on their domains, or is there a problem with Open Site Explorer? http://www.opensiteexplorer.org/anchors?site=www.decks.ca
If you're interested in the background: all the links point to instant-home-biz . com, which then redirects to decks . ca; it's a tricky technique. Pretty much all of the links are from sketchy sites like airpr23.xelr8it.biz/, airpr23.anzaland.net/, airpr23.vacation-4-free.com/, airpr23.blogfreeradio.net/, airpr23.blogomatik.com/, and http://www.morcandirect.com/mortgages/resources2.php, which I thought Penguin was supposed to catch…
Moz Pro | BeTheBoss
Missing Meta Description tags?
I just ran our first SEOMoz Pro report and it's showing that every article page on our site is missing descriptions. However, they're visible in the source and Google seems to be picking them up. Can you please tell me why SEOMoz is marking them as missing? Are we doing something wrong here? http://notebooks.com
Moz Pro | notebooks
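One common reason a crawler reports a description as "missing" even though you can see it in the browser is that the tag is injected client-side by JavaScript or only served to certain user agents; crawlers read the raw HTML. A small, hedged sketch along these lines fetches the raw source and reports what is actually there. The URL and user-agent string are placeholders, and the regex is deliberately simple (it assumes the name attribute comes before content).

```python
import re
from urllib.request import Request, urlopen

def raw_meta_description(url, user_agent="Mozilla/5.0 (compatible; example-bot)"):
    """Fetch the raw HTML (no JavaScript execution) and pull out the meta description."""
    html = urlopen(Request(url, headers={"User-Agent": user_agent})).read()
    html = html.decode("utf-8", errors="replace")
    # Simple pattern; assumes name="description" appears before content="...".
    match = re.search(
        r'<meta[^>]+name=["\']description["\'][^>]*content=["\']([^"\']*)',
        html, re.I)
    return match.group(1) if match else None

# Placeholder URL; swap in one of the article pages flagged in the report.
print(raw_meta_description("http://notebooks.com/"))
```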
When using WordPress, why do so many meta data errors show up here and in Google Webmaster Tools? A list of dos would be good.
WordPress is a good CMS, yet so many meta data errors keep occurring. Is there a list of things to look out for, and of good practices, when using WordPress?
Moz Pro | CosmikCarrot