Should I set blog category/tag pages as "noindex"? If so, how do I prevent "meta noindex" Moz crawl errors for those pages?
-
From what I can tell, SEO experts recommend setting blog category and tag pages (e.g. "http://site.com/blog/tag/some-product") to "noindex, follow" in order to keep the overall quality of your indexable pages high. However, I just received a slew of critical crawl warnings from Moz for having these pages set to "noindex." Should the pages be indexed? If not, why am I receiving critical crawl warnings from Moz, and how do I prevent them?
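For reference, the "noindex, follow" directive is typically applied with a robots meta tag in each page's <head>:
<meta name="robots" content="noindex, follow">
The "follow" part tells crawlers to keep following links on the page even though the page itself stays out of the index.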
-
In the situation outlined by the OP, these pages are noindexed, so there's no value in cluttering up crawl reports with them. Block rogerbot from the non-critical parts of your site; if you do want to be alerted of issues on those pages, then don't block it.
-
Thanks! I'm not concerned about the crawl depth of the search engine bots, since nothing in your fix would affect that. I'm curious about the decrease in Moz's crawl depth on the site, as we use that to spot issues.
One of the clients I implemented the fix on went from 4.6K crawled pages to 3.4K, and the fix was expected to remove 1.2K pages.
The other client went from 5K to 3.7K, with an expected removal of 1.3K pages.
In both cases the decrease matches the expected removals exactly, so the disallow didn't cut off anything else.
TL;DR - Good news, everybody: the robots.txt fix didn't reduce the crawl depth of the Moz crawler!
-
I agree. Unfortunately, Moz doesn't have an internal disallow feature that would let you tell it where rogerbot can and can't go. I haven't come across any issues with this approach; crawl depth by search engine bots will not be affected, since the directive names the rogerbot user-agent specifically.
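For illustration, a minimal robots.txt along these lines restricts only Moz's crawler and leaves every other bot untouched (the exact paths are just examples):
# Applies only to Moz's crawler
User-agent: rogerbot
Disallow: /tag/
Disallow: /category/
# All other crawlers: an empty Disallow permits everything
User-agent: *
Disallow: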
-
Thanks for the solution! We have been coming across a similar issue with some of our sites, and although I'm not a big fan of this type of workaround, I don't see any other options and we want to focus on the real issues. You don't want to ignore the rule entirely, in case other pages that should be indexed get marked noindex by mistake.
Logan, are you still getting the same crawl depth after making this type of fix? Have any other issues arisen from this approach?
Let us know
-
Hi Nichole,
You're correct in noindexing these pages; they serve little to no value from an SEO perspective. Moz is always going to alert you to noindex tags when it finds them, since it's a critical issue if that tag shows up in unexpected places. If you want to remove these issues from your crawl report, add the following directive to your robots.txt file; this will prevent Moz from crawling these URLs and therefore from reporting on them:
User-agent: rogerbot
Disallow: /tag/
Disallow: /category/
*Edit - do not prevent all user-agents from crawling these URLs, as that would prevent search engines from seeing your noindex tag; they can't obey what they aren't permitted to see. Once all tag & category pages have been removed from the index, you can update your robots.txt to remove the rogerbot directive and add the disallows for tag & category to the * user-agent.
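For illustration, that later state would look something like this (a sketch, assuming the tag & category pages have already dropped out of the index):
User-agent: *
Disallow: /tag/
Disallow: /category/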