Should I set blog category/tag pages as "noindex"? If so, how do I prevent "meta noindex" Moz crawl errors for those pages?
-
From what I can tell, SEO experts recommend setting blog category and tag pages (e.g., "http://site.com/blog/tag/some-product") as "noindex, follow" in order to keep the overall quality of indexable pages high. However, I just received a slew of critical crawl warnings from Moz for having these pages set to "noindex." Should the pages be indexed? If not, why am I receiving critical crawl warnings from Moz, and how do I prevent them?
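For reference, the tag these recommendations refer to is the standard robots meta tag in the page head - shown here as a generic example rather than our actual markup:
<meta name="robots" content="noindex, follow">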
-
In the situation outlined by the OP, these pages are noindexed, so there's no value in cluttering up crawl reports with them. Block rogerbot from non-critical parts of your site; if you do want to be alerted to issues on those pages, then don't block it.
-
Thanks. I'm not concerned about the crawl depth of the search engine bots; there's nothing in your fix that would affect that. I'm curious about the decrease in the Moz crawler's coverage of the site, since we use that to spot issues with the site.
One of the clients I implemented the fix on went from 4.6K crawled pages to 3.4K and the fix would have removed an expected 1.2K pages.
The other client went from 5K to 3.7K and the fix would have removed an expected 1.3K pages.
TL;DR - Good news, everybody: the robots.txt fix didn't reduce the crawl depth of the Moz crawler!
-
I agree; unfortunately, Moz doesn't have an internal disallow feature that gives you the option to tell them where rogerbot can and can't go. I haven't come across any issues with this approach, and crawl depth by search engine bots will not be affected since the user-agent is specified.
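If you want to double-check that only rogerbot is excluded, here's a quick sketch using Python's built-in robotparser (site.com is just a placeholder domain, and the expected results assume the rogerbot-only disallow rules from the answer below):

from urllib import robotparser

# Load the live robots.txt (placeholder domain)
rp = robotparser.RobotFileParser()
rp.set_url("https://site.com/robots.txt")
rp.read()

tag_url = "https://site.com/blog/tag/some-product"

# rogerbot should be blocked from the tag page...
print("rogerbot allowed:", rp.can_fetch("rogerbot", tag_url))    # expect False

# ...while search engine bots can still crawl it and see the noindex tag
print("Googlebot allowed:", rp.can_fetch("Googlebot", tag_url))  # expect True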
-
Thanks for the solution! We have been coming across a similar issue with some of our sites, and although I'm not a big fan of this type of workaround, I don't see any other options and we want to focus on the real issues. You don't want to ignore the rule entirely, in case other pages that should be indexed are marked noindex by mistake.
Logan, are you still getting the same depth of crawl after making this type of fix? Have any other issues arisen from this approach?
Let us know
-
Hi Nichole,
You're correct in noindexing these pages; they serve little to no value from an SEO perspective. Moz is always going to alert you to noindex tags when it finds them, since it's a critical issue if that tag shows up in unexpected places. If you want to remove these issues from your crawl report, add the following directives to your robots.txt file; this will prevent Moz from crawling these URLs and therefore from reporting on them:
User-agent: rogerbot
Disallow: /tag/
Disallow: /category/

*Edit: do not prevent all user agents from crawling these URLs, as that would also prevent search engines from seeing your noindex tag; they can't obey what they aren't permitted to see. If you want, once all tag & category pages have been removed from the index, you can update your robots.txt to remove the rogerbot directive and add the disallows for /tag/ and /category/ to the * user agent.
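For example, once the tag & category pages have dropped out of the index, the end state might look something like this - a sketch, assuming your tag and category archives actually live at /tag/ and /category/:

User-agent: *
Disallow: /tag/
Disallow: /category/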
Related Questions
-
Unsolved: Why doesn't Moz notify me of missing image alt tags?
We had a client come to us and let us know another vendor had notified them that many of the images on their site are missing alt tags / text. I know this was a big deal back in the day, but I haven't heard much about it lately. I am assuming if it doesn't even show up in the Moz site crawl, it must not be a big deal any more, but I would love to have more info about how important image alt tags are and if they are important, why Moz does not report them.
Moz Pro | | CaliberMG1 -
Crawler errors or page load time? Which affects SEO more?
Hello, I have a page with a forum, and at the moment the Moz report says it has 15.1k issues such as URL too long, meta noindex, title too long, etc. But the page also has a really slow load time of 11 seconds. I know I need to fix all those errors (I'm working on it), but what is more important for SEO: the page load time, or those types of errors like duplicate titles? Thank you!
Moz Pro | | DanielExposito1 -
I got an 803 error yesterday on the Moz crawl for most of my pages. The page loads normally in the browser. We are hosted on Shopify
I got an 803 error yesterday on the Moz crawl for most of my pages. The page loads normally in the browser. We are hosted on Shopify; the URL is www.solester.com. Please help us out.
Moz Pro | | vasishta0 -
How does Moz decide a page title is a duplicate?
Suppose I have added a suffix and prefix to each of my product titles (e.g., I have two titles like "Buy online t-shirt at abc.com" & "Buy online poster at abc.com", so here "Buy online" is the prefix and "abc.com" is the suffix). Will Moz take these two page titles as duplicates?
Moz Pro | | vayush0 -
Does Rogerbot recognize rel="alternate" hreflang="x"?
Rogerbot just completed its first crawl and is reporting all kinds of duplicate content - both page content and meta title/description. The pages it is calling duplicates use rel="alternate" hreflang="x", but they are still being labeled as dupes. The titles and descriptions are usually exactly the same, so I am working on getting at least those translated into the different languages. I think it's getting tripped up because the product pages it's crawling are only in English, but the chrome of the site is in the translated languages. The URLs look like so: Original: site.com/product Detected duplicates: site.com/fr/product, site.com/de/product, site.com/zh-hans/product
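For context, hreflang annotations of the kind described here typically sit in the head of each version and look something like this - a sketch based on the example URLs above, not the poster's actual markup:
<link rel="alternate" hreflang="en" href="https://site.com/product" />
<link rel="alternate" hreflang="fr" href="https://site.com/fr/product" />
<link rel="alternate" hreflang="de" href="https://site.com/de/product" />
<link rel="alternate" hreflang="zh-hans" href="https://site.com/zh-hans/product" />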
Moz Pro | | sedwards0 -
What web page and domain analysis / error checking / testing tools do you use for competitor analysis? Sites like WebPageTest, etc.
Just wondering what everyone is using. I am looking to get as much insight and detail as I can on websites that are not currently being monitored by me, i.e. potential clients. I use tools like PageSpeed, WebPageTest, Load Impact, Open Site Explorer, Google AdWords, iSpionage, Alexa, and SEMrush, and, well, I'm looking for more. I really just want to rip a website to the tiniest pieces possible in an organized and coherent manner... is there anything out there? I have tried several others which I no longer use (Compete, 4Q, Woopra, Nuestar, to name a few). I am not sure if I know exactly what I want, I just want more... damn the human condition. lol
Moz Pro | | atb9900 -
Links & page authority crawl
I see the links and page authority have not been updated in over a month... does anyone know how often they get updated?
Moz Pro | | nazmiyal0 -
Crawl test tool from SEOmoz - which URLs does it actually crawl?
I am using the crawl test tool from SEOmoz for the first time, and I do not really understand which URLs the tool is going to crawl. First, it says "enter any subdomain" --> why can't I do the crawl for the root domain? Second, it says "we'll crawl up to 3,000 linked-to pages" --> does that mean that the tool crawls all internal links that it can find on the given domain? Thanks for your help!
Moz Pro | | Elke.GetApp0