Should I set blog category/tag pages as "noindex"? If so, how do I prevent "meta noindex" Moz crawl errors for those pages?
-
From what I can tell, SEO experts recommend setting blog category and tag pages (e.g., "http://site.com/blog/tag/some-product") as "noindex, follow" in order to keep the overall quality of your indexable pages high. However, I just received a slew of critical crawl warnings from Moz for having these pages set to "noindex." Should the pages be indexed? If not, why am I receiving critical crawl warnings from Moz, and how do I prevent them?
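(For context, the "noindex, follow" setup usually means a meta robots tag in the <head> of each tag and category page, roughly like the line below; the exact markup depends on your CMS or SEO plugin, so treat this as a sketch rather than any particular plugin's output.)
<meta name="robots" content="noindex, follow" />
This asks search engines not to add the page to their index while still crawling and following the links on it.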
-
In the situation outlined by the OP, these pages are noindexed, so there's no value in cluttering up crawl reports with them. Block rogerbot from the non-critical parts of your site; if you do still want to be alerted to issues on those pages, then don't.
-
Thanks. I'm not concerned about the crawl depth of the search engine bots; nothing in your fix would affect that. I'm curious about the decrease in the crawl depth Moz reports for the site, since we use that to spot issues with the site.
One of the clients I implemented the fix on went from 4.6K crawled pages to 3.4K, and the fix was expected to remove roughly 1.2K pages.
The other client went from 5K to 3.7K, and the fix was expected to remove roughly 1.3K pages.
In both cases the drop matches the number of intentionally blocked pages almost exactly. TL;DR - good news, everybody: the robots.txt fix didn't reduce the crawl depth of the Moz crawler!
-
I agree. Unfortunately, Moz doesn't have an internal disallow feature that lets you tell them where rogerbot can and can't go. I haven't come across any issues with this approach, and crawl depth by search engine bots won't be affected, since the user-agent is specified.
-
Thanks for the solution! We have been running into a similar issue with some of our sites, and although I'm not a big fan of this type of workaround, I don't see any other options and we want to focus on the real issues. You don't want to simply ignore the rule, in case other pages that should be indexed are marked noindex by mistake.
Logan, are you still getting the same crawl depth after making this type of fix? Have any other issues arisen from this approach?
Let us know
-
Hi Nichole,
You're correct to noindex these pages; they offer little to no value from an SEO perspective. Moz is always going to alert you to noindex tags when it finds them, since that tag is such a critical issue when it shows up in unexpected places. If you want to remove these issues from your crawl report, add the following directives to your robots.txt file; this will prevent Moz from crawling these URLs and therefore from reporting on them:
User-agent: rogerbot
Disallow: /tag/
Disallow: /category/
Edit: do not prevent all user-agents from crawling these URLs, as that will prevent search engines from seeing your noindex tag; they can't obey what they aren't permitted to see. If you want, once all tag & category pages have been removed from the index, you can update your robots.txt to remove the rogerbot directive and add the disallows for tag & category to the * user-agent.
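For example, once the tag and category URLs have dropped out of the index, the swapped-around robots.txt that the edit describes might look something like this (a sketch; adjust the paths to match your own URL structure):
User-agent: *
Disallow: /tag/
Disallow: /category/
Until that point, keep the disallows under the rogerbot user-agent only, so search engine crawlers can still reach the pages and see the noindex tag.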