Should I set blog category/tag pages as "noindex"? If so, how do I prevent "meta noindex" Moz crawl errors for those pages?
-
From what I can tell, SEO experts recommend setting blog category and tag pages (e.g. "http://site.com/blog/tag/some-product") to "noindex, follow" in order to keep the overall quality of indexable pages high. However, I just received a slew of critical crawl warnings from Moz for having these pages set to "noindex." Should the pages be indexed? If not, why am I receiving critical crawl warnings from Moz, and how do I prevent them?
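For reference, the "noindex, follow" setting described here is a robots meta tag placed in the head of each tag/category page:

```html
<!-- Keeps the page out of the index (noindex) while still letting
     crawlers follow its links and pass equity (follow). -->
<meta name="robots" content="noindex, follow">
```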
-
In the situation outlined by the OP, these pages are noindexed. There's no value in cluttering up crawl reports with these pages. Block rogerbot from non-critical parts of your site; if you do want to be alerted of issues there, don't.
-
Thanks. I'm not concerned about the crawl depth of the search engine bots; nothing in your fix would affect that. I'm curious about the decrease in the Moz crawler's crawl depth on the site, since we use that to spot issues with the site.
One of the clients I implemented the fix on went from 4.6K crawled pages to 3.4K and the fix would have removed an expected 1.2K pages.
The other client went from 5K to 3.7K and the fix would have removed an expected 1.3K pages.
TL;DR - Good news, everybody: the robots.txt fix didn't reduce the crawl depth of the Moz crawler!
-
I agree. Unfortunately, Moz doesn't have an internal disallow feature that lets you tell them where rogerbot can and can't go. I haven't come across any issues with this approach; crawl depth by search engine bots will not be affected, since the user-agent is specified.
-
Thanks for the solution! We have been coming across a similar issue with some of our sites, and although I'm not a big fan of this type of workaround, I don't see any other options, and we want to focus on the real issues. You don't want to ignore the rule entirely, in case other pages that should be indexed are marked noindex by mistake.
Logan, are you still getting the same crawl depth after making this type of fix? Have any other issues arisen from this approach?
Let us know
-
Hi Nichole,
You're correct in noindexing these pages; they serve little to no value from an SEO perspective. Moz is always going to alert you to noindex tags when it finds them, since that tag is such a critical issue when it shows up in unexpected places. If you want to remove these issues from your crawl report, add the following directives to your robots.txt file; this will prevent Moz from crawling these URLs and therefore from reporting on them:
User-agent: rogerbot
Disallow: /tag/
Disallow: /category/

Edit: do not prevent all user-agents from crawling these URLs, as that would prevent search engines from seeing your noindex tag; they can't obey what they aren't permitted to see. If you want, once all tag & category pages have been removed from the index, you can update your robots.txt to remove the rogerbot directive and add the tag & category disallows under the * user-agent.
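Per the edit above, the eventual robots.txt, once all tag & category pages have dropped out of the index, might look like this (paths mirror the example; adjust to your site's structure):

```text
# Tag/category pages are already out of the index, so it's now safe
# to block all crawlers from them, rogerbot included.
User-agent: *
Disallow: /tag/
Disallow: /category/
```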