Is there a way to turn off Meta Noindex warnings?
-
My site is WordPress, and indexing tags, categories, dates, and similar archives just ends up creating potential duplicate page content, so following the advice in the Yoast SEO plugin, we have set those listing types to noindex. However, Moz now flags me with 1.7k+ noindex warnings. I know it is not hurting anything, but is there any way to disable that warning to clean up my crawl error report?
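For what it's worth, Yoast's archive noindex setting works by printing a robots meta tag on those archive pages, so you can spot-check any of them directly before worrying about the report. Below is a rough Python sketch (not part of Moz or Yoast; the URL is a placeholder to swap for one of your own tag/category/date archives):

import re
import urllib.request

# Placeholder URL -- point this at one of your own tag/category/date archive pages.
ARCHIVE_URL = "https://example.com/tag/sample-tag/"

with urllib.request.urlopen(ARCHIVE_URL) as response:
    html = response.read().decode("utf-8", errors="replace")

# Find every <meta ...> tag, then look for the one whose name is "robots".
robots_content = None
for tag in re.findall(r"<meta\b[^>]*>", html, re.IGNORECASE):
    if re.search(r'name=["\']robots["\']', tag, re.IGNORECASE):
        match = re.search(r'content=["\']([^"\']*)["\']', tag, re.IGNORECASE)
        if match:
            robots_content = match.group(1)
        break

if robots_content and "noindex" in robots_content.lower():
    print("noindex is in place:", robots_content)
else:
    print("No noindex robots meta tag found -- re-check the Yoast archive settings.")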
-
Hey there! You can actually ignore an entire issue type at once. Just navigate to the issue type and click the "Ignore issue type" button in the top right corner of the page. This will ignore all pages with this issue type going forward.
-
If there is, I'd like to know too. I've done this before for about 3.5k pages. Took me a while.
-
That was my first approach, but faced with the daunting task, I thought I'd ask if there was a better, faster, more efficient way. Guessing not. Thanks
-
Yeah, if you go into the report, there will be a tick box next to each line. Tick the one at the top to select all of the ones on that page, then click Ignore. You can only tick 100 at a time, though, so you might have to go through a few pages' worth of results.
Related Questions
-
Meta Noindex report: exclude URLs and delete all warnings at once
Dear community, I have a question about the "Site Crawl" > "Crawler Warnings" > "Meta Noindex" report. I see 3.6K errors, all with the same URL base: /register?url={{xxxxxxxx}}. The page has the following robots meta tag:
1. Can I exclude some URLs from being crawled by the moz-spider bot? If yes, how?
2. Can I use the 'mark as fixed' or 'ignore' functionality for all URLs at once? Right now, I have to execute the action 36 times to 'ignore' all 3,600 errors. Hope someone can help me 🙂
Feature Requests | bettingfans.com
-
Will Moz be updating their tools with regard to the new meta description length?
Now that Google has increased the number of characters allowed for meta descriptions/snippets, when will Moz be updating the tools to cater for the new lengths? I'm sure a lot of my meta descriptions that are being flagged as too long will disappear once they're updated. Cheers, Lee B
Feature Requests | lbagley1
-
Will the Pro tool be updated to reflect the new meta description length?
Hi, I'm wondering if and when the tool will be updated for the above purpose. I'm currently testing it out with a view to purchasing a subscription and have onboarded a couple of clients, but the reports are coming back with meta description length errors, and as I'm sure you know, the "legal" length of these was recently increased to, I believe, 320 characters. Any insight here would be awesome - thanks so much!
Feature Requests | pubcrawler0132
-
What is the best way to display historical ranking data?
I have used Moz for years but have always struggled to find a nice graph that illustrates historical ranking data for all tracked keywords. Can someone help me find the best solution for this in Moz?
Feature Requests | WebMarkets0
-
Is there any way to filter by relevancy first and then volume second? Right now I just export the results of Keyword Explorer and do it offline. It would be great if I could do it online.
I'm trying to sort the results of a keyword search in Keyword Explorer by relevancy first and volume second, but the minute I select volume, the relevancy ordering is completely lost. I know I can export them and manipulate them in Excel (see the sketch just below), but is there a feature that allows me to do this in Moz?
Feature Requests | Anerudh0
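For that offline step, a short pandas sketch like the one below does the two-level sort. The file name and the "Relevancy" / "Monthly Volume" column names are assumptions about the Keyword Explorer export, so adjust them to match the actual headers.

import pandas as pd

# Load the CSV exported from Keyword Explorer.
# "keyword-export.csv" and the column names below are assumptions --
# rename them to match the headers in your own export.
df = pd.read_csv("keyword-export.csv")

# Sort by relevancy first, then use search volume as the tie-breaker, both descending.
df = df.sort_values(by=["Relevancy", "Monthly Volume"], ascending=[False, False])

df.to_csv("keywords-sorted.csv", index=False)
print(df.head(10))

-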
Crawl test limitation - ways to take advantage of large sites?
Hello, I have a large site (120,000+ pages) and the crawl test is limited to 3,000 pages. I want to know if you have a way to take advantage of the crawl for this type of site. Can I use a regular expression, for example? Thanks!
Feature Requests | CamiRojasE0
-
Is there a way to filter the new KW lists by KWs that are triggering certain SERP features? (Videos, Quick Answers, etc.)
I've been creating KW lists using the new Moz Pro tool and I love the quick visualization of all the different SERP features those keywords are triggering in aggregate. However, I'd love to be able to filter the list to see which keywords are triggering videos without having to do a SERP analysis one at a time to see the listings. Is there a way to do that? Thanks!
Feature Requests | digitasseo0
-
Is there a way to schedule automatic weekly .csv reports for the Tracked Keywords Overview?
Using the Custom Reports tool, I only managed to get PDF reports. It would be useful to automatically receive .csv reports by email. Any idea how?
Feature Requests | Digital-Sun0