Dismiss crawl diagnostics error
-
Hello everyone,
Is there a way to dismiss some errors in the Crawl Diagnostics tool so they don't appear again? Some of the errors are never going to be fixed because of their nature. For example, 'Title too long' errors that point to some of the threads on my forum - it doesn't make sense to change the title of a thread posted by a user just to make the error disappear from the Crawl Diagnostics tool.
Otherwise the Crawl Diagnostics interface gets cluttered with errors that I will never fix anyway.
I wonder how others deal with this problem.
Thanks.
-
Ryan, I completely agree. A list, whatever it may be, is of little value unless you can tick/check items on that list as completed.
However, overall the Crawl Diagnostics tool is very helpful.
-
I asked the help desk this exact question. In short, the answer was no.
The official explanation was that the tool is designed to err on the side of offering too much information rather than too little.
My reply: that's not helpful. A report is useful when it provides data that is either informative or actionable. The very first time I looked at a site crawl report and noticed the "title too long" errors, I investigated each one. I resolved many issues, but for many others I acknowledged the problem and decided no action would be taken.
The site involved has a forum section. The forum software automatically appends the site's title to all page titles, which is not a bad thing at all. Also, the thread titles are controlled by users, and sometimes they choose lengthy titles. I could adjust the scripts to cut titles off at 70 characters, but why? To satisfy a specific tool which is designed to assist me? That's an approach I decided against long ago.
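For what it's worth, the cut-off described above would only take a few lines. A minimal sketch in Python - the 70-character limit matches the warning threshold mentioned, and the function name is made up for illustration:

```python
# Hypothetical sketch: shorten a user-submitted thread title to a maximum
# length (70 characters here), cutting at a word boundary and appending an
# ellipsis when the title was actually shortened.
def truncate_title(title: str, limit: int = 70) -> str:
    if len(title) <= limit:
        return title
    # Reserve one character for the ellipsis, then back up to a space
    # so we don't cut a word in half.
    cut = title[: limit - 1]
    if " " in cut:
        cut = cut[: cut.rfind(" ")]
    return cut.rstrip() + "…"

long_title = "A very long user-submitted thread title " * 3
short = truncate_title(long_title)
```

Whether doing this just to quiet a diagnostics tool is worthwhile is, of course, exactly the point being argued against here.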
I whole-heartedly agree: we should be able to say "hey Roger, thanks for letting me know. The first time was informative, but now you are being a pain in the ass." I still wish to be informed about NEW title-too-long errors, but I want to be able to disable the warning for these "errors" which have already been acknowledged.
Until the Moz team upgrades the tool, or someone creates a better crawl tool, we are stuck with this issue. I would be interested to hear if anyone has found a better tool elsewhere.
Related Questions
-
Why Only Our Homepage Can Be Crawled Showing a Redirect Message as the Meta Title
Hello everyone, recently, when we checked our domain with a Moz Crawl Test and Screaming Frog, only the homepage came up, and its meta title said "You are being redirected to…". We have several pages that used to come up, and when submitting them to GSC no issues appear. The robots.txt file looks fine as well. We thought this might be server-related, but it's a little outside our field of expertise, so we wanted to find out if anyone has experience with this (possible causes, how to check, etc.) or any suggestions. Any extra insight would be really appreciated, and please let us know if there is anything we could provide further details on that might help. Looking forward to hearing from all of you! Thanks in advance. Best,
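One offline-testable way to investigate a problem like this is to record the redirect chain yourself and see where requests actually end up. A rough sketch with Python's standard library - the `ChainRecorder` class name and the URL in the usage comment are invented for illustration:

```python
# Hypothetical sketch: a redirect handler that records every hop, so you
# can see exactly where a "You are being redirected to..." page sends you.
import urllib.request

class ChainRecorder(urllib.request.HTTPRedirectHandler):
    def __init__(self):
        self.hops = []  # list of (HTTP status code, target URL)

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        self.hops.append((code, newurl))
        return super().redirect_request(req, fp, code, msg, headers, newurl)

# Usage (requires network access):
# recorder = ChainRecorder()
# opener = urllib.request.build_opener(recorder)
# opener.open("https://www.example.com/")
# print(recorder.hops)
```

If the chain loops or every URL funnels into the homepage, that points at a server or rewrite-rule misconfiguration rather than anything in robots.txt.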
Moz Pro | Ben-R
-
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites. I want to see how these agencies are doing compared to what the public searches for on technical topics and social issues that the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting - I am a newbie to SEO and the campaigns I slapped together a few months ago need to be set up better, such as all on the same day, making sure I've set them to include www or not for what ranks, refining my keywords, etc. I am stumped on what to do about the agency websites being really huge, and what the options are for getting good data in light of the 50,000-page crawl limit.

Here is an example of what I mean. To see how EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence. www.epa.gov has 560,000 pages - if I put in www.epa.gov for a campaign, what happens with the site having so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get? www.epa.gov/air has only 1,450 pages, so if I choose this for a campaign, the crawl will cover that sub-folder completely and I get a complete picture of this air-focused sub-folder, but (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like there is so much of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true - I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air quality sites - and maybe those competitors count towards the 50,000-page crawl limit and would get me up to it? How do the competitors you choose figure into the crawl limit?)

Any opinions on what I should do in this kind of situation? The small sub-folder vs. the full humongous site - or is there some other way to go here that I'm not thinking of?
Moz Pro | scienceisrad
-
Crawl Diagnostics - Historical Summary
As we've been fixing errors on our website, the crawl diagnostic graphs have been showing great results (top left to bottom right for errors). The problem is the graphs themselves aren't very pretty, so I can't use them in my internal reports (all internal reports use standardised colours/formats).

Is there any way of exporting the top-level summary with historic data so the graphs can be recreated in company colours? I don't want the detailed CSV breakdown of which errors occurred, but rather that on X date there were Y errors, the next month Z errors, and so forth. The data must already be in the SEOmoz system in order to create the graphs themselves - I was hoping it could be made available to us if it isn't already.

Does anyone know if there is already a way of doing this? I've tried to 'inspect element' and find the underlying data in the source code, but to no avail, and I can't see any exports that would do this. Thanks in advance. Dean
Moz Pro | FashionLux
-
Find a 4xx or 5xx link referenced in an SEO Crawl Report
So I just got the Crawl Diagnostics report for a client site and it came back with a number of 4xx errors and even one 5xx error. While I can find the URL that has the problem, I cannot find the pages that contain the links pointing to these non-existent or problematic pages. Normally I would just search the site's database, but in this case I don't have access to it, as the site is on a proprietary platform with no access other than to the CMS. Is there any way to get the linking URL from the report? Thanks!
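Without database access, one workaround is a small crawl of your own that remembers which page each link came from, so any 4xx/5xx target can be traced back to its linking pages. A rough standard-library sketch - the `crawl` function, page limit, and overall approach are illustrative assumptions, and a real run would need network access plus politeness controls (delays, robots.txt):

```python
# Hypothetical sketch: crawl a site, recording for every URL which pages
# linked to it, and noting any URLs that return HTTP errors.
from html.parser import HTMLParser
from urllib.error import HTTPError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=50):
    referrers = {}  # target URL -> set of pages linking to it
    broken = {}     # target URL -> HTTP status code
    queue, seen = [start_url], {start_url}
    while queue and len(seen) <= max_pages:
        page = queue.pop(0)
        try:
            html = urlopen(page).read().decode("utf-8", "replace")
        except HTTPError as err:
            broken[page] = err.code  # referrers[page] tells you who links here
            continue
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            target = urljoin(page, href)
            referrers.setdefault(target, set()).add(page)
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return referrers, broken
```

After a run, `referrers[broken_url]` is exactly the set of linking pages the report doesn't show you.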
Moz Pro | farlandlee
-
Why does SEOMoz only crawl 1 page of my site?
My site is: www.thetravelingdutchman.com. It has quite a few pages, but for some reason SEOMoz only crawls one. Please advise. Thanks, Jasper
Moz Pro | Japking
-
"Issue: Duplicate Page Content " in Crawl Diagnostics - but sample pages are not related to page indicated with duplicate content
In the Crawl Diagnostics for my campaign, the duplicate content warnings have been increasing, but when I look at the sample pages that SEOmoz says have duplicate content, they are completely different pages from the page identified. They have different titles, meta descriptions, and HTML content, and often are different types of pages - e.g. a product page flagged as a duplicate of a category page. Does anyone know what could be causing this?
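If you want to sanity-check a reported pair yourself, one rough approach is to compare a fingerprint of the stripped page text. This is an illustration only - it is not how Moz's crawler scores similarity (which is fuzzier than an exact match), but it quickly confirms whether two pages are literally identical after markup is removed:

```python
# Hypothetical sketch: strip tags, normalise whitespace and case, and hash
# the remaining text so two pages can be compared for exact duplication.
import hashlib
import re

def content_fingerprint(html: str) -> str:
    text = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)  # drop scripts/styles
    text = re.sub(r"<[^>]+>", " ", text)                      # drop remaining tags
    text = re.sub(r"\s+", " ", text).strip().lower()          # normalise whitespace
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

a = content_fingerprint("<html><body><h1>Product A</h1></body></html>")
b = content_fingerprint("<html><body><h1>Category B</h1></body></html>")
```

If two "duplicate" pages produce different fingerprints even after stripping navigation, the warning is likely triggered by shared boilerplate rather than the unique content.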
Moz Pro | EBCeller
-
How long should the weekly crawl take
Mine started yesterday afternoon and it's now almost 11pm on Sunday - 30+ hours and still not finished (and no progress indicator), with 438 pages quoted as being crawled. That's not normal, right? I have made a bunch of changes based on last week's crawl, so I have been eagerly waiting for this one to finish. But 30 hours?... Thanks. Mark
Moz Pro | MarkWill
-
Crawl Diagnostics and missing meta tags on noindex blog pages
Hi guys/gals. We do love Crawl Diagnostics, but we find the missing meta tag warnings ("Missing Meta Description Tag" in this case) somewhat spammy. We use the "All in One SEO Pack" for our blog, and it adds noindex,follow (as it should) on the pages that are of no use to us - "2008/04/page/2/" and the like. Maybe I'm wrong, but shouldn't the diagnostics tool respect the noindex tag and just ignore these warnings, since noindex means these pages are NOT included in the search index? That makes the other meta tag warnings useless for those pages. Any thoughts?
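The behaviour being asked for amounts to a pre-filter: only warn about a missing description when the page is actually indexable. A minimal illustration in Python (the class and function names are invented, and a production check would also need to consider the X-Robots-Tag header):

```python
# Hypothetical sketch: suppress "missing meta description" warnings on
# pages whose robots meta tag already marks them noindex.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False
        self.has_description = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        name = (d.get("name") or "").lower()
        content = (d.get("content") or "").lower()
        if name == "robots" and "noindex" in content:
            self.noindex = True
        if name == "description":
            self.has_description = True

def should_warn_missing_description(html: str) -> bool:
    parser = RobotsMetaParser()
    parser.feed(html)
    return not parser.noindex and not parser.has_description
```

With a check like this, paginated archive pages carrying noindex,follow would simply never appear in the missing-description report.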
Moz Pro | sfseo