Crawl Errors and Notices drop to zero
-
Hi all,
After setting up a campaign in Moz, the crawl completed successfully and showed errors and warnings in Crawl Diagnostics (about 40-50 of each), but after a few days the numbers dropped to zero. Only the notices seem to stay normal, with a slight drop since the campaign was set up, but not falling to zero. I set this campaign up in a colleague's account and the same thing happened shortly after setup. I didn't find any Q&A already posted on this, so any insight is appreciated!
-
Glad I could help!
-
Thank you for looking into this. Really appreciate it!
-
Hey Vanessa,
Every URL that is in the report you forwarded is on the blog, so it definitely looks like the noindex tag on the blog is the reason for the drop in crawl errors and warnings. If you prefer that we begin crawling the blog again, you can have that tag removed, but the tag also means that the search engines aren't indexing those pages or finding those errors any longer either.
Let me know if you have any other questions.
Chiaryn
-
Thanks for looking into this, Chiaryn. That is the correct campaign. I have a report from April 17 and I sent it to help@seomoz.org. If you can shed any light on this that would be a big help. I appreciate it!
-
Hey Vanessa,
Thanks for writing in.
I looked into your account and I think you are referring to the Sparky campaign. Unfortunately, I can only see the most recent crawl data, so I don't have a way to compare the crawls from prior to April 24th to see why the number of errors and warnings would have dropped off around that time.
I do see that we picked up a noindex, nofollow tag on the blog pages on April 16th, so it may be that we were crawling other pages on the blog that had errors and warnings before the tag was added. But once the noindex, nofollow tags were added, we weren't able to crawl those pages and report back on the errors.
If you can think of any other changes that may have taken place around April or if you have an old report that shows some of the URLs that were reported as having errors, I can look into this further for you. If you prefer not to include the error report on this public forum, you can always email it to help@seomoz.org and include my name in the subject line.
I hope this helps.
Chiaryn
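Chiaryn's diagnosis hinges on a robots meta tag being present on the blog pages. As a rough illustration of what a crawler checks for (the sample HTML and the class name are hypothetical; only the Python standard library is used), a minimal sketch might look like this:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the directives from any <meta name="robots"> tag in a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            # Directives are comma-separated, e.g. "noindex, nofollow"
            self.directives += [d.strip().lower()
                                for d in attrs.get("content", "").split(",")]

# Hypothetical page snippet like the one Moz picked up on April 16th
html = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
parser = RobotsMetaParser()
parser.feed(html)
print("noindex" in parser.directives)  # True
```

A crawler that finds "noindex" this way would skip reporting on the page, which matches the behavior described above: the errors didn't get fixed, they simply stopped being visible.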
Related Questions
-
Crawler errors or page load time? Which affects SEO more?
Hello, I have a page with a forum, and at the moment the Moz report says it has 15.1k issues such as URL too long, meta noindex, title too long, etc. But this page also has a really slow load time of 11 seconds. I know I need to fix all those errors (I'm working on it), but what matters more for SEO: the page load time, or that type of error, like duplicate titles? Thank you!
Moz Pro | DanielExposito
Big drop in Domain Authority score
Hi, I am managing a website that in May had a DA of 67. Then we installed SSL, dropped to 26, and we're still there. I do not know how I can change my site to regain some of the DA. It is a WordPress site, we used a free SSL certificate, and I have detected no problems with the 301 redirects. Do you have any insights or tips?
Moz Pro | Stine-Dahl
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. Among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of; basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few related topics, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
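As a side note on how a wildcard Disallow rule like the one in the question is interpreted by crawlers that support `*` (Python's standard `urllib.robotparser` does not implement wildcards, so this sketch uses a hand-rolled matcher; the helper name and sample paths are hypothetical):

```python
import re

def wildcard_rule_matches(rule: str, path: str) -> bool:
    """Return True if a robots.txt wildcard rule blocks the given path.

    '*' in a rule matches any run of characters; '$' anchors the end.
    The rule is matched as a prefix, mirroring robots.txt semantics.
    """
    pattern = re.escape(rule).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(pattern, path) is not None

blocked = "/*numberOfStars=0"
print(wildcard_rule_matches(blocked, "/widgets?numberOfStars=0"))  # True
print(wildcard_rule_matches(blocked, "/widgets?numberOfStars=5"))  # False
print(wildcard_rule_matches(blocked, "/about/"))                   # False
```

If this matcher reflects how dotbot and rogerbot treat wildcards, the rule would block only URLs containing numberOfStars=0 and leave other parameterized pages intact, which is the behavior the question is after.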
Rogerbot crawls my site and causes error as it uses urls that don't exist
Whenever the rogerbot comes back to my site for a crawl it seems to want to crawl urls that dont exist and thus causes errors to be reported... Example:- The correct url is as follows: /vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/ But it seems to want to crawl the following: /vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/?id=10330 This format doesn't exist anywhere and never has so I have no idea where its getting this url format from The user agent details I get are as follows: IP ADDRESS: 107.22.107.114
Moz Pro | | spiralsites
USER AGENT: rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, rogerbot-crawler+pr1-crawler-17@moz.com)0 -
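One common way to keep parameter variants like ?id=10330 from being treated as distinct pages (a general mitigation, not something suggested in this thread) is to emit a rel=canonical URL with the query string stripped. A minimal sketch, where the function name and example URL are hypothetical:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Drop the query string and fragment so parameter variants of a page
    all point at the same canonical URL."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(canonicalize("https://example.com/cab_door_seals/item/?id=10330"))
# https://example.com/cab_door_seals/item/
```

Serving that canonical value in a `<link rel="canonical">` tag tells crawlers which version of the URL to credit, even if the phantom ?id= variants keep turning up.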
Adjusting SEOmoz Crawling Speed
How do you adjust the SEOmoz crawling speed? SEOmoz tried to crawl 10,000 pages in 3 hours and crashed our MySQL server.
Moz Pro | cappuccino89
Crawl Diagnostics Report
I'm a bit concerned about the results I'm getting from the Crawl Diagnostics report. I've updated the site with canonical URLs to remove duplicate content, and when I check the site it all displays the right values, but the report, which has just finished crawling, is still showing a lot of pages as duplicate content. Simple example: http://www.domain.com and http://www.domain.com/ are both in the duplicate content section, although both have the canonical URL set as: Does each crawl check the entire site from the beginning, or just the pages it didn't have a chance to crawl the last time? This is just one of 333 duplicate content pages which have a canonical URL pointing to the right page. Can someone please explain?
Moz Pro | coremediadesign
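The two URLs in that example differ only in an empty path versus "/", which is why crawlers can count them as duplicates. A quick sketch of the normalization involved (the function name here is hypothetical):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_path(url: str) -> str:
    """An empty path and "/" name the same resource; normalize the
    empty path to "/" so the two variants compare equal."""
    parts = urlsplit(url)
    path = parts.path or "/"
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, ""))

print(normalize_path("http://www.domain.com") ==
      normalize_path("http://www.domain.com/"))  # True
```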
Moz crawling
Hi everyone! I'm new to SEOmoz and wanted to find out if there is a way to decrease the waiting time for the campaign crawl. I have made a lot of changes based on the first crawl and would like to see how they are reflected in the reports, but can't until the next crawl is performed. Any help would be greatly appreciated.
Moz Pro | coremediadesign
Can I change the crawl day?
Hi all, I hope there is a simple solution to this: we have a number of campaigns set up which are all crawled, and therefore updated, on different days of the week. We review these weekly, and it would be much easier if they were all crawled on the same day. Is it possible to change the crawl day for some campaigns? Thanks, Roy
Moz Pro | bluelogic