Crawl Diagnostics Error Spike
-
With the last crawl update to one of my sites there was a huge spike in errors reported. The errors jumped by 16,659, the majority of which fall under the duplicate title and duplicate content categories.
When I look at the specific issues, it seems that the crawler is crawling a ton of blank pages on the site's blog through pagination.
The odd thing is that the site has not been updated in a while, and prior to this crawl on Jun 4th there were no reports of these blank pages.
Could this be an error on the crawler's side of things?
Any suggestions on next steps would be greatly appreciated. I'm adding an image of the error spike.
-
This would be another issue. I would need to look at the code to give you more insight, but off the bat I assume this is an issue with mislabeling the rel=next and rel=prev tags. They can be tricky to work with in a broad-based update because they are intended to refer to specific pages. If you do not have the end page labeled, Google says:
"When implemented incorrectly, such as omitting an expected rel="prev" or rel="next" designation in the series, we'll continue to index the page(s), and rely on our own heuristics to understand your content."
I would look into this first. If the answer is still elusive, the next option would probably be getting a different set of eyes on the code to catch any minor oversights.
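For reference, a correctly labeled series carries reciprocal tags in the head of each paginated URL. A minimal sketch, assuming the /Blog/?page= structure discussed in this thread and a placeholder domain; on /Blog/?page=2 the head would contain:
<link rel="prev" href="http://www.example.com/Blog/?page=1">
<link rel="next" href="http://www.example.com/Blog/?page=3">
The first page of the series carries only rel="next" and the final page only rel="prev". A dangling rel="next" pointing at a blank page is exactly the kind of mislabeling that could produce the errors described above.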
-
One last thing:
It seems that I have a game plan for addressing this issue, but as I think about it, one thing about the way Roger crawled the site has me concerned.
The site has maybe 100 articles in total, which would account for ?Page=10, but what I'm seeing is errors on ?Page=104. When you look at that page, it's blank. Where is Roger coming up with that parameter?
Do you think this is a Roger issue or something else?
-
Makes sense
-
Unless you have some super-secret page buried deep in your site that you can ONLY get to from those pages, it wouldn't make sense to have crawlers follow the links. All that will happen is they land on the next page, scrape it, hit the noindex tag, and move on. They won't index it, and this just wastes your site's bandwidth and slows everything else down. If a page is noindexed, it should usually be nofollowed as well, unless you are looking to track conversions or reach some specific page only navigable through those pages.
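In meta tag terms, the two variants look like this (a sketch; which one fits depends on whether you want the links on those pages crawled):
<!-- kept out of the index, but links on the page are still crawled -->
<meta name="robots" content="noindex, follow">
<!-- kept out of the index, and links on the page are ignored -->
<meta name="robots" content="noindex, nofollow">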
-
Hey Jake;
What's your opinion on using "nofollow" vs. "follow" on the pages I'm blocking from indexing? Is there a reason to prevent crawlers from following the links on these pages?
-
Cool, glad we could help!
If you want to clean up your code and are posting the tag site-wide, I would recommend the "none" value.
It accounts for both:
noindex, nofollow
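In other words, a sketch of the shorthand:
<meta name="robots" content="none">
is equivalent to:
<meta name="robots" content="noindex, nofollow">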
-
Thank you again for the input. The goal here is to provide accurate reporting and ensure that the site conforms to the search engines' requirements.
Currently the "?page=" parameter is not blocked through the noindex tag; it sounds like this may be the issue.
I will update the code to address that and see what kind of results we get with the next update. I think this is best addressed at the code level rather than in the robots.txt.
Thanks
-
Roger crawls like Googlebot and takes his hints from the robots.txt file, so whatever Roger is seeing is usually what the other spiders are seeing as well. From time to time I have encountered slight glitches in the SEOmoz crawler as they change and update their algorithm.
When it comes down to it, Google examines a link profile through a microscope akin to the Large Hadron Collider, whereas we have to examine it through a magnifying glass from 1935.
The wonderful people here at SEOmoz are always trying to give us a better view, but it is still imperfect. I would say if all else fails and this report continues to show errors in Moz, then pull your client reports directly from Webmaster Tools.
-
**How do I tell Roger not to crawl these blank pages?**
An easy solution is to block Roger in robots.txt:
User-agent: rogerbot
Disallow: [enter pages you do not wish to be crawled]
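For example, to keep rogerbot out of the paginated blog URLs discussed in this thread, a sketch (this assumes the /Blog/?page= structure above and the crawler's support for the * wildcard):
User-agent: rogerbot
Disallow: /*?page=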
But a better solution would be to fix the root problem. If your only goal is to provide clean reporting to your client, the above will work. If your goal is to ensure your site is crawled correctly by Google/Bing, then Jake's suggestion will work; you can also help Google and Bing understand your site by telling them how to handle parameters.
I would prefer to fix the root issue, though. Do the pages which are being reported as duplicate content have the "noindex" tag on them? If so, you can report the issue to the Moz help desk (help@seomoz.org) so they can investigate the problem.
-
Hey Jake;
Thanks for your feedback. I did make some changes to the code (posted in the reply to Jamie). I'll take a closer look at Webmaster Tools to make sure things are OK on that end.
FYI: the rel=prev / rel=next tags are implemented.
I added code to insert a noindex tag on pages that are accessed through:
- /Blog/?tag=
- /Blog/category/
- /Blog/archive.aspx
As a secondary concern: with Roger now reporting all these issues in SEOmoz, and since I provide these reports to my clients, having 16k errors is not a good PR thing. How do I tell Roger not to crawl these blank pages?
-
It looks like Roger found his way into your variable URLs!
This could definitely cause a problem if the engine crawlers are seeing this path as well. Have you made any changes to the code on your site or the URL structure lately?
Regardless, you might want to examine your Webmaster Tools settings for both Google and Bing.
But good news, everyone! Google has a few ways to go about fixing the indexation. You will want to check the blocked URLs under the Health menu; this will show you which pages are and are not blocked. If you notice that the Head Match term you are looking to exclude is not listed, make sure you add the term to the robots.txt file on your site. Other fixes include canonicalization tagging or implementing the rel=prev / rel=next tags. There are a few other ways that are more complicated, and I recommend avoiding them unless absolutely necessary.
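As an illustration, the canonicalization option would put a tag like this in the head of each paginated URL (a sketch with a placeholder domain; note that pointing every ?page= URL at the main blog page consolidates the duplicates, but it also tells engines to ignore the paginated content itself):
On /Blog/?page=104:
<link rel="canonical" href="http://www.example.com/Blog/">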
Bing is a little different but just as easy. In the Bing Webmaster Tools, under the Index tab, there is a tool called URL Normalization; you can tell the crawlers to exclude a portion of the query string without changing anything in your database. It also automatically finds and suggests query parameters for normalization. This is a recent change for Bing and could account for the sudden jump in warnings.
I hope this helps and you keep being awesome!
-
Hey Jamie;
In an effort to block crawling of pages on the blog that are essentially duplicating content, I added code (on 4/16) to insert a noindex tag on pages that are accessed through:
/Blog/?tag=
/Blog/category/
/Blog/archive.aspx
I did not do this for
/Blog/?page=
There were no changes to the robots.txt
There were no updates to the canonical tag
There were no updates to pagination
Thanks for your prompt reply
-
Can you share what changes have been made to the site? A few ways this can happen are:
- a change to the robots.txt file
- a change to your site's template, either removing a canonical tag or a noindex tag, or altering your pagination in any way, such as modifying paginated titles
- resolving an onsite issue which prevented crawling of these pages