High Number of Crawl Errors for Blog
-
Hello All,
We have been having an issue with a very high number of crawl errors on websites that contain blogs. Here is a screenshot of one of the sites we are dealing with: http://cl.ly/image/0i2Q2O100p2v .
Looking through the links that are turning up in the crawl errors, the majority of them (roughly 90%) are auto-generated by the blog's system. This includes category/tag links, archive links, etc. A few examples:
http://www.mysite.com/2004/10/
http://www.mysite.com/2004/10/17/
As far as I know (please correct me if I'm wrong!), search engines will not penalize you for things like this that appear on auto-generated pages. Also, even if search engines did penalize you, I do not believe we can add a unique meta tag to auto-generated pages. Regardless, our client is very concerned about seeing this high number of errors in the reports, even though we have explained the situation to him.
Would anyone have any suggestions on how to either 1) tell Moz to ignore these types of errors or 2) adjust the website so that these errors no longer appear in the reports?
Thanks so much!
- Rebecca
-
Hi Rebecca,
What are the crawl errors exactly? From that report screenshot it looks like you have a variety of them, so the fixes will all be different.
Let me know, and in the meantime you might want to check out my article on Moz about setting up WordPress.
-Dan
-
It is true that you will most likely not be penalized for these pages. In my opinion, Google is pretty good at figuring out common canonicalization problems and is unlikely to penalize you for having duplicate content. I would encourage you to dig a little deeper, though, and see what additional problems these pages could create.
Consider that Google will waste valuable crawl bandwidth on these meaningless pages rather than focusing on the important content you want it to crawl. And if Google is crawling them, you can bet that PageRank is flowing through these pages as well, diluting the link equity of your site.
Are you using WordPress? There are a lot of great plugins that can help you manage these pages. You could control how Google crawls and indexes them with your robots.txt file, by placing meta robots tags on the pages using a plugin, or by placing rel=canonical tags on the pages pointing back to the original source; a rough sketch of each option is below.
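To make those options concrete, here is a sketch of what each one might look like for date-archive URLs like the examples above. These are three separate alternatives rather than one file, the paths and URLs are hypothetical placeholders, and note that a page blocked in robots.txt can't also pass along a meta robots or canonical signal, because crawlers never fetch it:

```
# Option 1: robots.txt -- block crawling of the date archives
# (assumes they all live under /YYYY/... paths; you would add one line
#  per year, or a wildcard pattern for engines that support them)
User-agent: *
Disallow: /2004/

<!-- Option 2: a meta robots tag in the <head> of each archive page,
     usually added via an SEO plugin -->
<meta name="robots" content="noindex, follow">

<!-- Option 3: rel=canonical on a duplicate page, pointing back to the
     original source of the content (the URL here is just a placeholder) -->
<link rel="canonical" href="http://www.mysite.com/original-post/">
```

For tag and date archives, noindex,follow via a plugin is a common middle ground: the pages stay crawlable and their internal links are still followed, but they are kept out of the index.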
Related Questions
-
Why does Moz only seem to be crawling a snapshot of the site I am working with?
I was wondering if anyone can help. I am using Moz to improve the SEO on a website I am working with. The website contains thousands of pages, yet for some reason Moz only seems to be crawling a small snapshot of it. I know there are particular pages I added a couple of weeks ago - about 300 in total - and none of these showed up on the first crawl, so I ran another on-demand crawl and some of them appeared then. Even so, it says it crawled around 700 pages, while there are close to 20,000-30,000 live pages on the site. Any thoughts or guidance as to why the crawling may be stopping?
-
5xx Crawl Issues might not be issues at all - help
Hi, I ran a crawl test on our website and it came back with 900 potential 5xx errors. When I started opening these links one by one, I could see they were actually working. So I exported the full list of 900, went to https://httpstatus.io/, and pasted the links in batches of 100. They came back with status codes of 301 / 301 / 200, which I believe means they are okay. From what I've read, my programmer may need to check whether we are blocking the Moz bot or need to slow the Moz bot down. I guess I'm wondering: if this is not done, is the site actually returning these 5xx errors when Google is crawling, or is it just showing 900 errors because of the Moz bot while things are actually okay? I know the simple answer is to get the programmer to look into the Moz bot issue to know for sure, but getting programmers to do things takes a lot of time, so I'm trying to get a better idea here. Thanks for your input.
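For reference, slowing Moz's crawler down is usually just a robots.txt change rather than a programming task. A minimal sketch, assuming the crawler identifies itself as rogerbot and honours the Crawl-delay directive (the 10-second value is an arbitrary example, not a recommendation):

```
# robots.txt sketch -- throttle Moz's crawler without blocking it
User-agent: rogerbot
Crawl-delay: 10
```

It is also worth having whoever manages the server check the access logs for rogerbot's user agent, to see whether the 5xx responses line up with bursts of crawl activity, which would point at rate limiting rather than a real server fault.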
-
Moz not able to crawl our site - any advice?
When I try to crawl our site through Moz, it gives this message: Moz was unable to crawl your site on Aug 7, 2019. Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster. I have been through all the help documentation and there don't seem to be any issues. You can check the site and robots.txt here: https://myfamilyclub.co.uk/robots.txt. Anyone got any advice on where I could go to get this sorted?
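For anyone hitting the same message, the crawler is reacting to one of the three directives the error text lists. A hypothetical sketch of what a robots.txt ban can look like, and one way to explicitly allow Moz's crawler (user-agent token assumed to be rogerbot):

```
# A blanket rule like this blocks rogerbot along with every other bot:
User-agent: *
Disallow: /

# ...whereas adding a separate group for rogerbot (an empty Disallow
# means nothing is disallowed) lets Moz crawl even if others stay blocked:
User-agent: rogerbot
Disallow:
```

If robots.txt looks clean, the other two places the message mentions are a meta robots tag in the page HTML (e.g. a noindex/nofollow value) and an X-Robots-Tag header in the HTTP response, which you can check with your browser's developer tools or a header-checking service.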
-
Moz unable to crawl my Zenfolio website
Hey guys, I am attempting to optimize a website for my wife's business, but Moz is unable to crawl it. Zenfolio is the web hosting service (she is a photographer). The error message is: Moz was unable to crawl your site on Apr 1, 2019. Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster. Read our troubleshooting guide. I did read the troubleshooting guide, but nothing worked. My robots.txt file disallows a few bots, but not rogerbot. Anyone have any idea what is going on? Or do I need to request server logs from Zenfolio? Thanks
-
Does Moz pick up every issue in one crawl?
Hi, does Moz pick up every error/warning in one crawl, or does it take numerous crawls? Many thanks, Lee
-
Page Count per campaign - Crawl Usage 500,000 Pages
How do you find the crawled page count per campaign? I have 3 campaigns, and my Moz stats say I have used 150,000 pages out of 500,000. I want to check this. Thanks
-
Mozbot Cannot Crawl Entire Domain
I'm trying to crawl Redken.com in Moz Analytics, and Search Diagnostics is only crawling 4 pages. The domain shows a "select your country" page the first time you visit, and it seems as though the bot is not getting beyond that (i.e., not clicking on "USA") and is therefore not crawling the rest of the domain. There is no country-specific URL other than redken.com. I've tried entering both "redken.com" and "www.redken.com" as the URL, but no luck. Any tips?
-
Crawl test
Can anyone give me an idea how to use the Moz crawl test results? I'm a little confused about how to read them. I have a lot of "no's"... I think this is good?