High Number of Crawl Errors for Blog
-
Hello All,
We have been having an issue with very high crawl errors on websites that contain blogs. Here is a screenshot of one of the sites we are dealing with: http://cl.ly/image/0i2Q2O100p2v .
Looking through the links that are turning up in the crawl errors, the majority of them (roughly 90%) are auto-generated by the blog's system. This includes category/tag links, archived links, etc. A few examples being:
http://www.mysite.com/2004/10/
http://www.mysite.com/2004/10/17/
As far as I know (please correct me if I'm wrong!), search engines will not penalize you for things like this that appear on auto-generated pages. And even if they did, I do not believe we can add a unique meta tag to auto-generated pages. Regardless, our client is very concerned about seeing this high number of errors in the reports, even though we have explained the situation to him.
Would anyone have any suggestions on how to either 1) tell Moz to ignore these types of errors or 2) adjust the website so that these errors no longer appear in the reports?
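On option 1: Moz's crawler (rogerbot) obeys robots.txt, so one approach is to block just that crawler from the date archives, leaving Google untouched. A minimal sketch, assuming your archives all live under year-based paths like the examples above (add one Disallow line per year, and adjust the paths to your own URL structure):

```
# Hypothetical robots.txt rules -- paths are illustrative.
# Block only Moz's crawler from the date archives; Googlebot is unaffected.
User-agent: rogerbot
Disallow: /2004/
Disallow: /2005/
```

Robots.txt matching is by path prefix, so `Disallow: /2004/` covers `/2004/10/`, `/2004/10/17/`, and so on.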
Thanks so much!
- Rebecca
-
Hi Rebecca
What are the crawl errors exactly? From that report screenshot it looks like you have a variety of them, so the fixes will all be different.
Let me know, and in the meantime you might want to check out my article on Moz about setting up WordPress.
-Dan
-
It is true that you will most likely not be penalized for these pages. In my opinion, Google is pretty good at figuring out common canonicalization problems and would most likely not penalize you for having duplicate content. I would still encourage you to dig a little deeper and see what additional problems these pages could create, though.
Consider that Google will waste valuable crawl bandwidth crawling these meaningless pages rather than focusing on the important content you want it to crawl. And if Google is crawling them, you can bet that PageRank is flowing through these pages as well, diluting the link equity of your site.
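One common pattern for archive pages (a sketch; the exact template file depends on your theme) is a meta robots tag that keeps the page out of the index while still letting crawlers follow its links, so link equity keeps flowing:

```html
<!-- Hypothetical snippet for the <head> of a date/tag archive template -->
<meta name="robots" content="noindex, follow">
```

In WordPress you would typically have an SEO plugin emit this tag on archive pages rather than editing theme templates by hand.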
Are you using WordPress? There are a lot of great plugins that can help you manage these pages. You could control how Google crawls these pages with your robots.txt, by placing meta robots tags on the pages using a plugin, or by placing rel=canonical tags on the pages pointing back to the original source.
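If you go the robots.txt route, it is worth verifying the rules against sample URLs before deploying them. A quick sketch using Python's standard-library parser (the rules and URLs here are illustrative, matching the examples earlier in this thread):

```python
# Check hypothetical robots.txt rules against sample URLs before deploying.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /2004/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The date-archive URL from the question should now be blocked...
print(parser.can_fetch("*", "http://www.mysite.com/2004/10/"))      # False
# ...while normal content pages stay crawlable.
print(parser.can_fetch("*", "http://www.mysite.com/a-real-post/"))  # True
```

Run this with each year's Disallow line in place to confirm nothing important gets caught by the prefix match.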