Crawl Errors In Webmaster Tools
-
Hi Guys,
I've searched the web for an answer about the importance of crawl errors in Webmaster Tools, but keep coming up with different answers.
I have been working on a client's site for the last two months (just completed one month of link building); however, it seems I have inherited issues I wasn't aware of from the previous guy who did the site.
The site is currently at page 6 for the keyphrase 'boiler spares', with a keyword-rich domain and a good on-page plan. Over the last couple of weeks it has been as high as page 4, only to be pushed back to page 8, and it has now settled at page 6.
The only issue I can seem to find with the site in Webmaster Tools is crawl errors. Here are the stats:
In sitemaps: 123
Not Found: 2,079
Restricted by robots.txt: 1
Unreachable: 2
I have read that ecommerce sites can often throw up false positives in terms of crawl errors in Google; however, these Not Found crawl errors are being linked to from pages within the site.
How have others solved the issue of crawl errors on ecommerce sites? Could this be the reason for the bouncing around in the rankings, or is it just a competitive niche and I need to be patient?
Kind Regards
Neil
-
That's a bit of a pain.
If anything, tell them to set up a custom, informative 404 error page that will at least direct visitors somewhere else rather than letting them bail from the site.
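On an Apache host, wiring up a custom 404 page is a one-line `.htaccess` change. A minimal sketch, assuming the site runs on Apache and that a `/404.html` page exists (both the path and the filename here are placeholders):

```apache
# Serve a custom, helpful 404 page instead of the bare server default
ErrorDocument 404 /404.html
```

The 404 page itself should link back to the main categories and a search box so visitors have somewhere to go.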
-
Hi Kieran,
Thank you so much for your answer. The issue is I don't have access to the site admin and can only suggest changes.
I'll suggest what you have put above and push ahead with the link building to see what happens. I've told the client it's a competitive niche, especially this time of year (my boiler always seems to pack in before winter). It does seem to have settled around page 6, so you may be right.
Just didn't want to rule out if this was a possible penalization from Google.
Kind Regards
Neil
-
Not all errors are bad, but for instance, if you are getting 404 errors and the actual pages no longer exist, then you should set up 301 redirects in your .htaccess to point Google to the right pages. Doing this will sort out a lot of the errors. But if you are getting 404s and the pages are there, then that is a bigger problem.
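As a sketch of what those `.htaccess` 301 rules might look like on Apache (the URLs below are hypothetical examples, not the client's actual pages):

```apache
# Permanently redirect a single retired page to its replacement
Redirect 301 /old-boiler-spares.html /boiler-spares.html

# Or pattern-match a whole discontinued section onto the live category page
RedirectMatch 301 ^/discontinued/.* /boiler-spares/
```

With 2,079 Not Found errors, pattern-based `RedirectMatch` rules will usually be far more manageable than one rule per URL.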
Restricted by robots.txt is not an error, as it just tells you there is somewhere you've asked not to be crawled. Check the file to see if it is correct in what it is restricting.
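For reference, a typical `robots.txt` entry that would produce a "Restricted by robots.txt" line looks like this (the paths are hypothetical; the point is to confirm only non-content pages like checkout or account areas are blocked):

```txt
User-agent: *
Disallow: /checkout/
Disallow: /customer-account/
```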
How often are you pushing your sitemap out to Google? If you are as active with pages as your posts suggest, I would think about submitting a new one or automating its creation.
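If the CMS can't regenerate the sitemap itself, even a small script run on a schedule can rebuild it. A minimal Python sketch, assuming you can pull the live URL list from the product database (the URLs below are placeholders):

```python
# Minimal sketch of automated XML sitemap generation.
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    """Return an XML sitemap string for the given list of page URLs."""
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
    )
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

# In a real version these would come from the product catalogue.
sitemap = build_sitemap([
    "http://www.example.com/",
    "http://www.example.com/boiler-spares/",
])
print(sitemap)
```

Writing the output to the sitemap location referenced in Webmaster Tools on a nightly cron keeps the submitted sitemap in step with the live catalogue.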
If you are actively working on the content on the sites, there can often be this level of SERPs bouncing. But if you continue with the content, I wouldn't worry too much about the errors for the moment; just do the housekeeping above. There are very few completely spotless sites out there, and Google Webmaster Tools (and even the SEOmoz tools here) will always report some level of errors.
Hope this helps,
Kieran