GWT Crawl Error Report Not Updating?
-
GWT's crawl error report hasn't updated for me since April 25. Crawl stats are updating normally, as are robots.txt and sitemap accesses. Is anyone else experiencing this?
-
Hey TP,
This is quite an old question, so you may have seen movement by now. Any update?
I did a quick search of the GWMT help forums and found this:
http://productforums.google.com/forum/#!topic/webmasters/pkirhRejNYE
Several other people have the exact same issue (same date, too!).
It looks like many are back in business. GWMT periodically has issues similar to this, so I'd just stay calm.
Barry over at http://www.seroundtable.com/ seems to cover these bugs more than most, so check that site once in a while for errors like this.
Hope you're better now!
Jacob
-
Thanks for the tip, Jacob! I was indeed back in business about a week later. I will keep seroundtable.com on my quick list for any such situations in the future.
Related Questions
-
GSC is reporting a lot of chopped URLs
Recently, in the last two weeks, I started seeing a lot of odd 404 errors in GSC for my site. Upon investigation, the URLs are for fairly new articles, and they are chopped in various places, from missing a single character at the end to missing about 10 characters. (A similar older issue is that GSC reports duplicate content on weird subdomains we've never used, like 'smtp', 'ww1', or even random ones like 'bobo'.) GSC doesn't report any 'linked from' for those odd URLs, and I know for sure these links aren't on the site itself. They're definitely not errors in the CMS. The site is long established (started 1997-1998) and we've been subject to a lot of negative SEO. I recently had to disavow about 1,000 .ru domains linking to us, some containing over a million links each. Could these chopped links be a new tactic of negative SEO? How do I find these seemingly intentionally broken links to us?
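If you can export the reported 404s from GSC and a list of real URLs (e.g. from your sitemap), a quick prefix-match will confirm whether every broken link is a truncation of a genuine article URL, which would point at off-site scraping rather than a CMS bug. A minimal sketch, with hypothetical placeholder URLs:

```python
# Sketch: confirm that 404 URLs reported in GSC are truncations of real
# article URLs by prefix-matching them against known-good URLs.
# The URL lists below are hypothetical placeholders; in practice, export
# the 404s from GSC and the good URLs from your sitemap.

def find_truncations(bad_urls, good_urls):
    """Map each 404 URL to the full URLs it is a strict prefix of."""
    matches = {}
    for bad in bad_urls:
        parents = [good for good in good_urls
                   if good.startswith(bad) and good != bad]
        if parents:
            matches[bad] = parents
    return matches

good = [
    "https://example.com/articles/negative-seo-tactics",
    "https://example.com/articles/link-disavow-guide",
]
bad = [
    "https://example.com/articles/negative-seo-tacti",  # chopped by ~2 chars
    "https://example.com/articles/link-disavow-g",      # chopped by ~7 chars
]

print(find_truncations(bad, good))
```

If each 404 maps cleanly onto exactly one real article, the next step is your server logs: the Referer field on those 404 hits is the most direct way to find which external pages carry the chopped links.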
Intermediate & Advanced SEO | Lazeez2 -
What's the average rank update time after site and/or backlink changes?
On average, how long does it currently take to see ranking changes after significant improvements are made to significant ranking signals on a long-established (as opposed to brand-new) website? Does the rank update from on-page optimization happen sooner than the one from adding quality backlinks?
Intermediate & Advanced SEO | JCCMoz0 -
Mobile Googlebot vs Desktop Googlebot - GWT reports - Crawl errors
Hi Everyone, I have a very specific SEO question. I am doing a site audit, and one of the crawl reports is showing tons of 404s for the "smartphone" bot, with very recent crawl dates. Our website is responsive and we do not have a separate mobile version, so I do not understand why the smartphone report has tons of 404s and yet the desktop report does not. I think I am not understanding something conceptually. I think it has something to do with this little message in the mobile crawl report: "Errors that occurred only when your site was crawled by Googlebot (errors didn't appear for desktop)." If I understand correctly, the "smartphone" report will only show URLs that are not on the desktop report. Is this correct?
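One way to sanity-check the two reports is to split Googlebot 404s in your own access logs by user agent. A rough sketch, assuming combined log format; the sample lines and paths below are invented:

```python
# Sketch: cross-check GWT's desktop vs. smartphone crawl-error reports
# against your own access logs by bucketing Googlebot 404s on user agent.
import re

LOG_RE = re.compile(
    r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<ua>[^"]*)"'
)

def googlebot_404s(log_lines):
    """Bucket 404 hits by which Googlebot variant requested them."""
    hits = {"desktop": [], "smartphone": []}
    for line in log_lines:
        m = LOG_RE.search(line)
        if not m or m.group("status") != "404" or "Googlebot" not in m.group("ua"):
            continue
        # Google's smartphone crawler sends a mobile browser UA that still
        # contains the "Googlebot" token, so "Mobile" distinguishes it.
        kind = "smartphone" if "Mobile" in m.group("ua") else "desktop"
        hits[kind].append(m.group("path"))
    return hits

sample = [
    '66.249.66.1 - - [20/Oct/2014] "GET /old-page HTTP/1.1" 404 512 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.1 - - [20/Oct/2014] "GET /gone HTTP/1.1" 404 512 "-" '
    '"Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X) Mobile Safari/537.36 '
    '(compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
]
print(googlebot_404s(sample))
```

On a responsive site both crawlers fetch the same URLs, so 404s that appear only in the smartphone report often point to resources or links served conditionally by user agent rather than to a separate mobile site.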
Intermediate & Advanced SEO | Carla_Dawson0 -
Duplicate site (disaster recovery) being crawled and creating two indexed search results
I have a primary domain, toptable.co.uk, and a disaster recovery site for this primary domain named uk-www.gtm.opentable.com. In the event of a disaster, toptable.co.uk would get CNAMEd (DNS-aliased) to the .gtm site. Naturally, the .gtm disaster recovery domain is an exact match of the toptable.co.uk domain. Unfortunately, Google has crawled the uk-www.gtm.opentable site, and it's showing up in search results. In most cases the gtm URLs don't get redirected to toptable; they actually appear as an entirely separate domain to the user. The strong feeling is that this duplicate content is hurting toptable.co.uk, especially as .gtm.ot is part of the .opentable.com domain, which has significant authority. So we need a way of stopping Google from crawling gtm. There seem to be two potential fixes. Which is best for this case? 1) Use robots.txt to block Google from crawling the .gtm site, or 2) canonicalize the gtm URLs to toptable.co.uk. In general Google seems to recommend the canonical change, but in this special case the robots.txt change could be best. Thanks in advance to the SEOmoz community!
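For reference, a minimal sketch of what option 1 would look like. Since the DR host serves an exact copy of the primary site, the web server would have to return this file only for requests whose Host header is uk-www.gtm.opentable.com, while toptable.co.uk keeps its normal robots.txt:

```
# robots.txt -- served only on the uk-www.gtm.opentable.com host
User-agent: *
Disallow: /
```

One caveat worth weighing: blocking crawling via robots.txt does not necessarily remove already-indexed URLs from results, whereas a cross-domain canonical (a `<link rel="canonical" href="...">` on each .gtm page pointing at the equivalent toptable.co.uk URL) consolidates them back to the primary domain. Which behaviour you want in a failover scenario is the real deciding factor.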
Intermediate & Advanced SEO | OpenTable0 -
How important is it to fix Server Errors?
I know it is important to fix server errors. We are trying to figure out how important, because after our last build we have over 19,646 of them, and since Google only gives us 1,000 at a time, the fastest way to tell them we have fixed them all is to use the API, which will take time. We are trying to decide: is it more important to fix all these errors right now, or to focus on other issues and fix them when we have time? They are mostly AJAX errors. Could this hurt our rankings? Any thoughts would be great!
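For the API route, the sketch below shows the general shape of marking error samples as fixed in bulk. The service and method names (`webmasters` v3, `urlcrawlerrorssamples().markAsFixed`) and the parameter values are my assumption of the crawl-errors API; verify them against the current client-library documentation before running anything:

```python
# Sketch: clear GWT crawl errors in bulk via the API instead of paging
# through 1,000-row reports by hand. The API surface used in
# mark_all_fixed() is an assumption -- check the webmasters v3 reference.
import time

def chunked(items, size):
    """Yield successive fixed-size batches from a list of URLs."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def mark_all_fixed(site_url, error_urls, batch_size=100):
    # Imported here so the pure helper above stays usable without the
    # google-api-python-client package installed.
    from googleapiclient.discovery import build
    service = build("webmasters", "v3")  # assumes credentials are set up
    for batch in chunked(error_urls, batch_size):
        for url in batch:
            service.urlcrawlerrorssamples().markAsFixed(
                siteUrl=site_url,
                url=url,
                category="serverError",
                platform="web",
            ).execute()
        time.sleep(1)  # crude pacing between batches to stay under quota
```

Note that marking errors as fixed only clears the report; if the underlying AJAX endpoints still error, the entries will come back on the next crawl.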
Intermediate & Advanced SEO | DoRM0 -
Website design agency - could the Penguin update affect us?
Hi Guys, just wanted to pick your brains here - I have a client I've just taken on who is a small website design agency. Every site they have built for clients over the years links back to them with the anchor text 'website design'. Will their website be affected by the new Penguin update, given that they have thousands of links on client websites they've built, all with the same anchor text? One idea I had is to point links at different pages of their site from future client websites. Any help or guidance would be much appreciated! Thanks, Gareth
Intermediate & Advanced SEO | GAZ090 -
Could you use a robots.txt file to disallow a duplicate content page from being crawled?
A website has duplicate content pages to make it easier for users to find the information from a couple of spots in the site navigation. The site owner would like to keep it this way without hurting SEO. I've thought of using the robots.txt file to disallow search engines from crawling one of the pages. Do you think this is a workable/acceptable solution?
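For comparison with the robots.txt idea, the pattern usually suggested for deliberate duplicates like this is a canonical tag on the secondary copy: it keeps both navigation paths for users while consolidating ranking signals on one URL. A sketch with placeholder URLs:

```html
<!-- Served in the <head> of the duplicate page; both URLs are
     hypothetical placeholders for the site's real paths. -->
<link rel="canonical" href="https://example.com/widgets/blue-widget" />
```

A robots.txt Disallow would stop crawling, but a blocked URL can still appear in the index (URL-only), and any links pointing at the blocked copy stop passing value, which is why the canonical is often preferred for this exact setup.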
Intermediate & Advanced SEO | gregelwell0 -
Are there any disadvantages to switching from XML sitemaps to .asp sitemaps in GWT?
I have been using multiple XML sitemaps for products for over 6 months, and they are indexing well in GWT. I have been having them manually amended when a product becomes obsolete or we no longer stock it. I now have the option to automate the sitemaps from a SQL feed, but as .asp sitemaps that I would submit the same way in GWT. I'd like your thoughts on the pros and cons of this. The plus for me is real-time updates; the con is that I perceive GWT to prefer .xml files. What do you think?
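On the xml-vs-.asp worry: Google keys off the response body, not the file extension, so a .asp URL that returns valid sitemap XML (with an XML Content-Type) is treated the same as a static .xml file. A minimal sketch of what the script has to emit, with the rows standing in for your SQL feed:

```python
# Sketch: the XML a dynamically generated sitemap needs to emit.
# The rows here are placeholder (url, lastmod) tuples standing in for
# a SQL result set of in-stock products.
from xml.sax.saxutils import escape

def build_sitemap(rows):
    """rows: iterable of (url, lastmod) tuples, e.g. from a product query."""
    parts = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url, lastmod in rows:
        parts.append("  <url>")
        parts.append(f"    <loc>{escape(url)}</loc>")
        parts.append(f"    <lastmod>{lastmod}</lastmod>")
        parts.append("  </url>")
    parts.append("</urlset>")
    return "\n".join(parts)

rows = [("http://example.com/product/1", "2013-05-01")]
print(build_sitemap(rows))
```

The practical win is the one you named: obsolete products drop out automatically the next time the sitemap is fetched, with no manual edits and no change to how you submit it in GWT.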
Intermediate & Advanced SEO | robertrRSwalters0