Is there a way to see Crawl Errors older than 90 days in Webmaster Tools?
-
I had some big errors show up in November, but I can't see them anymore as the history only goes back 90 days. Is there a way to change the date range in Webmaster Tools? If not, is there another place I'd be able to get this information? We migrated our hosting to a new company around that time, and the agency that handled it for us never downloaded a copy of all the redirects that were set up on the old site.
-
What you could also do is run a crawl of your site with Xenu or Screaming Frog to find URLs that return a 404 error; those will probably need a redirect. Beyond that, you could check the list of internal links within Google Webmaster Tools - if you know what the old structure looked like, you'll be able to redirect those URLs as well if they still show up.
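If the crawl turns up a long list, a small script can pull out just the URLs that need redirects. This is a rough Python sketch, not part of either tool - it assumes a CSV export with "Address" and "Status Code" columns, which is typical of a Screaming Frog export, and the example rows are made up:

```python
import csv
import io

# Filter a crawl export (columns assumed: Address, Status Code)
# down to the URLs that returned 404 and probably need a 301 redirect.
def urls_needing_redirects(csv_text):
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["Address"] for row in reader if row["Status Code"] == "404"]

# Placeholder data standing in for a real export.
export = """Address,Status Code
http://example.com/old-page,404
http://example.com/about,200
http://example.com/2013/11/old-post,404
"""

print(urls_needing_redirects(export))
# -> ['http://example.com/old-page', 'http://example.com/2013/11/old-post']
```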
-
I don't think there is a way to change the date range, but if the problem is still there it should be appearing in the issues section. If not, that most likely means the problem has been resolved.
Hope this helps!
Related Questions
-
Canonical error from Google
Moz couldn't explain this properly and I don't understand how to fix it. Google emailed this morning saying "Alternate page with proper canonical tag." Moz also kinda complains about the main URL and the main URL/index.html being duplicate. Of course they are. The main URL doesn't work without the index.html page. What am I missing? How can I fix this to eliminate this duplicate problem which to me isn't a problem?
Technical SEO | RVForce -
Why Google crawl parameter URLs?
Hi SEO Masters, Google is indexing these parameter URLs:
1. xyz.com/f1/f2/page?jewelry_styles=6165-4188-4184-4192-4180-6109-4191-6110&mode=li_23&p=2&filterable_stone_shapes=4114
2. xyz.com/f1/f2/page?jewelry_styles=6165-4188-4184-4192-4180-4169-4195&mode=li_23&p=2&filterable_stone_shapes=4115&filterable_metal_types=4163
I have handled the Google URL parameters like this:
jewelry_styles= Narrows, Let Googlebot decide
mode= None, Representative URL
p= Paginates, Let Googlebot decide
filterable_stone_shapes= Narrows, Let Googlebot decide
filterable_metal_types= Narrows, Let Googlebot decide
The canonical for both pages is xyz.com/f1/f2/page?p=2. So can you suggest why Google indexed all the related pages along with xyz.com/f1/f2/page?p=2? I have no issue with the first page, xyz.com/f1/f2/page (with any parameter) - the canonical of the first page is working perfectly. Thanks
Technical SEO | Rajesh.Prajapati -
Webmaster Tools Data Discrepancies/Anomalies
Hi, I'm looking in a GWT account for a client of mine and see that in the index status area 400+ pages are indexed, so all seems OK there! But then in the sitemap area, 111 pages have been submitted but only 1 is indexed! Any ideas what's going on here? Cheers, Dan
Technical SEO | Dan-Lawrence -
Google Disavow Tool
Some background: My rankings have been wildly fluctuating for the past few months for no apparent reason. When I inquired about this, many people said that even though I haven't received any penalty notice, I was probably affected by Penguin (http://moz.com/community/q/ranking-fluctuations). I recently did a link detox with LinkRemovalTools and it gave me a list of all my links; 2% were toxic and 51% were suspicious. Should I simply disavow the 2%? There are many sites where there is no contact info.
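For what it's worth, the disavow file itself is just plain text: one "domain:" or URL line per entry, with optional "#" comment lines. A minimal Python sketch of building one (the domain list here is placeholder data, not from any actual report):

```python
# Placeholder data: the "toxic" domains exported from a link-detox report.
toxic_domains = ["spammy-links.example", "bad-directory.example"]

# Google's disavow format: "#" comments, then one "domain:..." line
# per domain (or a full URL per line to disavow a single page).
lines = ["# Disavow file - toxic domains from link detox report"]
lines += ["domain:%s" % d for d in sorted(toxic_domains)]
disavow_file = "\n".join(lines)
print(disavow_file)
```

You'd then upload the resulting file through the Disavow Tool for the affected site.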
Technical SEO | EcomLkwd -
404 Errors & Redirection
Hi, I'm working with someone who recently had two websites redesigned. The old permalink structure consisted of domain/year/month/date/post-name. Their developer changed the new permalink structure to domain/post-name, but apparently he didn't redirect the old URLs to the new ones, so we're finding that links from external sites result in 404 errors (once I remove the date in the URL, the links work fine). Each site has 3-4 years' worth of blog posts, so there are quite a few that would need to be changed. I was thinking of using the Redirection plugin - would that be the best way to fix this sitewide on both sites? Any suggestions would be appreciated. Thanks, Carolina
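Since the old-to-new mapping described above is completely regular, a single regex rule can cover every post rather than one redirect per URL - which is what the Redirection plugin's regex option is for. A rough Python sketch of the rewrite itself (the example path is made up):

```python
import re

# Strip the leading /year/month/day/ segment so the old date-based
# permalink resolves to the new /post-name structure.
DATE_PREFIX = re.compile(r"^/\d{4}/\d{2}/\d{2}/")

def new_path(old_path):
    return DATE_PREFIX.sub("/", old_path)

print(new_path("/2013/11/05/my-blog-post"))  # -> /my-blog-post
print(new_path("/about"))                    # unchanged -> /about
```

The same pattern should translate to a regex 301 rule in the plugin, so one rule handles all 3-4 years of posts on each site.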
Technical SEO | csmm -
Help with Webmaster Tools "Not Followed" Errors
I have been doing a bunch of 301 redirects on my site to address 404 pages, and in each case I check the redirect to make sure it works. I have also been using tools like Xenu to make sure that I'm not linking to 404 or 301 content from my site. However, on Friday I started getting "Not Followed" errors in GWT. When I check the URL that they tell me produced the error, it seems to redirect correctly. One example is this: http://www.mybinding.com/.sc/ms/dd/ee/48738/Astrobrights-Pulsar-Pink-10-x-13-65lb-Cover-50pk I tried a redirect tracer and it reports the redirect correctly. Fetch as Googlebot returns the correct page. Fetch as Bingbot in the new Bing Webmaster Tools shows that it redirects to the correct page, but there is a small note that says "Status: Redirection limit reached". I see this on all of the redirects that I check in the Bing webmaster portal. Do I have something misconfigured? Can anyone give me a hint on how to troubleshoot this type of issue? Thanks, Jeff
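"Redirection limit reached" usually points at a chain or loop of redirects rather than a single broken hop, so one way to investigate is to walk the Location headers yourself with a hop cap. A rough Python sketch - the "server" here is just a dict standing in for real HTTP responses, so it's illustrative only; against a live site you'd send each request with the requests library using allow_redirects=False and read resp.headers["Location"] at each hop:

```python
# Follow a redirect chain hop by hop, flagging loops and chains
# that exceed a hop limit (the kind of thing Bingbot gives up on).
def trace_redirects(start_url, location_of, max_hops=10):
    hops = [start_url]
    url = start_url
    while url in location_of:               # url still redirects
        if len(hops) > max_hops:
            raise RuntimeError("Redirection limit reached")
        url = location_of[url]
        if url in hops:
            raise RuntimeError("Redirect loop detected")
        hops.append(url)
    return hops

# Simulated 301s: a chain that resolves in two hops.
fake_301s = {"/old-a": "/old-b", "/old-b": "/final"}
print(trace_redirects("/old-a", fake_301s))  # -> ['/old-a', '/old-b', '/final']
```

If the real chain is longer than the bot's internal limit, or ever revisits a URL, you'd see exactly the behavior described above even though each individual 301 looks correct.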
Technical SEO | mybinding1 -
Correct Way to Write Meta
OK so this is a really, really basic question. However, I'm seeing some meta tags written differently to normal and I'm wondering if a) this is correct and b) whether there is any benefit. Normally it's like this: <meta name="description" content="..."> However, I am seeing it written like this in some places: <meta content="..." name="description"> So, the content= and name= are swapped around. I assume the people that did this were thinking that bringing the content forward would mean that Google reads keywords first. Just wondering if anybody knows whether this is good practice or not? Just piqued my interest, so apologies for the basic nature of the question!
Technical SEO | | RiceMedia0 -
Crawl Tool Producing Random URLs
For some reason SEOmoz's crawl tool is returning duplicate content URLs that don't exist on my website. It is returning pages like "mydomain.com/pages/pages/pages/pages/pages/pricing". Nothing like that exists as a URL on my website. Has anyone experienced something similar to this, or does anyone know what's causing it or how I can fix it?
Technical SEO | MyNet