Remove more than 1000 crawl errors from GWT in one day?
-
In Google Webmaster Tools there is a "Crawl Errors" feature, which displays the top 1,000 crawl errors Google has found on your site.
I have around 16k crawl errors at the moment, all of which are fixed. But I can only mark 1,000 of them as fixed each day/each time Google crawls the site (since only the top 1,000 errors are displayed, and once I have marked those as fixed it won't show other errors for a while).
Does anyone know if it's possible to mark ALL errors as fixed in one operation?
-
Google indexed around 20k useless URLs due to the insane number of URLs MediaWiki generates when "short URLs" are not enabled.
It was resolved when we moved the wiki to another location and added the short URLs.
We just redirected everything (301).
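For reference, a bulk 301 like that can usually be done with a single mod_rewrite rule rather than one redirect per page. This is only a sketch assuming Apache and MediaWiki's default long-form paths (/w/index.php?title=Page_Name) redirecting to the default short-URL form (/wiki/Page_Name); the actual paths may differ from this setup:

```apache
# Hypothetical sketch: 301 every old long-form MediaWiki URL
# to its short-URL equivalent, dropping the query string.
RewriteEngine On
RewriteCond %{QUERY_STRING} (?:^|&)title=([^&]+)
RewriteRule ^w/index\.php$ /wiki/%1? [R=301,L]
```

The trailing `?` in the substitution strips the original query string, so Google sees a clean target URL.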
-
So Google indexed more than 16,000 pages on your site, and now what do you do?
Did you just remove them (404) or redirect them (301)?
-
No problem at all. I had a wiki up and running without "short URLs", so Google had ~19k errors on it because of too long/complicated URLs. Removed it, problem solved, and all errors resolved.
-
Hi Host1
It is not possible! You can only mark 1,000 errors as fixed per day.
May I ask how you fixed 16,000 errors at once?
Regards
Alsvik
Related Questions
-
I have one keyword which disappeared
My site currently ranks for over 2,000 keywords, of which 92 are in the number 1 position. I'm very active with this site. My main keyword is a two-word keyword; I have never been below position #30, and it has always been on the third or fourth page at best. I rank primarily for 3-part and 4-part keywords. However, I noticed about a month ago that the main keyword is no longer showing up on SERP tools, Moz included. At first I was getting weird results using a keyword tool (not the Moz SERP tool); I would get a different result every time I checked it. When I tried to verify these results, I found my site nowhere near the projected page and rank. After more than a month now, this particular keyword is nowhere in sight. I'm still getting traffic from all the other keywords, but I'm a bit confused by this. I thought my main keyword was about to be at the bottom of the 2nd page, but instead it seems to be completely gone.
Technical SEO | | Yellow20000 -
Moving Content From One Site To Another
Generally speaking, if I am just moving a couple of articles from one site to another, I need to 301 redirect those old URLs to the new ones, right? And even if a webpage doesn't have any links pointing to it, it is best practice to employ 301 redirects, correct? After a while, once Google etc. has crawled the new location of the content, you can then delete the old URL, is that right? And if other sites are linking to the old location they should be notified of the new location, but even if a page has links pointing to it, is it best practice to delete that page after Google has crawled the new one and you've notified the webmaster? I think I've got this right, I just want some clarification on this issue. Thanks.
Technical SEO | | ThridHour0 -
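For the redirect step asked about above, the per-article 301s are one line each in .htaccess on the old site. A minimal sketch, assuming Apache and hypothetical article paths and domain:

```apache
# Hypothetical sketch: permanently redirect moved articles
# from the old site to their new home on the other domain.
Redirect 301 /blog/moved-article.html https://www.newsite.example/articles/moved-article/
Redirect 301 /blog/another-article.html https://www.newsite.example/articles/another-article/
```

As long as these redirects stay in place, deleting the old page files changes nothing for visitors or crawlers, since requests never reach them.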
Salvaging links from WMT “Crawl Errors” list?
When someone links to your website but makes a typo while doing it, those broken inbound links will show up in Google Webmaster Tools in the Crawl Errors section as "Not Found". Often they are easy to salvage by just adding a 301 redirect in the htaccess file. But sometimes the typo is really weird, or the link source looks a little scary, and that's what I need your help with. First, let's look at the weird typo problem. If it is something easy, like they just lost the last part of the URL (such as www.mydomain.com/pagenam), then I fix it in htaccess this way:

RewriteCond %{HTTP_HOST} ^mydomain.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.mydomain.com$
RewriteRule ^pagenam$ "http://www.mydomain.com/pagename.html" [R=301,L]

But what about when the last part of the URL is really screwed up? Especially with non-text characters, like these:

www.mydomain.com/pagename1.htmlsale
www.mydomain.com/pagename2.htmlhttp://
www.mydomain.com/pagename3.html"
www.mydomain.com/pagename4.html/

How is the htaccess RewriteRule typed up to send these oddballs to the individual pages they were supposed to go to, without the typo? Second, is there a quick and easy method or tool to tell us if a linking domain is good or spammy? I have incoming broken links from sites like these:

www.webutation.net
titlesaurus.com
www.webstatsdomain.com
www.ericksontribune.com
www.addondashboard.com
search.wiki.gov.cn
www.mixeet.com
dinasdesignsgraphics.com

Your help is greatly appreciated. Thanks!
Greg
Technical SEO | | GregB1230 -
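For the oddball suffixes listed above, a hedged sketch (assuming Apache mod_rewrite in .htaccess): the left side of a RewriteRule is a regex, so the literal dot must be escaped, and an awkward character like the double quote is easiest to write as the regex escape \x22 so it doesn't fight the config parser:

```apache
# Hypothetical sketches: match each typo'd suffix literally
# and 301 it to the page the link was meant to reach.
RewriteRule ^pagename1\.htmlsale$ /pagename1.html [R=301,L]
RewriteRule ^pagename2\.htmlhttp://$ /pagename2.html [R=301,L]
RewriteRule ^pagename3\.html\x22$ /pagename3.html [R=301,L]
RewriteRule ^pagename4\.html/$ /pagename4.html [R=301,L]
```

The pattern is matched against the already-decoded path, so an inbound link ending in %22 is matched as the literal quote character.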
404s in GWT - Not sure how they are being found
We have been getting multiple 404 errors in GWT that look like this: http://www.example.com/UpdateCart. The problem is that this is not a URL that is part of our structure; it is only a piece. The actual URL has a query string on the end, so if you take the query string off, the page does not work. I can't figure out how Google is finding these pages. Could it be removing the query string? Thanks.
Technical SEO | | Colbys0 -
Strange Webmaster Tools Crawl Report
Up until recently I had robots.txt blocking the indexing of my PDF files, which are all manuals for products we sell. I changed this last week to allow indexing of those files, and now my Webmaster Tools crawl report is listing all my PDFs as not found. What is really strange is that Webmaster Tools is listing an incorrect link structure: "domain.com/file.pdf" instead of "domain.com/manuals/file.pdf". Why is Google indexing these particular pages incorrectly? My robots.txt has nothing else in it besides a disallow for an entirely different folder on my server, and my htaccess is not redirecting anything in regards to my manuals folder either. Even in the case of outside links present in the crawl report supposedly linking to this 404 file, when I visit these 3rd-party pages they have the correct link structure. Hope someone can help, because right now my not-founds are up in the 500s and that can't be good 🙂 Thanks in advance!
Technical SEO | | Virage0 -
Site maintenance and crawling
Hey all, Rarely, but sometimes, we need to take down our site for server maintenance, upgrades, or various other system/network reasons. More often than not these downtimes are avoidable and we can redirect or eliminate the client-side downtime. We have a 'down for maintenance - be back soon' page that is client-facing, and outages are often no more than an hour, tops. My question is: if the site is crawled by Bing/Google while it is down, what is the best way to ensure the indexed links are not refreshed with this maintenance content? (i.e. this is what the pages look like now, so this is what the SE will index.) I was thinking of adding a crawl block to robots.txt for the period of downtime and removing it once back up, but will this potentially affect results as well?
Technical SEO | | Daylan1 -
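As a sketch, the temporary crawl block proposed above is a two-line robots.txt swapped in only for the maintenance window:

```
# Temporary robots.txt, served only while the site is down:
User-agent: *
Disallow: /
```

Worth noting: search engines treat an HTTP 503 response (ideally with a Retry-After header) as a temporary condition, which is the commonly documented way to keep a maintenance page out of the index without touching robots.txt at all.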
REL Canonical Error
In my crawl diagnostics it is showing a rel=canonical error on almost every page. I'm using WordPress. Is there a default WordPress problem that would cause this?
Technical SEO | | mmaes0 -
Link API returns Error 500
http://lsapi.seomoz.com/linkscape/links/nz.yahoo.com?SourceCols=4&Limit=100&Sort=domain_authority&Scope=domain_to_domain&Filter=external+follow&LinkCols=4 Hi folks, any idea why the above returns Err 500? It seems to pertain to the domain - it works on other sites, just not nz.yahoo.com. Thanks!
Technical SEO | | jimbo_kemp0