Why might Google be crawling via an old sitemap when the new one has been submitted and verified?
-
We have recently relaunched Scoutzie.com and re-submitted our new sitemap to Google. When I look in Webmaster Tools, our new sitemap has been submitted just fine, but at the same time, Google is finding a lot of 404s when crawling the site. My understanding is that it is still crawling the old links, which no longer exist. How can I tell Google to refresh its index and stop looking at all the old links?
-
Yes, it should. However, as Alan mentioned below, if you still have links pointing to the 404 pages, Google will keep attempting to crawl them and will keep reporting them as errors.
If you do have external links to those 404 pages, you can 301 redirect them to an appropriate page using .htaccess. This way you'll keep the link value and also get rid of the Webmaster Tools error.
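For example, here is a minimal sketch of such redirects in .htaccess, assuming an Apache server (the old and new paths below are hypothetical placeholders, not Scoutzie's actual URLs):

    # Redirect a single removed page to its replacement (requires mod_alias)
    Redirect 301 /old-page.html http://scoutzie.com/new-page

    # Map an entire retired section onto its new location (requires mod_rewrite)
    RewriteEngine On
    RewriteRule ^old-section/(.*)$ /new-section/$1 [R=301,L]

If the old and new URL structures map onto each other in a predictable way, a single pattern rule like the second one can cover hundreds of 404s at once.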
If you don't have any links to them, then yes, Google will eventually stop trying to crawl them.
-
It's very likely that we do. Given that I cannot track down 1,000+ links that now 404, will they eventually fall out by themselves, or do I have to tell Google that everything that 404s should be dropped from its index? Thanks!
-
What if I simply pushed the new sitemap over the old one? In other words, scoutzie.com/sitemap stays the same URL, except that it now contains the new map. That should be okay, right?
-
You may still have links pointing to those 404 pages, either on your site or externally. If not, they will eventually fall out of the index.
-
Hey scoutzie,
This is actually covered pretty well in Joe Robison's blog post on fixing Webmaster Tools crawl errors: http://moz.com/blog/how-to-fix-crawl-errors-in-google-webmaster-tools
I'll quote the related info:
"One frustrating thing that Google does is it will continually crawl old sitemaps that you have since deleted to check that the sitemap and URLs are in fact dead. If you have an old sitemap that you have removed from Webmaster Tools, and you don’t want being crawled, make sure you let that sitemap 404 and that you are not redirecting the sitemap to your current sitemap."
Hope this helps, good luck!