Issues with Moz producing 404 Errors from sitemap.xml files recently.
-
My last campaign crawl produced over 4k 404 errors resulting from Moz not being able to read some of the URLs in our sitemap.xml file. This is the first time we've seen this error, and we've been running campaigns for almost 2 months now -- no changes were made to the sitemap.xml file. The file isn't UTF-8 encoded, but rather served as Content-Type: text/xml; charset=iso-8859-1 (which is what Movable Type uses). Just wondering if anyone has had a similar issue?
-
Hi Barb,
I am sure Joel will chime in as well, but just to clarify: it is probably not the UTF-8 encoding (or lack of it) that is causing the issue. For the sitemap URLs at least, it is simply the formatting of the XML being produced. As for whether the other errors you are seeing are caused by the same kind of thing: if you are seeing references to the same encoded characters (%0A%09), then the answer is most likely yes.
So the issue is not UTF-8 related (there are plenty of non-UTF-8 sites on the web still!); the question is how the Moz crawler is reading your links and whether other tools/systems are having the same trouble. Have you looked in Google Webmaster Tools to see if it reports similar 404 errors, from the sitemap or elsewhere? If you see similar errors in GWT, then the issue is likely not restricted to the Moz crawler.
Beyond that, the fix for the sitemap at least should be relatively simple, and quite possibly the other Moz errors can be fixed just as easily with small adjustments to the templates, removing the extra line breaks/tabs that are creating the issue. It is worth doing, so that these errors go away and you can concentrate on the 'real' errors without all the noise.
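For anyone wondering what those encoded characters actually are, here is a quick illustration (the string is hypothetical, not taken from the actual sitemap): %0A and %09 are simply a percent-encoded line feed and tab.

```python
from urllib.parse import quote

# A line feed followed by three tabs -- exactly the invisible whitespace
# pattern discussed in this thread -- percent-encodes to %0A%09%09%09.
invisible = "\n\t\t\t"
print(quote(invisible))  # -> %0A%09%09%09
```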
-
Joel,
The latest 404 errors have the same type of issue, and the referrers are all over the place (none are the sitemap.xml, as far as I can see).
My question is: can the fact that we don't use UTF-8 encoding on our site potentially cause issues with other reporting? This is not something we can change easily, and I don't want to waste a great deal of effort sorting through "red herring" issues caused by the encoding we use on the site.
Thoughts?
barb
-
Thanks Joel,
We're looking into this.
barb
-
Thanks Lynn,
We are looking at that. The 4k 404 errors are gone now, but it's possible they will return.
It's a major change for us to switch to UTF-8, so it's not something that will happen anytime soon. I'll just have to be aware that it might be causing issues.
barb
-
Hey Brice,
I just wanted to add to Lynn's great answer with the reason you're seeing the URLs the way they are, and to reinforce her point.
You have it formatted like this (with the line breaks and tabs inside the element):
<loc>
			http://www.cmswire.com/cms/web-cms/david-hillis-10-predictions-for-web-content-management-in-2011-009588.php
		</loc>
The crawler converts everything to URL encoding, so those line feeds and tabs are converted to percent-encoded sequences. The reason your root domain appears at the front is that %0A is not a valid start for a URL, so RogerBot assumes it's a relative link to the domain your sitemap is on.
The encoding thing is probably not affecting this.
Cheers,
Joel.
-
Hi,
It can be frustrating, I know, but if you are methodical you will get to the bottom of all the errors and feel much better.
Not sure why the number of 404s would have gone down, but as regards the sitemap itself, the Moz team might be right that UTF-8 encoding could be part of the problem. I think it is more likely down to some non-visible formatting characters being added to your sitemap during creation: %09 is a URL-encoded tab and %0A is a URL-encoded line feed, and it looks to me like these are getting into your sitemap even though they are not actually visible.
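A quick sanity check supporting the whitespace theory over the charset theory: sitemap URLs are plain ASCII, and ASCII characters encode to identical bytes in ISO-8859-1 and UTF-8, so the declared charset alone cannot change how the crawler reads these particular URLs.

```python
# One of the well-formed URLs from the thread's own sitemap example.
url = "http://www.cmswire.com/news/topic/impresspages"

# ASCII text is byte-for-byte identical under both encodings.
assert url.encode("iso-8859-1") == url.encode("utf-8")
print("byte-identical in both charsets")
```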
If you download your sitemap you will see that many (but not all) of the URLs look like this:
<loc>
			http://www.cmswire.com/cms/web-cms/david-hillis-10-predictions-for-web-content-management-in-2011-009588.php
		</loc>
Note the new lines and the indent. Some other URLs do not have this format, for example:
<loc>http://www.cmswire.com/news/topic/impresspages</loc>
It would be wise to ensure that both the template creating the sitemap and the sitemap itself are in UTF-8, but the fix could be as simple as going into the template that creates the sitemap and removing those line breaks. Once that is done, wait for the next crawl and see if it brings the error count down (it should). As for the rest of the warnings, just be methodical: identify where they occur and why, and work through them. You will get down to few or zero warnings, and you will feel good about it!
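The cleanup itself can be sketched in a few lines. This sketch works on an inline sample with the same invisible whitespace; a real run would parse the generated sitemap.xml file instead, and fixing the Movable Type template is still the proper long-term fix.

```python
import xml.etree.ElementTree as ET

# The standard sitemaps.org namespace.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def clean_sitemap(xml_text: str) -> str:
    """Strip stray line feeds/tabs (the %0A/%09 source) from every <loc>."""
    root = ET.fromstring(xml_text)
    for loc in root.iter(f"{{{NS}}}loc"):
        if loc.text:
            loc.text = loc.text.strip()
    ET.register_namespace("", NS)
    return ET.tostring(root, encoding="unicode")

# Inline sample reproducing the malformed entry described in the thread.
sample = (
    f'<urlset xmlns="{NS}">'
    "<url><loc>\n\t\t\thttp://www.cmswire.com/news/topic/impresspages\n\t\t</loc></url>"
    "</urlset>"
)
print(clean_sitemap(sample))
```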
-
Interesting: a new crawl just completed and now I only have 307 404 errors, plus a lot of other, different errors and warnings. It's frustrating to see such different things each week.
barb
-
Hi Lynn,
I did download the CSV and found all the 404 errors were generated from our sitemap.xml file. Here's what the URLs look like:
referring URL is http://www.cmswire.com/sitemap.xml
You'll notice that there is odd formatting wrapping the URL (%0A%09%09%09), plus the extra http://www.cmswire at the front of the URL, which does not exist in the actual sitemap.xml file if I view it separately.
Also: Moz support looked at our campaign and they thought the problem was that our sitemap wasn't UTF-8 encoded.
Any ideas?
-
Hi Brice,
What makes you think the issue is that Moz cannot read the URLs? In the first instance I would want to make sure nothing else is going wrong by checking the URLs Moz is flagging as 404s, confirming whether they actually exist, and if they don't, finding out where the link is coming from (be it the sitemap or another page on the site). You may have already done this, but if not, you can get all this information by downloading the error report as a CSV and then filtering in Excel for the 404 pages only.
If you have done this already, then give us a sample or two of the URLs Moz is flagging, along with the referring URL and your sitemap URL, and we might be able to diagnose the issue better. It would be unusual for the Moz crawler to start throwing errors all of a sudden if nothing else has changed. I'm not saying it is impossible for it to be an error with Moz, just that the odds are on the side of something else going on.
Hope that helps!