Issues with Moz producing 404 Errors from sitemap.xml files recently.
-
My last campaign crawl produced over 4k 404 errors resulting from Moz not being able to read some of the URLs in our sitemap.xml file. This is the first time we've seen this error and we've been running campaigns for almost 2 months now -- no changes were made to the sitemap.xml file. The file isn't UTF-8 encoded, but is served as Content-Type: text/xml; charset=iso-8859-1 (which is what Movable Type uses). Just wondering if anyone has had a similar issue?
-
Hi Barb,
I am sure Joel will chime in as well, but to clarify: it is probably not the UTF-8 encoding (or the lack of it) that is causing the issue. For the sitemap URLs at least, it is simply the formatting of the XML being produced. As for whether the other errors you are seeing are caused by the same kind of thing: if you see references to the same encoded characters (%0A, %09), the answer is most likely yes.
So the issue is not UTF-8 related (there are plenty of non-UTF-8 sites on the web still!), but rather how the Moz crawler is reading your links, and whether other tools and systems will have the same trouble. Have you looked in Google Webmaster Tools to see if it reports similar 404 errors from the sitemap or elsewhere? If you see similar errors in GWT, then the issue is likely not restricted to the Moz crawler.
Beyond that, the fix for the sitemap at least should be relatively simple, and quite possibly the other Moz errors can be fixed just as easily by making small adjustments to the templates and removing the extra line breaks/tabs that are creating the issue. It is worth doing so that these errors disappear and you can concentrate on the 'real' errors without all the noise.
-
Joel,
The latest 404 errors have the same type of issue, and are all over the place in terms of referrer (none are the sitemap.xml, as far as I can see).
My question is: can the fact that we don't use UTF-8 encoding on our site potentially cause issues with other reporting? This is not something we can change easily, and I don't want to waste a great deal of effort sorting through "red herring" issues caused by the encoding we use on the site.
thoughts?
barb
-
Thanks Joel,
We're looking into this.
barb
-
Thanks Lynn,
We are looking at that. The 4k 404 errors are gone now, but it's possible they will return.
It's a major change for us to switch to UTF-8, so it's not something that will happen anytime soon. I'll just have to be aware that it might be causing issues.
barb
-
Hey Brice,
I just want to add to Lynn's great answer the reason you're seeing the URLs the way you are, and to reinforce her point.
You have it formatted like this:
<loc>
            http://www.cmswire.com/cms/web-cms/david-hillis-10-predictions-for-web-content-management-in-2011-009588.php
        </loc>
The crawler converts everything to URL encoding, so those line feeds and tabs get converted to percent-encoded characters (%0A and %09). The reason your root domain appears at the front is that %0A is not a valid start for a URL, so RogerBot assumes it's a relative link on the domain your sitemap is on.
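If you want to see the mechanism in action, here is a minimal sketch using Python's standard library (RogerBot's exact internals are an assumption on my part, but urljoin resolves relative references the same way a browser would):

    from urllib.parse import quote, urljoin

    # Text content of the <loc> element, including the invisible
    # line feed (\n) and tabs (\t) the template adds before the URL.
    loc = "\n\t\t\thttp://www.cmswire.com/cms/web-cms/david-hillis-10-predictions-for-web-content-management-in-2011-009588.php"

    # Percent-encoding turns the whitespace into %0A and %09.
    encoded = quote(loc, safe=":/")
    print(encoded)  # %0A%09%09%09http://www.cmswire.com/cms/web-cms/...

    # %0A is not a valid scheme, so the whole string is treated as a
    # *relative* reference and resolved against the sitemap's domain.
    print(urljoin("http://www.cmswire.com/sitemap.xml", encoded))
    # -> http://www.cmswire.com/%0A%09%09%09http://www.cmswire.com/...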
The encoding thing is probably not affecting this.
Cheers,
Joel.
-
Hi,
It can be frustrating, I know, but if you are methodical you will get to the bottom of all the errors and then feel much better.
Not sure why the number of 404s would have gone down, but as regards the sitemap itself, the Moz team might be right that UTF-8 encoding could be part of the problem. I think it is more likely to do with some non-visible formatting characters being added to your sitemap during creation: %09 is a URL-encoded tab and %0A is a URL-encoded line feed, and it looks to me like these are getting into your sitemap even though they are not visible when you view it.
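You can check what those sequences decode to yourself; a one-liner in Python (purely illustrative):

    from urllib.parse import unquote

    # %0A is a line feed and %09 is a tab -- invisible in a browser,
    # but obvious once decoded:
    print(repr(unquote("%0A%09%09%09")))  # -> '\n\t\t\t'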
If you download your sitemap you will see that many (but not all) of the URLs look like this:
<loc>
            http://www.cmswire.com/cms/web-cms/david-hillis-10-predictions-for-web-content-management-in-2011-009588.php
        </loc>
Note the new lines and the indent. Some other URLs do not have this format, for example:
<loc>http://www.cmswire.com/news/topic/impresspages</loc>
It would be wise to ensure both the template that creates the sitemap and the sitemap itself are UTF-8, but the fix could be as simple as going into that template and removing those line breaks. Once that is done, wait for the next crawl and see if it brings the error numbers down (it should). As for the rest of the warnings, just be methodical: identify where they are occurring and why, and work through them. You will get down to few or zero warnings, and you will feel good about it!
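If it helps, here is a rough sketch of how you could audit the sitemap for stray whitespace automatically (Python standard library only; it assumes the sitemap declares the standard sitemap namespace):

    import urllib.request
    import xml.etree.ElementTree as ET

    SITEMAP_URL = "http://www.cmswire.com/sitemap.xml"
    NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

    # Fetch and parse the sitemap.
    with urllib.request.urlopen(SITEMAP_URL) as resp:
        tree = ET.parse(resp)

    # Flag any <loc> whose text is wrapped in whitespace -- these are
    # the URLs that come back percent-encoded as %0A/%09 in the crawl.
    for loc in tree.iter(NS + "loc"):
        text = loc.text or ""
        if text != text.strip():
            print("needs fixing:", repr(text))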
-
Interesting that a new crawl just completed and now I only have 307 404 errors, plus a lot of other, different errors and warnings. It's frustrating to see such different things each week.
barb
-
Hi Lynn,
I did download the CSV and found that all the 404 errors were generated from our sitemap.xml file. Here's what the URLs look like:
referring URL is http://www.cmswire.com/sitemap.xml
You'll notice that there is odd formatting wrapping the URL (%0A%09%09%09), plus an extra http://www.cmswire prepended to the front of the URL, which does not exist in the actual sitemap.xml file when I view it separately.
Also: Moz support looked at our campaign and they thought the problem was that our sitemap wasn't UTF-8 encoded.
Any ideas?
-
Hi Brice,
What makes you think the issue is that Moz cannot read the URLs? In the first instance I would want to rule out something else going wrong by checking the URLs Moz is flagging as 404s, confirming whether they actually exist, and, if they don't, finding out where the link is coming from (be it the sitemap or another page on the site). You may have already done this, but if not, you can get all this information by downloading the error report as a CSV and then filtering in Excel to get data for the 404 pages only.
If you have done this already, give us a sample or two of the URLs Moz is flagging, along with the referring URL and your sitemap URL, and we might be able to diagnose the issue better. It would be unusual for the Moz crawler to start throwing errors all of a sudden if nothing else has changed. I'm not saying it's impossible for it to be an error with Moz, just that the chances are on the side of something else going on.
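If you'd rather script the filtering than use Excel, a minimal sketch in Python (the file name and column headers here are hypothetical -- check the header row of your own export and adjust):

    import csv

    # NOTE: "crawl_issues.csv" and the column names are assumptions;
    # match them to the actual headers in your Moz export.
    with open("crawl_issues.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("HTTP Status Code") == "404":
                print(row.get("URL"), "<-", row.get("Referrer"))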
Hope that helps!