HTML Encoding Error
-
Okay, so this is driving me nuts because I should know how to find and fix this, but for the life of me I cannot. One of the sites I work on has a long-standing crawl error in Google WMT for the URL /a%3E, which appears on nearly every page of the site. I know that a%3E is an improperly encoded a> (%3E is the URL encoding of >), but I can't seem to find where exactly in the code it's coming from. So I keep putting it off and coming back to it every week or two, only to rack my brain and give up after about an hour (since it's not a priority and it's not really hurting anything). The site in question is https://www.deckanddockboxes.com/ and some of the pages it can be found on are /small-trash-can.html, /Dock-Step-Storage-Bin.html, and /Standard-Dock-Box-Maxi.html (among others). I figured it was about time to ask for another set of eyes to look at this for me. Any help would be greatly appreciated. Thanks!
-
Could be, I suppose. But it's been happening on and off for months now. I mostly just stop caring after a bit, clear out the errors, and get annoyed when I see it pop up again. It's one of those things that doesn't actually cause a problem, but I can't help feeling irked by its existence. All in all, I'm perfectly fine with the solution being "Google is wrong, leave it alone"... that's basically what I've been doing anyway.
-
I did a Screaming Frog crawl of your site, but didn't see any malformed links. Maybe it was a temporary issue that just hasn't been cleared from Google's cache.
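If you want a quick second pass outside a crawler, something along these lines would flag any href or src value in the served HTML that contains a stray ">" or "%3E" — just a rough sketch using the Python standard library, with the page URLs taken from Mike's post and a simple regex rather than a real HTML parse:

```python
# Rough sketch (Python 3 stdlib only): fetch the affected pages and flag any
# href/src value containing a stray ">" or "%3E", which is what a link to
# /a%3E would look like in the source. The regex is a heuristic, not a parser.
import re
import urllib.request

PAGES = [
    "https://www.deckanddockboxes.com/small-trash-can.html",
    "https://www.deckanddockboxes.com/Dock-Step-Storage-Bin.html",
    "https://www.deckanddockboxes.com/Standard-Dock-Box-Maxi.html",
]

# Double-quoted href/src values whose content includes ">" or "%3E" --
# usually the sign of a missing closing quote or an over-escaped tag.
SUSPECT = re.compile(r'(?:href|src)\s*=\s*"([^"]*(?:>|%3E)[^"]*)"', re.IGNORECASE)

for url in PAGES:
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    for match in SUSPECT.finditer(html):
        # Report the offending attribute value and roughly where it sits.
        print(f"{url}: suspicious link value {match.group(1)!r} at offset {match.start()}")
```

If it prints nothing, the malformed link probably isn't sitting in the static HTML at all.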
-
Sorry, I wasn't getting email notifications that people had answered. I checked with our remaining coder, who said that it was there on purpose (much like Highland stated). He's going to take a deeper look into it once he has the chance, but he doesn't know why it's showing up like that.
-
In XHTML (which he's using), it is proper formatting to add a closing slash to tags that have no closing tag, so br, hr, input, etc. all need that slash; HTML5 also accepts the same syntax (there the slash is optional).
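For what it's worth, a lenient parser treats both spellings of a void tag the same way. Here's a tiny illustration using Python's built-in html.parser — just a sketch to show the trailing slash is harmless, not a claim about how Googlebot parses the page:

```python
# Tiny illustration: Python's lenient html.parser accepts void tags with or
# without the trailing slash, much as HTML5 treats the slash as optional.
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("start tag:      ", tag)

    def handle_startendtag(self, tag, attrs):
        # Fired for explicitly self-closed tags such as <br/>.
        print("self-closed tag:", tag)

parser = TagLogger()
parser.feed("first line<br>second line<br/>done<hr/>")
# Both br spellings and the hr are recognised as the same void elements;
# neither form produces a stray link on its own.
```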
-
This is a bit of a long shot, Mike, but it's such a weird error that long shots might pay off.
In your code around line 769 there is a horizontal rule with an extra, unneeded "/" before the final ">" of the tag. I can only assume that Googlebot is treating that as an attempt at a relative URL?
Cart is empty
[hr tag with the extra "/" — the markup was stripped when posting] // This may be the problem?
You wouldn't have noticed it as the horizontal rule is still appearing as expected.
Like I said, long shot, but since the cart appears on nearly every page, that could explain it.
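If it helps to track it down, a throwaway script along these lines would list every hr written with a trailing slash in whatever template files you run it over (a rough Python sketch — I can't see the exact form of the tag from here, so it flags all the self-closed variants):

```python
# Throwaway sketch: flag every <hr> written with a trailing slash (e.g. <hr/>,
# <hr />, <hr //>) in the files passed on the command line, so the one around
# line 769 is easy to eyeball.
import re
import sys

PATTERN = re.compile(r"<hr\b[^>]*?/+\s*>", re.IGNORECASE)

for path in sys.argv[1:]:
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if PATTERN.search(line):
                print(f"{path}:{lineno}: {line.strip()}")
```

You'd point it at the exported template files, e.g. python find_hr.py templates/*.html (the script name is just whatever you save it as).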
Dying to know if that's it, so lemme know either way?
Paul
-
The page is only being linked to from internal pages on the site, not from any outside websites or scrapers. Some of the pages WMT says the incorrect URL is being crawled from are listed above.
-
Where are you seeing the error in Webmaster Tools?
If it's in the Crawl Errors section, you can click on one of the links and then click the "Linked From" tab, which will show you which pages are linking to the malformed URL. A lot of the time these will just be external scraper sites that are linking to your site improperly.
Related Questions
-
How to get rid of bot verification errors
I have a client who sells highly technical products and has lots and lots (a couple of hundred) of PDF datasheets that can be downloaded from their website. But in order to download a datasheet, a user has to register on the site. Once they are registered, they can download whatever they want (I know this isn't a good idea, but it wasn't set up by us and is historical). A Moz crawl of the site came up with a couple of hundred 401 errors. When I investigated, they are all pages with a button to click through to one of these downloads. The Moz error report calls the error "Bot verification". My questions are:
Are these really errors?
If so, what can I do to fix them?
If not, can I just tell Moz to ignore them, or will this cause bigger problems?
Technical SEO | mfrgolfgti
-
Removed .html - Now Get Duplicate Content
Hi there, I run a WordPress website and have removed the .html from my links. Moz has done a crawl and now a bunch of duplicates are coming up. Is there anything I need to do, perhaps in my htaccess, to help it along? Google appears to still be indexing the .html versions of my links.
Technical SEO | MrPenguin
-
Error: Missing Meta Description Tag on pages I can't find in order to correct
This seems silly, but I have errors on blog URLs in our WordPress site that I don't know how to access because they are not in our Dashboard. We are using All in One SEO. The errors are for blog archive dates, authors, and simply 'blog'. Here are samples:
http://www.fateyes.com/2012/10/
http://www.fateyes.com/author/gina-fiedel/
http://www.fateyes.com/blog/
Does anyone know how to input descriptions for pages like these? Thanks!!
Technical SEO | gfiedel
-
Client error 404
I have got a lot (100+) of 404s. I got more last time, so I rearranged the whole site. I even changed it from .php to .html and went to the web host to delete all of the .php files from the main server. Still, after yesterday's crawl I got 404s on my (deleted) .php pages. There are also other links that show an error but aren't there. Maybe those pages existed before the site's remodelling, but I don't think so, because .html pages are also affected. How can this be happening?
Technical SEO | mato
-
How to fix duplicate page content error?
SEOmoz's Crawl Diagnostics is complaining about a duplicate page error. Examples of links with the duplicate page content error are:
http://www.equipnet.com/misc-spare-motors-and-pumps_listid_348855
http://www.equipnet.com/misc-spare-motors-and-pumps_listid_348852
These are not duplicate pages. Some values differ between the two pages, like the listing #, EquipNet tag #, and price. I am not sure how to highlight the things the two pages don't share, like the Equipment Tag # and listing #. Would the errors resolve if I used some style attribute to highlight such values on the page? Please help me with this, as I am not really sure why SEOmoz thinks that both pages have the same content. Thanks!!!
Technical SEO | RGEQUIPNET
-
Website of only circa 20 pages drawing 1,000s of errors?
Hi, One of the websites I run is getting 1,000s of errors for duplicate titles/content even though there are only approximately 20 pages. SEOmoz seems to be finding pages that have duplicated themselves. For example, a blog page (/blog) is appearing as /blog/blog, then /blog/blog/blog, and so on. Can anyone shed some light on why this is occurring? Thanks.
Technical SEO | TheCarnage
-
Why is 4XX (Client Error) shown for valid pages?
My Crawl Diagnostics Summary says I have 5,141 errors of the 4XX (Client Error) variety. Yet when I view the list of URLs, they all resolve to valid pages. Here is an example:
http://www.ryderfleetproducts.com/ryder/af/ryder/core/content/product/srm/key/ACO 3018/pn/Wiper-Blade-Winter-18-Each/erm/productDetail.do
These pages are all dynamically created from search or browse using a database where we offer 36,000 products. Can someone help me understand why these are errors?
Technical SEO | jimaycock
-
Crawl Errors In Webmaster Tools
Hi Guys, I've searched the web for an answer on the importance of crawl errors in Webmaster Tools but keep coming up with different answers. I have been working on a client's site for the last two months (and just completed one month of link building); however, it seems I have inherited issues I wasn't aware of from the previous guy that did the site. The site is currently at page 6 for the keyphrase 'boiler spares' with a keyword-rich domain and a good on-page plan. Over the last couple of weeks it has been as high as page 4, only to be pushed back to page 8 and now settled at page 6. The only issue I can seem to find with the site in Webmaster Tools is crawl errors; here are the stats:
In sitemaps: 123
Not found: 2,079
Restricted by robots.txt: 1
Unreachable: 2
I have read that ecommerce sites can often give off false negatives in terms of crawl errors from Google; however, these not-found crawl errors are being linked from pages within the site. How have others solved the issue of crawl errors on ecommerce sites? Could this be the reason for the bouncing around in the rankings, or is it just a competitive niche and I need to be patient? Kind Regards, Neil
Technical SEO | optimiz1