404 (not found) errors seem spurious, caused by a "\" added to the URL
-
Hi SEOmoz folks
We're getting a lot of 404 (not found) errors in our weekly crawl.
However the weird thing is that the URLs in question all have the same issue.
They are all valid URLs with a backslash ("\") appended; in URL encoding, this appears as an extra %5C at the end of the URL.
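(For illustration, a quick check with Python's standard library shows that a backslash and %5C are the same character, so a URL ending in either names the same resource; "page" here is just a placeholder path.)

```python
from urllib.parse import quote, unquote

# A backslash percent-encodes to %5C, so "page\" and "page%5C" name the
# same (nonexistent) resource -- and both differ from plain "page".
print(quote('\\'))         # backslash encodes to %5C
print(unquote('page%5C'))  # decodes back to page\
```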
Even weirder, we do not have any such URLs on our (WordPress-based) website.
Any insight on how to get rid of this issue?
Thanks
-
No, Google Webmaster Tools does not list an error here.
It's indeed an SEOmoz bug. Ryan, thanks for trying, though!
-
My request is for a real link that I can click on and view the page.
In most cases where someone has described an issue to me, a key piece of information was left out or overlooked. If you cannot share that information, I understand; I only ask in the interest of being helpful.
It is entirely possible this is a crawler issue, but it is also possible the crawler is functioning perfectly and Google's crawler would produce the same result. That is my concern.
-
Well, actually, I did already. The example I gave above is exactly that; I only replaced the real URL with "URL".
In a bit greater detail, the referring page is actually URL1 and this page contains the javascript
item = '<a href=\'URL2\'>text</a>';
which produces 404 errors for URL2 in the SEOmoz crawl report.
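(To illustrate the suspected mechanism, here is a sketch, not SEOmoz's actual code; the pattern and example URL are assumptions. An extractor that regex-scans raw page source for href values can swallow the backslash of an escaped quote into the URL it captures.)

```python
import re

# The plugin emits HTML inside a JavaScript string, with escaped quotes:
snippet = r"item = '<a href=\'http://example.com/archive\'>text</a>';"

# A naive extractor scanning raw source for href values captures lazily
# up to the next quote -- but the closing quote here is preceded by a
# backslash, which ends up inside the captured URL.
urls = re.findall(r"href=\\?'(.*?)'", snippet)
print(urls)  # the captured URL ends in a stray backslash
```

Crawling that captured URL (with its trailing backslash, i.e. %5C) would then return a 404.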
-
It is entirely possible the issue is with the SEOmoz crawler. I would like to see it improved as well.
I am concerned the root issue may actually be with your site. Would you be willing to share an example of a link which is flagged in your report along with the referring page?
-
Thanks for the tips. After drilling down on the referrer, this looks like an SEOmoz bug.
We are using a wordpress plugin called "collapsing archives" which creates LEGAL archive links with a javascript snippet like this:
item = '<a href=\'URL\'>text</a>';
As you can see, this is totally legal JavaScript. But it seems SEOmoz is scanning the JavaScript without interpreting it, picking up the backslash from the escaped quotation mark (\') after the URL, and treating it as an additional \ at the end of the URL.
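(A crawler that actually parses the HTML, rather than regex-scanning the raw source, never sees those JavaScript string literals as links. A minimal sketch with Python's standard-library parser; the page content is a made-up example.)

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from real <a> tags only."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.links.extend(v for k, v in attrs if k == 'href')

page = """
<script>
  item = '<a href=\\'http://example.com/archive\\'>2011</a>';
</script>
<a href="http://example.com/real-page">real link</a>
"""

p = LinkExtractor()
p.feed(page)
print(p.links)  # only the real link; the JS-embedded one is ignored
```

The parser treats everything inside `<script>` as raw character data, so the escaped quotes in the plugin's snippet never reach the link extractor.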
Since the plugin is behaving legally and works well - we want to keep using it. What's the chance that SEOmoz will fix the bug?
-
Many people do not realize that when you add the backslash character, you change the URL. You can actually serve a different web page for the URL with the trailing backslash.
A popular cause of the problem is linking. If you check your weekly crawl report, there will be a column called Referrer. That is the source of the link. Check the referring page and find the link. Fix the link (i.e. remove the trailing backslash) and the problem will go away on the next crawl. Of course, you want to determine how the link appeared and ensure it doesn't happen again.
-
If I had to guess, I'd look into any JavaScript on the page that is perhaps adding or pointing to the URL with the backslash.