4xx (not found) errors seem spurious, caused by a "\" added to the URL
-
Hi SEOmoz folks
We're getting a lot of 404 (not found) errors in our weekly crawl.
However the weird thing is that the URLs in question all have the same issue.
They are all a valid URL with a backslash ("\") appended. In URL encoding, this is an extra %5C at the end of the URL.
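For reference, a quick browser-console check of how that encoding comes about (example.com is just a placeholder URL):
// A literal backslash URL-encodes to %5C, so "some-page/\" becomes "some-page/%5C".
console.log(encodeURIComponent('\\'));                      // "%5C"
console.log(encodeURI('http://example.com/some-page/\\'));  // "http://example.com/some-page/%5C"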
Even weirder, we do not have any such URLs in our (WordPress-based) website.
Any insight on how to get rid of this issue?
Thanks
-
No, Google Webmaster Tools does not list an error here.
It is indeed an SEOmoz bug. Ryan, thanks for trying though!
-
My request is for a real link that I can click on to view the page.
In most cases where someone has described an issue to me, a key piece of information was either left out or overlooked. If you cannot share that information, I understand; in the interest of being helpful, I wanted to ask.
It is entirely possible this is a crawler issue, but it is also possible the crawler is functioning perfectly and Google's crawler will produce the same result. That is my concern.
-
Well, actually, I did already. The example I gave above is exactly that, only I replaced the real URL with "URL".
In a bit greater detail, the referring page is actually URL1, and this page contains JavaScript along the lines of
item = '<li><a href=\'URL2\'>text</a></li>';
which produces 404 errors for URL2 in the SEOmoz crawl report.
-
It is entirely possible the issue is with the SEOmoz crawler. I would like to see it improved as well.
I am concerned the root issue may actually be with your site. Would you be willing to share an example of a link which is flagged in your report along with the referring page?
-
Thanks for the tips. After drilling down on the referrer, this looks like an SEOmoz bug.
We are using a WordPress plugin called "collapsing archives" which creates LEGAL archive links with a JavaScript snippet like this:
item = '<li><a href=\'URL\'>text</a></li>';
As you can see, this is totally legal JavaScript. But it seems SEOmoz is scanning the JavaScript without interpreting it, picking up the backslash from the escaped quotation mark (\') after the URL, and treating it as an extra \ at the end of the URL.
Since the plugin is behaving legally and works well - we want to keep using it. What's the chance that SEOmoz will fix the bug?
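A rough way to reproduce what seems to be happening (only a sketch of a raw-text scan, not SEOmoz's actual crawler code; "URL" stands in for the real archive link):
// The plugin's escaped single quotes are perfectly valid JavaScript...
const source = "item = '<li><a href=\\'URL\\'>text</a></li>';";
// ...but a scanner that reads the raw script text instead of executing it can
// grab everything between href= and the next quote, keeping the escaping backslash:
const match = source.match(/href=\\?'([^']*)'/);
console.log(match[1]);            // "URL\"  -- the backslash that escapes the quote
console.log(encodeURI(match[1])); // "URL%5C" -- the 404 URL shown in the crawl report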
-
Many people do not realize that when you add the backslash character, you change the URL. A server can actually present a different web page for the URL with the trailing backslash.
A common cause of the problem is linking. If you check your weekly crawl report, there will be a column called Referrer. That is the source of the link. Check the referring page and find the link. Fix the link (i.e. remove the trailing backslash) and the problem will go away on the next crawl. Of course, you want to determine how the link appeared in the first place and ensure it doesn't happen again.
-
If I had to guess, I'd look into any JavaScript on the page that might be adding or pointing to the URL with the backslash.
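If it helps, here is a rough console check (a heuristic sketch, not a thorough audit) for spotting inline-script href values on the referring page that end in a stray backslash:
// Run in the browser console on the referring page: lists href values inside
// inline <script> blocks that end with a backslash.
Array.from(document.scripts)
  .filter(s => !s.src)  // inline scripts only
  .forEach(s => {
    const hits = s.textContent.match(/href=\\?["'][^"']*\\(?=["'])/g) || [];
    hits.forEach(h => console.log(h));
  });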