Weird client errors . . .
-
SEOmoz is reporting a number of weird client errors. The 404 links all look like the following:
http://www.bluelinkerp.com/http%3A/www.bluelinkerp.com/corporate/cases/Nella.asp
What might be causing these weird links to be picked up? I couldn't find any way within the SEOmoz interface to track down the source of these links . . .
-
You can give it a try (up to 3000 URLs) at http://pro.seomoz.org/tools/crawl-test
-
I think I have found and resolved the issue with the links. Waiting for another crawl to see if the errors disappear...
-
Hi David,
If you download the CSV, you'll be able to see the referring link. My guess is that there's a badly-formed internal link in your site.
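That badly-formed-link guess fits the URL pattern exactly. If a href's colon was percent-encoded (e.g. `href="http%3A/www.bluelinkerp.com/..."`), browsers and crawlers see no scheme at all and resolve it as a relative path against the current page. A minimal sketch with Python's `urljoin`, using a hypothetical referring page:

```python
from urllib.parse import urljoin

# Hypothetical referring page; the real one is whatever page the CSV lists.
page = "http://www.bluelinkerp.com/somepage.asp"

# A href whose colon was percent-encoded has no scheme, so it resolves
# as a relative path against the current page instead of as an absolute URL.
bad_href = "http%3A/www.bluelinkerp.com/corporate/cases/Nella.asp"

print(urljoin(page, bad_href))
# → http://www.bluelinkerp.com/http%3A/www.bluelinkerp.com/corporate/cases/Nella.asp
```

The result matches the reported 404 URL, which is why the referring-link column in the CSV is the fastest way to find the broken anchor.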
Related Questions
-
Pages with Duplicate Content Error
Hello, duplicate content warnings appeared in the crawl results for my Shopify store, but these products are unique. Why am I getting this error? Can anyone please help explain why? (Screenshot attached.)
Moz Pro | gokimedia
-
50,000 4xx errors listed in Moz report :(
Hi, I've just had a look at a customer's Moz report and discovered nearly 50,000 errors listed!! The site is a non-dynamic, non-database-driven site, so where these have come from is beyond me. Example (both links open in a new window):
LIVE PAGE - http://lunnonwaste.com/licenses/Heatherland-Limited-H&S-Policy-Document-JAN-15.pdf
NON-EXISTENT PAGE - http://lunnonwaste.com/licenses/licenses/Heatherland-Limited-H&S-Policy-Document-JAN-15.pdf
If you hover over any links you'll see they're ALL .../licenses/licenses/..., which doesn't exist! The file path seems to be the problem: the /licenses/ segment keeps getting added on (.../licenses/licenses/licenses/licenses/...), up to the 50,000 page errors LOL. The problem is self-replicating. It's very odd and not something I've ever seen before. NONE of these 'extra' pages exist on the server, so where are they coming from? Any suggestions or help would be gratefully appreciated.
Moz Pro | Skips
-
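This self-replicating pattern is the classic symptom of a relative href (no leading slash, e.g. `href="licenses/file.pdf"`) used on pages that already live under /licenses/: each resolution prepends the current directory again. A sketch of the resolution, assuming the link is written relatively:

```python
from urllib.parse import urljoin

# A relative href (no leading slash) on a page under /licenses/ resolves
# against the current directory, adding another /licenses/ segment each time.
page = "http://lunnonwaste.com/licenses/"
bad_href = "licenses/Heatherland-Limited-H&S-Policy-Document-JAN-15.pdf"

one_level = urljoin(page, bad_href)
print(one_level)   # .../licenses/licenses/Heatherland-...pdf

# Crawling the resulting URL and resolving the same href repeats the step:
two_levels = urljoin(one_level, bad_href)
print(two_levels)  # .../licenses/licenses/licenses/Heatherland-...pdf
```

If the server answers those nested URLs with anything other than a 404 (or the page template always renders the same relative link), the crawler keeps following them, which is how the count snowballs to 50,000.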
Multiple Countries, Same Language: Receiving Duplicate Page & Content Errors
Hello! I have a site that serves three English-speaking countries, using subfolders for each country version:
United Kingdom: https://site.com/uk/
Canada: https://site.com/ca/
United States & other English-speaking countries: https://site.com/en/
The site displayed depends on where the user is located, and users can also change the country version by using a drop-down flag navigation element in the navigation bar. If a user switches versions using the flag, the first URL of the new language version includes a language parameter, like: https://site.com/uk/blog?language=en-gb
In the Moz crawl diagnostics report, this site is getting dinged for lots of duplicate content because the crawler is finding both versions of each country's site, with and without the language parameter. However, the site has rel="canonical" tags set up on both URL versions, and none of the URLs containing the "?language=" parameter are getting indexed. So... my questions:
1. Are the Duplicate Title and Content errors found by the Moz crawl diagnostic really an issue?
2. If they are, how can I best clean this up?
Additional notes: the site currently has no sitemaps (XML or HTML), and is not yet using the hreflang tag. I intend to create sitemaps for each country version, like:
.com/en/sitemap.xml
.com/ca/sitemap.xml
.com/uk/sitemap.xml
I thought about putting a 'nofollow' tag on the flag navigation element, but since no sitemaps are in place I didn't want to accidentally cut off crawler access to alternate versions. Thanks for your help!
Moz Pro | Allie_Williams
-
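On question 2: once those per-country sitemaps exist, they are also a natural place for the missing hreflang annotations, which tell search engines the three subfolders are alternates rather than duplicates. A minimal sketch using the placeholder site.com URLs from the question (note the `xhtml` namespace declaration on `<urlset>` is required; each country's entry should list all three alternates the same way):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://site.com/en/</loc>
    <xhtml:link rel="alternate" hreflang="en-gb" href="https://site.com/uk/"/>
    <xhtml:link rel="alternate" hreflang="en-ca" href="https://site.com/ca/"/>
    <xhtml:link rel="alternate" hreflang="en" href="https://site.com/en/"/>
  </url>
</urlset>
```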
Rogerbot crawls my site and causes errors as it uses URLs that don't exist
Whenever rogerbot comes back to my site for a crawl, it seems to want to crawl URLs that don't exist and thus causes errors to be reported. Example: the correct URL is as follows:
/vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/
But it seems to want to crawl the following:
/vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/?id=10330
This format doesn't exist anywhere and never has, so I have no idea where it's getting this URL format from. The user agent details I get are as follows:
IP ADDRESS: 107.22.107.114
USER AGENT: rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, rogerbot-crawler+pr1-crawler-17@moz.com)
Moz Pro | spiralsites
-
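If the `?id=` variants really never serve unique content, one way to keep the crawler away from them is a robots.txt rule. A sketch, assuming the crawler honors wildcard patterns (as rogerbot and most major bots do):

```
# Hypothetical robots.txt sketch: block crawling of the ?id= URL variants
User-agent: rogerbot
Disallow: /*?id=
```

Alternatively, a rel="canonical" tag on the parameter pages pointing at the clean trailing-slash URL achieves a similar result without blocking the crawl.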
Can someone explain the attached on page error message?
Two weeks ago, I took down our mobile site and switched to one on DudaMobile. However, for some reason, I keep getting this error/improvement message on my on-page report. I don't see how I can have a canonical problem with a page that is no longer there. Can someone explain that, and what I can do about it? Thanks! (Screenshot attached.)
Moz Pro | KempRugeLawGroup
-
4XX (Client Error) Report - Referrer URL Feature Request
Why not allow me to click on the individual 404 errors in the 4XX (Client Error) report and actually see where the broken links are coming from (as a hyperlinked URL), so I can locate, click, and fix them? Providing the referring URL with a hyperlink would stop me from having to jump between the report and searching in Site Explorer, or downloading the complete error report and sorting it.
Moz Pro | WaterGuy
-
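Until something like that ships, the same lookup can be scripted against the downloaded report. A sketch, assuming hypothetical column names ("URL", "Status Code", "Referrer") — the real export's headers may differ:

```python
import csv
import io

# Stand-in for the downloaded crawl export; column names are assumptions.
sample_csv = """URL,Status Code,Referrer
http://example.com/good,200,http://example.com/
http://example.com/missing,404,http://example.com/page-a
http://example.com/gone,410,http://example.com/page-b
"""

def broken_links_with_referrers(csv_text):
    """Return (url, referrer) pairs for every 4xx row in the export."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [(r["URL"], r["Referrer"])
            for r in rows if r["Status Code"].startswith("4")]

for url, ref in broken_links_with_referrers(sample_csv):
    print(f"{url} is linked from {ref}")
```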
Joined yesterday, today crawl errors (incorrectly) shows as zero...
Hi. We set up our SEOmoz account yesterday, and the initial crawl showed up a number of errors and warnings which we were in the process of looking at and resolving. I log into SEOmoz today and it's showing:
Pages Crawled: 0 | Limit: 10,000
Last Crawl Completed: Nov. 27th, 2012
Next Crawl Starts: Dec. 4th, 2012
Errors, warnings, and notices all show as 0, and the issues found yesterday show only in the change indicators. Is there no way of getting to the results seen yesterday other than waiting a week? We were hoping to continue working through the found issues!
Moz Pro | WorldText
-
Crawl Diagnostics bringing 20k+ errors as duplicate content due to session ids
Signed up to the trial version of SEOmoz today just to check it out, as I have decided I'm going to do my own SEO rather than outsource it (been let down a few times!). So far I like the look of things and have a feeling I am going to learn a lot and get results. However, I have just stumbled on something. After SEOmoz does its crawl diagnostics run on the site (www.deviltronics.com), it is showing 20,000+ errors. From what I can see, almost 99% of these are duplicate content errors due to session IDs, so I am not sure what to do! I have done a "site:www.deviltronics.com" on Google and this certainly doesn't pick up the session IDs/duplicate content. So could this just be an issue with the SEOmoz bot? If so, how can I get SEOmoz to ignore these on the crawl? Can I get my developer to add some code somewhere? Help will be much appreciated. Asif
Moz Pro | blagger
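The crawler sees duplicates because the same page is reachable under many session-id URLs; normalizing those URLs collapses the set. A sketch, assuming a hypothetical parameter name "sessionid" (the usual production fix is a rel="canonical" tag on each page, which tells crawlers to collapse the variants for you):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_session_id(url, param="sessionid"):
    """Drop the session-id query parameter so duplicate URLs collapse."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunsplit(parts._replace(query=urlencode(query)))

# "sessionid" is illustrative; use whatever parameter the cart software emits.
print(strip_session_id("http://www.deviltronics.com/product?sessionid=abc123"))
```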