Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies. More details here.
Moz-Specific 404 Errors Jumped with URLs that don't exist
-
Hello,
I'm going to try to be as specific as possible about this weird issue, but I'd rather not share specific details about the site unless you think they're pertinent.
So to summarize, we have a website that's owned by a company that is a division of another company. For reference, we'll say that:
OURSITE.com is owned by COMPANY1 which is owned by AGENCY1
This morning, we got about 7,000 new errors in Moz only (these errors do not appear in Search Console) for URLs with the company name or the agency name appended to the end of the URL.
So, let's say one post is: OURSITE.com/the-article/
This morning we have errors in Moz for the URLs
OURSITE.com/the-article/COMPANY1
OURSITE.com/the-article/AGENCY1
...and so on for the 7,000+ articles we have created. Every single post ever created is now an error in Moz because of these two URL additions, which seem to come out of nowhere.
These URLs are not in our sitemaps, they are not in Google... They simply don't exist, and yet Moz created an error for them. Unless they exist and I don't see them.
Obviously there's a link to each company and agency site on the site in the About Us section, but that's it.
-
Not a problem! It's great that Moz's crawler picked up on this issue, as it could have caused some problems over time if it were allowed to get out of control.
-
Just wanted to update quickly. The mistakes in the email links as well as the links to the two company sites proved to be the problem. After recrawling the sites, the 7,000+ errors are gone.
It's interesting because I was about to get very upset with Moz, thinking their bot had caused me half a day of headaches for nothing. It turned out they picked up an error, before any other system did, that would likely have done a lot of damage, given that the broken links were all contact links meant to improve transparency.
Hopefully, we caught and fixed the problem in time. In any case, thanks for your help, effectdigital.
-
A more common issue than you might think, and very likely the culprit.
-
I've just come upon something...
In an attempt three days ago to be more transparent (it's a news site), we added "send me an email" links to each author's bio as well as links to the Company and the Agency in the footer.
Except these links weren't inserted correctly in the footer, and half the authors didn't get the right links either.
So instead of being a "mailto:" link, it was just the email address; when you hovered over it, the link was the URL of the page with the author's email at the end... the same thing that's happening in the errors.
Same for the footer links. They weren't done correctly and were sending users to OURSITE.com/AGENCY1 instead of AGENCY1's website. I've made the changes and put in the correct links, and I have asked for a recrawl to see if that changes anything.
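This explains the pattern exactly: an href that is missing its "mailto:" (or "https://") scheme is treated as a relative path and resolved against the current page's URL. A minimal sketch with Python's standard library, using the placeholder names from this thread, shows how browsers and crawlers end up at the phantom URLs:

```python
from urllib.parse import urljoin

page = "https://OURSITE.com/the-article/"

# A correct mailto link carries its own scheme and is returned unchanged:
print(urljoin(page, "mailto:author@oursite.com"))  # mailto:author@oursite.com

# But a bare email address or company name in an href has no scheme, so it
# resolves as a path relative to the current page -- producing exactly the
# URLs that showed up as 404s:
print(urljoin(page, "author@oursite.com"))  # https://OURSITE.com/the-article/author@oursite.com
print(urljoin(page, "COMPANY1"))            # https://OURSITE.com/the-article/COMPANY1
```

One such malformed link in a sitewide template (footer or author bio) is enough to generate one broken URL per article, which matches the 7,000+ count.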
-
At this point that doesn't really matter; the main thing is to analyse the referrer URL to see if there genuinely are any hidden malformed links.
-
It is assuredly very weird; we just have to determine whether Rogerbot has gone crazy in this summer heat or something went wrong with your link architecture somehow.
-
Yeah, that tells you to look at the referring URL. To track down the malformed link pointing to the error URL, look in that page's code.
-
Another update here...
I've checked about 50 of these errors, and they all report the same stats for the problem URL page: 307 words, 22 Page Authority.
I don't know if it matters, just putting it out there.
-
True, but it's as if something is creating faux URLs from each current article, adding company names and emails to the end of the URL... It's very weird.
-
The referring URL in this case is the original URL without the element added to the permalink.
So
URL: OURSITE.com/the-article/COMPANY1
Referring URL: OURSITE.com/the-article/
Does that give any more info?
-
No need to freak out, though. As you say, they're "author@oursite.com" addresses, implying business emails (not personal emails), so you shouldn't have to worry about a data breach or anything. It is annoying, though.
-
The columns you want are "URL" and "Referring URL", I believe. "URL" should be the 404 pages; "Referring URL" would be the pages that could potentially be creating your problems.
-
UPDATE HERE:
I've just noticed that it is also adding the email of the author to the URL and creating an error with that as well.
So, there are three types of errors per post:
OURSITE.com/the-article/COMPANY1
OURSITE.com/the-article/AGENCY1
OURSITE.com/the-article/author@oursite.com
-
Do you mean downloading the CSV of the issue? I tried that, and it gives me the following columns:
Issue Type, Status, Affected Pages, Issue Grouping Identifier, URL, Referring URL, Redirect Location, Status Code, Page Speed, Title, Meta Description, Page Authority, URL Length, Title Pixel Length, Title Character Count.
Which isn't really useful, as it only describes the 404 page itself.
I'm new to Moz, is there a direct line to an in-house resource that could tell us if it's a Rogerbot issue?
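That export actually does contain what you need: the "Referring URL" column. A quick sketch of how you could tally the export to see which pages the broken links live on (the filename and the two-row sample are assumptions for illustration; the column names are the ones from the header row above):

```python
import csv
from collections import Counter

# Build a tiny sample in the shape of the Moz issue export, so the sketch
# is self-contained; in practice you'd open the real downloaded CSV.
with open("moz_404_export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Issue Type", "Status Code", "URL", "Referring URL"])
    writer.writerow(["4xx", "404", "https://OURSITE.com/the-article/COMPANY1", "https://OURSITE.com/the-article/"])
    writer.writerow(["4xx", "404", "https://OURSITE.com/the-article/AGENCY1", "https://OURSITE.com/the-article/"])

# Count how many broken URLs each referring page is responsible for.
# Referrers that appear for every single article point at a sitewide
# template (a footer or author-bio link) rather than one bad post.
referrers = Counter()
with open("moz_404_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        referrers[row["Referring URL"]] += 1

for page, count in referrers.most_common(10):
    print(count, page)
```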
-
If you can export the data from Moz and it contains both a link source (the page the link is on) and a link target (the broken URLs being created), then you might be able to isolate more easily whether it's you or Rogerbot. If the Moz UI doesn't give you that data, you'll have to ask a staff member whether it's possible to get it; they will likely pick this thread up and direct you to email (perfectly normal).
-
Thanks for the feedback. You're right about the 404 part, I should have phrased it differently. As you figured out, I meant that we are getting 404s for URLs that were never intended to exist and that we don't know how/why they are there.
We are investigating part 1, but my hope is that it is part 2.
Thanks again for taking the time to respond.
-
404s are by definition for pages that 'don't exist', so that part is normal. This is either:
-
somewhere on your site, links are being malformed, leading to these duff pages (which may be happening invisibly, unless you look deep into the base / modified source code), and Google simply hasn't picked up on the error yet
-
something is wrong with Rogerbot and he's compiling hyperlinks incorrectly, thus running off to thousands of URLs that don't exist
At this juncture it could be either one, I am sure someone from Moz will be able to help you further
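One way to settle which of the two it is: pull the HTML source of one of the referring pages and check whether the malformed href is genuinely written there, since that would rule out a Rogerbot bug. A rough sketch with the standard-library parser (the HTML snippet is hypothetical; in practice you'd fetch the referring page and feed in its markup):

```python
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect every href exactly as it is written in the page source."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.hrefs.append(value)

# Hypothetical fragment of the referring page's source:
html = '<p>Contact: <a href="author@oursite.com">author@oursite.com</a></p>'

parser = HrefCollector()
parser.feed(html)

# Any href without a scheme ("mailto:", "https://") and without a leading
# "/" is a relative link that crawlers will resolve against the current
# page -- exactly the kind that creates these phantom 404 URLs:
suspicious = [h for h in parser.hrefs if ":" not in h and not h.startswith("/")]
print(suspicious)  # ['author@oursite.com']
```

If the suspicious href shows up in the source, it's your link architecture; if the source is clean, it's time to raise it with Moz support.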
-
Related Questions
-
Unsolved: Does Moz Pro include Moz Local?
My client has bought about six Moz Local accounts and is pleased with the results. We have not yet used your Moz Pro program. The client might be interested in switching to Moz Pro if those Moz Local accounts can be included in it. Please let me know as soon as possible. Thanks!
Moz Pro | gallowaywebteam
-
Pages with Duplicate Content Error
Hello, a duplicate content issue appeared in the scan results for my Shopify store, but these products are unique. Why am I getting this error? Can anyone please help explain why? screenshot-analytics.moz.com-2021.10.28-19_53_09.png
Moz Pro | gokimedia
-
How to overcome Connection Timeout Status Error?
My website contains 110+ pages, of which 70 return CONNECTION TIMEOUT when checked in Screaming Frog. Can someone help me get this solved? My website: Sanctum Consulting home page.
Moz Pro | Manifeat9
-
Pages with Temporary Redirects on pages that don't exist!
Hi there. Another obvious question to some, I hope. I ran my first report using the Moz crawler, and I have a bunch of pages with temporary redirects showing up as a medium-level issue. Trouble is, the pages don't exist, so they are being redirected to my custom 404 page. For example, I have a URL in the report being called up from lord only knows where: www.domain.com/pdf/home.aspx. This doesn't exist; I have only one home.aspx page and it's in the root directory! But it gives a temporary redirect to my 404 page, as I would expect, and that then leads to a Moz error as outlined. So basically you could make up any random URL and it would give this error, and I am trying to work out how to deal with it before Google starts to notice, or before a competitor starts throwing all kinds of requests at my site to generate these errors. Any steering on this would be much appreciated!
Moz Pro | Raptor-crew
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block crawling of only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
-
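As a rough sanity check of what that wildcard rule is intended to match: major crawlers treat "*" in a robots.txt path as matching any sequence of characters, and the rule also matches as a prefix. Python's fnmatch gives only an approximation of those semantics (it doesn't handle the "$" end anchor, for instance), and the URLs below are made up, but it illustrates the matching:

```python
import fnmatch

# The rule from the robots.txt above; appending "*" approximates the fact
# that a robots.txt path rule also matches any URL that merely starts with
# (or contains) the pattern.
rule = "/*numberOfStars=0"
urls = [
    "/hotels?numberOfStars=0",
    "/hotels?sort=price&numberOfStars=0",
    "/hotels?numberOfStars=3",
    "/hotels",
]

# fnmatchcase is used to keep the comparison case-sensitive on all platforms.
blocked = [u for u in urls if fnmatch.fnmatchcase(u, rule + "*")]
print(blocked)  # ['/hotels?numberOfStars=0', '/hotels?sort=price&numberOfStars=0']
```

So the rule would catch any URL containing "numberOfStars=0" while leaving parameterless URLs (and other parameter values) intact, which is the behaviour the question is after.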
How you can manipulate your Moz DA
I have become frustrated with Moz in the last few months: none of my backlinks, even old ones, have made it into the index. Long story short, I figured out the issue, and I figured out how anyone can manipulate their DA. I wrote a blog post about it here: http://blog.dh42.com/manipulate-moz/
Moz Pro | LesleyPaone
-
My page has a 302 redirect and I don't know how to get rid of it!
I used the Internet Officer tool to see the 302 redirect, but I checked the redirects in cPanel and there are none. In the .htaccess there are none either. I don't know where else to look 😞 The URL is http://servicioshosting.com. Can you guys help me? I can't set up a campaign because Google can't crawl the website, and I can't set up the Facebook OpenGraph because of the redirect. error.jpg
Moz Pro | vanessacolina
-
Error 403
I'm getting the message "We were unable to grade that page. We received a response code of 403. URL content not parseable" when using the On-Page Report Card. Does anyone know how to go about fixing this? I feel like I've tried everything.
Moz Pro | Sean_McDonnell