Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Moz-Specific 404 Errors Jumped with URLs that don't exist
-
Hello,
I'm going to try to be as specific as possible about this weird issue, but I'd rather not share specific info about the site unless you think it's pertinent.
So to summarize, we have a website that's owned by a company that is a division of another company. For reference, we'll say that:
OURSITE.com is owned by COMPANY1 which is owned by AGENCY1
This morning, we got about 7,000 new errors in MOZ only (these errors are not in Search Console) for URLs with the company name or the agency name at the end of the url.
So, let's say one post is: OURSITE.com/the-article/
This morning we have an error in MOZ for URLs
OURSITE.com/the-article/COMPANY1
OURSITE.com/the-article/AGENCY1
x 7,000+ articles we have created. Every single post ever created is now an error in MOZ because of these two URL additions that seem to come out of nowhere.
These URLs are not in our sitemaps, and they are not in Google... They simply don't exist, and yet MOZ created an error for them. Unless they exist and I don't see them.
Obviously there's a link to each company and agency site on the site in the about us section, but that's it.
-
Not a problem! It's great that Moz's crawler picked up on this issue, as it could have caused some problems over time if it were allowed to get out of control.
-
Just wanted to update quickly. The mistakes in the email links as well as the links to the two company sites proved to be the problem. After recrawling the sites, the 7,000+ errors are gone.
It's interesting because I was about to get very upset with Moz, thinking their bot had caused me half a day of headaches for nothing. It turned out they picked up an error, before any other system did, that would likely have done a lot of damage, given that the broken links were all contact links meant to improve transparency.
Hopefully, we caught and fixed the problem in time. In any case, thanks for your help effectdigital.
-
A more common issue than you might think and strongly likely to be a culprit
-
I've just come upon something...
In an attempt three days ago to be more transparent (it's a news site), we added "send me an email" links to each author's bio as well as links to the Company and the Agency in the footer.
Except these links weren't inserted correctly in the footer, and half the authors didn't get the right links either.
So instead of being a "mailto:" link, it was just the email address; when you hovered over it, you saw the URL of the page with the author's email at the end... the same thing that's happening in the errors.
Same for the footer links. They weren't done correctly and were sending users to OURSITE.com/AGENCY1 instead of AGENCY1's website. I've made the changes and put in the correct links, and I have asked for a recrawl to see if that changes anything.
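For anyone hitting the same thing: the behaviour described above is just standard relative-URL resolution, which can be sketched with Python's urllib.parse (the lowercase oursite.com URLs here are made up from this thread's placeholders, not real sites):

```python
from urllib.parse import urljoin

page = "https://oursite.com/the-article/"

# An href missing its "mailto:" scheme is treated as a relative path,
# so a browser or crawler resolves it against the current page's URL.
broken_email = urljoin(page, "author@oursite.com")
print(broken_email)  # https://oursite.com/the-article/author@oursite.com

# A footer link written as just "AGENCY1" instead of the agency's full
# "https://..." address resolves the same way, producing a URL that 404s.
broken_footer = urljoin(page, "AGENCY1")
print(broken_footer)  # https://oursite.com/the-article/AGENCY1
```

Since the footer and author bios appear on every article page, every article produces its own batch of phantom URLs, which matches the "x 7,000+" multiplication described earlier in the thread.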
-
At this point that doesn't really matter; the main thing is to analyse the referrer URL to see if there genuinely are any hidden malformed links.
-
It is assuredly very weird; we just have to determine whether Rogerbot has gone crazy in this summer heat or something went wrong with your link architecture somehow.
-
Yeah, that tells you to look at the referring URL. To track down a malformed link to the error URL, look in the source code of the referring page.
-
Another update here...
I've checked about 50 of these errors and they all say the same stats about the problem URL page.
307 words, 22 Page Authority.
I don't know if it matters, just putting it out there.
-
True, but it's as if something is creating faux URLs of a current article. Adding company names and emails to the end of the URL... It's very weird.
-
The referring URL in this case is the original URL without the added element in the permalink.
So
URL: OURSITE.com/the-article/COMPANY1
Referring URL: OURSITE.com/the-article/
Does that give any more info?
-
No need to freak out, though: as you say, they're "author@oursite.com" addresses, implying they are business emails (not personal emails), so you shouldn't have to worry about a data breach or anything. It is annoying, though.
-
The ones you want are... URL and Referring URL, I believe. "URL" should be the 404 pages; "Referring URL" would be the pages that could potentially be creating your problems.
-
UPDATE HERE:
I've just noticed that it is also adding the email of the author to the URL and creating an error with that as well.
So, there are three types of errors per post:
OURSITE.com/the-article/COMPANY1
OURSITE.com/the-article/AGENCY1
OURSITE.com/the-article/author@oursite.com
-
Do you mean downloading the CSV of the issue? I tried that and it gives me the following:
Issue Type,Status,Affected Pages,Issue Grouping Identifier,URL,Referring URL,Redirect Location,Status Code,Page Speed,Title,Meta Description,Page Authority,URL Length,Title Pixel Length,Title Character Count.
Which isn't really useful as it relates to the 404 page.
I'm new to Moz, is there a direct line to an in-house resource that could tell us if it's a Rogerbot issue?
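One way to make that export more useful (a sketch only, assuming the column names listed in the header above; the file path is hypothetical) is to count how many broken URLs each Referring URL produces, so you can see which pages are generating the errors:

```python
import csv
from collections import Counter

def summarize_404s(path):
    """Count how many broken URLs each referring page produces,
    using the "URL" and "Referring URL" columns of the Moz CSV export."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["Referring URL"]] += 1
    # Most prolific referring pages first
    return counts.most_common()
```

For the pattern in this thread, you would expect every article page to show the same small count (one per malformed link on it), which points to a sitewide template rather than Rogerbot inventing URLs.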
-
If you can export the data from Moz and it contains both a link source (the page the link is on) and a link target (the created broken URLs), then you might be able to isolate more easily whether it's you or Rogerbot. If the Moz UI doesn't give you that data, you'll have to ask a staff member whether it's at all possible to get it; they will likely pick this up and direct you to email (perfectly normal).
-
Thanks for the feedback. You're right about the 404 part, I should have phrased it differently. As you figured out, I meant that we are getting 404s for URLs that were never intended to exist and that we don't know how/why they are there.
We are investigating part 1, but my hope is that it is part 2.
Thanks again for taking the time to respond.
-
404s are usually for pages that 'don't exist', so that's pretty usual. This is either:
- somewhere on your site, links are being malformed, leading to these duff pages (which may be happening invisibly, unless you look deep into the base / modified source code), and Google simply hasn't picked up on the error yet
- something is wrong with Rogerbot and he's compiling hyperlinks incorrectly, thus running off to thousands of URLs that don't exist
At this juncture it could be either one. I am sure someone from Moz will be able to help you further
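If you want to check the first possibility yourself before Moz support replies, a rough sketch using Python's built-in html.parser can flag suspicious hrefs: bare email addresses missing their mailto: scheme, or bare words missing a scheme and leading slash. The heuristic is crude and will also flag some legitimate relative links, so treat hits as candidates for manual review, not confirmed bugs:

```python
from html.parser import HTMLParser

class MalformedLinkFinder(HTMLParser):
    """Collect href values that look like a bare email address
    (missing "mailto:") or a bare word (missing "https://" or "/")."""
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        looks_like_email = "@" in href and not href.startswith("mailto:")
        # Crude: a word with no dot, slash, hash, or scheme is suspicious
        looks_bare = "." not in href and not href.startswith(("/", "#", "http"))
        if href and (looks_like_email or looks_bare):
            self.suspects.append(href)

finder = MalformedLinkFinder()
finder.feed('<a href="author@oursite.com">Email me</a> <a href="AGENCY1">Agency</a>')
print(finder.suspects)  # ['author@oursite.com', 'AGENCY1']
```

Run it over the fetched HTML of a few referring URLs from the Moz report; if the suspects list matches the phantom URL endings, the problem is in the page templates, not in Rogerbot.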
-