404 issues
-
Hello,
Some time ago (about a month and a half) I removed all 404 errors from the Google index, and Webmaster Tools has already cleared them. However, yesterday Moz found the same 404 errors that I had removed from indexing (those pages were deleted or redirected by the site developer).
What could be the issue here, and why is Webmaster Tools not registering those 404 errors while Moz Analytics does?
The other question is: if those pages no longer exist, can I track where the links to them are placed? I tried downloading the Moz crawl test, but the referring source was not provided.
I would highly appreciate anyone's help.
Thank you
-
Hello!
I took a look at your sites, and we last crawled them in July, so we haven't discovered any new links to follow to your site to update our index. We only store indexed data for up to 80 days unless it is re-crawled. Using Raven Tools and others is a better way to look at your full link profile, while OSE looks for fresh, diverse, and important links.
Hope this helps!
-
Rikomuttik-
If you do, indeed, have inbound links from other sites pointing over to your site, this is a really good thing. This often happens when people want to link to an article, image or other page within the site.
I'd do a crawl in Open Site Explorer, or within Google's Webmaster Tools, and see if you can find any inbound links that point to non-existent pages, causing your system to return 404 errors.
Then, I'd use a 301 redirect to take the inbound link and direct it to the proper location. Or, if you really want, you can re-create the page at the older location, but just make sure that you're not creating duplicate content on the site.
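For what it's worth, a single-page 301 in Apache's .htaccess can be as simple as the line below (the paths and domain here are hypothetical placeholders, not your actual URLs):

```apache
# Send the old, link-attracting URL to its new home with a permanent redirect
Redirect 301 /old-page.html http://www.example.com/new-section/old-page.html
```

One line per moved page keeps the inbound link equity flowing to the new location instead of a 404.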
Hope this helps!
- Jeff
-
Hey Jeff,
Thank you for your answers.
I assume the issue might be point 3 from your suggestion list. Is there something I could do to find this automatically, or should I just go through all of the backlinks and somehow find the old links myself?
Or is there nothing I can really do about it, and should I just ignore it? Will it not hurt the website?
Thank you!
-
Rikomuttik-
There are a couple of reasons why 404 errors might be showing up again, even if you've fixed them in Google.
1. Moz might be using an older crawl that still lists pages that don't exist on the site.
2. If you're using soft 404 error handling (i.e. all 404 errors redirect to the home page), then Google might not see the errors, but perhaps Moz does?
3. It's possible that you have inbound links from other sites pointing to pages that throw 404 errors, and that's what is being seen by other search engines.
4. It's also possible that the 301 redirects you might have set up in your .htaccess file have been changed, or are no longer working?
5. It's also possible that an older page that used to be on the site was removed, was added back in, and that page has links that go to 404 pages?
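A quick way to check a few of these possibilities at once is a small script that fetches each suspect URL and labels the outcome. This is only a sketch: the `HOMEPAGE` value is a hypothetical placeholder, and the soft-404 heuristic (a missing page that quietly lands on the home page with a 200 status) is an assumption about how such setups typically behave.

```python
import urllib.request
import urllib.error

HOMEPAGE = "http://www.example.com/"  # hypothetical placeholder for your site root

def classify(requested_url, status, final_url):
    """Separate clean 200s, redirects, and likely soft 404s: a soft 404
    is a missing page that quietly lands on the home page with a 200."""
    same_as_home = final_url.rstrip("/") == HOMEPAGE.rstrip("/")
    asked_for_home = requested_url.rstrip("/") == HOMEPAGE.rstrip("/")
    if status == 200 and same_as_home and not asked_for_home:
        return "soft-404?"
    if final_url != requested_url:
        return "redirected"
    return "ok"

def check(url):
    """Fetch a URL (urlopen follows redirects automatically) and return
    a label such as 'ok', 'redirected', 'soft-404?', or 'http 404'."""
    try:
        resp = urllib.request.urlopen(url, timeout=10)
        return classify(url, resp.status, resp.geturl())
    except urllib.error.HTTPError as e:
        return "http %d" % e.code  # a real 404 surfaces here
    except urllib.error.URLError:
        return "unreachable"
```

Running `check()` over the URLs Moz flags would show whether they are real 404s (case 1), soft 404s (case 2), or broken redirects (case 4).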
Hope this helps?
-- Jeff
-
I really think that Moz lacks in this area. For finding things like this I use Raven Tools; they have a really good crawler that clearly shows all of the pages that reference an error. My thoughts would be that either you missed some links to the pages or the redirects are not working properly. I would suspect that if Moz finds them, Google or Bing will find them too.
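If you can export a list of old URL → new URL pairs (e.g. from your redirect rules), a small helper can verify each redirect still lands where you expect. This is a sketch under the assumption that you can supply such a mapping; the fetch step is injected so the checking logic stands on its own.

```python
import urllib.request
import urllib.error

def fetch(url):
    """Follow redirects and return the final URL, or None on failure."""
    try:
        return urllib.request.urlopen(url, timeout=10).geturl()
    except urllib.error.URLError:
        return None

def broken_redirects(mapping, fetch_fn=fetch):
    """Given {old_url: expected_new_url}, return the old URLs whose
    final destination doesn't match what the redirect should produce."""
    broken = {}
    for old, expected in mapping.items():
        final = fetch_fn(old)
        if final is None or final.rstrip("/") != expected.rstrip("/"):
            broken[old] = final
    return broken
```

Anything in the returned dict is a redirect that has been changed, removed, or is erroring out, which matches the "redirects are not working properly" suspicion above.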