404 Errors - How to get rid of them?
-
Hi,
I am starting an SEO job on an academic site that has been completely redone.
The SEOmoz crawl detected three 404 errors pointing to pages that cannot be found anywhere in Joomla or on the server.
What can I do to solve this?
Thanks!!
-
Thanks a lot for answering. This was very helpful.
M.
-
I totally agree; that's the best way forward. By the way, well done on having only three 404 errors after redoing a whole website!
-
Export the SEOmoz crawl report to Excel. The "referrer" field will show the URL of the web page which contains the bad link. Locate the link and either fix it or remove it. Those three 404 errors will then be resolved.
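The export-and-filter step above can be sketched in a few lines. This is a hypothetical example: the column names (`URL`, `Status Code`, `Referrer`) and the sample rows are assumptions, so match them to the actual headers in your exported CSV.

```python
import csv
import io

# Hypothetical sample of a Moz crawl export -- the column names and
# rows here are made up for illustration; adjust to your real file.
crawl_csv = """URL,Status Code,Referrer
https://example.edu/old-syllabus,404,https://example.edu/courses
https://example.edu/faculty,200,https://example.edu/
https://example.edu/2010-archive,404,https://example.edu/news
"""

def broken_links(report):
    """Return (broken_url, referrer) pairs for every 404 in the export."""
    reader = csv.DictReader(io.StringIO(report))
    return [(row["URL"], row["Referrer"])
            for row in reader
            if row["Status Code"] == "404"]

for url, ref in broken_links(crawl_csv):
    print(f"fix or remove the link to {url} found on {ref}")
```

With a real export you would read the file with `open(...)` instead of the inline string; the referrer column tells you exactly which page to edit.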
-
Hi Michal,
It might sound a bit odd, but if only three pages are marked as 404 errors, don't spend too much time on it. If you can't find them yourself, your users probably won't find them either, and Google won't treat this as a problem. We had 404 errors ourselves that couldn't be traced, and in my opinion they made no difference.
A good source for finding the pages that link to your current 404 URLs is, in my opinion, the Crawl Errors report in Google Webmaster Tools. If Google has found the same three pages, it will show you the "Linked from" pages.
Hope this helps!
Related Questions
-
Search Console - Mobile Usability Errors
A site I'm looking at for a client had hundreds of pages flagged with Mobile Usability errors in Search Console. I found that the theme uses parameters in the URLs of some theme resources (.js/.css) to identify version strings. These were being blocked by a rule in the robots.txt: "Disallow: /*?" I've removed this rule, and when I inspect URLs and test the live versions of the pages, they are now reported as mobile friendly. I then submitted validation requests in Search Console for both errors ("Text too small" and "Clickable elements too close"). My problem now is that the validation has completed and the pages are still being reported as having the errors. I've double-checked, and they're fine if I inspect them individually. Does anyone else have experience clearing these issues in Search Console? Any idea what's going on here?
Technical SEO | | DougRoberts1 -
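The blocking described in that question typically looks like the following. This is a hypothetical minimal robots.txt; the before/after rules are assumptions reconstructed from the description, not the site's actual file.

```text
# Before: blocks every URL containing "?", including versioned
# theme assets such as /assets/theme.css?ver=1.2
User-agent: *
Disallow: /*?

# After: the blanket rule is removed so crawlers can fetch the
# .js/.css files needed to render and evaluate mobile usability
User-agent: *
Disallow:
```

Blocking CSS and JavaScript prevents Googlebot from rendering the page the way a mobile browser would, which is why the usability errors appear even though the pages look fine to users.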
Once on https should Moz still be picking up errors on http
Hello, Should Moz still be picking up http errors if the site is on https? Or has the https migration not been done properly? I'm getting duplicate errors, among other things. Cheers, Ruth
Technical SEO | | Ruth-birdcage1 -
When rogerbot tries to crawl my site it gets a 404. Why?
When rogerbot tries to crawl my site, it requests http://website.com. My site then tries to redirect to http://www.website.com but throws a 404 instead, so the site ends up not getting crawled. It also throws a 404 when rogerbot tries to read my robots.txt file, for some reason. We allow the rogerbot user agent, so I'm unsure what's happening here. Is something going wrong when the site is accessed without the 'www' that causes the 404? Any insight is helpful here. Thanks,
Technical SEO | | BlakeBooth0 -
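A correctly configured non-www to www redirect answers with a 301, never a 404. A minimal sketch for Apache with mod_rewrite, assuming www is the canonical host and the server actually answers for the bare domain (both assumptions, since the question doesn't say which server is in use):

```text
# .htaccess sketch: 301-redirect http://website.com/... to
# http://www.website.com/... before any other rules run
RewriteEngine On
RewriteCond %{HTTP_HOST} ^website\.com$ [NC]
RewriteRule ^(.*)$ http://www.website.com/$1 [R=301,L]
```

If the bare domain 404s even on robots.txt, the more likely culprit is that no virtual host (or DNS record) is configured for the name without 'www' at all.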
Mobile site not getting indexed
My site is www.findyogi.com, a shopping comparison site. The mobile site is hosted at m.findyogi.com. I fixed my sitemap and the attribution to the mobile site in the last week of May, and my mobile site's pages have been getting de-indexed since then. Website - www.findyogi.com/mobiles/motorola/motorola-moto-g-16gb-b95ef8/price - indexed. Mobile - m.findyogi.com/mobiles/motorola/motorola-moto-g-16gb-b95ef8/price - not indexed. Google is crawling my website and mobile site normally. What am I doing wrong?
Technical SEO | | namansr0 -
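For separate mobile URLs, Google expects a bidirectional annotation between each desktop/mobile pair. A sketch using the URLs from the question, assuming the pages map one-to-one (whether the site's actual tags look like this is unknown):

```html
<!-- On the desktop page: point to the mobile equivalent -->
<link rel="alternate"
      media="only screen and (max-width: 640px)"
      href="http://m.findyogi.com/mobiles/motorola/motorola-moto-g-16gb-b95ef8/price">

<!-- On the mobile page: canonical back to the desktop URL -->
<link rel="canonical"
      href="http://www.findyogi.com/mobiles/motorola/motorola-moto-g-16gb-b95ef8/price">
</head>
```

The `</head>` above is only to show placement; both tags belong in each page's head. If the mobile pages carry the `rel="canonical"` back to the desktop URLs, Google indexing only the desktop versions is the expected behavior, not de-indexing.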
I physically changed my URL and now I have two... How do I get rid of the old one?
Hi, I physically changed a URL and now Google thinks I have two duplicate pages (I know not to do this in the future). e.g. I had www.example.com/i-like-seo.aspx and changed it to www.example.com/i-love-seo.aspx. Google now sees this as two pages, while my CMS system only shows one (the new page). SEOmoz also sees two pages, and furthermore shows them as having different numbers of inbound links. When I change content on the new URL's page, the old URL's page also updates. I'm really confused about what has happened here and don't know how to get rid of the old URL so that Google doesn't think I have duplicate content. Any help with what has happened or how to fix it would be much appreciated. Many thanks.
Technical SEO | | CoGri0 -
404 Errors & Redirection
Hi, I'm working with someone who recently had two websites redesigned. The old permalink structure consisted of domain/year/month/date/post-name. Their developer changed the new permalink structure to domain/post-name, but apparently he didn't redirect the old URLs to the new ones, so we're finding that links from external sites result in 404 errors (once I remove the date from the URL, the links work fine). Each site has 3-4 years' worth of blog posts, so there are quite a few that would need to be changed. I was thinking of using the Redirection plugin - would that be the best way to fix this sitewide on both sites? Any suggestions would be appreciated. Thanks, Carolina
Technical SEO | | csmm0 -
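Because the change is a uniform pattern (drop the date segments), a single regex rule covers every old post rather than thousands of one-off redirects. A sketch as an Apache rewrite, assuming the dates are always numeric year/month/day path segments; the Redirection plugin accepts an equivalent regex source/target pair:

```text
# .htaccess sketch: 301-redirect /2012/05/14/post-name to /post-name
RewriteEngine On
RewriteRule ^[0-9]{4}/[0-9]{2}/[0-9]{2}/(.+)$ /$1 [R=301,L]
```

The 301 preserves the external links' value and sends both visitors and crawlers straight to the new permalink.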
Best practices for migrating an html sitemap? Or just get rid of it all together?
We are migrating a very large site to a new CMS and I'm trying to determine the best way to handle all the links (~15k) in our html sitemap. The developers don't see the purpose of using an html sitemap anymore, and I have yet to come up with a good reason why we should migrate it rather than just get rid of it, since it is not very useful to users. The html sitemap was created about 6 years ago when PageRank sculpting was a high priority. Since we already have an XML sitemap, I'm not sure there's really a need for an html sitemap, other than to maintain all the internal links. How valuable are the internal links found in an html sitemap? And will it be a problem if we remove them from our link profile? 15,000 links sounds significant, but they account for less than .5% of our internal links. What do you all think?
Technical SEO | | BostonWright0 -
Pdf page titles and descriptions errors
In my weekly crawl report I suddenly have a huge number of "title not found" and "missing page description" errors for all of my PDF files. The PDFs do have a page title (defined in the File/Properties tab), but these page titles are not being picked up by the crawler or Google. Any ideas how I can fix this? (I am using the Acrobat 9 Distiller.)
Technical SEO | | PerriCline0