Dealing with broken internal links/404s. What's best practice?
-
I've just started working on a website that has accumulated hundreds of broken internal links. Essentially, specific pages have been removed over time and nobody has been keeping an eye on which internal links were affected. Most of these are internal links embedded in content that hasn't been updated following the target page's deletion.
What's my best way to approach fixing these broken links?
My current plan is to redirect where appropriate (from a specific service page that doesn't exist to the overall service category, maybe?), but there are lots of pages that don't have a similar or equivalent page. I presume I'll need to go through the content removing those links, or replacing them where possible.
My example is a specific staff member who no longer works there and is linked to from a category page. Should I redirect from the old staff member's page and update the anchor text, or just replace the whole link to point at the right person?
In most cases, these pages don't rank and I can't think of many that have any external websites linking to them.
Am I overthinking all of this?
Please help!
-
Thank you, you answered all of my questions and some more I didn't ask...but should have!
The notable alumni page is a great idea, and not one I'd thought of.
It's going to be a lengthy process, but I'm now happy knowing I'm doing the right things.
Thank you again!
-
Those pages have been deleted for a reason, so rather than redirecting to a page that's not relevant, or to the main category page, why not just get rid of the link? This will increase the link equity of the good links on that page, the ones that go to places users actually want to go.
Certainly, when someone leaves, you need to redirect 'Job title' to the new owner of that job title; that's what users will want. But sending people around in circles on your site is pointless and will potentially frustrate them into leaving. If I'm trying to find more info and I end up going 'up a level' rather than to a deeper, more detailed page, that always makes me bounce, because I figure the website can't answer my question or give me the info I need.
Think about the user. PageRank sculpting is dead, but it's still important to make sure there is always a path for a user to follow to get the info they need. If there is not, delete the entire sentence along with the original link. It will only strengthen the flow of link equity throughout your site.
Simplify, don't complicate. That would be my advice. And remember to request a crawl of the updated page and its direct links to get a quick index and see whether it helps your pages rank.
You don't necessarily need links to internal pages from external sources; that's not the reason they are not ranking. They are not ranking because people aren't navigating to them (no implicit user-feedback signals) or because they don't entice people to click from their SERP entry. So look at updating the title tags and meta descriptions, and do what you suggest: where appropriate, change the anchor text to the right thing and link to the right place, or just delete the link.
I have a really useful 'notable alumni' page for great people who used to work at our practice but now don't. You'll need their permission to keep them on your site, and it helps if they had a nice page with good DA and PA that links, with anchor text, to a product or service, for example.
But Google hates 404s, so get rid of them all as soon as you can and watch the pages that remain creep up the rankings.
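As a starting point, the audit itself is easy to script. A crawl tool will surface broken links in bulk, but the underlying check is simple: collect every internal href, test whether the target still resolves, and flag the ones that don't. Here is a minimal Python sketch; the page set, the URLs, and the `is_live` check are hypothetical stand-ins (in a real audit `is_live` would be an HTTP HEAD request):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def find_broken_links(pages, is_live):
    """Map each page to the internal links on it that no longer resolve.

    pages   -- dict of {page_url: html_source}
    is_live -- callable returning True if a URL still returns 200
    """
    broken = {}
    for url, html in pages.items():
        parser = LinkCollector()
        parser.feed(html)
        dead = [link for link in parser.links if not is_live(link)]
        if dead:
            broken[url] = dead
    return broken

# Hypothetical example: one live page, one deleted staff profile.
pages = {"/team/": '<a href="/team/jane/">Jane</a> <a href="/about/">About</a>'}
live = {"/about/"}
print(find_broken_links(pages, lambda u: u in live))
# {'/team/': ['/team/jane/']}
```

The output gives you exactly the worklist described above: for each page, the embedded links that need repointing or deleting.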
-
If the 404 link has an equivalent on the site, I'd update the link to point at the new equivalent. Even if the page isn't ranking, there's potential for a visitor to reach it, so why not send them to proper content if you have it?
There's also a Moz Blog post regarding 404 pages: https://moz.com/blog/are-404-pages-always-bad-for-seo
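The "update or remove" decision above boils down to a lookup: if a dead URL has a mapped equivalent, repoint the link; if not, the link (and often its surrounding sentence) comes out. A tiny sketch, with hypothetical URLs:

```python
# redirect_map holds dead URL -> nearest equivalent; all URLs hypothetical.
redirect_map = {
    "/services/widget-repair/": "/services/",  # service page folded into its category
    "/team/jane/": "/team/john/",              # role handed over to a new staff member
}

def resolve(link):
    """Return the URL a dead link should point at after cleanup.

    Returns the equivalent page if one exists, otherwise None,
    meaning the link should simply be removed from the content.
    """
    return redirect_map.get(link)

print(resolve("/team/jane/"))      # '/team/john/'
print(resolve("/blog/old-post/"))  # None -> delete the link
```

The same map can double as the source for your 301 rules, so the in-content links and the server redirects stay consistent.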
Related Questions
-
How to use Google Search Console's 'Name change' tool?
Hi there, I'm having trouble performing a 'Name change' for a new website (rebrand and domain change) in Google Search Console. Because the 301 redirects are in place (a requirement of the name change tool), Google can no longer verify the site, which means I can't complete the name change. To me, step two (301 redirect) conflicts with step three (site verification). Or is there a way to perform a 301 redirect and still have the tool verify the old site? Any pointers in the right direction would be much appreciated. Cheers, Ben
-
How does Googlebot see two identical rel canonicals?
Hi, I have a website where all the original URLs have a rel canonical pointing back to themselves. This is a kind of fail-safe: if a parameter occurs, the URL with the parameter gets a canonical back to the original URL. For example, this URL https://www.example.com/something/page/1/ has this canonical: https://www.example.com/something/page/1/ (the same, since it's an original URL). This URL https://www.example.com/something/page/1/?parameter has this canonical: https://www.example.com/something/page/1/ because, as I said, parameterized URLs have a rel canonical back to their original URLs. So https://www.example.com/something/page/1/?parameter and https://www.example.com/something/page/1/ both have the same canonical, which is https://www.example.com/something/page/1/. I'm telling you all that because when Rogerbot tried to crawl my website, it reported duplicates. This happened because it read the canonical of the original URL and the canonical of the URL with the parameter, and saw that both pointed to the same canonical. So, I would like to know if Googlebot treats canonicals the same way, because if it does then I'm full of duplicates 😄. Thanks.
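The consolidation step the question describes can be sketched in a few lines: group the crawled URLs by their declared canonical. Two URLs sharing one canonical collapse into a single indexable URL, which is generally the intended outcome of a self-referencing canonical plus parameter handling, not a duplicate-content problem. A sketch using the same example URLs:

```python
from collections import defaultdict

def consolidate(canonicals):
    """Group crawled URLs by their declared canonical target.

    canonicals -- dict of {crawled_url: canonical_url}
    Returns {canonical_url: [urls that point to it]}.
    """
    groups = defaultdict(list)
    for url, canonical in canonicals.items():
        groups[canonical].append(url)
    return dict(groups)

crawled = {
    "https://www.example.com/something/page/1/":
        "https://www.example.com/something/page/1/",
    "https://www.example.com/something/page/1/?parameter":
        "https://www.example.com/something/page/1/",
}
# Both crawled URLs collapse under the single canonical URL.
print(consolidate(crawled))
```

A crawler report that lists both URLs under one canonical is describing this grouping, not flagging an error.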
-
What's with the redirects?
Hi there,
I have a strange issue where pages are redirecting to the homepage. Let me explain: my website is http://thedj.com.au. Now when I type in www.thedj.com.au/payments it redirects to https://thedj.com.au (even though it should be going to the page https://thedj.com.au/payments). Any idea why this is and how to fix it? My htaccess file is below:

# BEGIN HTTPS Redirection Plugin
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule ^home.htm$ https://thedj.com.au/ [R=301,L]
RewriteRule ^photos.htm$ http://photos.thedj.com.au/ [R=301,L]
RewriteRule ^contacts.htm$ https://thedj.com.au/contact-us/ [R=301,L]
RewriteRule ^booking.htm$ https://thedj.com.au/book-dj/ [R=301,L]
RewriteRule ^downloads.htm$ https://thedj.com.au/downloads/ [R=301,L]
RewriteRule ^payonline.htm$ https://thedj.com.au/payments/ [R=301,L]
RewriteRule ^price.htm$ https://thedj.com.au/pricing/ [R=301,L]
RewriteRule ^questions.htm$ https://thedj.com.au/faq/ [R=301,L]
RewriteRule ^links.htm$ https://thedj.com.au/links/ [R=301,L]
RewriteRule ^thankyous/index.htm$ https://thedj.com.au/testimonials/ [R=301,L]
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://thedj.com.au/ [L,R=301]
</IfModule>
# END HTTPS Redirection Plugin

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

RewriteCond %{HTTP_HOST} ^mrdj.net.au$ [OR]
RewriteCond %{HTTP_HOST} ^www.mrdj.net.au$
RewriteRule ^/?$ "https://thedj.com.au/" [R=301,L]
RewriteCond %{HTTP_HOST} ^mrdj.com.au$ [OR]
RewriteCond %{HTTP_HOST} ^www.mrdj.com.au$
RewriteRule ^/?$ "https://thedj.com.au/" [R=301,L]
RewriteCond %{HTTP_HOST} ^thedjs.com.au$ [OR]
RewriteCond %{HTTP_HOST} ^www.thedjs.com.au$
RewriteRule ^/?$ "https://thedj.com.au/" [R=301,L]
RewriteCond %{HTTP_HOST} ^theperthweddingdjs.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.theperthweddingdjs.com$
RewriteRule ^/?$ "https://thedj.com.au/" [R=301,L]
RewriteCond %{HTTP_HOST} ^thedjs.net.au$ [OR]
RewriteCond %{HTTP_HOST} ^www.thedjs.net.au$
RewriteRule ^/?$ "https://thedj.com.au" [R=301,L]
-
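One likely cause, given the rules quoted in the htaccess above: the catch-all HTTP-to-HTTPS rule's target is the bare homepage with no $1 back-reference, so the captured request path is discarded on every redirect. A tiny Python simulation of that RewriteRule substitution shows the difference (the regex mechanics mirror mod_rewrite, though this is only a sketch, not Apache itself):

```python
import re

def rewrite(path, target):
    """Apply a RewriteRule-style '^(.*)$' substitution to a request path.

    '$1' in the target is translated to the regex back-reference,
    the way mod_rewrite would expand it.
    """
    return re.sub(r"^(.*)$", target.replace("$1", r"\1"), path)

# Rule as written above: the path is discarded, everything lands on /.
print(rewrite("payments", "https://thedj.com.au/"))
# https://thedj.com.au/

# Rule with the capture reinserted: the path survives the redirect.
print(rewrite("payments", "https://thedj.com.au/$1"))
# https://thedj.com.au/payments
```

So appending $1 to the catch-all rule's target would be the first thing to test here.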
Pros & cons: HTTP vs HTTPS
Hi there, We are planning to take the step and go from HTTP to HTTPS. The main reason to do this is to appear trustworthy to our clients, and of course the rumours that it will be better for ranking (in the future). We have a large e-commerce site, part of which is already HTTPS. I've read a lot of info about pros and cons, including this Moz article: http://moz.com/blog/seo-tips-https-ssl
But I want to hear some experiences from others who have already done this. What did you encounter when changing to HTTPS? Did you have ranking drops, loss of links, etc.? I want to make a list of pros and cons and things we have to do in advance. Thanks, Leonie
-
Blocked URLs by robots.txt
Google Webmaster Tools shows me 10,936 URLs blocked by robots.txt, which is very strange: when you go to the "Index Status" section, it shows that robots.txt has been blocking many URLs since April 2012. You can see it more precisely in the attached chart (from WMT). I can't explain why I have blocked URLs, because I have nothing in my robots.txt.
My robots.txt is just this: User-agent: * I thought I was penalized by Penguin in April 2012 because I'm constantly losing visitors, now down over 40%. Could it be a different penalty? Any help is welcome because I'm already quite saturated. Mera
-
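For what it's worth, a robots.txt like the one quoted in the previous question can be sanity-checked with Python's standard-library parser, which applies the same allow/disallow logic crawlers use. A file containing only "User-agent: *" with no Disallow rule (or an empty one) blocks nothing, so the blocked-URL count must come from somewhere else; the example URL below is hypothetical:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Mirror the quoted file: a wildcard user-agent with an empty Disallow,
# which by the robots.txt convention allows everything.
rp.parse([
    "User-agent: *",
    "Disallow:",
])

# Any URL on the site is fetchable under these rules.
print(rp.can_fetch("*", "https://www.example.com/any/page/"))  # True
```

If this prints True for the URLs Webmaster Tools reports as blocked, the live robots.txt being served (or one on a different host/protocol) differs from the file you are looking at.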
Broken link
I know SEOmoz has a lot of info about 404s, 301s, 302s, etc., but I am trying to figure out an easy way to fix two of the broken links from Flash. I am redirecting the following links with a WordPress redirect plugin: http://soobumimphotography.com/gallery.php?GalleryID=126&GalleryName=Wedding&OrderNum=1 http://soobumimphotography.com/gallery.php?GalleryID=126&GalleryName=Wedding&OrderNum=1 What would be the best way to solve this? Is there any way I can remove those?
-
Why aren't certain links showing in SEOMOZ?
Hi, I have been trying to understand our page rank and the domains that are linking to us. When I look at the list of linking domains, I see some bigger ones are missing and I don't know why. For example, we are in the Yahoo Directory with a link to trophycentral.com, but SEOmoz is not showing the link. If SEOmoz is not seeing it, my guess is Google is not either, which concerns me. There are several other high page rank domains also not showing. Anyone have any idea why? Thanks! BTW, our domain is trophycentral.com
-
404s and duplicate content
I have real estate websites that add new pages when new listings come on the market and delete pages when a property is sold. My concern is that a significant number of 404s are created, and the listing pages that are added will be the same as others in my market who use the same IDX provider. I could go with a different IDX provider that uses an iframe, which doesn't create new pages, but when I used an iframe before, my time on site was 3 min with 2.5 pages per visit; now it's 6+ min with 7.5 pages per visit. The new pages create fresh content daily, so which is better: fresh content and stronger on-site metrics (with the 404s), or fewer 404s and no duplicate content but weaker on-site metrics? Any thoughts on this issue? Any advice would be appreciated.
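One common pattern for expired listings, sketched below with entirely hypothetical URLs and area names: 301 a sold listing to its suburb or area page when one exists, so visitors land on related live inventory, and otherwise return 410 Gone so crawlers drop the URL quickly instead of retrying a 404:

```python
# Hypothetical map of area slug -> live area landing page.
AREA_PAGES = {
    "summerlin": "/homes/summerlin/",
    "henderson": "/homes/henderson/",
}

def handle_sold(listing_url):
    """Decide what to serve for a sold listing.

    Assumes a hypothetical URL scheme /listing/<area>/<mls-id>/.
    Returns (status, location): a 301 to the area page when one
    exists, otherwise a 410 with no redirect target.
    """
    parts = listing_url.strip("/").split("/")
    area = parts[1] if len(parts) >= 2 else None
    if area in AREA_PAGES:
        return (301, AREA_PAGES[area])
    return (410, None)

print(handle_sold("/listing/summerlin/12345/"))  # (301, '/homes/summerlin/')
print(handle_sold("/listing/oldtown/99999/"))    # (410, None)
```

This keeps the fresh-content benefit of per-listing pages while capping how many dead URLs accumulate over time.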