What is the best way to handle links that lead to a 404 page?
-
Hi Team Moz,
I am working through a site cutover with an entirely new URL structure and have a bunch of pages that could not, would not or just plain don't redirect to new pages.
Steps I have taken:
-
Multiple new sitemaps submitted with new URLs and the indexing looks solid
-
Used Webmaster Tools to remove old URLs that still had natural (organic) result listings but did not redirect and now produce 404s
-
Completely built out new PPC campaigns with the new URL structure
-
Contacted a few major link partners
Now here is my question:
I have pages that produce 404s and are linked to from forums, Slickdeals, and the like, and these will not be redirected. Is disavowing these links the correct thing to do?
-
-
Hi,
Definitely don't use disavow unless you think that the links are poor quality and could harm your site, or are actively harming it right now. That is what disavow is for, not for removing your 404 pages.
There is no harm in waiting for Google to remove the 404 pages on its own, especially as you have used its URL removal tool as well. If there are any good links in the backlink profile of the 404ing pages, do attempt to contact the webmasters and have them changed - most people are more than happy to do this.
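If it helps to prioritise that outreach, a quick script can show you which of the old URLs still return a 404 and which already redirect somewhere. Here is a rough Python sketch - the requests library and the old_urls.txt filename are assumptions, not anything Moz or Google provides:

```python
# Minimal sketch: check whether old backlinked URLs still 404 or now redirect.
# Assumes a plain-text file "old_urls.txt" (hypothetical name) with one URL per line.
import requests

with open("old_urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    try:
        # Don't follow redirects, so we can see the original status code.
        response = requests.head(url, allow_redirects=False, timeout=10)
    except requests.RequestException as exc:
        print(f"{url}\tERROR\t{exc}")
        continue

    if response.status_code == 404:
        print(f"{url}\t404 - worth contacting the linking site or adding a 301")
    elif response.status_code in (301, 302, 308):
        print(f"{url}\tredirects to {response.headers.get('Location')}")
    else:
        print(f"{url}\t{response.status_code}")
```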
-
If the links are good ones, 301 redirect the old URL to a relevant page - you don't have to have a blank page at that URL.
If they are bad links, just leave them. If they are 404'ing, they can do you no harm.
The only 404s that can do you harm are the ones hit by your own internal links, because those mean you have link juice leaks. Fix any of those if you have them.
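If you want to hunt those internal leaks down programmatically, here is a rough Python sketch (it assumes the requests and beautifulsoup4 libraries and uses a placeholder start URL - a dedicated crawler will do the same job) that follows internal links and flags any that return a 404:

```python
# Rough sketch: crawl a site's internal links and report any that return 404.
# "https://www.example.com/" is a placeholder start URL - swap in your own domain.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://www.example.com/"
DOMAIN = urlparse(START_URL).netloc

seen = set()
queue = [(START_URL, "(start)")]  # (url, page that linked to it)

while queue:
    url, referrer = queue.pop()
    if url in seen:
        continue
    seen.add(url)

    try:
        response = requests.get(url, timeout=10)
    except requests.RequestException:
        continue

    if response.status_code == 404:
        # An internal link pointing at a 404 - this is the "link juice leak".
        print(f"404: {url}  (linked from {referrer})")
        continue

    # Queue up further internal links found on this page.
    soup = BeautifulSoup(response.text, "html.parser")
    for anchor in soup.find_all("a", href=True):
        link = urljoin(url, anchor["href"]).split("#")[0]
        if urlparse(link).netloc == DOMAIN and link not in seen:
            queue.append((link, url))
```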
-
Edit the backlinks you were getting to the 404 pages and point them at the new pages. Another option is to stop the old URL from 404ing by putting a 301 redirect on it that points to the relevant new page - the PageRank / link profile will get passed to the new page.
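For illustration only, here is a minimal sketch of that kind of 301 mapping written with Flask - Flask is purely an assumption here, and the same idea is normally done with rewrite/redirect rules in your web server config; the old and new paths are hypothetical:

```python
# Minimal sketch of a 301 redirect map using Flask (an assumption - any
# framework or web-server rewrite rule achieves the same thing).
from flask import Flask, abort, redirect

app = Flask(__name__)

# Hypothetical old-to-new URL mapping; replace with your real paths.
REDIRECTS = {
    "/old-category/blue-widget.html": "/widgets/blue-widget",
    "/old-category/red-widget.html": "/widgets/red-widget",
}

@app.route("/<path:old_path>")
def legacy(old_path):
    new_path = REDIRECTS.get("/" + old_path)
    if new_path:
        # 301 tells search engines the move is permanent, so link equity passes.
        return redirect(new_path, code=301)
    # Anything not in the map falls through to a normal 404.
    abort(404)
```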
-
Well, the correct / best thing to do would be to try to get all of those links edited and pointed to live pages. That said, if you don't know who posted the links or have no way to get in touch with those who do, it can be very awkward to achieve - still, link reclamation can be a great way to gain working links, seeing as they are already pointing to your site.
-Andy
-
If you feel the links are harming you or your SEO efforts in any way, you can go ahead and disavow them. However, the disavow tool does not remove the links, so it does not help with 404 errors - it just tells Google to ignore them when it comes to your rankings.
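If you do end up disavowing, the file you upload is just plain text (full URLs or domain: lines, with # for comments). Here is a small Python sketch that writes one out - the domains and URLs listed are purely hypothetical:

```python
# Small sketch: write a disavow file in the plain-text format Google's
# disavow tool accepts (full URLs or "domain:" lines, "#" for comments).
# The domains and URLs below are purely hypothetical examples.
bad_domains = ["spammy-directory.example", "link-farm.example"]
bad_urls = ["http://forum.example/thread-linking-to-us"]

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("# Disavow file generated for example.com\n")
    for domain in bad_domains:
        f.write(f"domain:{domain}\n")
    for url in bad_urls:
        f.write(f"{url}\n")
```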
Hope this helps!
Related Questions
-
404 Error Pages being picked up as duplicate content
Hi, I recently noticed an increase in duplicate content, but all of the pages are 404 error pages. For instance, Moz site crawl says this page: https://www.allconnect.com/sc-internet/internet.html has 43 duplicates and all the duplicates are also 404 pages (https://www.allconnect.com/Coxstatic.html for instance is a duplicate of this page). Looking for insight on how to fix this issue - do I add a rel=canonical tag to these 60 error pages that points to the original error page? Thanks!
Technical SEO | kfallconnect
-
What is the best way to correct GWT telling me I have mobile usability errors in Image directories
In GWT, I wish to remove / resolve the following errors: Mobile Usability > Viewport not configured, Mobile Usability > Small font size, Mobile Usability > Touch elements too close. The domain www.sandpiperbeacon.com is responsive, and passes the mobile usability test. A new issue I noticed is that GWT is reporting 200+ errors just for image index pages such as http://www.sandpiperbeacon.com/images/special-events/ for example. Website users cannot access these pages (without editing the URL manually) so I don't consider these usability issues. BUT, I hate to see 200+ errors, especially when Google itself says "Websites with mobile usability issues will be demoted in mobile search results." I could set the image directories to disallow in robots.txt, but I do not want the images to stop appearing in image search, so this seems like a flawed solution. I cannot be the only person experiencing this, but I have been unable to find any suggestions online.
Technical SEO | RobertoGusto
-
Best Way to Break Down Paginated Content?
(Sorry for my English.) I have lots of user reviews on my website and in some cases there are more than a thousand reviews for a single product/service. I am looking for the best way to break these reviews down into several sub-pages. Here are the options I thought of: 1. Break reviews down into multiple pages / URLs (http://www.mysite.com/blue-widget-review-page1, http://www.mysite.com/blue-widget-review-page2, etc.). In this case, each page would be indexed by search engines. Pros: all the reviews get indexed. Cons: it will be harder to rank for "blue widget review" as there will be many similar pages. 2. Break reviews down into multiple pages / URLs with noindex + canonical tag (same URLs as above). In this case, each page would be set to noindex and the canonical tag would point to the first review page. Pros: only one URL can potentially rank for "blue widget review". Cons: subpages are not indexed. 3. Load all the reviews into one page (reviews, reviews, reviews, more reviews, more reviews, more reviews, etc.) and handle pagination using Javascript. Each page of reviews would sit in a different container, shown or hidden using Javascript when browsing through the pages. Could that be considered cloaking?!? Pros: all the reviews get indexed. Cons: large page size (KB) - maybe too large for search engines? 4. Load only the first page and load sub-pages dynamically using AJAX. Display only the first review page on initial load and use AJAX to load additional reviews, similar to blog commenting systems where you have to click on "Load more comments" to see all the comments. Pros: fast initial loading time + faster loading time for subpages = better user experience. Cons: only the first review page is indexed by search engines. My main competitor, who's achieving great rankings (no black hat of course), is using technique #3. What's your opinion?
Technical SEO | sbrault74
-
Best way to fix a whole bunch of 500 server errors that Google has indexed?
I got a notification from Google Webmaster Tools saying that they've found a whole bunch of server errors. It looks like it is because an earlier version of the site I'm doing some work for had those URLs, but the new site does not. In any case, there are now thousands of these pages in their index that error out. If I wanted to simply remove them all from the index, which is my best option: Disallow all 1,000 or so pages in the robots.txt? Put the meta noindex in the headers of each of those pages? Rel canonical to a relevant page? Redirect to a relevant page? Wait for Google to just figure it out and remove them naturally? Submit each URL to the GWT removal tool? Something else? Thanks a lot for the help...
Technical SEO | jim_shook
-
What is the best way to find stranded pages?
I have a client that has a site that has had a number of people in charge of it. All of these people have very different opinions about what should be on the site itself. When I look at their website on the server I see pages that do not have any obvious navigation to them. What is the best way to find out the internal linking structure of a site and see if these pages truly are stranded?
Technical SEO | anjonr
-
Which is The Best Way to Handle Query Parameters?
Hi mozzers, I would like to know the best way to handle query parameters. Say my site is example.com. Here are two scenarios. Scenario #1: Duplicate content - example.com/category?page=1, example.com/category?order=updated_at+DESC, example.com/category and example.com/category?page=1&sr=blog-header all have the same content. Scenario #2: Pagination - example.com/category?page=1, example.com/category?page=2 and so on. What is the best way to solve both? Do I need to use rel=next and rel=prev, or is it better to use Google Webmaster Tools parameter handling? Right now I am concerned about Google traffic only. For solving the duplicate content issue, do we need to use canonical tags on each such URL? I am not using WordPress; my site is built on the Ruby on Rails platform. Thanks!
Technical SEO | jombay
-
Best way to handle different views of the same page?
Say I have a page: mydomain.com/page. But I also have different views: /?sort=alpha, /print-version, /?session_ID=2892, etc. All the same content, more or less. Should the subsequent pages have a ROBOTS meta tag with noindex? Should I use canonical? Both? Thanks!
Technical SEO | ChatterBlock
-
Dealing with 404 pages
I built a blog on my root domain while I worked on another part of the site at .....co.uk/alpha. I was really careful not to have any links go to alpha - but it seems Google found and indexed it. The problem is that part of alpha was a copy of the blog - so soon we will have a lot of duplicate content. The /alpha part is now ready to be taken over to the root domain, and the initial plan was then to delete /alpha. But now that it's indexed, I'm worried that I'll have all these 404 pages. I'm not sure what to do. I know I can just do a 301 redirect for all those pages to go to the other ones in case a link comes in, but I need to delete those pages as the server is already very slow. Or does a 301 redirect mean that I don't need those pages anymore? Will those pages still get indexed by Google as separate pages? Please assist.
Technical SEO | borderbound