Old pages still crawled by search engines return 404s: better to 301 them or block them with robots.txt?
-
Hello guys,
A client of ours has thousands of pages returning 404s, visible in Google Webmaster Tools. These are all old pages which don't exist anymore, but Google keeps on detecting them. They belong to sections of the site which no longer exist, are not linked externally, and didn't provide much value even when they did exist.
What do you suggest we do:
(a) do nothing
(b) redirect all these URLs/folders to the homepage through a 301
(c) block these pages through robots.txt
Are we wasting part of the crawl budget allotted by search engines by not doing anything?
Thanks
-
Hi Matteo.
The first step I would suggest is determining the source of the links to these 404 pages. If these links are internal to your website, they should be removed or updated.
The next step I would recommend is to ensure your site has a helpful 404 page. The page should offer your site's navigation along with a search function so users can locate relevant content on your site.
I realize that thousands of broken links may seem overwhelming. It is a mess that should be cleaned up, and how you proceed depends on how much you value SEO. If ranking is important to you and you want to be the best, have someone investigate every link and make the appropriate adjustment: either 301 redirect it to the most relevant page on your site, or let it continue to the 404 page.
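If whole retired sections are being redirected, a pattern rule per section avoids listing every URL individually. A minimal sketch for Apache's main server config (adapt for .htaccess); the folder names here are hypothetical:

```apache
# Each retired section is 301'd to the closest surviving page rather than
# blanket-redirected to the homepage, which Google may report as a soft 404.
RedirectMatch 301 ^/old-catalog/ /products/
RedirectMatch 301 ^/archive/ /blog/
```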
It's a search engine's job to help users find content, and 404s are a natural part of the web; there is nothing inherently wrong with having some 404 pages. Having thousands of them, however, suggests your site has significant issues. Google's algorithms are not revealed publicly, but it's logical to believe they may consider sites with a high percentage of 404 pages less trustworthy. This is my belief, though not necessarily that of the wider SEO community.
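Triaging thousands of broken URLs by hand is slow; one way to speed it up is to auto-suggest the closest live page for each broken path, then review the suggestions before creating any 301s. A minimal Python sketch, with all paths hypothetical:

```python
import difflib

def suggest_redirects(broken_urls, live_urls, cutoff=0.6):
    """For each broken path, suggest the most similar live path,
    or None when nothing is close enough (leave those as 404s)."""
    suggestions = {}
    for url in broken_urls:
        # get_close_matches returns the best candidates above the cutoff
        match = difflib.get_close_matches(url, live_urls, n=1, cutoff=cutoff)
        suggestions[url] = match[0] if match else None
    return suggestions

# Hypothetical paths for illustration
broken = ["/products/blue-widget.html", "/forum/thread-123"]
live = ["/products/blue-widget", "/contact"]
for old, new in suggest_redirects(broken, live).items():
    print(old, "->", new if new else "(leave as 404)")
```

The output is only a suggestion list; a human should still confirm each mapping before it becomes a redirect.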
Related Questions
-
Robots.txt & Disallow: /*? Question!
Hi, I have a site where they have: Disallow: /*? The problem is we need the following indexed: ?utm_source=google_shopping What would the best solution be? I have read:
User-agent: *
Allow: ?utm_source=google_shopping
Disallow: /*?
Any ideas?
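For Googlebot, the most specific matching rule (the one with the longest path) wins, so one commonly used pattern is to pair the broad Disallow with a longer Allow. A sketch (note the /* prefix on the Allow line, which the snippet above omits; other crawlers may not honor wildcards or Allow the same way):

```txt
User-agent: *
Allow: /*?utm_source=google_shopping
Disallow: /*?
```

Because the Allow path is longer than Disallow: /*?, Google lets URLs carrying ?utm_source=google_shopping through while still blocking other parameterized URLs.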
Redesigning a website and losing the .html from pages! 301 needed?
I have redesigned a customer's website. I kept all pages with the same names, but they have gone from domain.com/pagename.html to domain.com/pagename (lost the .html). Will these pages automatically be picked up as the same, or do I need to do a 301 redirect? If I need to redirect, is there a faster way? There are about 250 pages! Thank you
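They won't be treated as the same URL automatically, so 301s are needed, but there is a faster way than 250 individual redirects: a single pattern rule can cover them all. A hedged mod_rewrite sketch for an Apache .htaccess file (assuming the site runs on Apache):

```apache
# 301 any old .html URL to its extensionless equivalent in one rule
RewriteEngine On
RewriteRule ^(.+)\.html$ /$1 [R=301,L]
```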
Can we put long-tail keywords in the footer menu and create landing pages for them?
When we cannot rank for multiple keywords, can we try creating landing pages for some long-tail keywords and putting all such landing pages in the footer menu to rank for those search queries? Will this help, or will it seem spammy to Google?
Images Returning 404 Error Codes. 301 Redirects?
We're working with a site that has gone through a lot of changes over the years - ownership, complete site redesigns, different platforms, etc. - and we are finding that there are both a lot of pages and individual images that are returning 404 error codes in the Moz crawls. We're doing 301 redirects for the pages, but what would the best course of action be for the images? The images obviously don't exist on the site anymore and are therefore returning the 404 error codes. Should we do a 301 redirect to another similar image that is on the site now or redirect the images to an actual page? Or is there another solution that I'm not considering (besides doing nothing)? We'll go through the site to make sure that there aren't any pages within the site that are still linking to those images, which is probably where the 404 errors are coming from. Based on feedback below it sounds like once we do that, leaving them alone is a good option.
Robots.txt Blocked Most Site URLs Because of Canonical
Had a bit of a "gotcha" in Magento. We had the Yoast Canonical Links extension, which worked well, but then we installed Mageworx SEO Suite, which broke canonical links. Unfortunately it started putting www.mysite.com/catalog/product/view/id/516/ as the canonical link, and all URLs matching /catalog/product/view/* are blocked in robots.txt. So unfortunately we told Google that the correct page is also a blocked page. The URLs haven't been removed as far as I can see, but traffic has certainly dropped. At the same time, we also made some site changes, grouping some pages and adding 301 redirects. We resubmitted the sitemap and did a Fetch as Google. Any other ideas? Any idea how long it will take to become unblocked?
Home page URL 301 redirect suggestion
Hello, on our site we have already done a 301 redirect from http:// to http://www. However, the home page links still come in two forms: http://www.mycarhelpline.com/ http://www.mycarhelpline.com/index.php?option=com_newcar&view=search&Itemid=2 We need a suggestion: we already use rel=canonical; should another 301 redirect also be used to preserve the home page PR from an SEO point of view? Does Google still treat both URLs as separate and find duplicate content?
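One hedged option, assuming Apache with mod_rewrite, is to 301 the old Joomla-style home page URL to the root so both forms resolve to a single URL (the query-string values below are taken from the question itself):

```apache
RewriteEngine On
# Match the old home-page query string exactly, then 301 to the root;
# the trailing "?" on the target drops the query string from the redirect
RewriteCond %{QUERY_STRING} ^option=com_newcar&view=search&Itemid=2$
RewriteRule ^index\.php$ http://www.mycarhelpline.com/? [R=301,L]
```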
Is there a way to redirect pages from an old site?
I have no access to a client's old WordPress site, but I have parked the domain on their new site, gone into Webmaster Central, requested a change of address, and am waiting... The old domain still shows in the search listings in place of the new site's domain, and the log files show 404 errors from links to the old site which go nowhere. Can anyone suggest a way of managing this on the new site? Is there a workaround for what should have been done (301 redirects on the old site before it was taken down)? Many thanks
Using 2 wildcards in the robots.txt file
I have a URL string which I don't want to be indexed. It includes the characters _Q1 in the middle of the string. So in robots.txt, can I use two wildcards in the string to take out all of the URLs with that in it, something like /*_Q1*? Will that pick up and block every URL with those characters in the string? Also, this is not directly off the root but in a secondary directory, so .com/.../_Q1. Do I have to format the robots.txt rule as /*/*_Q1* since it will be in the second folder, or will just using /*_Q1 pick up everything no matter what folder it is in? Thanks.
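A sketch of how this is commonly handled: a leading /* wildcard already matches any directory depth, and Google treats every rule as a prefix match, so the trailing * is redundant and no per-folder variant is needed. One rule should cover _Q1 URLs in any folder (wildcard support is a Google extension; not every crawler honors it):

```txt
User-agent: *
Disallow: /*_Q1
```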