Dealing with 404 pages
-
I built a blog on my root domain while I worked on another part of the site at .....co.uk/alpha. I was really careful not to have any links go to /alpha, but it seems Google found and indexed it anyway. The problem is that part of /alpha was a copy of the blog, so now I have a lot of duplicate content. The /alpha section is now ready to be moved over to the root domain, and the initial plan was to then delete /alpha. But now that it's indexed, I'm worried that I'll have all these 404 pages, and I'm not sure what to do. I know I can just do a 301 redirect for all those pages to the new ones in case anyone follows a link, but I need to delete those pages as the server is already very slow. Or does a 301 redirect mean that I don't need those pages anymore? Will those pages still get indexed by Google as separate pages? Please assist.
-
After a 301 redirect, can I delete the pages and the databases/folders associated with them?
Yes. Think of a 301 redirect like mail forwarding. If you lived at 1000 Main Street and then moved to a new address, you would leave a forwarding order (the 301 redirect) with the post office. Once that is done, you can bulldoze the house (i.e. delete the web page/database) and the mail will still be forwarded properly.
How does one create a 301 redirect?
The method of creating a 301 redirect varies based on your server setup. If you have a LAMP setup with cPanel, there is a Redirect tool. Otherwise I would suggest contacting your host and asking how to create a redirect for your particular setup.
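For example, if the server happens to be Apache with mod_alias enabled (an assumption; the original poster never says what the server runs), a single line in the site's .htaccess file handles one page. The file names below are hypothetical:

```apache
# .htaccess at the document root:
# permanently redirect one old page to its new home
Redirect 301 /alpha/about.html /about.html
```

The target can be a URL-path on the same host, as above, or a full absolute URL if the content moved to a different domain.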
-
Ryan,
Two things.
First - after a 301 redirect can I delete the pages and the databases/folders associated with them?
Second - How does one create a 301 redirect?
-
Hi Ryan,
I agree with you, but I wanted to provide an alternative solution to the problem. I know it is the harder option and not the chosen one.
But as I said, only if he can't get any traffic from those pages should he delete them from the index. Also, as he said in the question, the /alpha folder was indexed by mistake, which matches what you said in your comment: "That tool was designed to remove content which is damaging to businesses such as when confidential or personal information is indexed by mistake." That seems to contradict your other statement, "The indexed content are pages you want in the index but simply have the wrong URL" - the wrong URL means a different page.
Anyway, I will definitely go with your solution, but sometimes having two options helps you choose the better one.
Thanks
-
Semil, your answer is a working solution but I would like to share why it is not a best practice.
Once the /alpha pages were indexed you could have traffic on them. You cannot possibly know who has linked to those pages, e-mailed links, bookmarked them, etc. By providing a simple 301 the change will be completely seamless to users. All their links and bookmarks will still work. Additionally if any website did link to your /alpha pages, you will retain the link.
The site will also benefit because it is already indexed by Google. You will not have to wait for Google to index your pages. This means more traffic for the site.
The 301 is very quick and easy to implement. If you are simply moving from the /alpha directory to your main site then a single 301 redirect can cover your entire site.
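On an Apache server, for instance, that single site-wide rule could be one RedirectMatch in .htaccess that maps every /alpha URL onto the same path at the root (this assumes Apache and that the /alpha directory mirrors the root structure, neither of which is stated in the thread):

```apache
# Send every /alpha/* URL to the matching URL at the site root,
# e.g. /alpha/blog/post-1 -> /blog/post-1
RedirectMatch 301 ^/alpha/(.*)$ /$1
```

Because the rule is a pattern rather than a page-by-page list, it keeps working even for old URLs you have forgotten about.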
I will offer a simple best practice of SEO (my belief which not everyone agrees with) which I do my best to follow. NEVER EVER EVER use the robots.txt file unless you have exhausted every other possibility. The robots.txt file is an inferior solution that many people latch on to because it is quick and easy. In your case, there is no need to adjust your robots.txt file at all. The original poster stated an intention to delete the /alpha pages. Those pages will no longer exist. Why block URLs which don't exist? It doesn't offer any benefit.
Also, it makes no sense to use the Google removal tool. That tool was designed to remove content which is damaging to businesses such as when confidential or personal information is indexed by mistake. The indexed content are pages you want in the index but simply have the wrong URL. The 301 redirect will allow your pages to remain in the index and for the URL to be properly updated. In order for the 301 to work correctly, you would need to NOT block the /alpha pages with robots.txt.
The solution you shared would work, but it is not as friendly all around.
-
Whoops! Thanks for correcting my answer...
-
The reason for not using a 301 is that /alpha is not a page or folder you created for your users, so I would not put a 301 in place. It got indexed, that's all. Are you getting any traffic from it?
If not, then why do you need to redirect? Remove the pages and ask the search engine to remove them from its index. That is all.
-
Thanks Dan,
Is there a way of blocking an entire folder or do I have to add each link?
-
How can I ask them to remove it in Webmaster Tools? And how can I ask for everything in the /alpha folder not to be indexed - or do I have to write out each link?
Why do you think my case isn't a good fit for 301 redirects?
-
You have to be very careful from the start, but Google has already indexed your /alpha folder, so don't worry about that now.
Using a 301 is something I don't like to do in your case. Ask Google to remove those URLs from the index via Google Webmaster Tools (GWT), and add a robots.txt rule to prevent /alpha from being indexed.
Thanks,
-
You can perform the 301 redirect and you will not need those pages anymore. Using the redirect would be a superior SEO solution over using the robots.txt file. Since the content is already indexed, it will stay indexed and Google will update each page over the next 30 days as it crawls your site.
If you block /alpha with robots.txt, Google will still retain the pages in its index, users will experience 404s, and your new pages won't start to be properly indexed until Google drops the existing pages, which takes a while. The redirect is better for everyone.
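The URL mapping that a single site-wide redirect rule performs can be sketched in a few lines of Python. This is only an illustration of the mapping logic; in practice the web server does this work, and the paths here are made up:

```python
def redirect_target(path):
    """Return the 301 target for an old /alpha URL, or None if no redirect applies."""
    prefix = "/alpha/"
    if path.startswith(prefix):
        # /alpha/blog/post-1 -> /blog/post-1 : same path, minus the retired directory
        return "/" + path[len(prefix):]
    if path == "/alpha":
        # The directory root itself goes to the homepage
        return "/"
    return None

print(redirect_target("/alpha/blog/post-1"))  # /blog/post-1
print(redirect_target("/blog/post-1"))        # None (already a live URL)
```

Every old URL resolves to exactly one new URL, which is why the pages and their backing database rows can be deleted once the rule is in place.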
-
Hi
If you do not want them in the index, you should block them in your robots.txt file like so:
User-agent: *
Allow: /
Disallow: /alpha
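Rules like these can be sanity-checked with Python's built-in robots.txt parser before deploying them. Note one caveat: the stdlib parser applies the first matching rule in file order, unlike Google's longest-match behavior, so the redundant `Allow: /` line is left out of this check. The URLs below are made up for illustration:

```python
from urllib import robotparser

# The Disallow rule under test; "Allow: /" is omitted because
# Python's parser would match it first and allow everything.
rules = """\
User-agent: *
Disallow: /alpha
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# /alpha and everything under it is blocked; the rest of the site is not
print(rp.can_fetch("*", "https://example.co.uk/alpha/blog/post-1"))  # False
print(rp.can_fetch("*", "https://example.co.uk/blog/post-1"))        # True
```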
-Dan
PS - Some documentation on robots.txt
EDIT: I left my answer, but don't listen to it. Do what Ryan says
-
Related Questions
-
Should search pages be indexed?
Hey guys, I've always believed that search pages should be no-indexed but now I'm wondering if there is an argument to index them? Appreciate any thoughts!
Technical SEO | RebekahVP0
404 Errors for Form Generated Pages - No index, no follow or 301 redirect
Hi there, I wonder if someone can help me out and provide the best solution for a problem with form-generated pages. I have blocked the search results pages from being indexed by using the 'noindex' tag, and I wondered if I should take this approach for the following pages. I have seen a huge increase in 404 errors since the new site structure and forms being filled in. This is because every time a form is filled in, it generates a new page, which only Google Search Console is reporting as a 404. Whilst some 404s can be explained and resolved, I wondered what is best to prevent Google from crawling these pages, like this: mydomain.com/webapp/wcs/stores/servlet/TopCategoriesDisplay?langId=-1&storeId=90&catalogId=1008&homePage=Y
1. Implement a 301 redirect using rules, which will mean that all these pages redirect to the homepage. Whilst in theory this will protect any linked-to pages, it does not resolve the issue of why GSC is recording 404s in the first place. It could also come across to Google as 100,000+ redirected links, which might look spammy.
2. Place a noindex tag on these pages too, so they will not get picked up, in the same way the search result pages are not being indexed.
3. Block them in robots.txt - this will prevent any 'result' pages being crawled, which will improve the crawl time currently being taken up. However, I'm not entirely sure if the block will be possible? I would need to block anything after the domain/webapp/wcs/stores/servlet/TopCategoriesDisplay?. Hopefully this is possible?
The noindex tag will take time to set up, as it needs to be scheduled in with the development team, but the robots.txt change will be a quicker fix as this can be done in GSC. I really appreciate any feedback on this one. Many thanks
Technical SEO | Ric_McHale0
Redirect for Soft 404 or 404?
I have a client site that displays properties from the MLS. Once these properties sell they're removed from the MLS and they stop showing up on her site. This would result in a 404 error, but right now any property that's not being found is being 301 redirected back to the property page. I see how this makes sense for a user, but Google is saying there's an increase in Soft 404 errors and I've read that this could negatively affect organic traffic. Should I keep the redirect for removed properties or should I have it serve a 404 with a message that the house you're looking for may have sold and link to the property page? Is it better to have Soft 404 errors or 404 errors?
Technical SEO | JaredDetroit0
404 errors is webmaster - should I 301 all pages?
Currently working on a retail site that shows over 1200 404 errors coming from urls that are from products that were on the site, but have now been removed as they are seasonal/out of stock. What is the best way of dealing with this situation ongoing? I am aware of the fact that these 404s are being marked as url errors in Google Webmaster. Should I redirect these 404s to a more appropriate live page or should I leave them as they are and not redirect them? I am concerned that Google may give the site a penalty as these 404s are growing (as the site is a online retail store and has products removed from its page results regularly). I thought Google was able to recognise 404s and after a set period of time would push them out of the error report. Also is there a tool out there that on mass I can run all the 404s urls through to see their individual page strength and the number of links that point at each one? Thanks.
Technical SEO | Oxfordcomma0
Duplicate page titles
Hi, I have a Joomla 2.5 site and I use categoryblogs. So I have a page with "reviews". All the reviews are shown on this page and there are about 15 pages of it. In my SEOMoz crawl result I get 71 errors ! about "duplicate titles". How can I diminish this? I don't know how to show all the reviews in a proper way other than what I have accomplished with categoryblog. Patrick
Technical SEO | paddydaddy0
Numerous 404 errors on crawl diagnostics (non existent pages)..
As new as them come to SEO so please be gentle.... I have a wordpress site setup for my photography business. Looking at my crawl diagnostics I see several 4xx (client error) alerts. These all show up to non existent pages on my site IE: | http://www.robertswanigan.com/happy-birthday-sara/109,97,105,108,116,111,58,104,116,116,112,58,47,47,109,97,105,108,116,111,58,105,110,102,111,64,114,111,98,101,114,116,115,119,97,110,105,103,97,110,46,99,111,109 | Totally lost on what could be causing this. Thanks in advance for any help!
Technical SEO | Swanny8110
What if 404 Error not possible?
Hi Everyone, I get a 404 error on my page if the URL is simply wrong, but for some parameters, like when a page has been deleted or has expired, I get an error page indicating that the ID is wrong, but no 404 error. It is very difficult for me to program a PHP function that solves the problem and modifies the .htaccess with mod_rewrite. I asked the developer of the system to take a look, but I am not sure I will get an answer soon. I can control the content of the deleted/expired page, but the URL will be very similar to those that are fine (actually the URL could have been fine, but has now expired). Thinking of solutions: I can set the expired/deleted pages as noindex. Would that help avoid the duplicated title/description/content problem? If a user goes to, e.g., mywebsite.com/1-article/details.html, I can set the head section to noindex if it has expired. Would that be good enough? Another question: is it possible to set the pages as 404 without having to do it directly in the .htaccess, thereby avoiding the mod_rewrite problems I am having? Some magical tag in the head section of the page? Many thanks in advance for your help, Best Regards, Daniel
Technical SEO | te_c0
Hundreds of 404 Pages, What Should I do?
Hi, My client just had there website redeveloped within wordpress. I just ran a crawl errors test for their website using Google Webmasters. I discovered that the client has about six hundred, 404 pages. Most of the error pages originated from their previous image gallery. I already have a custom 404 page set-up, but is there something else I should be doing? Is it worth while to 301 redirect every single page within the .htaccess file, or will Google filter these pages out of its index naturally? Thanks Mozers!
Technical SEO | calindaniel0