Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Removing Content 301 vs 410 question
-
Hello,
I was hoping to get the SEOmoz community’s advice on how to remove content most effectively from a large website.
I just read a very thought-provoking thread in which Dr. Pete and Kerry22 answered a question about how to cut content in order to recover from Panda. (http://www.seomoz.org/q/panda-recovery-what-is-the-best-way-to-shrink-your-index-and-make-google-aware).
Kerry22 mentioned a process in which 410s would be totally visible to googlebot so that it would easily recognize the removal of content. The conversation implied that it is not just important to remove the content, but also to give google the ability to recrawl that content to indeed confirm the content was removed (as opposed to just recrawling the site and not finding the content anywhere).
This really made lots of sense to me and also struck a personal chord… Our website was hit by a later Panda refresh back in March 2012, and ever since then we have been aggressive about cutting content and doing what we can to improve user experience.
When we cut pages, though, we used a different approach, doing all of the below steps:
1. We cut the pages
2. We set up permanent 301 redirects for all of them immediately.
3. At the same time, we always removed from our site all links pointing to these pages (to make sure users didn’t stumble upon the removed pages).
When we cut the content pages, we would either delete them or unpublish them, causing them to return a 404 or 403, but this is probably a moot point since we gave them 301 redirects every time anyway. We thought we could signal to Google that we removed the content while avoiding generating lots of errors that way…
I see that this is basically the exact opposite of Dr. Pete's advice and the opposite of what Kerry22 did to get a recovery - and meanwhile, here we are, still trying to help our site recover. We feel that our site should no longer be under the shadow of Panda.
So here is what I'm wondering, and I'd be very appreciative of advice or answers for the following questions:
1. Is it possible that Google still thinks we have this content on our site, and we continue to suffer from Panda because of this?
Could there be a residual taint caused by the way we removed it, or is it all water under the bridge at this point because Google would have figured out we removed it (albeit not in a preferred way)?
2. If there’s a possibility our former cutting process has caused lasting issues and affected how Google sees us, what can we do now (if anything) to correct the damage we did?
Thank you in advance for your help,
Eric
-
Thanks, Dr. Pete! I agree with you - just wanted to feel sure about it.
Yes, Gary, you can also customize a 410 page.
-
You should be able to customize a 410 just like you do a 404. The problem is that most platforms don't do that, by default, so you get the old-school status code page. That should be configurable, though, on almost all modern platforms.
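To make the idea concrete, here is a minimal sketch of a configurable 410 page, assuming a framework-free setup where removed paths are mapped by hand. The paths and page copy are invented placeholders, not anything from a real platform:

```python
# Map each deliberately removed URL to a custom "gone" page, while keeping
# the 410 status code that tells Google the content is gone for good.
# REMOVED_PATHS and the HTML copy below are hypothetical examples.
REMOVED_PATHS = {"/old-product", "/discontinued-category"}

CUSTOM_410_BODY = (
    "<h1>This page has been removed</h1>"
    "<p>Try searching our store for a similar product.</p>"
)

def respond(path: str) -> tuple[int, str]:
    """Return (status_code, html_body) for a requested path."""
    if path in REMOVED_PATHS:
        # Same 410 status as the bare "Gone" screen, but a friendlier body.
        return 410, CUSTOM_410_BODY
    return 200, "<p>regular page</p>"
```

The point is that the status code and the body are independent: the crawler still sees 410, while the visitor sees something useful.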
-
From a commerce perspective, the biggest problem I have with the 410 is the user experience. If I tag a URL with a 410, when someone requests the page they get a white page that says GONE. They never even get the chance to see the store and maybe search for a similar product.
Would it work if I built a landing page that returns a 410 and then used a 301 to redirect the bad URL to the landing page? It would make the customer happy - they would be in the store with a message to search for something else. But would Google really associate the 410 with the redirected URL?
-
Hi Sandra, don't worry about the volume of 404s - they won't hurt your rankings.
As for your issue: I understand that you want to be really clear with your users and not hurt their experience on the site. So create a custom 404 whose content changes depending on which page returns it. If it's one of your old products, you can show a message or an article explaining why you decided to remove them, and propose some alternatives. For all other errors, you can just show a search box or products related to the one you lost.
301s, IMHO, are not the way to go: if a URL is gone, it hasn't been redirected anywhere, so a 301 will result in bad UX 99% of the time.
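The custom-404 suggestion above could be sketched like this: one handler that always returns 404, with a body that depends on which section of the site the dead URL came from. The path prefix and copy here are invented for illustration:

```python
# Hypothetical context-aware 404: the status code never changes, only
# the body does, so users from the retired product section get an
# explanation while everyone else gets a generic search prompt.
def error_body(path: str) -> str:
    if path.startswith("/products/"):
        return ("<p>We've retired this product line. Here's why, "
                "and a few alternatives you might like.</p>")
    return "<p>Page not found. Try the search box or these related items.</p>"

def respond_404(path: str) -> tuple[int, str]:
    """Always a 404 status; only the body changes per section."""
    return 404, error_body(path)
```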
-
Hello,
I have a related question about 301 vs 410.
I have a client who wants to delete a whole category of products from one site. It's a large number of products, and therefore a large number of URLs, but the product line isn't performing well. So the decision is not SEO-related; it's more of a business decision. It's not for Panda.
If we think about the communication with the user, the best option would be to have a landing page explaining that we decided to remove that product.
Then the question is: do we 301-redirect all those URLs to this landing page? I am afraid that a big redirect like this, going from many URLs to a single one (even if it isn't created to rank on Google), could be seen as dodgy by Google. Am I right?
Or do I return a 410 for those pages, and personalize the 410 landing page just for these URLs in order to communicate with the user (is that even possible?). But I'm afraid, because we'll have many 4XX errors in WMT, and this may influence our rankings!
So I don't know what to do! It's a must that we delete this content and that we communicate it well to our users.
Thanks for your help,
-
100% agreed - 403 isn't really an appropriate alternative to 404. I know SEOs who claim that 410s are stronger/faster, but I haven't seen great evidence in the past couple of years. It's harmless to try 410s, but I wouldn't expect miracles.
-
Hi Eric, I'll try to answer your follow-up question even if I'm not an oracle like Pete.
First of all, thanks, Pete, for underlining that you can only give Google one response - you can't return both a 301 and a 404. I was assuming that and didn't focus on that part of Eric's question.
Second: Eric, if your purpose is to give Google the ability to recrawl the old content so it can see that it has disappeared, you want to return a 404 or a 410, which are respectively "not found" and "gone" (permanently not found). There used to be a difference between them, but now they have almost the same value in Google's eyes (further reading). Either way, Google can access your page and see that the content is now gone.
With a 403, access is denied to everyone, both Google and humans, so Google won't be able to access and recrawl the page. If your theory is based on the idea that Google needs to recrawl your content and see that it has really gone (and I think you're on the right track), a 403 is not the response you should give it.
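The distinction above could be captured in a small audit sketch: given the status code each cut URL currently returns, flag any response that doesn't clearly signal removal to a crawler. The URLs in the test are hypothetical; the logic just encodes the reasoning in this thread:

```python
# 404/410 let Googlebot confirm the content is gone; 403 only denies
# access (removal can't be confirmed); a 3xx says "moved", not "removed".
REMOVAL_CODES = {404, 410}

def audit_cut_urls(statuses: dict[str, int]) -> dict[str, str]:
    """Return a human-readable verdict per URL from its observed status."""
    verdicts = {}
    for url, code in statuses.items():
        if code in REMOVAL_CODES:
            verdicts[url] = "ok: signals removal"
        elif code == 403:
            verdicts[url] = "warn: access denied, removal can't be confirmed"
        elif 300 <= code < 400:
            verdicts[url] = "warn: says moved, not removed"
        else:
            verdicts[url] = "check: unexpected status"
    return verdicts
```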
-
Hey there mememax - thank you for the reply! Reading your post and thinking back to our methodology, yes, I think in hindsight we were a bit too afraid of generating errors when we removed content - we should have considered the underlying meaning of the different statuses more carefully. I appreciate your advice.
Eric
-
Hello Dr. Pete – thank you for the great info and advice!
I do have one follow-up question if that's ok – as we move forward cutting undesirable content and generating 4xx statuses for those pages, is there a difference in impact/effectiveness between a 403 and a 404? We use a CMS, and un-publishing a page creates a 403 “Access denied” message. Deleting a page will generate a 404. I would love to hear your opinion about any practical differences from a Googlebot standpoint… does a 404 carry more weight when it comes to content removal, or are they the same to Googlebot? If there’s a difference and the 404 is better, we’ll go the 404 route moving forward.
Thanks again for all your help,
Eric
-
Let me jump in and clarify one small detail. If you delete a page, which would naturally result in a 404, but then 301-redirect that page/URL, there is no 404. I understand the confusion, but ultimately you can only have one HTTP status code. So, if the page properly 301s, it will never return a 404, even if it's technically deleted.
If the page 301s to a page that looks like a "not found" sort of page (content-wise), Google could consider that a "soft 404". Typically, though, once the 301 is in place, the 404 is moot.
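This "one status code per URL" point can be sketched with a made-up in-memory redirect table, resolved the way a crawler would resolve it: once a deleted page has a 301 in place, that URL never shows a 404, and the crawler only ever sees the final status at the redirect target:

```python
# Hypothetical routing tables: one URL that 301s, one that now 404s.
REDIRECTS = {"/deleted-page": "/category-home"}  # 301 map (invented)
DEAD_PATHS = {"/also-deleted"}                   # pages that now 404

def observed_status(path: str) -> int:
    """The single status code a crawler sees at this exact URL."""
    if path in REDIRECTS:
        return 301  # the deleted page itself never shows a 404
    if path in DEAD_PATHS:
        return 404
    return 200

def final_destination(path: str) -> tuple[str, int]:
    """Follow redirects like a crawler; report the landing URL and status."""
    while path in REDIRECTS:
        path = REDIRECTS[path]
    return path, observed_status(path)
```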
For any change in status, the removal of crawl paths could slow Google re-processing those pages. Even if you delete a page, Google has to re-crawl it to see the 404. Now, if it's a high-authority page or has inbound (external) links, it could get re-crawled even if you cut the internal links. If it's a deep, low-value page, though, it may take Google a long time to get back and see those new signals. So, sometimes we recommend keeping the paths open.
There are other ways to kick Google to re-crawl, such as having an XML sitemap open with those pages in them (but removing the internal links). These signals aren't as powerful, but they can help the process along.
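The sitemap idea above could be sketched as follows, using the standard sitemaps.org XML format; the example URLs are placeholders:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_recrawl_sitemap(urls: list[str]) -> str:
    """Build a minimal XML sitemap listing pages we want Google to
    revisit - e.g. cut pages whose internal links were removed."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")
```

Submitting a file like this keeps a crawl path open to the cut URLs even after their internal links are gone, so Google can come back and see the new 404/410 status sooner.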
As to your specific questions:
(1) It's very tricky, in practice, especially at large-scale. I think step 1 is to dig into your index/cache (slice and dice with the site: operator) and see if Google has removed these pages. There are cases where massive 301s, etc. can look fishy to Google, but usually, once a page is gone, it's gone. If Google has redirected/removed these pages, and you're still penalized, then you may be fixing the wrong problem or possibly haven't gone far enough.
(2) It really depends on the issue. If you cut too deep and somehow cut off crawl paths or stranded inbound links, then you may need to re-establish some links/pages. If you 301'ed a lot of low-value content (and possibly bad links), you may actually need to cut some of those 301s and let those pages die off. I agree with @mememax that sometimes a healthy combination of 301s/404s is a better bet - pages go away, and 404s are normal if there's really no good alternative to the page that's gone.
-
Hi Eric, in my experience I've always found 4xx responses better than 301s for solving this kind of issue.
Many people overuse 301s just because they want to show Google that their site doesn't have any 404s.
Just think about it a little: a 301 is a permanent redirect - content that has moved from one place to another. If you have content you want to get rid of, do you want to give Google the message "hey, that low-quality content is not where you found it, it's over here now"? No. You want to give Google the message that the low-quality content has been improved or removed. And a 404 is the right message to send if you deleted that content.
It's perfectly normal to have 404s on a website, and many 404s won't hurt your rankings. Only if those pages were already ranking (so users would now receive a 404 instead), or if external sites were linking to them, should you consider a 301.
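That rule of thumb could be encoded in a hypothetical decision helper: keep a 301 only when the old URL still carries value (it was ranking, or external sites link to it) AND a genuinely relevant target exists; otherwise let the page die with a 410/404. The function and its parameters are invented for illustration:

```python
from typing import Optional

def removal_status(was_ranking: bool, has_external_links: bool,
                   relevant_target: Optional[str]) -> tuple[int, Optional[str]]:
    """Pick a status for a cut page: (301, target) only when the URL
    still has value AND a relevant destination exists; else (410, None)."""
    if (was_ranking or has_external_links) and relevant_target:
        return 301, relevant_target
    return 410, None
```

Note that a relevant target is required even for valuable URLs - redirecting everything to an unrelated page risks the "soft 404" treatment discussed earlier in the thread.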
While I think Google has a sort of blacklist (and a whitelist too), I don't think it keeps a memory of bad sites it encounters; if you fix your issues, you'll start to rank again.
The issue may not be that your site is tainted, but that you still have some issues here and there which you didn't fix. Googlers have said that Panda is now part of the algorithm, so once you fix your issues you won't need to wait for an update to start re-ranking.
Hope this helps! Good luck!