Is it a good idea to remove old blogs?
-
So I have a site right now that isn't ranking well, and we are trying everything to help it out. One of my areas of concern is that we have A LOT of old blog posts that were not well written and, honestly, are not very relevant. None of them rank for anything, and they could be causing a lot of duplicate content issues. Our newer posts are written in more of a Q&A format, and that approach seems to be doing better.
So my thought is basically to wipe out all the blog posts from 2010-2012 -- probably 450+ posts.
What do you guys think?
-
You may find this case study helpful, from a blog that decided to do exactly that:
http://www.koozai.com/blog/search-marketing/deleted-900-blog-posts-happened-next/
-
It depends on what you mean by "remove."
If the content of all those old posts truly is poor, I'd strongly consider going through them one by one and seeing how you can rewrite, expand upon, and improve each post. Can you tackle the subject from another angle? Are there images, videos, or other visual assets you can add to make the post more intriguing and shareable?
Then, you can seek out some credible places to strategically place your blog content for additional exposure and maybe even a link. Be careful here, however. I'm not talking about forum and comment spam, but there may be some active communities that are open to unique and valuable content. Do your research first.
When going through each post one by one, you'll undoubtedly find posts that are simply "too far gone" or not relevant enough to keep; essentially, it wouldn't even be worth your time to rewrite them. In this case, find the page on your website that's MOST SIMILAR to each such post. That will often be a match in topic, but it could also be an author page, another blog post that is valuable, a contact page, etc. Then 301 redirect the crap blog posts to those pages.
Not only are you salvaging what little value those posts may have had, but you're also preventing crawl and index issues by telling the search engine bots where that content now lives (assuming it was indexed in the first place).
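To make that concrete, here is a minimal sketch of what those one-to-one 301s could look like in an Apache .htaccess file. The paths are hypothetical placeholders, and this assumes an Apache server with mod_alias available; a WordPress redirect plugin or the nginx equivalent achieves the same thing.

```
# Hypothetical mappings: each retired post points to the most
# similar live page, not a blanket redirect to the homepage.
Redirect 301 /blog/2010/old-thin-post/ https://www.example.com/blog/expanded-guide/
Redirect 301 /blog/2011/off-topic-post/ https://www.example.com/about/author-name/
Redirect 301 /blog/2012/outdated-news/ https://www.example.com/contact/
```

One explicit mapping per retired post keeps whatever link equity exists pointed at genuinely related content, which is the whole point of the exercise.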
This is an incredibly long content process and can take months, especially if a lot of the content is good enough to be rewritten, expanded upon, and added to. However, making that content relevant and useful is the best thing you can do. It's a long haul, but if your best content writers need a project, this is it.
To recap: 1) Go through each blog post one by one and determine what's good enough to edit and what's "too far gone." 2) Rewrite, edit, add to (content and images/videos), and re-promote the keepers socially and to appropriate audiences and communities. 3) For the posts that were "too far gone," 301 redirect them to the most relevant posts and pages that remain live.
Again, I can say firsthand that this is a LONG process; I've done it for a client in the past. However, the return was well worth the work. And by doing it this way rather than just deleting posts, you're saving yourself a lot of crawl/index headaches with the search engines.
-
we have A LOT of old blog posts that were not well written and, honestly, are not very relevant.
Wow... it is great to hear someone looking at their content and deciding they can kick it up a notch. I have seen a lot of people who would never, ever pull the kill switch on an old blog post. In fact, they are still out there hiring people to write stuff that is really crappy.
If this were my site, I would first check to be sure I don't have a Penguin or unnatural-links problem. If you think you are OK there, here is what I would do.
-
I would look at those blog posts to see if any of them have traffic, link, or revenue value. Value is defined as: A) traffic from any search engine or other quality source, B) valuable links, C) viewing by current website visitors, D) income from ads or purchases generated by visitors who enter through those pages.
-
If any of them pass the value test above then I would improve that page. I would put a nice amount of work into that page.
-
Next, I would look at each of those blog posts and see if any have content value. That means an idea that could be developed into valuable content, or valuable content that could simply be rewritten to a higher standard. Valuable content is defined as a topic that might pull traffic from search or be consumed by current site visitors.
-
If any pass the valuable content test then I would improve them. I would make them kickass.
-
After you have done the above, I would pull the plug on everything else... or, if I were feeling charitable, I would offer it to a competitor.
Salutes to you for having the courage to clean some slates.
-
I would run them through Copyscape to check for plagiarism/duplicate content issues. After that, I would check referral traffic; if some pages draw enough of it, you might not want to remove them. Finally, round it off with a page-level link audit. Majestic can give you a pretty good idea of where they stand.
The pages that don't make the cut should be set to return a 410 status code. If you still don't like the content on pages with good links and/or referral traffic, 301 those to better content on the same subject.
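Both status codes can be served from an Apache .htaccess file. A minimal sketch, assuming Apache with mod_alias and hypothetical paths:

```
# Posts with no traffic, link, or content value:
# tell crawlers they are gone for good with a 410.
Redirect gone /blog/2010/thin-post-one/
Redirect gone /blog/2011/thin-post-two/

# Posts that keep good links or referral traffic:
# 301 to the closest live equivalent instead.
Redirect 301 /blog/2012/decent-post/ https://www.example.com/blog/better-version/
```

A 410 is an explicit "gone for good" signal, so search engines tend to drop those URLs from the index faster than they would with a plain 404.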
Related Questions
-
Redirect old image that has backlinks
Hi Moz Community! I'm doing an audit of a website and ran a backlink analysis. In it, there is an image with 66 backlinks, but the image no longer exists on the website (it was on a version of the site created in 2011, two web launches ago). I don't believe a 301 redirect will work for an image that doesn't exist anymore. How would I redirect the image URL (the site is WordPress, so there is a specific URL that other websites link to, but it now returns a 404 error) without going to each individual website and requesting they change the link? Any advice or recommendations would be great. Thanks!
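For what it's worth, a 301 works for an image URL the same way it does for a page, even if the file itself no longer exists, since redirects operate on the URL rather than the file. A minimal .htaccess sketch, assuming Apache with mod_alias and hypothetical file paths:

```
# Hypothetical: point the old image's backlinks at a current asset
Redirect 301 /wp-content/uploads/2011/old-chart.png https://www.example.com/wp-content/uploads/current-chart.png
```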
Intermediate & Advanced SEO | BradChandler
-
6 .htaccess Rewrites: Remove index.html, Remove .html, Force non-www, Force Trailing Slash
Some information about my website environment:
1. I have a static webpage in the root.
2. WordPress is installed in a sub-directory: www.domain.com/blog/
3. I have two .htaccess files, one in the root and one in the WordPress folder.
I want to:
- Redirect www to non-www on all URLs
- Remove index.html from the URL
- Remove the .html extension / 301 redirect to the URL without .html
- Add a trailing slash to the static webpages / 301 redirect from the non-trailing-slash version
- Force a trailing slash on the WordPress pages / 301 redirect from the non-trailing-slash version
Some examples:
domain.tld/index.html >> domain.tld/
domain.tld/file.html >> domain.tld/file/
domain.tld/file.html/ >> domain.tld/file/
domain.tld/wordpress/post-name >> domain.tld/wordpress/post-name/
My code in the ROOT .htaccess is:
```
<IfModule mod_rewrite.c>
Options +FollowSymLinks -MultiViews
RewriteEngine On
RewriteBase /

# Remove trailing slash
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ $1 [R=301,L]

# www to non-www
RewriteCond %{HTTP_HOST} ^www\.(([a-z0-9_]+\.)?domain\.com)$ [NC]
RewriteRule .? http://%1%{REQUEST_URI} [R=301,L]

# Internally map extensionless URLs to .html files
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^.]+)$ $1.html [NC,L]

# Redirect index.html to the root
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/
RewriteRule ^index\.html$ http://domain.com/ [R=301,L]

# Strip the .html extension
RewriteCond %{THE_REQUEST} \.html
RewriteRule ^(.*)\.html$ /$1 [R=301,L]
</IfModule>
```
The above code does the following:
1. Redirects www to non-www
2. Removes the trailing slash at the end (if one exists)
3. Removes index.html
4. Removes all .html extensions
5. 301 redirects to the filename, but doesn't add a trailing slash at the end
Intermediate & Advanced SEO | NeatIT
-
Mass Removal Request from Google Index
Hi, I am trying to cleanse a news website. When this website was first made, the people who set it up copied over all kinds of material they had as a newspaper, including tests, internal communications, and drafts. The site has lots of junk, but all of it came from that initial backup, i.e. before 1st June 2012. So, by removing all the mixed content prior to that date, we can have pure articles starting 1st June 2012!
Therefore:
- My dynamic sitemap now contains only articles with a release date between 1st June 2012 and now.
- Any article with a release date prior to 1st June 2012 returns a custom 404 page with a "noindex" meta tag, instead of the actual content of the article.
The question is how I can remove all this junk from the Google index as fast as possible; it is no longer on the site, but still appears in Google results. I know that for individual URLs I can request removal at https://www.google.com/webmasters/tools/removals. The problem is doing this in bulk, as there are tens of thousands of URLs I want to remove.
Should I put the articles back in the sitemap so the search engines crawl it and see all the 404s? I believe this is very wrong; as far as I know it will cause problems, because search engines will try to access non-existent content that the sitemap declares as existent, and will report errors in Webmaster Tools.
Should I submit a DELETED ITEMS sitemap using the <expires> tag? I think that is for custom search engines only, and not for the generic Google search engine: https://developers.google.com/custom-search/docs/indexing#on-demand-indexing
The site unfortunately doesn't use any kind of "folder" hierarchy in its URLs, but instead the ugly GET params, and a folder-based pattern is impossible since all articles (removed junk and actual articles alike) are of the form http://www.example.com/docid=123456
So, how can I bulk remove all the junk from the Google index... relatively fast?
Intermediate & Advanced SEO | ioannisa
-
Wordpress Blog in 2 languages. How to SEO or structure it?
Hi Moz community, I have a WordPress blog that is currently in Spanish. I want to create the same blog content in an English version (manually translated into English rather than using a translation service such as Google Translate). How should I structure the blog for SEO? How will it work? Is there any structured markup I should know about? Any examples? Thanks
Intermediate & Advanced SEO | WayneRooney
-
OMG. RAND IS ATTACKED! (in a blog post)
I posted a link to Rand's recent Moz Blog post in another forum. One of the users posted a link to this article as a counterpoint. Thoughts? [title edited by staff for clarity]
Intermediate & Advanced SEO | AWCthreads
-
How does Google know if a backlink is good or not?
Hi, What does Google look at when assessing a backlink? How important is it to get a backlink from a website with relevant content? For example:
1. Domain/Page Authority 80, website is not relevant and does not use any of the words in your target term anywhere on the site.
2. Domain/Page Authority 40, website is relevant and uses the words in your target term multiple times across the site.
Which example would benefit your SERPs more if you gained a backlink from it? (And if you can say, how much more would it benefit: low, medium, or high?)
Intermediate & Advanced SEO | activitysuper
-
Should pages of old news articles be indexed?
My website publishes about 3 news articles a day and is set up so that old articles can be accessed through a "back" button, with articles moving to page 2, then page 3, then page 4, and so on as new articles push them down. The pages include a link to each article and a short snippet. I was thinking I would want Google to index the first 3 pages of articles, but beyond that the pages are not worthwhile. Could these pages harm me, and should they be noindexed and/or given a canonical URL pointing to the main news page? Or is leaving them as-is fine, because they are so deep in the site that Google won't see them, while I also won't be penalized for having weak content? Thanks for the help!
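If those deeper archive pages were to be noindexed, one way to do it without touching the page templates is an X-Robots-Tag response header. A minimal sketch, assuming Apache 2.4 with mod_headers and a hypothetical /news/page/N URL pattern:

```
# Hypothetical: noindex news archive pages 4 and beyond,
# while still letting crawlers follow the article links.
<If "%{REQUEST_URI} =~ m#^/news/page/([4-9]|[1-9][0-9]+)/?$#">
    Header set X-Robots-Tag "noindex, follow"
</If>
```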
Intermediate & Advanced SEO | theLotter
-
How to deal with old, indexed hashbang URLs?
I inherited a site that used to be in Flash and used hashbang URLs (i.e. www.example.com/#!page-name-here). We're now off Flash and have a "normal" URL structure that looks something like this: www.example.com/page-name-here Here's the problem: Google still has thousands of the old hashbang (#!) URLs in its index. These URLs still work because the web server doesn't actually read anything that comes after the hash. So, when the web server sees this URL www.example.com/#!page-name-here, it basically renders this page www.example.com/# while keeping the full URL structure intact (www.example.com/#!page-name-here). Hopefully, that makes sense. So, in Google you'll see this URL indexed (www.example.com/#!page-name-here), but if you click it you are essentially taken to our homepage content (even though the URL isn't exactly the canonical homepage URL, which should be www.example.com/). My big fear here is a duplicate content penalty for our homepage. Essentially, I'm afraid that Google is seeing thousands of versions of our homepage. Even though the hashbang URLs are different, the content (i.e. title, meta description, page content) is exactly the same for all of them. Obviously, this is a typical SEO no-no. And I've recently seen the homepage drop like a rock for a search of our brand name, which has ranked #1 for months. Now, admittedly we've made a bunch of changes during this whole site migration, but this #! URL problem just bothers me; I think it could be a major cause of our homepage tanking for brand queries. So, why not just 301 redirect all of the #! URLs? Well, the server won't accept traditional 301s for the #! URLs, because the # seems to screw everything up (the server doesn't acknowledge what comes after the #). I "think" our only option here is to try and add some 301 redirects via JavaScript. Yeah, I know that spiders have a love/hate (well, mostly hate) relationship with JavaScript, but I think that's our only resort... unless someone here has a better way? If you've dealt with hashbang URLs before, I'd LOVE to hear your advice on how to deal with this issue. Best, -G
Intermediate & Advanced SEO | Celts18