Keeping the SEO benefit of an old URL when changing its content
-
We have a blog post, written in October 2012, that accounts for 30-40% of our traffic (174K pageviews per year, 80% bounce rate). We are considering updating the content, but are concerned that the page will fall off the search engines' map if it is updated with information that is not exactly the same as the original, though still relevant. The URL would stay the same, the original blog content would be shortened with a link to the full post, and the new content would cover other FDA products under investigation. Here is the blog post: http://myadvocates.com/blog/fda-issues-warning-about-so-called-brain-supplement-prevagen
-
There is a risk, but if the content is relevant it's a small one. You could also end up ranking better.
-
When you say "the original blog content would be shortened with a link to the full blog," can you explain that a little more? Do you mean an article like http://myadvocates.com/blog/fda-issues-warning-about-so-called-brain-supplement-prevagen would be only partial and would link to the full version somewhere else? And the new destination content is what would cover the other FDA products?
Also, it is likely you are getting most of your traffic from only a handful of posts. I would do an 80/20 analysis of your blog content (as I show step by step here) or just eyeball your analytics. To mitigate most of the risk, you could leave the top 5-10 traffic-driving posts unchanged.
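If it helps, here is a minimal sketch of that 80/20 analysis in Python. It assumes a CSV export from your analytics tool with hypothetical "page" and "pageviews" columns; adjust the column names to whatever your export actually uses.

```python
import csv

def pareto_report(csv_path, threshold=0.8):
    """Return the pages that together account for `threshold` of all pageviews."""
    with open(csv_path, newline="") as f:
        rows = [(r["page"], int(r["pageviews"])) for r in csv.DictReader(f)]
    rows.sort(key=lambda r: r[1], reverse=True)  # biggest traffic drivers first
    total = sum(views for _, views in rows)
    running, top_pages = 0, []
    for page, views in rows:
        running += views
        top_pages.append((page, views))
        if running / total >= threshold:  # stop once we've covered 80% of traffic
            break
    return top_pages

# These are the posts you'd likely want to leave untouched.
for page, views in pareto_report("analytics_export.csv"):
    print(f"{views:>8}  {page}")
```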
As Cyrus points out here, you should be careful with how much you change the content of the page. Google is good at understanding semantics, which means it is more likely to decide not to pass value through a redirect if the new content is too different from the original. Google also understands document structure, so if you're changing from a full post to another format, be careful with that too.
-
Keep what you have, but add an update paragraph at the end; this should limit your risk.
-
Google is very good with semantically relevant information, and updating posts like this with new results and further information often provides a fresh-content boost as well. With the volume of traffic you're talking about, I'd recommend rolling out the new content in phases so you can see how it impacts results and decide what's best for your situation. Cheers!
Related Questions
-
How would you improve our URL structure?
Hi Mozzers, I have a question about the URL structure on our website (www.ikwilzitzakken.nl). We have a main category, "zitzakken" (beanbags), plus different brands, types, and colours, so we end up with URLs like this: https://www.ikwilzitzakken.nl/zitzakken/vetsak/vetsak-fs600-flokati-zitzak/_381_w_3544_3862_NL_1, which seems long and not clean. Please don't look at the query at the end; we can't do anything about that in our CMS. In English this would be: https://www.iwantbeanbags.nl/beanbags/vetsak/vetsak-fs600-flokati-beanbag/_381_w_3544_3862_NL_1 How would you optimise this? We do have good rankings (this one ranks #1, for example), but I think our overall structure could be much better. Would love your thoughts on this.
On-Page Optimization | TheOnlineWarp
-
Lost SEO contract, new SEO wants us to do the following - can you explain why?
1. Make prokem.co.uk the master domain rather than prokem-corrosion-protection.com.
2. Ensure each http URL is 301 redirected to its https counterpart via htaccess rather than in Plesk.
3. 301 redirect each www.prokem-corrosion-protection.com URL to its co.uk counterpart via htaccess. I can provide a list of pages to redirect, as there are a number of duplicate pages that will need removing.
It probably makes sense to implement these other changes at the same time:
- Remove all of the canonical tags currently on the site.
- Leverage browser caching by following Google's page speed recommendations: https://developers.google.com/speed/docs/insights/LeverageBrowserCaching
- Losslessly compress all of the website's images.
- Combine and minify the website's JavaScript.
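Whatever the reasoning turns out to be, redirect changes like items 2 and 3 are easy to spot-check. Below is a rough sketch using the third-party `requests` library; the URL pairs are illustrative examples built from the domains in the question, and the expected targets are assumptions, not a statement of what the new SEO intends.

```python
import requests

CHECKS = [
    # (URL to test, expected Location header after the htaccess change)
    ("http://prokem.co.uk/", "https://prokem.co.uk/"),
    ("https://www.prokem-corrosion-protection.com/", "https://prokem.co.uk/"),
]

for old, expected in CHECKS:
    # Don't follow the redirect: we want to inspect the first hop itself.
    resp = requests.head(old, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    ok = resp.status_code == 301 and location == expected
    print(f"{'OK' if ok else 'CHECK'}: {old} -> {resp.status_code} {location}")
```

Running this before and after the change gives you a concrete record of what each rule actually does.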
On-Page Optimization | Simon_VO
-
Some Content The Same
Hello. I am about to publish some landing pages that target different industries we are trying to market to:
X for Accountants
X for Financial Advisors
X for Fitness Trainers
X for X
While a good portion of the content on each page is unique ("the benefits of using X for accountants"), some of the content, which explains more about how our software works (the features), will be the same on every page. Is this considered duplicate content? What should I be aware of in terms of Google rankings and penalties? Thanks,
David
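One way to get a feel for how much overlap search engines would see is to diff the rendered copy of two of the pages. A minimal standard-library sketch, with short placeholder strings standing in for the real page text:

```python
from difflib import SequenceMatcher

# Hypothetical placeholders; in practice, paste in the full visible copy of each page.
accountants_page = "X saves accountants hours every week. Features: reporting, invoicing..."
advisors_page = "X saves financial advisors hours every week. Features: reporting, invoicing..."

ratio = SequenceMatcher(None, accountants_page, advisors_page).ratio()
print(f"Shared text ratio: {ratio:.0%}")  # higher ratio = more of the two pages is identical
```

If the shared-features block dominates the unique copy, that's the signal to expand the unique sections.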
On-Page Optimization | smithandco
-
URL SEO: Better directory structure vs. exact keyword phrase
I am trying to understand how best to optimise a URL so that a page ranks high for specific keywords. Example: a top keyword search is "rental properties in new york". The question is: does this keyword need to appear as an exact phrase in the URL, or should it be broken up into different directories for a better structure? E.g.: www.abc.com/en/properties/new-york/rental OR www.abc.com/en/rental-properties-in-new-york Which will help the page rank higher (given that all other things on the page are exactly the same)? Thanks!
On-Page Optimization | MH19
-
Similar URLs
I'm making a site of LSAT explanations. The content is very meaningful for LSAT students; I'm less sure the URLs and headings are meaningful for Google. I'll give you an example. Here are two URLs and headings for two separate pages:
http://lsathacks.com/explanations/lsat-69/logical-reasoning-1/q-10/ - LSAT 69, Logical Reasoning I, Q 10
http://lsathacks.com/explanations/lsat-69/logical-reasoning-2/q10/ - LSAT 69, Logical Reasoning II, Q10
There are two logical reasoning sections on LSAT 69: the first URL is for question 10 from section 1, and the second URL is for question 10 from the second LR section. I noticed that google.com only displays 23 URLs when I search "site:http://lsathacks.com". A couple of days ago it displayed over 120 (i.e. the entire site).
1. Am I hurting myself with this structure, even if it makes sense for users?
2. What could I do to avoid it? I'll eventually have thousands of pages of explanations, and they'll all be very similar in terms of how I would categorize them to a human, e.g. "LSAT 52, logic games, question 12". I should note that the content of each page is very different, but the URL, title, and H1 are similar.
Edit: I could, for example, add a keyword to differentiate titles and URLs (but not the H1). For example: http://lsathacks.com/explanations/lsat-69/logical-reasoning-2/q10-car-efficiency/ - LSAT 69, Logical Reasoning I, Q 10, Car efficiency. But the URL is already fairly long as is. Would that be a good idea?
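For what it's worth, the keyword-suffix idea from the edit can be prototyped with a small helper like this; the function name and inputs are hypothetical, and it just mirrors the URL pattern shown above.

```python
import re

def question_slug(test, section, number, keyword=""):
    """Build an explanation URL path, optionally differentiated by a keyword."""
    parts = [f"lsat-{test}", f"logical-reasoning-{section}", f"q-{number}"]
    if keyword:
        # Normalize the keyword into a URL-safe suffix, e.g. "car efficiency" -> "car-efficiency".
        parts[-1] += "-" + re.sub(r"[^a-z0-9]+", "-", keyword.lower()).strip("-")
    return "/explanations/" + "/".join(parts) + "/"

print(question_slug(69, 2, 10, "car efficiency"))
# -> /explanations/lsat-69/logical-reasoning-2/q-10-car-efficiency/
```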
On-Page Optimization | graemeblake
-
Duplicate content on WordPress
I ran a crawl error report and found that I have many pages with "tag" in the URL, i.e. http://www.soobumimphotography.com/tag/70-200-2-8-is/ What's the best way to deal with this problem? Is it worth visiting all of them to fix them? Should I delete them? Could you give me some suggestions?
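Before fixing or deleting anything, it can help to triage which /tag/ archives are even indexable. A hedged sketch using the third-party `requests` library; the URL list here is just the example from the question, and the naive string check is a rough heuristic, not a full HTML parse.

```python
import requests

tag_urls = [
    "http://www.soobumimphotography.com/tag/70-200-2-8-is/",
    # ...add the rest of the tag URLs from the crawl report
]

for url in tag_urls:
    html = requests.get(url, timeout=10).text.lower()
    # Rough check for a robots meta directive anywhere in the page source.
    if "noindex" in html:
        print(f"{url} -> already noindexed, likely safe to leave")
    else:
        print(f"{url} -> indexable: consider a noindex or a cleanup")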
On-Page Optimization | BistosAmerica
-
New CMS system - 100,000 old URLs - use robots.txt to block?
Hello. My website has recently switched to a new CMS system. Over the last 10 years or so, we've used 3 different CMS systems on our current domain, and as expected, this has resulted in lots of URLs. Up until this most recent iteration, we were unable to 301 redirect or use any page-level indexation techniques like rel="canonical". Using SEOmoz's tools and GWMT, I've been able to locate and redirect all pertinent, PageRank-bearing, "older" URLs to their new counterparts. However, according to Google Webmaster Tools' 'Not Found' report, there are literally over 100,000 additional URLs out there it's trying to find. My question is: is there an advantage to using robots.txt to stop search engines from looking for some of these older directories? Currently we allow everything, only using page-level robots tags to disallow where necessary. Thanks!
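If you do go the robots.txt route, the proposed rules can be dry-run against sample URLs with Python's standard-library robots parser before deployment. A minimal sketch; the directory and URLs below are hypothetical placeholders for the real paths from the old CMS.

```python
from urllib.robotparser import RobotFileParser

proposed_rules = """\
User-agent: *
Disallow: /old-cms-directory/
"""

rp = RobotFileParser()
rp.parse(proposed_rules.splitlines())

for url in [
    "http://example.com/old-cms-directory/page.html",  # should be blocked
    "http://example.com/new-page/",                     # should stay crawlable
]:
    verdict = "blocked" if not rp.can_fetch("*", url) else "crawlable"
    print(url, "->", verdict)
```

One caveat worth testing for: robots.txt stops crawling, so any 301s you've set up inside a blocked directory won't be followed.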
On-Page Optimization | Blenny
-
German SEO
Just a quickie: does anybody know of any strong German SEO agencies? Many thanks, Sean
On-Page Optimization | Yozzer