I'm updating content that is out of date. What's the best way to handle this if I want to keep the old content as well?
-
So here is the situation.
I'm working on a site that offers "Best Of" Top 10 list-type content. They have a list that ranks very well but is out of date.
They'd like to create a new list for 2014 but keep the old list live. Ideally, the new list would replace the old one in search results.
Here's what I'm thinking, but let me know if you think there's a better way to handle this:
- Put a "View New List" banner on the old page
- Make sure all internal links point to the new page (a quick way to check this is sketched below)
- Add a rel=canonical tag on the old list pointing to the new list
Does this seem like a reasonable way to handle this?
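For the internal-link check in the second bullet, here's a rough sketch of one way to audit it, assuming Python with the requests and BeautifulSoup libraries installed; the URLs below are hypothetical placeholders, not the site's real paths:

```python
# Rough sketch: flag pages that still link internally to the old list.
# OLD_LIST_URL and PAGES_TO_CHECK are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

OLD_LIST_URL = "https://domain.com/2013-top-10-list"
PAGES_TO_CHECK = [
    "https://domain.com/",
    "https://domain.com/blog/",
]

for page in PAGES_TO_CHECK:
    html = requests.get(page, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Collect every anchor whose href still points at the old list
    stale = [a["href"] for a in soup.find_all("a", href=True)
             if OLD_LIST_URL in a["href"]]
    if stale:
        print(f"{page} still links to the old list {len(stale)} time(s)")
```

Anything the script flags would need its link updated to point at the new list.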
-
Thanks Anthony, that's a great solution.
-
You could handle this similarly to how you'd handle any sort of seasonal/yearly content.
Ideally, I'd try to set it up like this:
domain.com/top-10-list is your main piece of content. This is the page that currently ranks well. I would update that URL with your new, updated content. The content that was originally on that list, I would move to a new URL like domain.com/top-10-list/2011, or whatever year the original article was created. Then, on the primary page, I would link to all previous/archive lists. You could update the page every year to keep it fresh.
This would keep all rankings/authority focused on your original URL. If your original URL doesn't work with this plan, for instance if it was domain.com/2011-top-10-list, I would consider 301ing that to a newly created page that is focused on being an evergreen list.
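To make the structure concrete, here's a minimal sketch of that URL scheme, assuming a Python/Flask app with hypothetical paths and template names; a real CMS would handle the same idea through its own routing and redirect settings:

```python
# Minimal sketch of the evergreen/archive URL scheme described above.
# Paths and template names are hypothetical placeholders.
from flask import Flask, redirect, render_template

app = Flask(__name__)

@app.route("/top-10-list")
def evergreen_list():
    # The evergreen URL always serves the current list and links to the archives
    return render_template("top_10_current.html")

@app.route("/top-10-list/<int:year>")
def archived_list(year):
    # Archive URLs like /top-10-list/2011 keep the older content live
    return render_template(f"top_10_{year}.html")

@app.route("/2011-top-10-list")
def legacy_dated_url():
    # If the original URL was dated, 301 it to the evergreen page
    return redirect("/top-10-list", code=301)

if __name__ == "__main__":
    app.run()
```

The key point is that the evergreen URL keeps accumulating links and authority, while each year's content stays reachable at its own archive URL.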
Related Questions
-
Complicated Duplicate Content Question...but it's fun, so please help.
Quick background: I have a page that is absolutely terrible, but it has links and it's a category page, so it ranks. I have a landing page which is significantly - a bizillion times - better, but it is omitted in the search results for the most important query we need. I'm considering switching the content of the two pages, but I have no idea what they will do. I'm not sure if it will cause duplicate content issues or what will happen. Here are the two URLs:
Terrible page that ranks (not well, but it's what comes up eventually): https://kemprugegreen.com/personal-injury/
Far better page that keeps getting omitted: https://kemprugegreen.com/location/tampa/tampa-personal-injury-attorney/
Any suggestions (other than just waiting on Google to stop omitting the page, because that's just not going to happen) would be greatly appreciated.
Thanks, Ruben
Intermediate & Advanced SEO | KempRugeLawGroup
-
What's the best way to implement rel="next/prev" if we have filters?
Hi everyone, the filtered view results in paginated content and has different URLs, for example: https://modli.co/dresses.html?category=45&price=13%2C71&size=25
Look at what it says in Search Engine Land: http://searchengineland.com/implementing-pagination-attributes-correctly-for-google-114970 (see the Advanced Techniques paragraph). Do you agree? It seems like Google will index the page multiple times for every filter variant. Thanks, Yehoshua
Intermediate & Advanced SEO | Yehoshua
-
Content From One Domain Mysteriously Indexing Under a Different Domain's URL
I've pulled out all the stops and so far this seems like a very technical issue with either Googlebot or our servers. I highly encourage and appreciate responses from those with knowledge of technical SEO/website problems. First, some background info: three websites, http://www.americanmuscle.com, m.americanmuscle.com and http://www.extremeterrain.com, as well as all of their sub-domains, could potentially be involved. AmericanMuscle sells Mustang parts; Extremeterrain is Jeep-only. Sometime recently, Google has been crawling our americanmuscle.com pages and serving them in the SERPs under an extremeterrain sub-domain, services.extremeterrain.com. You can see for yourself below.
Total # of services.extremeterrain.com pages in Google's index: http://screencast.com/t/Dvqhk1TqBtoK
When you click the cached version of their supposed pages, you see an americanmuscle page (some desktop, some mobile, none of which exist on extremeterrain.com): http://screencast.com/t/FkUgz8NGfFe
All of these links give you a 404 when clicked... Many of the pages I've checked have been cached multiple times while still being a 404 link; Googlebot has apparently re-crawled them many times, so this is not a one-time fluke. The services. sub-domain serves both AM and XT and lives on the same server as our m.americanmuscle website, but they answer on different ports. services.extremeterrain is never used to feed AM data, so why Google is associating the two is a mystery to me. The mobile americanmuscle website is set to respond only on a different port than the services. sub-domain, and only responds to AM mobile sub-domains, not Googlebot or any other user-agent. Any ideas? As one could imagine, this is not an ideal scenario for either website.
Intermediate & Advanced SEO | andrewv
-
Starting Over with a new site - Do's and Don'ts?
After six months, we've decided to start over with a new website. Here's what I'm thinking. Please offer any constructive Do's or Don'ts if you see that I'm about to make a mistake. Our original site (call it mysite.com), we have come to the conclusion, is never going to make a comeback on Google. It seems to us a better investment to start over than to simply keep hoping. Quite honestly, we're freakin' tired of trying to fix this. We don't want to screw with it any more. We are creative people, and would much rather be building a new race car than trying to overhaul the engine in the old one. We have the matching .net domain, mysite.net, which has been aged about 6 years with some fairly general content on a single page. There are zero links to mysite.net, and it was really only used by us for FTP traffic -- nothing in the SERPs for mysite.net. Mysite.NET will be a complete redesign. All content and images will be totally redone. Content will be new, excellent writing, unique, and targeted. Although the subject matter will be similar to mysite.COM, the content, descriptions, keywords, images -- all will be brand spankin' new. We will have a clean slate to begin the long, painful link building process. We will put in the time and bite the bullet until mysite.NET rules Google once again. We'll change the URL in all of our AdWords campaigns to mysite.net. My questions are:
1. Mysite.com still gets some OK traffic from Bing. Can I leave mysite.com substantially intact, or does it need to go?
2. If I have "bad links" pointing to mysite.com/123.html, what would happen if I 301 that page to mysite.NET/abc.html? Does the "bad link juice" get passed on to the clean site? It would be a better experience for users who know our URL if they could be redirected to the new site.
3. Should we put mysite.net on a different server in a different clean IP block? Or doesn't it matter? We're willing to spend for the new server if it would help.
4. What have I forgotten?
Cheers, all
Intermediate & Advanced SEO | DarrenX
-
Advice needed on how to handle alleged duplicate content and titles
Hi, I wonder if anyone can advise on something that's got me scratching my head. The following are examples of URLs which are deemed to have duplicate content and title tags. This causes around 8,000 errors, which (for the most part) are valid URLs because they provide different views on market data. E.g., #1 is the summary, while #2 is 'Holdings and Sector weightings'. #3 is odd because it's crawling the anchored link; I didn't think hashes were crawled? I'd like some advice on how best to handle these, because really they're just queries against a master URL, and I'd like to remove the noise around duplicate errors so that I can focus on some other true duplicate URL issues we have. Here are some example URLs on the same page which are deemed duplicates:
1) http://markets.ft.com/Research/Markets/Tearsheets/Summary?s=IVPM:LSE
2) http://markets.ft.com/Research/Markets/Tearsheets/Holdings-and-sectors-weighting?s=IVPM:LSE
3) http://markets.ft.com/Research/Markets/Tearsheets/Summary?s=IVPM:LSE&widgets=1
What's the best way to handle this?
Intermediate & Advanced SEO | SearchPM
-
Can you see the 'indexing rules' that are in place for your own site?
By 'index rules' I mean the stipulations that determine whether or not a given page will be indexed. If you can see them - how?
Intermediate & Advanced SEO | Visually
-
Competitor 'scraped' entire site - pretty much - what to do?
I just discovered a competitor in the insurance lead generation space has completely copied my client's site's architecture, page names, titles, even the form, tweaking a word or two here or there to prevent 100% 'scraping'. We put a lot of time into the site, only to have everything 'stolen'. What can we do about this? My client is very upset. I looked into filing a 'scraper' report through Google, but the slight modifications to the content technically don't make it a 'scraped' site. Please advise on what course of action we can take, if any. Thanks,
Greg
Intermediate & Advanced SEO | seagreen