What should be done with old news articles?
-
Hello,
We run a portal website that provides information about the industry we work in: various articles, tips, info, reviews, and more. We also have a news section that was previously indexed in Google News but has not been for the past few months.
The site was hit by Panda over a year ago, and one of the things we have been considering is removing pages that are irrelevant or provide no added value. Some of these pages are old news articles, posted over 3-4 years ago, that get hardly any traffic. All the news articles on the site live under an /archive/ folder sorted by month and year, so, for example, the URL for a news item from April 2010 would be /archive/042010/article-name.
My question is: do you think removing such news articles would benefit the site and help it get out of Panda (many other things have been done on the site as well)? If not, what is the best way to keep these articles on the site so that Google indexes them and treats them well?
Thanks
-
Basically, I don't see a reason to remove old news articles from a site; it makes sense to still have an archive present. The only reasons I can think of to remove them are if they are duplicate versions of texts originally published somewhere else, or if the quality is really poor...
-
If the articles are good, there just might be value to the user. Depending on the niche or industry, those old articles could be very important.
Google probably doesn't like them because you get a lot of impressions but no clicks (so essentially no traffic), or because the "score" is bad (bounce rate: not the Google Analytics bounce rate, but Google's own, i.e. users bouncing back to the SERPs).
Since you got hit by Panda, in my opinion there are two options:
1. Noindex those old pages. Users can still get to them via navigation, site search, etc., but Google won't index them. Google is fine with a site having content (old, poor, thin, etc.) as long as it's not in the index. I work with a site that has several million pages, 80% of them noindexed, and everything is fine now (they also got hit by Panda). A sample tag is sketched below.
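A minimal sketch of that noindex approach: a robots meta tag in the <head> of each old article page. The "follow" directive is my assumption; it keeps crawlers following the page's links even though the page itself stays out of the index.

```html
<!-- Keep this page out of the index while still letting
     crawlers follow its links to the rest of the site. -->
<meta name="robots" content="noindex, follow">
```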
2. Merge those pages into rich, cool, fresh topic pages (see the New York Times topic pages for a sample; search for it. I think there is also a SEOmoz Whiteboard Friday post about it). This is a good approach, and if you manage to merge those old pages with some new content you will be fine. Topic pages are a great anti-Panda tool!
If you merge the pages into topic pages, do it based on a simple flow:
1. Identify a group of pages that cover the same topic.
2. Identify the page with the highest authority of them all.
3. Turn that page into the topic page; keep its URL.
4. Merge the others into this page (based on your new topic page structure and flow).
5. 301 redirect the others to this one (see the redirect sketch after this list).
6. Build a separate XML sitemap with all those pages and upload it to WMT. Monitor it.
7. Build some links to some of those landing pages and get at least minimal social signals to a few of them (depending on the number). Build a user-friendly index-type page (or pages) with those topic pages, and use it as a target for link building to send the 'love'.
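For step 5, a minimal sketch of the redirect rules, assuming an Apache server with mod_alias; the article and topic-page paths are placeholders, not real URLs:

```apache
# .htaccess sketch: permanently redirect merged archive articles
# to the surviving topic page (placeholder paths).
Redirect 301 /archive/042010/old-article-one /topics/example-topic
Redirect 301 /archive/052010/old-article-two /topics/example-topic
```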
Hope it helps - just some ideas.
-
I do think that any site should remove pages that are not valuable to users.
I would look for the articles that have external links pointing at them and 301 those to something relevant. The rest you could simply remove and let return a 404 status. Just make sure all internal links pointing at them are gone; you don't want to lead people to a 404 page.
You could consider disallowing /archive/ in your robots.txt file if you think the pages have some value to users but not to the engines, or putting a noindex tag on each page in that section (a sketch follows below).
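A minimal robots.txt sketch of the first option; the /archive/ path comes from the original question. One caveat worth flagging: a page blocked in robots.txt can't have its noindex tag read by crawlers, so use one approach or the other, not both.

```
# robots.txt sketch: keep all crawlers out of the news archive.
User-agent: *
Disallow: /archive/
```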
If you want to keep the articles on the site, available to both Google and users, you have to make sure they meet some basic criteria:
- Mostly unique content
- Moderate length
- A good content-to-ad ratio
- Content as the focus of the page (top/center)
Related Questions
-
How to de-index old URLs after redesigning the website?
Thank you for reading. After redesigning my website (5 months ago), my crawl reports (Moz, Search Console) still show tons of 404 pages, which all seem to be URLs from my previous website (same root domain). It would be nonsense to 301 redirect them, as there are too many URLs (or would it be?). What is the best way to deal with this issue?
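As an aside: if the old URLs follow a pattern, bulk 301s need not mean one rule per URL. A hypothetical sketch, assuming an Apache server with mod_rewrite and placeholder path patterns:

```apache
# Hypothetical mod_rewrite sketch: 301 every URL under /old-section/
# to its counterpart under /new-section/ with a single rule.
RewriteEngine On
RewriteRule ^old-section/(.*)$ /new-section/$1 [R=301,L]
```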
Intermediate & Advanced SEO | Chemometec
-
Moving to a new site while keeping old site live
For reasons I won't get into here, I need to move most of my site to a new domain (DOMAIN B) while keeping every single current detail on the old domain (DOMAIN A) as it is. Meaning, there will be two live websites with mostly the same content, but I want the content to appear to search engines as though it now belongs to DOMAIN B. Weird situation, I know. I've run around in circles trying to figure out the best course of action. What do you think is the best way of going about this? Do I simply point DOMAIN A's canonical tags to the copied content on DOMAIN B and call it good? Should I ask sites that link to DOMAIN A to change their links to DOMAIN B, or start fresh and cut my losses? Should I still file a change of address with GWT, even though I'm not going to 301 redirect anything?
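For reference, the cross-domain canonical the poster asks about is a link element in the <head> of each DOMAIN A page pointing at its copy on DOMAIN B; a minimal sketch with placeholder URLs:

```html
<!-- On each DOMAIN A page: declare the DOMAIN B copy as canonical. -->
<link rel="canonical" href="https://www.domain-b.com/page-name-here">
```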
Intermediate & Advanced SEO | kdaniels
-
How to structure articles on a website.
Hi All, The key to a successful website is quality content, or so the Gods of Google tell me. Embrace your audience with quality, feature-rich articles on your products or services, hints and tips, how-tos, etc. So you build your article page with all the correct criteria: long-tail keywords or phrases in the URL, heading, first sentence, etc. My question is this: let's say you have 30 articles. Where would you place those 30 articles for SEO purposes and user experience? My thoughts are:
1] On the home page, create a column with a clear heading "Useful articles" and populate the column with links to all 30 articles.
or
2] Throughout your website, create link references to the articles as part of the natural information flow.
or
3] Create a banner or impact logo on all pages to entice your audience to click through to a dedicated "articles page". Thanks, Mark
Intermediate & Advanced SEO | Mark_Ch
-
Article Marketing / Article Posting
I am working on the SEO for a few different websites, and I have built out an article marketing campaign so that I can get high-quality backlinks for my websites. I have been writing the content myself and manually building out the top Web 2.0, article directory, and doc sharing sites. Today I was creating an account on Squidoo, and I wondered whether the username should be one of two things: 1) my keyword as a username, like [keyword+geotag], for example roofinghouston, or 2) just my first and last name (or a username I always use). The reason behind #1 would be to have the optimized keyword and location I am trying to rank for inside the username. The reason for #2 is that I don't want to get into trouble by having "too much" optimization. I know a bit about optimization, and that getting your keyword out there helps in a lot of areas, but I am not sure if it looks "suspicious" to have the username be the keyword+geotag. I am just worried that all of this hard work will be torn down if I look like I'm trying too hard to be optimized. There is no one answer; I am mainly looking for shared experiences. If you do have a definite answer, I would like that too 🙂 Thanks SEOMoz!
Intermediate & Advanced SEO | SEOWizards
-
Google News URL Structure
Hi there folks, I am looking for some guidance on Google News URLs. We are restructuring the site, and a main traffic driver will be the traffic we get from Google News. Most large publishers use:
www.site.com/news/12345/this-is-the-title/
Others use variations such as:
www.example.com/news/celebrity/12345/this-is-the-title/
www.example.com/news/celebrity-news/12345/this-is-the-title/
www.example.com/celebrity-news/12345/this-is-the-title/ (Celebrity is a channel on Google News, so should we try to follow that format?)
www.example.com/news/celebrity-news/this-is-the-title/12345/
www.example.com/news/celebrity-news/this-is-the-title-12345/ (unique ID at the end, as part of the title URL)
www.example.com/news/celebrity-news/celebrity-name/this-is-the-title-12345/
Others include the date. So as you can see, there are many combinations, and there doesn't seem to be any unity across news sites on this format. Have you any advice on how to structure these URLs? Particularly if we want to be seen as an authority on the following topics: fashion, hair, beauty, and celebrity news, in particular "celebrity name". So should the celebrity news section be www.example.com/news/celebrity-news/celebrity-name/this-is-the-title-12345/ or what? This is for a completely new site build. Thanks, Barry
Intermediate & Advanced SEO | Deepti_C
-
Should pages of old news articles be indexed?
My website publishes about 3 news articles a day and is set up so that old news articles can be accessed through a "back" button, with articles going to page 2, then page 3, then page 4, etc., as new articles push them down. The pages include a link to each article and a short snippet. I was thinking I would want Google to index the first 3 pages of articles, but after that the pages are not worthwhile. Could these pages harm me, and should they be noindexed and/or given a canonical URL pointing to the main news page? Or is leaving them as-is fine because they are so deep in the site that Google won't see them, while I also won't be penalized for having weak content? Thanks for the help!
Intermediate & Advanced SEO | theLotter
-
How to deal with old, indexed hashbang URLs?
I inherited a site that used to be in Flash and used hashbang URLs (i.e. www.example.com/#!page-name-here). We're now off of Flash and have a "normal" URL structure that looks something like this: www.example.com/page-name-here. Here's the problem: Google still has thousands of the old hashbang (#!) URLs in its index. These URLs still work because the web server doesn't actually read anything that comes after the hash. So, when the web server sees www.example.com/#!page-name-here, it basically renders www.example.com/ while keeping the full URL structure intact in the browser. Hopefully that makes sense. So, in Google you'll see www.example.com/#!page-name-here indexed, but if you click it you are essentially taken to our homepage content (even though the URL isn't exactly the canonical homepage URL, which should be www.example.com/). My big fear here is a duplicate content penalty for our homepage. Essentially, I'm afraid Google is seeing thousands of versions of our homepage: even though the hashbang URLs are different, the content (title, meta description, page content) is exactly the same for all of them. Obviously, this is a typical SEO no-no. And I've recently seen the homepage drop like a rock for a search of our brand name, which had ranked #1 for months. Now, admittedly, we've made a bunch of changes during this whole site migration, but this #! URL problem just bothers me; I think it could be a major cause of our homepage tanking for brand queries. So, why not just 301 redirect all of the #! URLs? Well, the server won't accept traditional 301s for the #! URLs, because the fragment never reaches the server (it doesn't acknowledge anything after the #). I think our only option here is to try to add some 301 redirects via JavaScript. Yeah, I know spiders have a love/hate (well, mostly hate) relationship with JavaScript, but I think that's our only resort... unless someone here has a better way? If you've dealt with hashbang URLs before, I'd LOVE to hear your advice on how to deal with this issue. Best, -G
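A minimal client-side sketch of the JavaScript redirect idea described above; note that a JS redirect cannot return a real 301 status code, it only moves visitors (and JS-executing crawlers) to the new URL. The direct fragment-to-path mapping is an assumption for illustration:

```javascript
// Sketch: forward old hashbang URLs to the new clean URLs on load.
if (window.location.hash.indexOf('#!') === 0) {
  var path = window.location.hash.slice(2); // e.g. "page-name-here"
  window.location.replace('/' + path);      // replace() keeps the back button usable
}
```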
Intermediate & Advanced SEO | Celts18
-
How to beat a Wikipedia article for the top spot on SERPs?
Hi Guys, One of our clients has a good website with lots of content that already ranks #2 for its top keyword (singular and plural) on Google UK. The keyword itself is a competitive one. The top spot is occupied by a Wikipedia article that doesn't have much content in general. Can anyone advise on a strategy to outrank that article? Thanks!
Intermediate & Advanced SEO | myclicks-163603