Should pages of old news articles be indexed?
-
My website publishes about three news articles a day and is set up so that older articles can be reached through a "back" button: articles move to page 2, then page 3, then page 4, and so on as new articles push them down. These paginated pages contain only a link to each article and a short snippet.
I was thinking I would want Google to index the first three pages of articles, but beyond that the pages aren't worthwhile. Could these pages harm my site? Should they be noindexed and/or given a canonical URL pointing to the main news page? Or is it fine to leave them as they are, since they're so deep in the site that Google won't see them, but I also won't be penalized for having weak content?
Thanks for the help!
-
Ah, I'm sorry, I misinterpreted you - so it's essentially about pagination? rel="next"/rel="prev" is probably the best way to go: the first page will be given the equity, and the pages won't have to compete with each other for rankings. Google has a pretty comprehensive guide: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663744
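For reference, the markup would go in the `<head>` of each paginated page. A minimal sketch, assuming a hypothetical site with archive pages at example.com/news/page/2, /page/3, etc.:

```html
<!-- On page 2 of the archive (e.g. example.com/news/page/2) -->
<!-- Points Google at the previous and next pages in the series -->
<link rel="prev" href="https://example.com/news/" />
<link rel="next" href="https://example.com/news/page/3" />
```

The first page of the series only gets a rel="next" link, and the last page only a rel="prev" link.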
-
Thanks Alice, but my question is about the pages the articles are linked from, not the actual articles themselves (which are 100% staying indexed).
-
Hi Sara,
If the articles are time sensitive but high quality, I wouldn't noindex them. They could still have value in the future (for example, if a related story comes up, you can link back to the old article). You might also find ways to refresh or recycle them, such as adding a follow-up, updating the information, or promoting a really great post "From Our Archives". They could also be a good long-tail source of traffic for people looking for information on past news and events.
Google will index old and outdated articles, but it's smart enough to recognize that these posts are old and outdated, and therefore won't assign big chunks of PageRank to them.
However, if the articles are low quality, I would take action to improve the good-content/poor-content ratio. The ideal solution would be to improve the articles themselves, but that might not be feasible if you've been publishing three per day for an extended period. I would conduct a thorough audit to see what content can be saved or improved and what content should be deleted. I wouldn't bother with noindex or canonicals - if it's good content, leave it up and let it be indexed; if it's bad content that can't be saved, remove it.
Finally, if you are redirecting old articles, be careful about where they redirect to. Ideally you'd redirect a low-quality article to a high-quality article on the same subject. A large number of URLs all redirecting to the main news page could raise a red flag, and it forces readers to hunt for the information they were expecting.
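As an illustration, a per-article 301 might look like this on an Apache server (the paths are hypothetical, and the directive assumes mod_alias is enabled):

```apache
# .htaccess - redirect a removed low-quality article to a
# higher-quality article on the same subject, not the homepage
Redirect 301 /news/old-thin-article /news/in-depth-followup
```

Mapping each old URL to its closest topical match preserves more relevance than funneling everything to the main news page.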
Good luck!
-
The news articles themselves are not thin content, but the paginated archive pages are relatively thin because they consist only of the links and snippets.
-
Are they all thin content? If not, I don't think it's necessary to noindex them. If you think some of them don't have any real value, you could noindex those specific pages (rather than all of them). Google will crawl those pages no matter how deep they are, as long as they are accessible.
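For anyone unfamiliar, noindexing a specific page is just a robots meta tag in that page's `<head>` - a sketch:

```html
<!-- Keeps this archive page out of the index, while "follow"
     still lets crawlers follow the article links on it -->
<meta name="robots" content="noindex, follow" />
```

Using "follow" rather than "nofollow" matters here, since the whole point of these archive pages is to let Google reach the older articles.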