Do you bother cleaning duplicate content from Google's Index?
-
Hi,
I'm in the process of instructing developers to stop producing duplicate content; however, a lot of duplicate content is already in Google's Index, and I'm wondering if I should bother getting it removed. I'd appreciate it if you could let me know what you'd do...
For example, one 'type' of page is being crawled thousands of times, but it only has 7 instances in the index, none of which rank for anything. For this example I'm thinking of just stopping Google from accessing that page 'type'.
Do you think this is right?
Do you normally meta NoIndex,follow the page, wait for the pages to be removed from Google's Index, and then stop the duplicate content from being crawled?
Or do you just stop the pages from being crawled and let Google sort out its own Index in its own time?
Thanks
FashionLux
-
One tricky point - you don't necessarily want to fix the duplicate URLs before you 301-redirect and clear out the index. This is counter-intuitive and throws many people off. If you cut the crawl paths to the bad URLs, then Google will never crawl them and process the 301-redirects (since those exist on the page level). The same is true for canonical tags. Clear out the duplicates first, THEN clean up the paths. I know it sounds weird, but it's important.
For malformed URLs and usability, you could still dynamically 301-redirect. In most cases, those bad URLs shouldn't get indexed, because they have no crawl path in your site. Someone would have to link to them. Google will never mis-type, in other words.
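If it helps to picture it, here's a rough sketch of that kind of dynamic 301, assuming a Python/Flask stack purely for illustration - the framework, route, and ID-cleanup rule are placeholders for whatever the site actually runs on:

```python
# Hypothetical sketch: normalize a mistyped product ID (e.g. '1q' -> '1')
# and 301 to the canonical URL, so the visitor still lands on the right page.
import re

from flask import Flask, abort, redirect

app = Flask(__name__)

def clean_product_id(raw_id):
    """Strip any non-digit noise from the ID; return None if nothing is left."""
    digits = re.sub(r"\D", "", raw_id)
    return digits or None

@app.route("/product/<raw_id>")
def product(raw_id):
    product_id = clean_product_id(raw_id)
    if product_id is None:
        abort(404)  # nothing recoverable, so don't serve (or index) anything here
    if product_id != raw_id:
        # Permanent redirect: the user gets the page they wanted, and Google
        # consolidates signals onto the single canonical URL.
        return redirect(f"/product/{product_id}", code=301)
    return f"Product page for {product_id}"  # real template rendering goes here
```

The key detail is the 301 status code: if the mistyped URL and the correct URL both answer with a 200, you've created the duplicate; with the 301, the visitor is served seamlessly and Google only keeps one URL.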
-
Hi Highland/Dr Pete,
My apologies, I wasn't very clear - fixing the duplicate problem, or rather stopping our site from generating further duplicate content, isn't an issue at all. I'm going to instruct our developers to stop generating dupe content by doing things like no longer passing variables in the URLs (mysite.com/page2?previouspage=page1).
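On top of stopping those links being generated, one option for mopping up any of the parameter URLs that are already out there would be a blanket 301 that strips the parameters - just a sketch, assuming a Python/Flask-style stack, which may not match our actual platform (the parameter list and routes are made up):

```python
# Hypothetical sketch: if a known tracking parameter such as 'previouspage'
# appears in the query string, 301 back to the same path without it.
from urllib.parse import urlencode

from flask import Flask, redirect, request

app = Flask(__name__)

TRACKING_PARAMS = {"previouspage"}  # placeholder list of parameters to strip

@app.before_request
def strip_tracking_params():
    if TRACKING_PARAMS & set(request.args):
        kept = {k: v for k, v in request.args.items() if k not in TRACKING_PARAMS}
        clean_url = request.path + ("?" + urlencode(kept) if kept else "")
        # Returning a response here short-circuits the request with a 301.
        return redirect(clean_url, code=301)

@app.route("/page2")
def page2():
    return "Page 2 content"  # the one version of the page Google should keep
```

Either way, the end goal is that only the clean URL ever answers with a 200.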
However, the problem is that in a lot of instances duplicate URLs work, and they need to work - for example, if a user types in the URL but gets one character wrong ('1q' rather than '1'), then from a usability perspective it's the correct thing to serve the content they wanted. You don't want to make the user stop, figure out what they did wrong, and redo it - not when you can make it work seamlessly.
My question relates to 'once my site is no longer generating unnecessary duplicate content, what should I do about the duplicate pages that have already made their way into the Index?' and you have both answered the question very well, thank you.
I can manually set up 301 redirects for all of the duplicate pages that I find in the index; once they disappear from the index I can probably remove those 301s. I was thinking of going down the noindex meta tag route, which is harder to develop.
Thanks guys
FashionLux
-
I DO NOT believe in letting Google sort it out - they don't do it well, and, since Panda (and really even before), they basically penalize sites for their inability to sort out duplicates. I think it's very important to manage your index.
Unfortunately, how to do that can be very complex and depends a lot on the situation. Highland's covered the big ones, but the details can get messy. I wrote a mega-post about it:
http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
Without giving URLs, can you give us a sense of what kind of duplicates they are (or maybe some generic URL examples)?
-
Your options are:
- De-index the duplicate pages yourself and save yourself the crawl budget
- 301 the duplicates to the pages you want to keep (preferred)
- Canonical the duplicate pages, which lets you pick which page remains in the index. The duplicate pages will still be crawled, however (rough sketch of the noindex and canonical mechanics below).
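To make the first and third options concrete, here's a minimal sketch, assuming a Python/Flask stack purely for illustration - the noindex directive and the canonical link element are the real mechanisms, while the routes and URLs are placeholders:

```python
# Hypothetical sketch: one page we want dropped from the index gets a noindex
# hint; one duplicate we want consolidated points a canonical at the keeper.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/page-to-deindex")
def page_to_deindex():
    resp = make_response("<html><body>Thin or duplicate content</body></html>")
    # Header equivalent of <meta name="robots" content="noindex,follow"> in the <head>.
    resp.headers["X-Robots-Tag"] = "noindex, follow"
    return resp

@app.route("/duplicate-page")
def duplicate_page():
    html = (
        "<html><head>"
        '<link rel="canonical" href="https://www.example.com/keeper-page">'
        "</head><body>Same content as the keeper page</body></html>"
    )
    return html
```

As noted above, pages carrying the canonical still get crawled, and a 301 remains the stronger consolidation signal wherever it's an option.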