Duplicate Content... Really?
-
Hi all,
My site is www.actronics.eu
Moz reports virtually every product page as duplicate content, flagged as HIGH PRIORITY.
I know why.
Moz classes a page as duplicate if its content/code is more than 95% similar to another page's.
There's very little I can do about this: although our products are different, the content is very similar, differing only by a few part numbers and the vehicle make/model.
Here's an example:
http://www.actronics.eu/en/shop/audi-a4-8d-b5-1994-2000-abs-ecu-en/bosch-5-3
http://www.actronics.eu/en/shop/bmw-3-series-e36-1990-1998-abs-ecu-en/ate-34-51
Now, multiply this by ~2,000 products across 7 different languages and you'll see we have a big duplicate content issue (according to Moz's Crawl Diagnostics report).
I say "according to Moz..." as I do not know if this is actually an issue for Google? 90% of our products pages rank, albeit some much better than others?
So what is the solution? We're not trying to deceive Google in any way so it would seem unfair to be hit with a dupe content penalty, this is a legit dilemma where our product differ by as little as a part number.
One ugly solution would be to remove header / sidebar / footer on our product pages as I've demonstrated here - http://woodberry.me.uk/test-page2-minimal-v2.html since this removes A LOT of page bloat (code) and would bring the page difference down to 80% duplicate.
(This is the tool I'm using for checking http://www.webconfs.com/similar-page-checker.php)Other "prettier" solutions would greatly appreciated. I look forward to hearing your thoughts.
Thanks,
Woody -
Hey David
Thanks for the reply.
3. Use a plugin to apply rich snippet markup to the individual product pages, adding another layer of "uniqueness"
I had already thought about this and was looking into the MPN (Manufacturer Part Number) attribute for products (https://schema.org/mpn); however, it's not clear whether, like the SKU, the MPN needs to be unique to a ProductModel (https://schema.org/ProductModel).
If that were the case, I'd have a problem, as there are multiple MPNs per ProductModel.
I see https://schema.org/isVariantOf too, which could be useful?
Anyone with experience of Schema?
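For reference, here's the kind of markup I'm imagining; a rough, hypothetical sketch only (the part numbers are from my ATE MK70 example, and I have no idea yet whether Google accepts multiple MPNs modelled this way):

```html
<!-- Hypothetical sketch: one parent ProductModel, with each part
     number expressed as a variant via isVariantOf. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "ProductModel",
      "@id": "#ate-mk70",
      "name": "ATE MK70 ABS ECU",
      "brand": { "@type": "Brand", "name": "ATE" }
    },
    {
      "@type": "ProductModel",
      "name": "ATE MK70 (Audi A3 fitment)",
      "mpn": "10097003153",
      "isVariantOf": { "@id": "#ate-mk70" }
    },
    {
      "@type": "ProductModel",
      "name": "ATE MK70 (Peugeot 206 fitment)",
      "mpn": "9659136980",
      "isVariantOf": { "@id": "#ate-mk70" }
    }
  ]
}
</script>
```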
-
First, why were you looking at the reports? Have you seen some type of ranking loss that you are trying to remedy?
Second, the Moz tools are just that: tools that give you an overview of where you are and the potential areas where your site can be improved. They work, but they are not dedicated to any one type of website, i.e. e-commerce vs. static or content-based.
To get the unique pages you seek, it may be possible to use JavaScript to load the content that varies by part number. As stated before, your site is being seen as duplicate because only a few things change from page to page.
Possible fixes:
1. Use dynamic coding to load the part-number variables, such as drop-down menus for alternate versions, parts, or models (see the sketch after this list). This will also give you fewer pages to direct your backlinks to.
2. Have more top-level pages based around the category, and focus on getting the category pages ranking rather than the individual part pages. Again, focus your backlinking efforts on these pages.
3. Use a plugin to apply rich snippet markup to the individual product pages, adding another layer of "uniqueness"
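As a bare-bones illustration of point 1, here's a hypothetical sketch of a part-number selector that swaps the fitment details client-side on a single page; the IDs, values, and structure are all made up for the example:

```html
<!-- Minimal, hypothetical sketch: one product page, with part numbers
     swapped client-side instead of one page per part number. -->
<select id="fitment">
  <option value="10097003153">Audi A3 (10097003153)</option>
  <option value="9659136980">Peugeot 206 (9659136980)</option>
</select>
<p>Part number: <span id="part-number"></span></p>

<script>
  var fitment = document.getElementById('fitment');
  var output = document.getElementById('part-number');
  function render() { output.textContent = fitment.value; }
  fitment.addEventListener('change', render);
  render(); // show the initially selected part number on page load
</script>
```

This way the part numbers stay crawlable as on-page text, but they no longer each need their own URL.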
-
The pages were not intended strictly for SEO value; they were mainly built for user value, i.e. returning a page 100% focused on the part number the visitor searched for. Remember, many people use Google as a navigational tool, and they consider the product to be the part number they searched for, not the main manufacturer of the product (ATE).
I understand what you are saying, though, and think building stronger product pages is the way to go, although I will try it on a subset of pages first and monitor the results.
Now to decide which approach to take to yield the best results:
a.) SEO focus on the ATE MK70 (list all the vehicle makes/models/years this product works on, including the list of part numbers)
or...
b.) SEO focus on vehicle make/model (then list all the manufacturers of suitable products, with corresponding part numbers)
Thanks,
Woody -
This is one of the things Panda was designed to discourage: creating pages strictly for SEO value, as opposed to user value, that have thin content.
Consolidating and building out a single page is the way to go. Google will still crawl the product numbers, and they will be on a much stronger page. Even if they're not in the URL and title, a more valuable page nearly always wins out.
Not only that, but you're playing with fire right now. If you haven't been hit by Panda yet, your odds of being hit are much higher with these numerous little pages.
-
Thanks guys
William
What's the thought process of creating a bunch of new pages, even though it's the same product, just referred to differently by different companies? Just for the unique URLs and titles?
Samuel
Would you want to create a separate page for "red Honda Civic," "green Honda Civic," and countless other colors? Of course not.
To hopefully address both questions with one answer: the reason for building separate pages was to give SEO focus to the unique part numbers and to the product type by vehicle make/model/year.
Very few people in the industry search for the product by name; it's always by part number. In fact, I'd go as far as to say there are few who would actually know the brand of "the product", that being the ATE MK70 in our example above.
I understand the logic of building a strong single product page with all these part numbers listed, but would this page really rank well for searches on a part number? Bear in mind that, unlike the red, green, and blue Honda Civic example, where there are perhaps a dozen different colours, we're talking literally hundreds of part numbers per product, plus variations of their formatting.
I welcome further conversation and ideas on this.
Thanks so far guys! -
Thanks for the question. I'm not able to go through your site at the moment, but I would ask: Do you really need a separate page for every single make, model, and part number? Correct me if I'm wrong, but this seems to be what you're doing. If so, you're just asking for a Panda penalty.
Here's a basic example: say that you sell Honda Civics. Would you want to create a separate page for "red Honda Civic," "green Honda Civic," and countless other colors? Of course not. All of the content would be entirely the same except for the listed color throughout each page's title and text.
I'd take a look at Amazon as an example. Say I go to a page for a certain T-shirt. The page for that individual product includes all of the color variations within that single product page. Each color variation is not a new page and URL (or if it is, it has a rel=canonical tag back to the main product page -- I don't remember). I'd look to this example as a way to vastly cut down the number of product pages so that each one is truly unique, valuable, and useful to both search engines and customers.
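If each variation did need its own URL, the canonical approach would look something like this (a made-up URL, purely for illustration):

```html
<!-- Hypothetical example: on the "red" variant's page, the canonical
     points back at the main product page so only that page is indexed. -->
<link rel="canonical" href="http://www.example.com/shop/honda-civic" />
```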
I hope that helps -- good luck!
-
I think you're already in Panda territory; the content can't get much thinner. It seems like all those sub-pages linked to from the page you just shared are unnecessary, no? Couldn't you just have the one page, build it out with the cars it works in, maybe add a diagram or instructions on how to install it, and make a really valuable page?
What's the thought process of creating a bunch of new pages, even though it's the same product, just referred to differently by different companies? Just for the unique URLs and titles?
Consolidating all of that would eliminate the thin content and likely strengthen your landing page exponentially. A rough sketch of what that consolidated page might look like is below.
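Purely as a hypothetical sketch, using the two part numbers you mentioned, the consolidated page could carry the entire cross-reference as crawlable on-page content:

```html
<!-- Hypothetical sketch: one consolidated ATE MK70 page, with every
     fitment and its part number in a single crawlable table. -->
<h1>ATE MK70 ABS ECU</h1>
<table>
  <thead>
    <tr><th>Vehicle</th><th>Manufacturer part number</th></tr>
  </thead>
  <tbody>
    <tr><td>Audi A3</td><td>10097003153</td></tr>
    <tr><td>Peugeot 206</td><td>9659136980</td></tr>
    <!-- ...one row per supported vehicle, all on this single URL -->
  </tbody>
</table>
```

Someone searching a part number would then land on this one strong page instead of a thin near-duplicate.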
-
Thank you for your answer, William, and for taking the time to respond.
I understand what you are saying, but I'm a little skeptical as to whether that's a logical/achievable solution.
Let's say we did write some content for each product; the content would be "thin", to say the least.
As an example, we have over 700 products (per language), this being one of them: http://www.actronics.eu/en/shop/product/ate-mk70
This product alone works in over 43 different vehicle marques, as illustrated in the list on the page.
The only thing different about them is the part number, i.e. what each manufacturer refers to this part as (the Audi A3 refers to it as 10097003153, the Peugeot 206 as 9659136980). There really is nothing more to say about the product without creating more duplicate content and getting into Panda territory, so I don't see this being a viable solution.
We have the pages in place because mechanics/garages search by the manufacturer's number, not the product type.
Any more thoughts/ideas?
-
This issue isn't duplicate content; Moz is just flagging it as that because of the severe lack of content, which makes the footer, sidebar, etc. the majority of the content on the page. This is not good, and the best way to remedy it is to build out more content.
I realize that with roughly 14k pages this isn't realistic to do for every single page, but you can prioritize. What are your most popular products? Start with those and build out content to make sure they rank and perform as well as possible, then continue down the list as you have time, manually optimizing and building out the most profitable/popular pages first.
When it comes to unique content, there is no automated solution. Either you write it, hire someone else to write it, or do what a lot of places do: implement a review system for customers and crowd-source the unique content that way.
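If you go the review route, here's a bare-bones, hypothetical sketch of how a crowd-sourced review could sit on the page as unique, crawlable text (all names and wording are invented):

```html
<!-- Hypothetical sketch: a customer review rendered as page-unique,
     crawlable content, with basic review markup around it. -->
<div itemscope itemtype="https://schema.org/Review">
  <p itemprop="reviewBody">Fitted this rebuilt unit to a customer's
    car; the original fault codes cleared first time.</p>
  <span itemprop="author" itemscope itemtype="https://schema.org/Person">
    <span itemprop="name">Example Customer</span>
  </span>
</div>
```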
Related Questions
-
Are feeds bad for duplicate content?
One of my clients has been invited to feature his blog posts here: https://app.mindsettlers.com/. Here is an example of what his author page would look like: https://app.mindsettlers.com/author/6rs0WXbbqwqsgEO0sWuIQU. I like that he would get the exposure; however, I am concerned about duplicate content with the feed. If he has a canonical tag on each blog post pointing to itself, would that be sufficient for the search engines? Is there something else that could be done? Or should he decline? Would love your thoughts! Thanks.
Intermediate & Advanced SEO | | cindyt-17038
Cindy T.0 -
Duplicate content on URL trailing slash
Hello, Some time ago we accidentally made changes to our site that modified the way URLs in links are generated, and trailing slashes were added to many URLs (only in links). Links that used to point to example.com/webpage.html were now linking to example.com/webpage.html/. URLs in the XML sitemap remained unchanged (no trailing slash). We started noticing duplicate content, because our site renders the same page with or without the trailing slash. We have corrected the problematic PHP URL function so that all links on the site now point to a URL without a trailing slash. However, Google had time to index these pages. Is implementing 301 redirects required in this case?
Intermediate & Advanced SEO | yacpro13 -
Complicated Duplicate Content Question...but it's fun, so please help.
Quick background: I have a page that is absolutely terrible, but it has links, and it's a category page, so it ranks. I have a landing page which is significantly (a bizillion times) better, but it is omitted from the search results for the most important query we need. I'm considering switching the content of the two pages, but I have no idea what that will do; I'm not sure if it will cause duplicate content issues or what will happen. Here are the two URLs: Terrible page that ranks (not well, but it's what comes up eventually): https://kemprugegreen.com/personal-injury/ Far better page that keeps getting omitted: https://kemprugegreen.com/location/tampa/tampa-personal-injury-attorney/ Any suggestions (other than just waiting on Google to stop omitting the page, because that's just not going to happen) would be greatly appreciated. Thanks, Ruben
Intermediate & Advanced SEO | | KempRugeLawGroup0 -
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works much like autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: the page where the user applies various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and ranking. These pages have additional content besides the vehicle listings themselves, those results are randomized or sliced/diced in different and unique ways, and they're updated twice per day. We do not want #2, the Vehicle Details pages, indexed, as these pages appear and disappear all the time based on dealer inventory and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results. We did not originally think Google would even be able to index these pages, as they are served up via Ajax; however, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages: super easy to implement; conserves crawl budget for large sites; ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of them.
Robots.txt disadvantages: doesn't prevent the pages from being indexed, as we've seen, probably because there are internal links to them. We could nofollow those internal links, thereby minimizing indexation, but this would put 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex advantages: does prevent the Vehicle Details pages from being indexed; allows ALL pages to be crawled (advantage?).
Noindex disadvantages: difficult to implement (the Vehicle Details pages are served via Ajax, so there is no markup in which to place a noindex meta tag; the solution would have to involve the X-Robots-Tag HTTP header and Apache, sending noindex based on querystring variables, similar to a stackoverflow solution). This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it). It also forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages; I say "forces" because of the crawl budget required, and the crawler could get stuck/lost in so many pages and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed. Finally, it cannot be used in conjunction with robots.txt; after all, the crawler never reads the noindex tag if it's blocked by robots.txt.
Hash (#) URL advantages: by using hash fragments for the links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl those links. Best of both worlds: the crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links that were getting robots.txt-disallowed pages indexed are gone. This accomplishes the same thing as nofollowing the links, but without looking like PageRank sculpting (?), and it does not require complex Apache stuff.
Hash (#) URL disadvantages: is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially we implemented robots.txt, the "sledgehammer solution." We figured we'd have a happier crawler that way, as it wouldn't have to crawl zillions of partially duplicate Vehicle Details pages, and we wanted it to be as if these pages didn't even exist. However, Google seems to be indexing many of them anyway, probably based on the internal links pointing to them. We could nofollow those links, but we don't want it to look like we're PageRank sculpting. If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain they aren't indexed; however, to do so we would have to remove the robots.txt disallow in order to let the crawler read the noindex tag. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of Vehicle Details pages, all of which are noindexed: it could easily get stuck or lost, it seems like a waste of resources, and in some shadowy way it feels bad for SEO. My developers are pushing for the third solution, the hash URLs: this works on all hosts and keeps all functionality self-contained in the plugin (unlike noindex), and it conserves crawl budget while keeping the Vehicle Details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured like this. Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. I can also provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive -
Magento Duplicate Content Recovery
Hi, we switched platforms to Magento last year, and since then our SERP rankings have declined considerably (with no sudden drop on any Panda/Penguin date lines). After investigating, it appeared we had neglected to apply noindex,follow to all our filter pages, and our total indexed pages rose sevenfold in a matter of weeks. We have since fixed the noindex issue, and the pages indexed are now below what we had before the switch to Magento. We've seen some positive results in the last week. Any ideas when/if our rankings will return? Thanks!
Intermediate & Advanced SEO | | Jonnygeeuk0 -
How can I remove duplicate content & titles from my site?
Without knowing it, I created multiple URLs to the same page destinations on my website. My ranking is poor and I need to fix this problem quickly. My web host doesn't understand the problem!!! How can I use canonical tags? Can somebody help, please?
Intermediate & Advanced SEO | | ZoeAlexander0 -
Duplicate Content/ Indexing Question
I have a real estate WordPress site that uses an IDX provider to add real estate listings to my site. A new page is created as a new property comes to market, and the page is deleted when the property is sold. I like the functionality of the service, but it creates a significant number of 404s, and I'm also concerned about duplicate content because anyone else using the same service here in Las Vegas will have thousands of the exact same property pages that I do. Any thoughts on this? And is there a way I can have the search engines index only the core 20 pages of my site and ignore future property pages? Your advice is greatly appreciated. See this link for an example: http://www.mylvcondosales.com/mandarin-las-vegas/
Intermediate & Advanced SEO | | AnthonyLasVegas0 -
Duplicate content ramifications for country TLDs
We have a .com site here in the US that is ranking well for targeted phrases. The client is expanding its sales force into India and South Africa, and they want to duplicate the site entirely, twice: once for each country. I'm not well-versed in international SEO: will this trigger a duplicate content filter? Would google.co.in and google.co.za look at google.com's index for duplication? Thanks. Long-time lurker, first-time question poster.
Intermediate & Advanced SEO | | Alter_Imaging0