Advice on Duplicate Page Content
-
We have many pages on our website, all built from the same template (we use a CMS), so at the code level they are about 90% identical. However, the page content, title, meta description, and image are different on each one.
For example -
http://www.jumpstart.com/common/find-easter-eggs
http://www.jumpstart.com/common/recognize-the-rs
We have many such pages.
Does Google look at them all as duplicate page content? If yes, how do we deal with this?
-
EGOL, Everett,
Thank you both for your very useful suggestions. It sounds like we should do something similar with our PDF documents to represent them as the actual/canonical content on the page. We'll also look at our CMS to see how we might implement the unlinked page name in the breadcrumb. We have already done some work on adding structured data with schemas (including aggregate ratings), so that is hopefully yielding some results already.
However, after an encouraging traffic spike that seemed to indicate we were on the right track, we saw a very worrisome dip last month, which led to a lot of worried hand-wringing about Panda.
So these suggestions are very helpful; thanks again, and we'll try them out!
-
Thank you, Everett,
Nice to see you posting in Q&A.
Look forward to seeing you regularly.
-
Hello Sudhir,
Those two pages would not be seen as duplicates. Google is very capable of separating the template from the content.
On a side note, you should look into getting the name of the page/game into the breadcrumb, though it doesn't have to be linked like the previous two pages in the path. For example:
You are here: Home --> Common --> Find Easter Eggs
Allowing visitors to review and rate the games would provide useful, keyword-rich, natural content on an otherwise content-sparse page. Once reviews/ratings are implemented you could also use Schema.org markup to enhance your search engine results by showing star ratings next to each game.
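Once ratings are live, the markup could look something like this JSON-LD sketch (the rating value and count here are placeholders, and the exact item type you choose may differ):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Game",
  "name": "Find Easter Eggs",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.5",
    "ratingCount": "27"
  }
}
</script>
```

Placed in the page's HTML, markup like this is what makes star ratings eligible to appear in the search results snippet.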
Good luck!
-
Google knows how to separate the template of the site from the content. So you have nothing to worry about if most of the code on your pages is the same code that is used on every other page.
I looked at your two sample pages and saw a few things that would concern me...
This page had very little content. If you have lots of pages with such a tiny amount of content you could have Panda problems.
http://www.jumpstart.com/common/find-easter-eggs
You also have pages like this....
http://www.jumpstart.com/common/recognize-the-rs-view
These have very little content.
I have a site with lots of printable content, mainly images placed in PDF documents to control the scale of the printing and the look of the printed page. The HTML pages used to present them to visitors and the PDF documents were all thin content, and my site had a Panda problem. That caused the rankings of every page on the site to fall and really damaged my traffic. I solved it by noindexing the HTML pages and applying rel=canonical to the PDF files using .htaccess.
I can't say if this will happen to you, but I would be uncomfortable if I had a site with so little content on its pages.
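For reference, a setup along those lines can be done from Apache's .htaccess. This is only a sketch, not the exact configuration described above: the filenames and URLs are hypothetical, and it assumes mod_headers is enabled on the server:

```apache
# Send a rel=canonical HTTP header for a PDF (hypothetical filename and URL)
<Files "find-easter-eggs-printable.pdf">
  Header add Link '<http://www.example.com/pdf/find-easter-eggs-printable.pdf>; rel="canonical"'
</Files>

# Keep the thin HTML presentation page out of the index while still passing links
<Files "find-easter-eggs.html">
  Header set X-Robots-Tag "noindex, follow"
</Files>
```

The X-Robots-Tag header is the HTTP equivalent of a robots meta tag, which is useful here because you cannot put a meta tag inside a PDF.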
Related Questions
-
Duplicate content, although page has "noindex"
Hello, I had an issue with some pages being listed as duplicate content in my weekly Moz report. I've since discussed it with my web dev team and we decided to stop the pages from being crawled. The web dev team added this coding to the pages <meta name='robots' content='max-image-preview:large, noindex dofollow' />, but the Moz report is still reporting the pages as duplicate content. Note from the developer: "So as far as I can see we've added robots to prevent the issue, but maybe there is some subtle change that's needed here. You could check in Google Search Console to see how it's seeing this content, or you could ask Moz why they are still reporting this and see if we've missed something?" Any help much appreciated!
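One thing worth noting as a general point (not a confirmed diagnosis of the report issue): the quoted tag mixes directives without separating commas, and "dofollow" is not a standard robots directive ("follow" is, and it is the default anyway). A conventionally formed version of that tag would be:

```html
<meta name="robots" content="max-image-preview:large, noindex, follow" />
```

Also note that Moz reports reflect the last crawl, so a correctly noindexed page may still appear as a duplicate until the page is recrawled.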
-
Duplicate content. Wordpress and Website
Hi All, Will Google punish me for having duplicate blog posts on my website's blog and on WordPress? Thanks
-
Duplicate content on Product pages for different product variations.
I have multiple colors of the same product, but as a result I'm getting duplicate content warnings. I want to keep these all different products with their own pages, so that the color can be easily identified by browsing the category page. Any suggestions?
-
Is this duplicate content when there is a link back to the original content?
Hello, My question is: Is it duplicate content when there is a link back to the original content? For example, here is the original page: http://www.saugstrup.org/en-ny-content-marketing-case-infografik/. But that same content can be found here: http://www.kommunikationsforum.dk/anders-saugstrup/blog/en-ny-content-marketing-case-til-dig, but there is a link back to the original content. Is it still duplicate content? Thanks in advance.
-
Duplicate Content Reports
Hi, Duplicate content reports for a new client are showing very high numbers (8,000+). Many of them seem to be for sign-in, register, and login type pages. Is this a scenario where the best course of action to resolve it is likely to be the parameter handling tool in GWT? Cheers, Dan
-
Duplicate Content Issue
My issue with duplicate content is this. There are two versions of my website showing up http://www.example.com/ http://example.com/ What are the best practices for fixing this? Thanks!
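A common fix for this is a site-wide 301 redirect from the bare domain to the www version. This is only a sketch, assuming an Apache server with mod_rewrite enabled and example.com standing in for the real domain:

```apache
RewriteEngine On
# Redirect example.com to www.example.com with a permanent (301) redirect
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The 301 status tells search engines the www version is the permanent address, which consolidates the two versions into one.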
-
Duplicate Content in Wordpress.com
Hi Mozers! I have a client with a blog on wordpress.com. http://newsfromtshirts.wordpress.com/ It just had a ranking drop because of a new Panda update, and I know it's a duplicate content problem. There are 3,900 duplicate pages, basically because there is no use of the noindex or canonical tag, so archive and category pages are fully indexed by Google. If I could install my usual SEO plugin, this would be a piece of cake, but since wordpress.com is a closed environment I can't. How can I put a noindex on all category, archive, and author pages in wordpress.com? I think this could be done by writing a good robots.txt, but I am not sure about the syntax I should use to achieve that. Thank you very much, DoMiSol Rossini
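As an aside on the robots.txt idea: Disallow rules only block crawling; they do not reliably remove pages that are already indexed, so robots.txt is not a true substitute for noindex. That said, a sketch of the syntax for blocking those sections (assuming wordpress.com's standard /category/, /tag/, and /author/ URL structure, and assuming the file can be edited at all on a hosted wordpress.com site, which may not be the case) would be:

```
User-agent: *
Disallow: /category/
Disallow: /tag/
Disallow: /author/
```

Each Disallow line is a URL-path prefix, so these rules cover every page whose path starts with those segments.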
-
Duplicate Page Title
First I had a problem with duplicate title errors: almost every page I had was doubled because my website linked to both www.funky-lama.com and funky-lama.com. I changed this by adding code to .htaccess to redirect everything to www.funky-lama.com, but now my website was crawled again and the errors have actually doubled. All my pages now have duplicate title errors because of pages like this:
www.funky-lama.com/160-confetti-gitaar.html
funky-lama.com/160-confetti-gitaar.html
www.funky-lama.com/1_present-time
funky-lama.com/1_present-time