Duplicate Page Content
-
I've got several pages of similar products that Google has listed as duplicate content. I have them all set up with rel="prev" and rel="next" tags telling Google that they are part of a group, but they're still listed as duplicates. Is there something else I should do for these pages, or is that just a shortcoming of Google's Webmaster Tools?
One of the pages: http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-9.html
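For reference, here's roughly how the tags are implemented in the <head> of each gallery page (a sketch — the neighboring URLs below are assumed from the gallery's numbering pattern):

```html
<!-- In the <head> of walnut-countertop-9.html; the prev/next URLs shown
     here are illustrative, following the gallery's numbering pattern -->
<link rel="prev" href="http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-8.html">
<link rel="next" href="http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-10.html">
```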
-
Oh, sorry - didn't catch that some were duplicated. Given the scope, I think I'd put the time into creating unique titles and single-paragraph descriptions. There's a fair shot these pages could rank for longer-tail terms, and the content certainly has value to visitors.
-
You're right. A few pages already have unique titles, but several are dupes.
-
I'm wondering if we're looking at two different things - I was looking at pages like:
http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-8.html
http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-9.html
These already seem to have unique titles.
-
Thanks. We try to make them nice. I'm going to work on adding some content to each page, but it does get difficult when they're so similar. I may just build out a few pages, keep those indexed, and noindex the others.
-
The duplicate content is showing up as duplicate titles and description tags. Do you think that if I added titles like "Photographs of J. Aaron Wood Countertops and Butcher Block | Image One" to all the pages, changing just the image number, that would be enough to eliminate the duplicate-title issues? Would that make any difference if the content is the same?
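In other words, something like this on each page (hypothetical markup — only the image number would vary from page to page):

```html
<!-- walnut-countertop-9.html (hypothetical example) -->
<title>Photographs of J. Aaron Wood Countertops and Butcher Block | Image Nine</title>
<meta name="description" content="Image nine in the J. Aaron wood countertop and butcher block photo gallery.">
```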
-
It sounds like you're all pretty much saying the same thing as far as the options go. I was so happy when I learned about the rel=prev/next tags.
Do you guys think I should add noindex to all the pages now and remove it as I add content, or should I just leave them as they are and start adding content as I get time? Which is worse for overall site rankings: losing content or having duplicate content?
Dr. Meyers: to repeat my question from above — the duplicates are showing up as duplicate titles and descriptions. Would titles like "Photographs of J. Aaron Wood Countertops and Butcher Block - Image One", with just the image number changed on each page, be enough to eliminate the duplicate-title issues? Would that make any difference if the content is the same?
Thanks guys.
-
This isn't a typical application of rel=prev/next, and I'm finding Google's treatment of those tags is inconsistent, but the logic of what you're doing makes sense, and the tags seem to be properly implemented. Google is showing all of the pages indexed, but rel=prev/next doesn't generally de-index paginated content (like a canonical tag can).
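For contrast, a canonical tag tells Google to consolidate indexing to a single URL, so the tagged pages tend to drop out of the index. It would look something like this (a sketch with an assumed gallery URL — not a recommendation here, since it would collapse your gallery pages into one):

```html
<!-- Hypothetical canonical tag on a gallery page, pointing at the main gallery
     page; shown only to illustrate the de-indexing contrast with rel=prev/next -->
<link rel="canonical" href="http://www.jaaronwoodcountertops.com/wood-countertop-gallery/">
```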
Where is GWT showing them as duplicates (e.g., title, META description, etc.)?
Long-term, there are two viable solutions:
(1) Only index the main gallery (NOINDEX the rest — see the sketch after this list). This will focus your ranking power, but you'll lose long-tail content.
(2) Put in the time to write at least a paragraph for each gallery page. It'll take some time, but it's doable.
Given the scope (you're talking dozens of pages, not 1000s), I'd lean toward (2). These pages are somewhat unique and do potentially have value, but you need to translate more of that uniqueness into copy Google can index.
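If you do go with (1) for some of the pages, it's just a standard robots META tag in the <head> of each page you want out of the index — "noindex" removes the page from the index, while "follow" still lets its links pass value (a minimal sketch):

```html
<!-- On each gallery page to be de-indexed but still crawled for links -->
<meta name="robots" content="noindex, follow">
```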
-
On the duplicate pages, use a META robots noindex,follow tag.
-
Hi
It seems that you have created pages just for the pictures you want to display, and Google possibly doesn't understand the content, because there isn't much text there. In a nutshell, pages 9 and 10 have almost the same content, just with a different picture.
For your own sake, a unique title on each page will help you get better results, and since you already have a page for each picture, why not add some details to it? Google will like that more. :-)
You might look into a proper CMS in the future, as pages will change!
I like your products! :-)
Regards,
Jim Cetin