Duplicate Page Content
-
I've got several pages of similar products that Google has listed as duplicate content. I have them all set up with rel="prev" and rel="next" tags telling Google that they're part of a series, but it still has them listed as duplicates. Is there something else I should do for these pages, or is that just a shortcoming of Google Webmaster Tools?
One of the pages: http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-9.html
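For context, each page carries pagination tags along these lines (a sketch; the neighboring URLs shown here are my best recollection and may not match the live pages exactly):

```html
<!-- In the <head> of walnut-countertop-9.html: point to the
     previous and next pages in the gallery series -->
<link rel="prev" href="http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-8.html">
<link rel="next" href="http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-10.html">
```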
-
Oh, sorry - didn't catch that some were duplicated. Given the scope, I think I'd put the time into creating unique titles and single-paragraph descriptions. There's a fair shot these pages could rank for longer-tail terms, and the content certainly has value to visitors.
-
You're right. A few pages already have unique titles, but several are duplicates.
-
I'm wondering if we're looking at two different things - I was looking at the pages like:
http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-8.html
http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-9.html
These already seem to have unique titles.
-
Thanks. We try to make them nice. I'm going to work on adding some content to each page, but it does get difficult when they're so similar. I may just build out a few pages and have those indexed, and noindex the others.
-
The duplicate content is showing up as duplicate title and description tags. Do you think that if I added titles like "Photographs of J. Aaron Wood Countertops and Butcher Block | Image One" to all the pages, changing just the image number on each, that would be enough to eliminate the duplicate-title issues? Would it make any difference if the content is otherwise the same?
-
It sounds like you're all pretty much saying the same thing as far as the options go. I was so happy when I learned about the rel=prev/next tags.
Do you guys think I should add noindex to all the pages now and remove it from each page as I add content, or should I just leave them as they are and start adding the content as I get time? Which is worse for overall site rankings: losing content or having duplicate content?
Dr. Meyers: the duplicates are showing up as duplicate titles and descriptions. As I asked above, would unique titles that differ only by image number (e.g. "Photographs of J. Aaron Wood Countertops and Butcher Block - Image One") be enough to eliminate the duplicate-title issues, even if the page content stays the same?
Thanks guys.
-
This isn't a typical application of rel=prev/next, and I find Google's treatment of those tags inconsistent, but the logic of what you're doing makes sense, and the tags seem to be properly implemented. Google is showing all of the pages indexed, but rel=prev/next doesn't generally de-index paginated content (the way a canonical tag can).
Where is GWT showing them as duplicates (e.g. title, META description, etc.)?
Long-term, there are two viable solutions:
(1) Only index the main gallery (NOINDEX the rest). This will focus your ranking power, but you'll lose long-tail content.
(2) Put in the time to write at least a paragraph for each gallery page. It'll take some time, but it's doable.
Given the scope (you're talking dozens of pages, not 1000s), I'd lean toward (2). These pages are somewhat unique and do potentially have value, but you need to translate more of that uniqueness into copy Google can index.
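To sketch what option (2) looks like in practice, each gallery page would get its own title and a short description; the wording below is purely illustrative, not copy I'm recommending verbatim:

```html
<!-- Unique <head> metadata for one gallery page (example wording only) -->
<head>
  <title>Walnut Wood Countertop with Undermount Sink | J. Aaron Gallery</title>
  <meta name="description" content="Custom face-grain walnut countertop with an undermount sink cutout, hand-finished by J. Aaron Wood Countertops.">
</head>
```

Pair that with a visible paragraph on the page describing the piece (wood species, grain, finish, dimensions) so the uniqueness exists in indexable copy, not just in the tags.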
-
On the duplicate pages, use a meta robots tag with noindex, follow.
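In the page's head, that tag looks like this; noindex keeps the page out of Google's index, while follow lets link equity still flow through its links:

```html
<!-- On each duplicate gallery page -->
<meta name="robots" content="noindex, follow">
```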
-
Hi
It seems that you have created these pages just for the pictures you wanted to display, and Google possibly does not understand them because there is essentially no content: in a nutshell, pages 9 and 10 have almost the same content, just with a different picture.
For your own sake, a unique title on each page will help you get better results, and since you already have a page for each picture, why not add some details to it? Google will like that more. :-)
You might look into a proper CMS in the future, as pages will change!
I like your products. :-)
Regards,
Jim Cetin