Duplicate Page Content
-
I've got several pages of similar products that Google has listed as duplicate content. I have them all set up with rel="prev" and rel="next" tags telling Google that they're part of a series, but it still lists them as duplicates. Is there something else I should do for these pages, or is that just a shortcoming of Google's Webmaster Tools?
One of the pages: http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-9.html
-
Oh, sorry - didn't catch that some were duplicated. Given the scope, I think I'd put the time into creating unique titles and single-paragraph descriptions. There's a fair shot these pages could rank for longer-tail terms, and the content certainly has value to visitors.
-
You're right. A few pages already have unique titles, but several are dups.
-
I'm wondering if we're looking at two different things - I was looking at the pages like:
http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-8.html
http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-9.html
These already seem to have unique titles.
-
Thanks. We try to make them nice. I'm going to work on adding some content to each page, but it does get difficult when they're so similar. I may just do a few pages, have those indexed, and noindex the others.
-
The duplicate content is showing up as duplicate title and description tags. Do you think that if I added titles like "Photographs of J. Aaron Wood Countertops and Butcher Block | Image One" to all the pages and changed just the image number, that would be enough to eliminate the duplicate-title issues? Would it make any difference if the content is the same?
-
It sounds like you're all pretty much saying the same thing as far as the options go. I was so happy when I learned about the rel=prev/next tags.
Do you guys think I should add noindex to all the pages now and remove it as I add content, or should I just leave them as they are and start adding the content as I get time? Which is worse for overall site rankings: losing content or having duplicate content?
Dr. Meyers: the duplicates are showing up as duplicate titles and descriptions. Would unique, numbered titles like "Photographs of J. Aaron Wood Countertops and Butcher Block - Image One" be enough to eliminate the duplicate-title issues, or does that make no difference while the body content stays the same?
Thanks guys.
-
This isn't a typical application of rel=prev/next, and I'm finding Google's treatment of those tags is inconsistent, but the logic of what you're doing makes sense, and the tags seem to be properly implemented. Google is showing all of the pages indexed, but rel=prev/next doesn't generally de-index paginated content (like a canonical tag can).
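For anyone following along, a middle page in a series like this would typically carry both tags in the `<head>`. A sketch, using the gallery URLs mentioned above (the `-10` URL is assumed for illustration):

```html
<!-- In the <head> of walnut-countertop-9.html, a middle page in the series -->
<link rel="prev" href="http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-8.html">
<link rel="next" href="http://www.jaaronwoodcountertops.com/wood-countertop-gallery/walnut-countertop-10.html">
```

The first page in the series would carry only rel="next", and the last page only rel="prev".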
Where is GWT showing them as duplicates (i.e. title, META description, etc.)?
Long-term, there are two viable solutions:
(1) Only index the main gallery (NOINDEX the rest). This will focus your ranking power, but you'll lose long-tail content.
(2) Put in the time to write at least a paragraph for each gallery page. It'll take some time, but it's doable.
Given the scope (you're talking dozens of pages, not 1000s), I'd lean toward (2). These pages are somewhat unique and do potentially have value, but you need to translate more of that uniqueness into copy Google can index.
-
On the duplicate pages, use a meta robots tag with noindex, follow.
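To spell that out, the tag goes in the `<head>` of each duplicate page; noindex keeps the page out of the index, while follow lets crawlers keep passing link equity through it:

```html
<meta name="robots" content="noindex, follow">
```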
-
Hi
It seems that you have created pages just for the pictures you wanted to display, and Google possibly doesn't understand the content because there isn't much of it. In a nutshell, pages 9 and 10 have almost the same content, just with a different picture.
For your own sake, a unique title on each page will help you get better results, and since you already have a page for each picture, why not add some details to it? Google will like that more. :-)
You might look into a proper CMS system in the future, as pages will change!
I like your products! :-)
Regards,
Jim Cetin
Related Questions
-
Partially duplicated content on separate pages
TL;DR: I am writing copy for some web pages. I am duplicating some bits of copy exactly on separate web pages, and in other cases I am using the same bits of copy with slight alterations. Is this bad for SEO? Details: We sell about 10 different courses. Each has a separate page. I'm currently writing copy for those pages. Some of the details are identical for each course, so I can duplicate the content and it will be 100% applicable. For example, when we talk about where we can run courses (we go to a company and run it on their premises) – that's applicable to every course. Other bits are applicable with minor alterations. So where we talk about how we'll tailor the course, I will say for example: "We will tailor the course to the {technical documents|customer letters|reports} your company writes." Or where we have testimonials, the headline reads "Improving {customer writing|reports|technical documents} in every sector and industry". There is original content on each page. The duplicate stuff may seem spammy, but the alternative is me finding alternative re-wordings for exactly the same information. This is tedious and time-consuming, and bizarre given that the user won't notice any difference. Do I need to go ahead and re-write these bits ten slightly different ways anyway?
Technical SEO | JacobFunnell
-
How to protect against duplicate content?
I just discovered that my company's 'dev website' (which mirrors our actual website, but which is where we add content before we put new content to our actual website) is being indexed by Google. My first thought is that I should add a rel=canonical tag to the actual website, so that Google knows that this duplicate content from the dev site is to be ignored. Is that the right move? Are there other things I should do? Thanks!
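A canonical tag on each dev page pointing at its live counterpart is one option, though for a full dev mirror most people go further and keep the whole site out of the index. A sketch of both (example.com stands in for the real domains):

```html
<!-- Option A: on each dev page, point the canonical at the matching live URL -->
<link rel="canonical" href="http://www.example.com/some-page.html">

<!-- Option B (often safer for a dev mirror): keep every dev page out of the index -->
<meta name="robots" content="noindex, nofollow">
```

Password-protecting the dev site at the server level also prevents this entirely, since Googlebot can't crawl what it can't fetch.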
Technical SEO | williammarlow
-
Pages with content defined by querystring
I have a page that shows travel tips: http://www.spies.dk/spanien/alcudia/rejsemalstips-liste This page shows all travel tips for Alcudia. Each travel tip also has its own URL: http://www.spies.dk/spanien/alcudia/rejsemalstips?TravelTipsId=19767 Two weeks ago I noticed the URL http://www.spies.dk/spanien/alcudia/rejsemalstips show up in Google Webmaster Tools as a 404 page, along with hundreds of other URLs to the subpage /rejsemalstips WITHOUT a querystring. With no querystring there is no content on the page and it returns a 404. I need my technicians to redirect that page so it shows the list, but in the meantime I would like to block it in robots.txt. But how do I block a page only when it is called without a querystring?
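As an interim measure, Google (and Bing) honor the `$` end-of-URL anchor and `*` wildcard in robots.txt, even though they aren't part of the original standard. That lets you block only the bare URL while leaving the querystring versions crawlable. A sketch, assuming the same /rejsemalstips path exists under every resort:

```
User-agent: *
# Blocks e.g. /spanien/alcudia/rejsemalstips exactly,
# but NOT /spanien/alcudia/rejsemalstips?TravelTipsId=19767
Disallow: /*/rejsemalstips$
```

Worth testing in GWT's robots.txt tester before deploying, since `$` behavior varies across minor crawlers.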
Technical SEO | alsvik
-
"nofollow pages" or "duplicate content"?
We have a huge site with lots of geographical pages in this structure: domain.com/country/resort/hotel domain.com/country/resort/hotel/facts domain.com/country/resort/hotel/images domain.com/country/resort/hotel/excursions domain.com/country/resort/hotel/maps domain.com/country/resort/hotel/car-rental The problem is that the text on e.g. /excursions is often exactly the same on .../alcudia/hotel-sea-club/excursions and .../alcudia/hotel-beach-club/excursions: the two hotels offer the same excursions, and the intro text on the pages is exactly the same throughout the entire site. This is also a problem on the /images and /car-rental pages. I think in most cases the only difference on these pages is the title, description and H1. These pages do not attract a lot of visits through search engines. But to avoid them being flagged as duplicate content (we have more than 4,000 of these pages: /excursions, /maps, /car-rental, /images), do I add a nofollow tag to them, do I block them in robots.txt, or should I just leave them and live with them being flagged as duplicate content? I'm waiting for our web team to add a function to insert a geographical name in the text, so I could add e.g. #HOTELNAME# in the text and thereby avoid the duplicate text. Right now we have intros like "When you visit the hotel ..." instead of "When you visit Alcudia Sea Club". But until the web team has added these GEO tags, what should I do? What would you do, and why?
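If you do decide to keep the boilerplate subpages out of search while waiting for the #HOTELNAME# templating, a wildcard robots.txt block is one low-effort interim option; a sketch assuming the URL structure above (note that a disallow only stops crawling and won't remove URLs already in the index, so a meta noindex,follow on the pages themselves is usually the stronger choice):

```
User-agent: *
# Matches e.g. /spain/alcudia/hotel-sea-club/excursions
Disallow: /*/excursions
Disallow: /*/images
Disallow: /*/maps
Disallow: /*/car-rental
```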
Technical SEO | alsvik
-
Search/Search Results Page & Duplicate Content
If you have a page whose only purpose is to allow searches, and the search results can be generated by any keyword entered, should all those search-result URLs be noindexed or given a rel=canonical? Thanks.
Technical SEO | cakelady
-
Tags and Duplicate Content
Just wondering: for a lot of our sites we use tags as a way of regrouping articles / news / blogs, so all of the info on, say, 'government grants' can be found on one page. These /tag pages often come up with duplicate content errors. Is it a big issue, and how can we minimise it?
Technical SEO | salemtas
-
Duplicate Content issue
I have been asked to review an old website to identify opportunities for increasing search engine traffic. While reviewing the site I came across a strange loop. On each page there is a link to a printer-friendly version: http://www.websitename.co.uk/index.php?pageid=7&printfriendly=yes That page also has a link to a printer-friendly version: http://www.websitename.co.uk/index.php?pageid=7&printfriendly=yes&printfriendly=yes and so on, and so on... Some of these pages are being included in Google's index. I appreciate that this can't be a good thing; however, I am not 100% sure how bad it is or what priority should be given to getting it sorted. Just wondering what views people have on the issues this may cause?
Technical SEO | CPLDistribution
-
Solution for duplicate content not working
I'm getting a duplicate content error for: http://www.website.com http://www.website.com/default.htm I searched the Q&A for the solution and found: access the .htaccess file and add this line: redirect 301 /default.htm http://www.website.com I added the redirect to my .htaccess and then got the following error when trying to access the http://www.website.com/default.htm page: "This webpage has a redirect loop. The webpage at http://www.webpage.com/ has resulted in too many redirects. Clearing your cookies for this site or allowing third-party cookies may fix the problem. If not, it is possibly a server configuration issue and not a problem with your computer." "Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many redirects." How can I correct this? Thanks
Technical SEO | Joeuspe
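The loop typically happens because default.htm is also the server's directory index: the redirect sends the browser to /, the server internally maps / back to default.htm, and the Redirect directive fires again. A common mod_rewrite pattern (a sketch, assuming Apache and that www.website.com stands in for the real domain) only redirects when the client's request line explicitly names /default.htm, not when the server resolves it internally:

```apache
RewriteEngine On
# THE_REQUEST holds the raw request line (e.g. "GET /default.htm HTTP/1.1"),
# so this condition is false when / is internally mapped to default.htm.
RewriteCond %{THE_REQUEST} ^[A-Z]{3,}\s/+default\.htm [NC]
RewriteRule ^default\.htm$ http://www.website.com/ [R=301,L]
```

With this in place, /default.htm 301s to the root once, and the root serves normally with no further redirect.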