Duplicate content
-
The report shows duplicate content for a category page that spans more than one page.
How can we avoid this, given that I cannot write different meta content for the second page of the category:
http://www.geographics.com/2-Cool-Colors-Poster-Board-14x22/c183_66_327_387/index.html
http://www.geographics.com/2-Cool-Colors-Poster-Board-14x22/c183_66_327_387/index.html?page=2
Thanks,
Madlena
-
I'm not seeing that Google is currently indexing either of these pages, so they may be too deep or duplicated in other ways. Pagination is a tough issue, but in general pages 2+ have little or no search value (and, post-Panda, can actually harm you).
I would strongly recommend NOT using a canonical tag to page 1 - Google generally advises against this. You can use rel=prev/next, although it's a bit tough to implement and isn't honored by Bing. Generally, I'd advise one of two things:
(1) META NOINDEX, FOLLOW pages 2, 3, etc. - they really have no SEO value.
(2) If you have a View All page, link to it and rel-canonical to view all. This seems to be accepted by Google, but then the larger page will rank.
Generally, I find (1) easier and pretty effective.
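For option (1), the tag goes in the <head> of every paginated URL from page 2 onward. A minimal sketch, using the page=2 URL from the question:

```html
<!-- In the <head> of .../c183_66_327_387/index.html?page=2 (and page=3, etc.) -->
<!-- noindex keeps the page out of the index; follow still lets crawlers pass link equity -->
<meta name="robots" content="noindex, follow">
```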
Sorry, just saw Nakul's comment; I didn't realize you already have canonical tags in place. While it's not the preferred solution, since it's already there and seems to be keeping these pages out of the index, I'd probably leave it alone. It doesn't look like Google is indexing these pages at all right now, though, which you may need to explore in more depth.
-
I see that you already have canonical tags in place. Whether I'm on either of the 2 URLs you posted, on http://www.geographics.com/?cPath=183_66_327_387&custom_perpage=48, or on http://www.geographics.com/?cPath=183_66_327_387&custom_perpage=24, they all lead me to the same page. That will help you avoid any possible duplicate content penalty, because you are passing a directive to Google telling them what the correct URL is, so they only rank the canonical URL in the SERPs. IMO, you are good. You can, however, take it to the next level if needed by implementing rel=next/prev and testing to see if that helps.
-
I am not entirely sure whether this will prevent the duplicate content issue, but you could try setting up rel=next/prev on the pages to make it explicit that they are paginated content, and then change the rel=canonical on the individual pages to point to themselves instead of the index page.
If it's the rel=canonical causing confusion, that should help.
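As a sketch of that setup, here is what the <head> of page 2 might contain, using the URLs from the question; the page=3 URL is an assumption (it only applies if a third page exists):

```html
<!-- Paginated-series hints in the <head> of .../c183_66_327_387/index.html?page=2 -->
<link rel="prev" href="http://www.geographics.com/2-Cool-Colors-Poster-Board-14x22/c183_66_327_387/index.html">
<link rel="next" href="http://www.geographics.com/2-Cool-Colors-Poster-Board-14x22/c183_66_327_387/index.html?page=3">
<!-- Self-referencing canonical instead of pointing at page 1 -->
<link rel="canonical" href="http://www.geographics.com/2-Cool-Colors-Poster-Board-14x22/c183_66_327_387/index.html?page=2">
```

Note that rel=prev/next was never honored by Bing, so test before relying on it.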
-
You will need to use the rel=canonical link tag in your index.html file.
In the <head> section of your index.html file, include the following:
It will solve your problem. Good luck!
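The snippet itself appears to have been stripped from this post; for the category URL from the question, a canonical tag in the <head> would presumably look something like:

```html
<!-- In the <head> of each variant URL for this category -->
<link rel="canonical" href="http://www.geographics.com/2-Cool-Colors-Poster-Board-14x22/c183_66_327_387/index.html">
```

(Keep in mind the caveat earlier in this thread about canonicalizing paginated pages to page 1.)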
Related Questions
-
Photo Gallery with Duplicate Content and Titles
I have a photo gallery that is coming up as a lot of duplicate titles and duplicate page content, and fixing each photo just isn't possible right now. Should I just block the search engines from indexing them to resolve the errors?
On-Page Optimization | NeilBelliveau
-
Duplicate Content from WordPress Category Base?
I recently changed my category base in WordPress and instead of redirecting or deleting the old base, WordPress kept the content up. So I now have duplicate content on two different urls - one on the old category base, one on the new category base. How should I handle this situation? The site is only a couple weeks old, if that makes any difference.
On-Page Optimization | JABacchetta
-
Does schema.org assist with duplicate content concerns
The issue of duplicate content has been well documented, and there are lots of articles suggesting to noindex archive pages in WordPress-powered sites. Schema.org allows us to mark up our content, including marking a component's URL. So my question, simply, is: is no-indexing archive (category/tag) pages still relevant when considering duplicate content? These pages are in essence a list of articles, which can be marked as an article or blog posting, with the URL of the main article and all the other cool stuff the schema gives us. Surely Google et al are smart enough to recognise these article listings as gateways to the main content, thereby removing duplicate content concerns. Of course, whether or not doing this is a good idea will be subjective and based on individual circumstances - I'm just interested in whether or not the search engines can handle this appropriately.
On-Page Optimization | MarkCA
-
Locating Duplicate Pages
Hi, Our website consists of approximately 15,000 pages; however, according to our Google Webmaster Tools account, Google has around 26,000 pages for us in their index. I have run through half a dozen sitemap generators and they all only discover the 15,000 pages that we know about. I have also thoroughly gone through the site to attempt to find any sections where we might be inadvertently generating duplicate pages, without success. It has been over six months since we did any structural changes (at which point we did 301s to the new locations), so I'd like to think that the majority of these old pages have been removed from the Google index. Additionally, the number of pages in the index doesn't appear to be going down by any discernible amount week on week. I'm fairly certain it's nothing to worry about; however, for my own peace of mind I'd like to confirm that the additional 11,000 pages are just old results that will eventually disappear from the index and that we're not generating any duplicate content. Unfortunately there doesn't appear to be a way to download a list of the 26,000 pages that Google has indexed so that I can compare it against our sitemap. Obviously I know about site:domain.com; however, this only returns the first 1,000 results, which all check out fine. I was wondering if anybody knew of any methods or tools that we could use to identify these 11,000 extra pages in the Google index, so we can confirm that they're just old pages which haven't fallen out of the index yet and that they're not going to be causing us a problem? Thanks guys!
On-Page Optimization | ChrisHolgate
-
How to get rid of those duplicate pages
Hi everyone, Just got my first diagnostics report, and I have 220 duplicate page titles and 217 duplicate content pages. I know why this has happened: I did play about a bit and originally had:- www.mydomainname.com/index.php/alpha Then I changed the page path to:- www.mydomainname.com/alpha Then I changed the page path to:- www.mydomainname.com/category/alpha So now when I get crawled I have 3x duplicate page titles, descriptions and page content, even though I have put 301 redirects to my preferred domain path. Which is hurting my SEO, right? How do I stop the old URLs from giving me these bad reports? The site is on Joomla. Thanks guys, Oujipickle
On-Page Optimization | oujipickle
-
Is rel=canonical used only for duplicate content
Can the rel-canonical be used to tell the search engines which page is "preferred" when there are similar pages? For instance, I have an internal page that Google is showing on the first page of the SERPs that I would prefer the home page be ranked for. Both the home and internal page have been optimized for the same keyword. What is interesting is that the internal page has very few backlinks compared to the home page but Google seems to favor it since the keyword is in the URL. I am afraid a 301 will drop us from the first page of the SERPs.
On-Page Optimization | surveygizmo
-
Strategies for revising my duplicate content?
New to SEO and SEOmoz. I tried searching for this first and I'm sure it's on here, but I could not find it. I have a site that markets fishing charters in a few dozen cities. Up to now I was relying on PPC and using each city page as a landing page of sorts. Each city page is very similar (there are only so many ways to write about a type of fish or fishing). What would be the recommended way to optimize this, keeping in mind that the duplicate information we provide on each page seems to be important to people? Site is www.vipfishingcharters.com Thanks!
On-Page Optimization | NoahC
-
Duplicate Product BUT Unique Content -- any issues?
We have the situation where a group of products fit into 2 different categories and also serve different purposes (to the customer). Essentially, we want to have the same product duplicated on the site, but with unique content and it would even have a slightly different product name. Some specifications would be redundant, but the core content would be different. Any issues?
On-Page Optimization | SEOPA