Duplicate content issues on pages with paginated navigation
-
We have a large consumer website with several sections whose listings are paginated across many pages. How do I prevent these pages from triggering duplicate content errors, and what is the best way to handle SEO for them?
For example, we have about 500 events with 20 events showing on each page. What is the best way to prevent all the subsequent pagination pages from triggering duplicate content and duplicate title errors?
-
Hi,
To add to all the good advice given so far, you can also use the URL Parameters tool in Google Search Console (formerly Webmaster Tools) to specify that a certain URL pattern represents canonical content.
Kind Regards
Jimmy
-
Hello,
I assume you are using the GET method, where page navigation values are passed through the query string, e.g. abc.com/event?page=2, page=3, and so on. If that is the case, you should configure URL parameters in both Google's and Bing's webmaster tools. URL parameter settings act as advance notice to search engine crawlers that the content on event?page=2, page=3, etc. is duplicated. Once the parameters are configured, search engines will only index the abc.com/event page and exclude page=2, page=3, and so on from indexing.
You should also add the rel="prev" and rel="next" link tags to notify crawlers that the content is paginated.
For example, on page abc.com/event:
<link rel="next" href="https://www.abc.com/event?page=2" />
And on page abc.com/event?page=2:
<link rel="prev" href="https://www.abc.com/event" />
<link rel="next" href="https://www.abc.com/event?page=3" />
You can see an example of this pagination markup on Arvixe Review.
Thanks!
-
Hello.
I would give every page a different title and meta description so they are not flagged as duplicates, even if they only list the events.
Then, for the navigation on each section, you have three good options:
- Block every page except the first from being crawled, using robots.txt
- Create a "Show all" page, linked from your menus, and put a canonical tag on each paginated page pointing to the full listing (see the sketch just below this answer)
- Implement the link rel="next" and rel="prev" tags to help Google interpret the series as pagination
Any of those approaches would go a long way toward helping Google understand what is happening on your site.
I hope this helps.
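As a rough sketch of the "Show all" approach, assuming a hypothetical listing at abc.com/event with a full listing at abc.com/event?show=all (adjust the URLs to your own structure), each paginated page would carry:
<!-- on abc.com/event?page=2, abc.com/event?page=3, etc. -->
<link rel="canonical" href="https://www.abc.com/event?show=all" />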
Related Questions
-
Concerns of Duplicative Content on Purchased Site
Recently I purchased a site of 50+ DA (oldsite.com) that had been offline/404 for 9-12 months under the previous owner. The purchase included the domain and the content previously hosted on it. The backlink profile is 100% contextual and pristine. Upon purchasing the domain, I did the following:
- Rehosted the old site and content that had been down for 9-12 months on oldsite.com
- Allowed a week or two for indexation of oldsite.com
- Hosted the old content on my newsite.com, then performed 100+ contextual 301 redirects from oldsite.com to newsite.com using direct and wildcard htaccess rules (see the sketch below)
- Issued a press release declaring the acquisition of oldsite.com by newsite.com
- Performed a site "Change of Name" in Google from oldsite.com to newsite.com
- Performed a "Site Move" in Bing/Yahoo from oldsite.com to newsite.com
It's been close to a month, and while organic traffic is growing gradually, it's not what I would expect from a domain with 700+ referring contextual domains. My current concern is that original attribution of the content on oldsite.com shifted to scraper sites during the year or so it was offline. For example:
- Oldsite.com has full attribution prior to going offline
- Scraper sites scan the site and repost its content elsewhere (unsuccessfully at the time, because Google knows the original attribution)
- Oldsite.com goes offline
- Scraper sites continue hosting the content
- Google loses the consumer-facing cache from oldsite.com (and potentially loses original attribution of the content)
- Google reassigns original attribution to a scraper site
- Oldsite.com is hosted again, but Google no longer remembers its original attribution and thinks the content is stolen
- Google then silently punishes oldsite.com and newsite.com (which it redirects to)
QUESTIONS: Does this sequence have any merit? Does Google keep track of original attribution after the content ceases to exist in Google's search cache? Are there any tools or ways to tell if you're being punished for content posted elsewhere on the web, even if you originally had attribution? Unrelated: are there any other steps recommended for a site change as described above?
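For reference, a minimal sketch of the kind of direct and wildcard htaccess 301 rules described above (the paths are hypothetical placeholders, not the asker's actual rules):
RewriteEngine On
# Direct rule: one specific old URL whose path changed on the new domain
RewriteRule ^old-page/?$ https://www.newsite.com/new-page [R=301,L]
# Wildcard rule: everything else keeps its path on the new domain
RewriteRule ^(.*)$ https://www.newsite.com/$1 [R=301,L]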
-
Duplicate WordPress page titles
Hi Moz fans, I have a question that has me confused. My website uses WordPress for its blog, but I am now getting duplicate page titles from the blog, and they are very weird. The duplicated title looks like "Lamborghini ASK Hanuman Club", and it is duplicated across pages like articles/13-วิธีง่ายๆ-เป็น-คนรวย-ได้/lamborghini-2/ and /articles/13-วิธีง่ายๆ-เป็น-คนรวย-ได้/lamborghini/. I don't know why WordPress created a new page that contains only one picture, so I suspect it is an effect of the WordPress PHP code or a wrong setting. Can anyone suggest what we should do about these issues? I have many duplicate page titles from this case.
-
Using robots.txt to resolve duplicate content
I am having trouble with duplicate content and titles. I have tried many ways to resolve them, but because of the site's code I am still stuck, so I have decided to use robots.txt to block the duplicate content. First question: what directive do I use in robots.txt to block all URLs like these?
http://vietnamfoodtour.com/foodcourses/Cooking-School/
http://vietnamfoodtour.com/foodcourses/Cooking-Class/
User-agent: *
Disallow: /foodcourses
(Is that right?) And for the parameter URLs:
http://vietnamfoodtour.com/?mod=vietnamfood&page=2
http://vietnamfoodtour.com/?mod=vietnamfood&page=3
http://vietnamfoodtour.com/?mod=vietnamfood&page=4
User-agent: *
Disallow: /?mod=vietnamfood
(Is that right? I have a folder containing the module; could I use Disallow: /module/* instead?) Second question: which takes priority, robots.txt or the meta robots tag? If I use robots.txt to block a URL but that URL's meta robots tag says "index, follow", what happens?
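A sketch of the combined robots.txt, assuming the URL patterns above are the only ones involved (Disallow works by prefix match against the path and query string, so the parameter rule needs the leading /?):
User-agent: *
# Blocks /foodcourses/Cooking-School/, /foodcourses/Cooking-Class/, etc. (prefix match)
Disallow: /foodcourses
# Blocks /?mod=vietnamfood&page=2, page=3, etc. (prefix match including the query string)
Disallow: /?mod=vietnamfood
-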
Duplicate content errors because of passed-through variables
Hi everyone. When getting our weekly crawl of our site from SEOMoz, we are getting errors for duplicate content. We generate pages dynamically based on variables we carry through the URLs, like:
http://www.example123.com/fun/life/1084.php
http://www.example123.com/fun/life/1084.php?top=true
i.e., ?top=true is the variable being passed through. We are a large site (approx. 7,000 pages), so obviously we are getting many of these duplicate content errors in the SEOMoz report. Question: are the search engines also penalizing duplicate content based on variables being passed through? Thanks!
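One common remedy (a sketch, not necessarily the right fit for this site): have every variant declare the parameter-free URL as canonical, so the ?top=true version consolidates into the base page:
<!-- served on both /fun/life/1084.php and /fun/life/1084.php?top=true -->
<link rel="canonical" href="http://www.example123.com/fun/life/1084.php" />
-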
How to Best Establish Ownership when Content is Duplicated?
A client (Website A) has allowed one of their franchisees to use some of the content from their site on the franchisee's site (Website B). The franchisee lifted the content word for word, so my question is: how best to establish that Website A is the original author? Since there is a business relationship between the two sites, I'm thinking of requiring Website B to add a rel=canonical tag to each page using the duplicated content, referencing the original URL on site A. Will that work, or is there a better solution? This content is primarily informational product content (not blog posts or articles), so I'm thinking rel=author may not be appropriate.
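That cross-domain canonical would look like this (a sketch; websiteA.com, websiteB.com, and the path are hypothetical placeholders):
<!-- on the Website B page that reuses the content -->
<link rel="canonical" href="https://www.websiteA.com/product-info-page/" />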
-
Duplicate content even with 301 redirects
I know this isn't a developer forum, but I figure someone will know the answer to this. My site is http://www.stadriemblems.com, and I have a 301 redirect in my .htaccess file that redirects all non-www requests to www, and it works great. But SEOmoz seems to think it doesn't apply to my blog, which is located at http://www.stadriemblems.com/blog. It doesn't seem to make sense that I'd need to place code in the .htaccess file of every sub-folder. If I do, what code can I use? The weirdest part about this is that the redirect works just fine; it's just SEOmoz's crawler that doesn't seem to be with the program here. Does this happen to you?
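For reference, a typical root-level non-www to www rule looks like the sketch below (the asker's actual rule may differ). One detail worth checking: mod_rewrite rules in a parent .htaccess are not inherited once a sub-folder's own .htaccess enables RewriteEngine, which WordPress's /blog .htaccess typically does, so the blog may genuinely be bypassing the root rule.
RewriteEngine On
# Redirect the bare domain to the www version, keeping the requested path
RewriteCond %{HTTP_HOST} ^stadriemblems\.com$ [NC]
RewriteRule ^(.*)$ http://www.stadriemblems.com/$1 [R=301,L]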
-
Content that is split into 4 pages, should I consolidate?
I am working on improving a website that splits each section into four pages. For example, if Indonesia Vacation were a section, it would have its main page, www.domain.com/indonesia-vacation, plus the about, fact sheet, and tips content on three other pages: www.domain.com/indonesia-vacation-1, www.domain.com/indonesia-vacation-2, www.domain.com/indonesia-vacation-3. The pages share very similar title tags, and I am worried this is hurting the main page's placement. So, to conserve link juice, would it make sense to consolidate them into one page? There is not so much content that it would affect load time. My strategy would be to make all the content part of the main page and 301 the three URLs back to the main page: www.domain.com/indonesia-vacation. Any insight would be greatly appreciated!
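A minimal .htaccess sketch of that consolidation, using the question's placeholder URLs (mod_alias Redirect matches by prefix, so each rule also covers trailing-slash variants):
Redirect 301 /indonesia-vacation-1 http://www.domain.com/indonesia-vacation
Redirect 301 /indonesia-vacation-2 http://www.domain.com/indonesia-vacation
Redirect 301 /indonesia-vacation-3 http://www.domain.com/indonesia-vacation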
-
Mobile version creating duplicate content
Hi. We have a mobile site which is a subfolder within our main site: the desktop site is www.mysite.com and the mobile version is www.mysite.com/m/. The URLs for specific pages are identical except for the /m/ in the mobile version, and the mobile version does user-agent detection. I never saw this as duplicate content initially, as I did some research and found the following links:
http://www.youtube.com/watch?v=mY9h3G8Lv4k
http://searchengineland.com/dont-penalize-yourself-mobile-sites-are-not-duplicate-content-40380
http://www.seroundtable.com/archives/022109.html
What I am finding now is that Google Webmaster Tools shows two pages with the same page title, so I'm concerned Google sees this as duplicate content. The page titles and meta descriptions are the same simply because the content of the two versions is exactly the same; only the layout changes for handheld browsing. Are there any specific precautions I could take, or best practices to follow, to ensure Google does not see the mobile pages as duplicates of the desktop pages? Does anyone know solid best practices for running an identical mobile version of your main site?
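The standard annotation for separate mobile URLs (a sketch using the question's www.mysite.com and www.mysite.com/m/ structure, with /some-page as a placeholder path) pairs each desktop page with its mobile twin so search engines treat them as one document rather than duplicates:
<!-- on the desktop page, www.mysite.com/some-page -->
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://www.mysite.com/m/some-page" />
<!-- on the mobile page, www.mysite.com/m/some-page -->
<link rel="canonical" href="http://www.mysite.com/some-page" />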