Duplicate content in Shopify - subsequent pages in collections
-
Hello everyone!
I hope an expert in this community can help me verify that the canonical code I'll be adding to our store is correct.
Currently, in our Shopify store, the subsequent pages in our collections are not indexed by Google, yet the canonical URLs on these pages don't point to the main collection page (page 1): the canonical URL of page 2, page 3, etc. is each page's own URL rather than the first page of the collection.
I have the canonical code attached below. It would be much appreciated if an expert could urgently verify that this code is good to use and will solve the above issue. Thanks so much for your kind help in advance!!
-----------------CODES BELOW---------------
<title>{{ page_title }}{% if current_tags %} – tagged "{{ current_tags | join: ', ' }}"{% endif %}{% if current_page != 1 %} – Page {{ current_page }}{% endif %}{% unless page_title contains shop.name %} – {{ shop.name }}{% endunless %}</title>
{% comment %} The HTML tags inside the blocks below were stripped when this post was rendered; the link/meta tags shown are a best guess at what the empty conditionals originally contained. {% endcomment %}
{% if page_description %}<meta name="description" content="{{ page_description | escape }}">{% endif %}
{% if current_page != 1 %}
<link rel="canonical" href="{{ shop.url }}{{ collection.url }}">
{% else %}
<link rel="canonical" href="{{ canonical_url }}">
{% endif %}
{% if template == 'collection' %}{% if collection %}
{% if current_page == 1 %}<link rel="canonical" href="{{ shop.url }}{{ collection.url }}">{% endif %}
{% endif %}{% endif %}
{% if template == 'product' %}{% if product %}<meta property="og:url" content="{{ shop.url }}{{ product.url }}">{% endif %}{% endif %}
{% if template == 'collection' %}{% if collection %}<meta property="og:url" content="{{ shop.url }}{{ collection.url }}">{% endif %}{% endif %}
-
The advice is no longer current. If you want to see what Google used to say about rel=next/prev, you can read that on this archived URL: https://web.archive.org/web/20190217083902/https://support.google.com/webmasters/answer/1663744?hl=en
As you say, Google are no longer using rel=prev/next as an indexation signal. Don't take that to mean that Google are now suddenly blind to paginated content; it probably just means that their base crawler is now advanced enough not to require in-code prompting.
I still don't think that de-indexing all your paginated content with canonical tags is a good idea. What if, for some reason, the paginated version of a parent URL is more useful to end users? Should you prevent Google from ranking that content appropriately by using canonical tags? (Remember: a page whose canonical tag points to a different URL cites itself as non-canonical, making it unlikely to be indexed.)
Google may not find the parent URL as useful as the paginated variant they might otherwise rank, so using canonical tags in this way could potentially reduce your number of rankings or ranking URLs. The effect is likely to be very slight, but personally I would not recommend de-indexing paginated content via canonical tags (unless you are using some really weird architecture that you don't believe Google would recognise as pagination). The parameter-based syntax of "?p=" or "&p=" is widely adopted; Google should be smart enough to think around it.
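As a concrete illustration (site.com is a placeholder domain), here is the difference on page 2 of a paginated collection, sketched as plain head markup:
<!-- De-indexing deployment: page 2 cites page 1 as canonical, so page 2 is unlikely to be indexed -->
<link rel="canonical" href="https://site.com/collection">
<!-- Self-referencing deployment: page 2 cites itself and stays eligible to rank -->
<link rel="canonical" href="https://site.com/collection?p=2">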
If Search Console starts warning you about content duplication, maybe consider canonical deployment. Until then, it's not really worth it.
-
Hi, I came across this page because I have the same question about page 2 of collection pages. In my case, the URL for page 2 of a collection would be site.com/collection?p=2, with the canonical tag for the page also pointing to site.com/collection?p=2.
I am concerned that this will create duplicate content, because the collection description is repeated on each page of the collection.
Is your advice still current? The link in your response no longer exists, and according to webmasters.googleblog.com/2011/09/pagination-with-relnext-and-relprev.html, rel=prev/next is not an indexing signal anymore.
Thanks!
-
Your code looks as if you have more than one canonical tag deployed on a single web page, so that would be a bad deployment. One page can only have one canonical parent, and that's that.
It seems that you are attempting to use canonical tags to address pagination (paginated content, e.g. site.com/collection/page-2/ or site.com/collection?p=2) on your collection URLs, is that right?
Don't use canonical tags to address pagination. A paginated URL is canonical for the specified 'page' of content, which may (under some rare circumstances) be more useful to search users. Do not de-index your paginated content by pointing those paginated URLs' canonical tags elsewhere.
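For a Shopify store like yours, here is a minimal sketch of what a single, self-referencing canonical in the theme's head could look like, assuming Shopify's global canonical_url Liquid object behaves as your question describes (returning each paginated page's own URL as its canonical):
<!-- One canonical tag per page; paginated pages cite themselves -->
<link rel="canonical" href="{{ canonical_url }}">
One tag like this would replace the several conditional canonical blocks in your code.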
Instead of canonical tags, use Google's rel=prev/next guidance as outlined here.
If you de-index paginated URLs by using canonical tags, the rankings that some of those paginated URLs may have gained (due to their unique comments or tabbed content) will not usually be passed to the canonical parent. Although you will have more control over the user journey, you will lose out on some long-tail traffic.
Instead, use rel=prev/next, which tells Google that the content is a subsequent 'page' of another document. This makes the paginated URLs less likely to rank, but still allows them to rank for very specific search queries. Then you have the best of both worlds.
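A minimal sketch of that prev/next markup for page 2 of a collection, following Google's old guidance (the URLs are placeholders):
<!-- Page 2 points back to page 1 and forward to page 3 -->
<link rel="prev" href="https://site.com/collection">
<link rel="next" href="https://site.com/collection?p=3">
The first page would carry only a rel=next tag, and the last page only a rel=prev tag.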
Some people think that prev/next and canonical are actually compatible. I am a little uneasy about that, but if you do decide to utilise canonical tags to force one page to rank more often, don't deploy them without rel=prev/next.
Hope that helps!