Duplicate content in Shopify - subsequent pages in collections
-
Hello everyone!
I hope an expert in this community can help me verify that the canonical code I'll add to our store is correct.
Currently, in our Shopify store, the subsequent pages in the collections are not indexed by Google; however, the canonical URLs on these pages aren't pointing to the main collection page (page 1). In other words, page 2, page 3, etc. each use their own URL as the canonical instead of the first page of the collection.
I have attached the canonical code below; it would be much appreciated if an expert could verify that this code is good to use and will solve the above issue. Thanks so much for your kind help in advance!
-----------------CODES BELOW---------------
<title>{{ page_title }}{% if current_tags %} – tagged "{{ current_tags | join: ', ' }}"{% endif %}{% if current_page != 1 %} – Page {{ current_page }}{% endif %}{% unless page_title contains shop.name %} – {{ shop.name }}{% endunless %}</title>
{% if page_description %}{% endif %}
{% if current_page != 1 %}
{% else %}
{% endif %}
{% if template == 'collection' %}{% if collection %}
{% if current_page == 1 %}{% endif %}
{% if template == 'product' %}{% if product %}{% endif %}
{% if template == 'collection' %}{% if collection %}{% endif %}
-
The advice is no longer current. If you want to see what Google used to say about rel=next/prev, you can read that on this archived URL: https://web.archive.org/web/20190217083902/https://support.google.com/webmasters/answer/1663744?hl=en
As you say, Google are no longer using rel=prev/next as an indexation signal. Don't take that to mean that Google are now suddenly blind to paginated content; it probably just means that their base crawler is now advanced enough not to require in-code prompting.
I still don't think that de-indexing all your paginated content with canonical tags is a good idea. What if, for some reason, the paginated version of a parent URL is more useful to end users? Should you prevent Google from ranking that content appropriately by using canonical tags? (Remember: a page whose canonical tag points elsewhere declares itself non-canonical, making it unlikely to be indexed.)
Google may not find the parent URL as useful as the paginated variant it might otherwise rank, so using canonical tags in this way could reduce your number of rankings or ranking URLs. The effect is likely to be very slight, but personally I would not recommend de-indexing paginated content via canonical tags (unless you are using some really weird architecture that you don't believe Google would recognise as pagination). The parameter-based syntax of "?p=" or "&p=" is widely adopted, and Google should be smart enough to think its way around it.
If Search Console starts warning you about content duplication, maybe consider canonical deployment. Until then, it's not really worth it.
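For what it's worth, here is a minimal sketch of the self-referencing approach in a Shopify theme.liquid head. It assumes the standard canonical_url global resolves to the current paginated URL on your theme, which is worth double-checking on a page 2 before relying on it:

{% comment %} One canonical per page, left self-referencing so paginated URLs stay indexable {% endcomment %}
<link rel="canonical" href="{{ canonical_url }}" />

If you later decide you really do want page 2+ consolidated, you would swap the href for the collection's own URL on those pages - but as above, I would hold off until Search Console actually flags a problem.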
-
Hi, I came across this page because I have the same question about page 2 of collection pages. In my case, the URL for page 2 of a collection would be site.com/collection?p=2, with the canonical tag for the page also pointing to site.com/collection?p=2.
I am concerned that this will create duplicate content, because the collection description is repeated on each page of the collection.
Is your advice still current? The link in your response no longer exists, and according to webmasters.googleblog.com/2011/09/pagination-with-relnext-and-relprev.html, rel=prev/next is no longer an indexing signal.
Thanks!
-
Your code looks as if you have more than one canonical tag deployed on a single web page, which would be a bad deployment. One page can only have one canonical parent, and that's that.
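To illustrate (a made-up rendered example with a placeholder domain, not your actual output): if two of those conditionals each print their own tag, the page source ends up with a conflicting pair like this, and Google will tend to pick one itself or ignore the hint altogether:

<!-- two canonical tags rendered on the same paginated collection page -->
<link rel="canonical" href="https://example-store.com/collections/shoes" />
<link rel="canonical" href="https://example-store.com/collections/shoes?page=2" />

Whichever approach you settle on, make sure the theme renders exactly one link rel="canonical" per page.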
It seems that you are attempting to use canonical tags to address pagination (paginated content, e.g. site.com/collection/page-2/ or site.com/collection?p=2) on your collection URLs, is that right?
Don't use canonical tags to address pagination. A paginated URL is canonical for the specified 'page' of content, which may (in some rare circumstances) be more useful to search users. Do not de-index your paginated content by pointing those paginated URLs' canonicals somewhere else.
Instead, use Google's rel=prev/next guidance as outlined here.
If you de-index paginated URLs by using canonical tags, the rankings that some of those paginated URLs may have gained (thanks to their unique comments or tabbed content) will not usually be passed to the canonical parent. Although you will have more control over the user journey, you will lose out on some long-tail traffic.
Instead, use rel=prev/next, which tells Google that the content is a subsequent 'page' of another document. This makes the paginated URLs less likely to rank, but still allows them to rank for very specific search queries. Then you have the best of both worlds.
Some people think that prev/next and canonical are actually compatible. I am a little uneasy about that, but if you do decide to utilise canonical tags to force one page to rank more often, don't deploy them without rel=prev/next.
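As a rough sketch of the markup (not a drop-in: Shopify only exposes the paginate object inside a paginate block, which normally lives in the collection template rather than the head, so you may need a little extra wiring to place these tags where you want them; the page size of 24 is arbitrary):

{% paginate collection.products by 24 %}
  {% if paginate.previous %}<link rel="prev" href="{{ paginate.previous.url }}" />{% endif %}
  {% if paginate.next %}<link rel="next" href="{{ paginate.next.url }}" />{% endif %}
  {% comment %} the product grid renders here as usual {% endcomment %}
  {% comment %} if you do add a canonical that points at page 1, keep it alongside these tags, not instead of them {% endcomment %}
{% endpaginate %}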
Hope that helps!