Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Duplicate without user-selected canonical excluded
-
We have PDF files uploaded to the WordPress media library and used on our website. Because these PDFs are duplicate content from the original publishers, we have marked the links to these PDF URLs as nofollow. The PDF URLs are also disallowed in robots.txt.
Now, Google Search Console shows these pages as Excluded with the status "Duplicate without user-selected canonical".
As it turns out, we cannot use a canonical tag on PDF pages to point to the original PDF source.
If we embed a PDF viewer in our website and fetch the PDFs by passing in the URLs of the original publisher, would the PDFs still be read as text by Google and create a duplicate content issue again? Another concern: when a PDF expires and is removed, it would lead to a 404 error.
If we direct our users to the third-party website instead, it would add to our bounce rate.
What would be the appropriate way to handle these duplicate PDFs?
Thanks
-
From what I have read, so much of the web is duplicate content that it really doesn't matter if the PDF is also on other sites; let Google figure it out. (For example, every dealer of a car brand has a PDF of the same car model brochure on their dealer site.) No big deal. Visitors will be landing on your site for other relevant searches - the duplicate PDF doesn't matter. Just my take. Adrian
-
Sorry, I mean the PDF files only
-
As it is the PDF pages that are marked as duplicates and not the PDF files, you should check which page has duplicate content compared to them, and take the necessary measures (canonical tags or a 301 redirect) from the page with less rank to the page with more rank. Alternatively, you can edit the content so that it is no longer duplicate.
If I had a link to the site and the duplicate pages, I would be able to give you a more detailed response.
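For illustration, here is a minimal sketch of the canonical-tag option for regular HTML pages (both URLs below are hypothetical placeholders): the tag goes in the head of the lower-ranking duplicate and points at the page you want to keep.

```html
<!-- Placed in the <head> of the duplicate (lower-ranking) page.
     Both URLs are hypothetical examples. -->
<link rel="canonical" href="https://www.example.com/preferred-page/" />
```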
Daniel Rika - Dalerio Consulting
https://dalerioconsulting.com/
info@dalerioconsulting.com
-
Hello Daniel
The PDFs are duplicates from another site.
The thing is that we have already disallowed the PDFs in the robots.txt file.
Now, what happened is this - we have a set of pages (let's call them content pages) which we had disallowed in the robots.txt file as they had thin content. Those pages have links to their respective third-party PDFs, and those links have been marked as nofollow. The PDFs are also disallowed in the robots.txt file.
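For reference, the disallow rules are along these lines (a simplified sketch; the exact paths are hypothetical and assume the PDFs sit in the standard WordPress uploads folder):

```text
# robots.txt (simplified sketch)
User-agent: *
# Block crawling of the uploaded PDF copies
Disallow: /wp-content/uploads/*.pdf$
```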
A few days back, we improved our content pages and removed them from the robots.txt file so that they can be indexed. The PDFs are still disallowed. Despite being disallowed, we have come across this "Duplicate without user-selected canonical" issue for the PDF pages.
I hope I have made myself clear. Any insights now, please?
-
If the PDFs are duplicated within your own site, then the best solution would be for you to link to the same document from the different sources. Then you can delete the duplicated documents and 301 redirect them to the original.
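As a rough sketch (assuming Apache with mod_alias available, and hypothetical file paths), such a redirect in .htaccess could look like:

```apache
# Hypothetical example: the deleted duplicate PDF now permanently
# redirects to the copy that was kept (requires mod_alias).
Redirect 301 /wp-content/uploads/2024/brochure-copy.pdf /wp-content/uploads/2024/brochure.pdf
```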
If the PDFs are duplicates from another site, then disallowing them in robots.txt will stop them from being marked as duplicates, as the crawler will not be able to access them at all. It will just take some time for them to be updated in Google Search Console.
If, however, you want to add canonical tags to PDF documents (or other non-HTML documents), you can add them to the HTTP header through the .htaccess file. You can find a tutorial on how to do that in this article.
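As a minimal sketch of that approach (assuming Apache with mod_headers enabled; the file name and publisher URL below are hypothetical):

```apache
<IfModule mod_headers.c>
  <Files "white-paper.pdf">
    # Send the canonical hint in the HTTP response header, pointing
    # the duplicate PDF at the original publisher's copy.
    Header add Link "<https://publisher.example.com/docs/white-paper.pdf>; rel=\"canonical\""
  </Files>
</IfModule>
```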
Daniel Rika - Dalerio Consulting
https://dalerioconsulting.com/
info@dalerioconsulting.com