Can PDFs be seen as duplicate content? If so, how do you prevent it?
-
I see no reason why PDFs couldn't be considered duplicate content, but I haven't seen any threads about it.
We publish loads of product documentation provided by manufacturers, as well as white papers and case studies. These give our customers and prospects a better idea of our solutions and help them along in their buying process.
However, I'm not sure whether it would be better to make them non-indexable to prevent duplicate content issues. Clearly, we would prefer a solution where we benefit from the keywords in the documents.
Does anyone have insight on how to deal with PDFs provided by third parties?
Thanks in advance.
-
It looks like Google is not crawling tabs anymore, so if your PDFs are tabbed within pages, it might not be an issue: https://www.seroundtable.com/google-hidden-tab-content-seo-19489.html
-
Sure, I understand - thanks EGOL
-
I would like to give that to you but it is on a site that I don't share in forums. Sorry.
-
Thanks EGOL
That would be ideal.
For a site that has multiple authors, where it is impractical to get a developer involved every time a web page / blog post and its PDF are created, is there a single line of code that could be used to accomplish this in .htaccess?
If so, would you be able to show me an example please?
-
I assigned rel=canonical to my PDFs using .htaccess.
Then, if anyone links to the PDFs, the link value gets passed to the web page.
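A minimal sketch of how that can be done in .htaccess, assuming Apache with mod_headers enabled (the file name and target URL below are made up; each PDF needs its own rule pointing at its matching page):

```apache
# Send a rel="canonical" Link header with this PDF, pointing at its HTML counterpart
<Files "product-whitepaper.pdf">
  Header add Link "<https://www.example.com/product-whitepaper/>; rel=\"canonical\""
</Files>
```

Google has documented support for rel=canonical delivered as an HTTP header, which is what makes this work for a file type that can't carry an HTML tag.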
-
Hi all
I've been discussing the topic of making content available as both blog posts and PDF downloads today.
Given that there is a lot of uncertainty and complexity around this issue of potential duplication, my plan is to house all the PDFs in a folder that we block with robots.txt.
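Roughly what I have in mind, assuming the PDFs all sit under a /pdfs/ directory (the folder name is just an example):

```text
# robots.txt: keep crawlers out of the PDF folder
User-agent: *
Disallow: /pdfs/
```

The obvious trade-off is that anything blocked this way can't be crawled, so it won't rank and its embedded links won't pass value.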
Anyone agree / disagree with this approach?
-
Unfortunately, there's no great way to have it both ways. If you want these pages to get indexed for the links, then they're potential duplicates. If Google filters them out, the links probably won't count. Worst case, it could cause Panda-scale problems. Honestly, I suspect the link value is minimal and outweighed by the risk, but it depends quite a bit on the scope of what you're doing and the general link profile of the site.
-
I think you can set it to public or private (logged-in only) and even put a price tag on it if you want. So yes, setting it to private would help eliminate the dup content issue, but it would also hide the links that I'm using to link-build.
I would imagine that since this guide would link back to our original site, it would be no different than if someone were to copy the content from our site and link back to us with it, thus crediting us as the original source. Especially if we make sure it gets indexed through GWMT before submitting it to other platforms. Any good resources that delve into that?
-
Potentially, but I'm honestly not sure how Scribd's pages are indexed. Don't you need to log in or something to actually see the content on Scribd?
-
What about this instance:
(A) I made an "ultimate guide to X" and posted it on my site as individual HTML pages for each chapter
(B) I made a PDF version with the exact same content that people can download directly from the site
(C) I uploaded the PDF to sites like Scribd.com to help distribute it further and to build links through the links embedded in the PDF.
Would those all be dup content? Is (C) recommended or not?
-
Thanks! I am going to look into this. I'll let you know if I learn anything.
-
If they duplicate your main content, I think the header-level canonical may be a good way to go. For the syndication scenario, it's tough, because then you're knocking those PDFs out of the rankings, potentially, in favor of someone else's content.
Honestly, I've seen very few people deal with canonicalization for PDFs, and even those cases were small or obvious (like a page with the exact same content being outranked by the duplicate PDF). It's kind of uncharted territory.
-
Thanks for all of your input, Dr. Pete. The example that you use is almost exactly what I have - hundreds of .pdfs on a fifty-page site. These .pdfs rank well in the SERPs, accumulate PageRank, and pass traffic and link value back to the main site through links embedded within the .pdfs. They also have natural links from other domains. I don't want to block them or nofollow them, but your suggestion of using the header directive sounds pretty good.
-
Oh, sorry - so these PDFs aren't duplicates with your own web/HTML content so much as duplicates with the same PDFs on other websites?
That's more like a syndication situation. It is possible that, if enough people post these PDFs, you could run into trouble, but I've never seen that. More likely, your versions just wouldn't rank. Theoretically, you could use the header-level canonical tag cross-domain, but I've honestly never seen that tested.
If you're talking about a handful of PDFs, they're a small percentage of your overall indexed content, and that content is unique, I wouldn't worry too much. If you're talking about 100s of PDFs on a 50-page website, then I'd control it. Unfortunately, at that point, you'd probably have to put the PDFs in a folder and outright block it. You'd remove the risk, but you'd stop ranking on those PDFs as well.
-
@EGOL: Can you expand a bit on your Author suggestion?
I was wondering if there is a way to do rel=author for a PDF document. I don't know how to do it, and I don't know if it is possible.
-
To make sure I understand what I'm reading:
- PDFs don't usually rank as well as regular pages (although it is possible)
- It is possible to configure a canonical tag on a PDF
My concern isn't that our PDFs may outrank the original content but rather getting slammed by Google for publishing them.
Am I right in thinking a canonical tag prevents the PDFs from accumulating link juice? If so, I would prefer not to use it, unless skipping it would lead to Google slamming us.
Has anyone experienced Google retribution for publishing PDFs that come from a third party?
@EGOL: Can you expand a bit on your Author suggestion?
Thanks all!
-
I think it's possible, but I've only seen it in cases that are a bit hard to disentangle. For example, I've seen a PDF outrank a duplicate piece of regular content when the regular content had other issues (including massive duplication with other, regular content). My gut feeling is that it's unusual.
If you're concerned about it, you can canonicalize PDFs with the header-level canonical directive. It's a bit more technically complex than the standard HTML canonical tag:
http://googlewebmastercentral.blogspot.com/2011/06/supporting-relcanonical-http-headers.html
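To illustrate what that looks like in practice (the URL below is hypothetical, not taken from the article), the PDF's HTTP response carries a Link header pointing at the canonical HTML page:

```http
HTTP/1.1 200 OK
Content-Type: application/pdf
Link: <https://www.example.com/ultimate-guide-to-x/>; rel="canonical"
```

It has to be set server-side (in .htaccess or the server config, for example), since a PDF has no head section to hold a normal canonical tag.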
I'm going to mark this as "Discussion", just in case anyone else has seen real-world examples.
-
I am really interested in hearing what others have to say about this.
I know that .pdfs can be very valuable content. They can be optimized, they rank in the SERPs, they accumulate PR, and they can pass link value. So, to me it would be a mistake to block them from the index...
However, I see your point about dupe content... they could also be thin content. Will Panda whack you for thin and dupes in your PDFs?
How can canonical be used... what about author?
Anybody know anything about this?
-
Just like any other piece of duplicate content, you can use canonical link elements to specify the original piece of content (if there's indeed more than one identical piece). You could also block these types of files in robots.txt, or use a noindex, follow directive (for PDFs, that means the X-Robots-Tag HTTP header, since there's no HTML to hold a meta tag).
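A minimal .htaccess sketch of that header approach, assuming Apache with mod_headers (treat it as an illustration rather than a drop-in config):

```apache
# Serve every PDF with a noindex, follow directive via the X-Robots-Tag header
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, follow"
</FilesMatch>
```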
Regards,
Margarita