PDFs - Dupe Content
-
Hi
I have some PDFs linked to from a page with little content. Hence I'm thinking it's best to extract the copy from the PDF and have it on-page as body text, with the PDF still linked to. Will this count as dupe content?
Or is it best to use a PDF plugin so the page opens the PDF automatically and gets its page content that way?
Cheers
Dan
-
Should be different, but you would have to look at them to make sure.
-
PS - is a PDF-to-HTML converter different from a plugin that loads the PDF as an open page when you click it? Or the same thing?
-
That is what I was going to suggest - setting up a canonical in the HTTP header of the PDF pointing back to the article:
https://support.google.com/webmasters/answer/139394?hl=en
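If the PDFs are served by Apache, a minimal .htaccess sketch along these lines would add that header - mod_headers has to be enabled, and the file name and article URL below are placeholders, so treat it as an untested illustration rather than copy-paste:

# Send a rel="canonical" HTTP header with the PDF response,
# pointing at the HTML page that carries the same copy
<Files "whitepaper.pdf">
  Header add Link "<https://www.example.com/article-with-the-copy/>; rel=\"canonical\""
</Files>

Nginx and other servers can send the same Link header, just with their own config syntax.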
As another option, you can just block access to the PDFs to keep them out of the index as well.
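If you go that route, one way (again assuming Apache with mod_headers, so just a sketch) is to send a noindex header with every PDF - Google does still have to crawl the file to see the header, so don't also block it in robots.txt:

# Keep all PDFs out of the index while leaving them downloadable for users
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>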
-
Thanks Chris
Yes, you can canonicalise the PDF to the HTML (according to the comments on that article I just linked to, anyway).
-
Hi Dan,
Yes, PDFs are crawlable (sorry for the confusion!). If you were to put the PDF into, say, a .zip or .rar (or similar) it wouldn't be crawled, or you could noindex the link, I guess. You would need to stick the PDF (download) behind something that can't be crawled. You could try rel=canonical, but I've never tried it with a PDF so I'm not sure how that would go.
Hope that enlightens you a bit.
-
Thanks Chris, although I thought PDFs were crawlable?: http://www.lunametrics.com/blog/2013/01/10/seo-pdfs/
Hence why I'm worried about dupe content if I use the PDF's content as body text too. Or are you saying I should nofollow the link to the PDF if I use its content as body text, because it would be considered dupe content in that scenario?
Ideally I want both - the copy from it used as body text on the page and the PDF as a linkable download, or the page as an embed of the open PDF via a plugin.
-
What would give the user the best experience is really the question. I would say put it on the page - then if the user is lacking a plugin they can still read it. If you have it as a downloadable PDF it shouldn't be able to get crawled, thus avoiding the problem.
Hope that helps.