How much content on PDF download page
-
Hello,
This is about content for an ecommerce site.
We have an article page that we also created a PDF out of.
We also have a non-commercial HTML page that serves as the download page for that PDF.
How much of the article do you recommend we put on the non-commercial HTML download page? Should we put most of the article on there?
We're trying to get people to link to the HTML download page, not the PDF.
-
I do think you should make the whole article available on the download page. If the page is designed well and the content is laid out in a digestible manner, I think it's probably more link-worthy as an HTML page than as a PDF. You could also convert it and share it on SlideShare-type sites for added exposure and co-citation.
-
Interesting idea. Don't you think that will make it less effective as link bait, though? I'm also considering putting the whole article on the download page.
-
Well, if you want links to the content on an HTML page and not the PDF, then make the content fully available in HTML. Place a small form at the bottom of the page that lets users submit their email address to receive the PDF version, and you won't have people trying to link directly to your PDF.
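A minimal sketch of that form could look like this (the /request-pdf action and the field names are placeholders for whatever script you use to handle the email and send the PDF):

    <form action="/request-pdf" method="post">
      <label for="email">Email address</label>
      <input type="email" id="email" name="email" required>
      <button type="submit">Send me the PDF</button>
    </form>

How you wire that up on the server side is up to you; the point is that the linkable, canonical version of the content stays on the HTML page.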
-
I forgot to mention:
I've disallowed the PDF in robots.txt.
I've put a rel="canonical" on the non-commercial HTML download page.
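To be concrete, that setup looks something like this (the PDF path and page URL below are just placeholders, and I'm assuming the canonical on the download page points to itself).

In robots.txt:

    # Block crawlers from fetching the PDF directly
    User-agent: *
    Disallow: /downloads/article.pdf

And in the <head> of the HTML download page:

    <link rel="canonical" href="https://www.example.com/article-download/">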
-
PDFs are indexed just like HTML pages, so you should avoid duplicating the PDF's content on the HTML page, as it will look like two duplicate pages on your site.
You can summarise the article on the HTML page, make sensible use of it, and add unique content; that would help.