Dynamically-generated .PDF files, instead of normal pages, indexed by and ranking in Google
-
Hi,
I have come across a tough problem. I am working on an online-store website that includes the functionality of viewing product details in .PDF format (the site is built on the Joomla CMS). When I search my site's name in Google, the SERP simply displays my .PDF files in the first couple of positions (shown in the normal [PDF]... result format), and I cannot find the normal pages on SERP #1 unless I search for the full site domain in Google. I really don't want this! Would you please tell me how to figure out the problem and solve it? I can actually remove the corresponding component (Virtuemart) that is in charge of generating the .PDF files. For now I am trying to redirect all the .PDF pages ranking in Google to a 404 page and remove the functionality, and I plan to regenerate a sitemap of my site and submit it to Google; will that work for me? I would really appreciate it if you could help solve this problem. Thanks very much.
Sincerely
SEOmoz Pro Member
-
Recently discovered this:
Indicate the canonical version of a URL by responding with the Link rel="canonical" HTTP header. Adding rel="canonical" to the head section of a page is useful for HTML content, but it can't be used for PDFs and other file types indexed by Google Web Search. In these cases you can indicate a canonical URL by responding with the Link rel="canonical" HTTP header, like this (note that to use this option, you'll need to be able to configure your server):
Link: <http://www.example.com/downloads/white-paper.pdf>; rel="canonical"
Google currently supports these link header elements for Web Search only.
- http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139394
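If your server is Apache with mod_headers enabled (an assumption on my part, though Joomla sites usually are; check your hosting), a rough .htaccess sketch for pointing a generated PDF at its HTML product page could look like the following. The file name and target URL are placeholders, not your actual Virtuemart paths:
<Files "example-product.pdf">
Header add Link '<http://www.example.com/example-product.html>; rel="canonical"'
</Files>
That way the PDFs stay available to users, but Google is told that the HTML page is the canonical version to rank.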
-
I would consider either excluding the PDFs from the index with your robots.txt in conjunction with resubmitting your sitemap (which you're all over), or placing a text link at the bottom of each PDF pointing back to the HTML version of that page (which, all things being equal, should cause the HTML version of the page to rank instead). I am not sure about serving 404 headers to Google instead of the PDFs that are currently in the index. Why not 301 to the HTML version of each PDF? Obviously that can't be a permanent solution, as you will eventually want to restore the functionality to users, right? But it will tell Googlebot that the content of each PDF is to be found from here on out at the URL containing the HTML version. This is a case where it would be handy to serve one thing to the bots and another to the human viewers, but I am afraid that doing so could get you into trouble.
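For what it's worth, rough sketches of those two options (the URL patterns are guesses, since I don't know how Virtuemart names the generated PDFs, so adjust the paths to your site). Blocking the PDFs in robots.txt:
User-agent: *
Disallow: /*.pdf$
Or 301-redirecting each PDF to its HTML page via .htaccess (requires mod_rewrite and assumes a one-to-one filename match, which may not hold for your setup):
RewriteEngine On
RewriteRule ^(.+)\.pdf$ /$1.html [R=301,L]
You would pick one or the other, though: if robots.txt blocks the PDFs, Googlebot can't crawl them to see the 301s.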
I am interested in your case though—let us know what, if anything besides the 404s and sitemap resubmittal, you end up trying and what happens with it. I'm also curious to know what other mozzers suggest.
Related Questions
-
Why do Google's search results display my home page instead of my target page?
Technical SEO | h.hedayati671236541
-
Redesigned and Migrated Website - Lost Almost All Organic Traffic - Mobile Pages Indexing over Normal Pages
We recently redesigned and migrated our site from www.jmacsupply.com to https://www.jmac.com. It has been over 2 weeks since implementing 301 redirects, and we have lost over 90% of our organic traffic. Google seems to be indexing the mobile versions of our pages over our website pages. We hired a designer to redesign the site, and we are confident the code is doing something that is harmful for ranking our website. For example: if you Google "KEEDEX-K-DS-FLX38" you should see our mobile page ranking: http://www.jmac.com/mobile/Product.aspx?ProductCode=KEEDEX-K-DS-FLX38 but the page that we want ranked (and we think should be) is https://www.jmac.com/Keedex_K_DS_FLX38_p/keedex-k-ds-flx38.htm. That second page isn't even indexed. (When you search for: "site:jmac.com Keedex K-DS-FLX38") We have implemented rel canonical, and rel alternate both ways. What are we doing wrong??? Thank you in advance for any help - it is much appreciated.
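(For reference only, not a diagnosis of the actual jmac.com markup: the standard bidirectional annotation for separate mobile URLs is roughly the pattern below, with placeholder URLs. The desktop page declares its mobile alternate, and the mobile page points its canonical back at the desktop page.)
On the desktop page:
<link rel="alternate" media="only screen and (max-width: 640px)" href="https://www.example.com/mobile/Product.aspx?ProductCode=EXAMPLE">
On the mobile page:
<link rel="canonical" href="https://www.example.com/example-product.htm">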
Technical SEO | jmaccom
-
Why has Google stopped indexing my content?
Mystery of the day! Back on December 28th, there was a 404 on the sitemap for my website. This lasted 2 days before I noticed and fixed it. Since then, Google has not indexed my content. However, the majority of content prior to that date still shows up in the index. The website is http://www.indieshuffle.com/. Clues:
- Google reports no current issues in Webmaster Tools
- Two reconsideration requests have returned "no manual action taken"
- When new posts are detected as "submitted" in the sitemap, they take 2-3 days to "index"
- Once "indexed," they cannot be found in search results unless I include url:indieshuffle.com
- The sitelinks that used to pop up under a basic search for "Indie Shuffle" are now gone
- I am using Yoast's SEO tool for Wordpress (and have been for years)
- Before December 28th, I was doing 90k impressions / 4.5k clicks
- After December 28th, I'm now doing 8k impressions / 1.3k clicks
Ultimately, I'm at a loss for a possible explanation. Running an SEOMoz audit comes up with warnings about rel=canonical and a few broken links (which I've fixed in reaction to the report). I know these things often correct themselves, but two months have passed now, and it continues to get progressively worse. Thanks, Jason
Technical SEO | indieshuffle
-
Auto generated pages
Hi, I have two sites showing extremely high numbers of duplicate titles and descriptions (e.g., 33,000) in the SEOmoz.org crawl report. These sites have CMSs behind them, and so the duplicate titles, etc., are a result of auto-generated pages. What is the best way to address these problems? Thanks! David
Technical SEO | DWill
-
Huge number of indexed pages with no content
Hi, We have accidentally had Google index lots of our pages with no useful content at all on them. The site in question is a directory site, where we have tags and we have cities. Some cities have suppliers for almost all the tags, but there are lots of cities where we have suppliers for only a handful of tags. The problem occurred when we created a page for each city, where we list the tags as links. Unfortunately, our programmer listed all the tags, not only the ones where we have businesses offering their services, but all of them! We have 3,142 cities and 542 tags. I guess you can imagine the problem this caused! Now I know that Google might simply ignore these empty pages and not crawl them again, but when I check a city (city site:domain) with only 40 providers, I still have 1,050 pages indexed. (Yes, we have some issues between the 550 and the 1,050 as well, but first things first:)) These pages might not be crawled again, but they will be clicked, and the bounces and the whole user experience will be terrible. My idea is that I might use meta noindex for all of these empty pages and perhaps also have a 301 redirect from all the empty category pages directly to the main page of the given city. Can this work the way I imagine? Any better solution to cut this really bad nightmare short? Thank you in advance. Andras
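(For reference, the tag being described for the thin city pages is the standard robots meta tag, shown here in its generic form rather than as site-specific code:)
<meta name="robots" content="noindex, follow">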
Technical SEO | Dilbak
-
Rankings for Google Play Pages
Hey all, I'm relatively new here and certainly new to posting in the forums and interacting with the community, but I hope to be much more active in the coming months. I have what might be a silly question regarding search results for a Google Play store-specific query. The company in question has their main North American app that's been out for a month and a half and then an International version that was released just a few days ago. If you run a Google search (NOT a search within Google Play) for 'Google Play Company Name' the more recent (but less used and ultimately less important, at least for the time being) International app is higher in the SERP than the more used and reviewed North American app. I'm guessing that this is something that will correct itself over the next week as the North American app establishes itself as the more important of the two, but I figured it couldn't hurt to ask just in case there's something they can do to affect the results a little quicker. Any advice, input or just a verification of my guess would be greatly appreciated!
Technical SEO | JDMcNamara
-
Odd Google Indexing Issue
I have encountered something odd with Google indexing. According to the Google cache, my site was last updated on April 6. I had been making a series of changes on April 7th and none of them show up in the cached version of the site (naturally). Then, on the 8th, my rankings seem to have dropped about 6 places and the main SERP is showing text that isn't even on the Web site. The cached version has the correct page title from the page that was indexed on the 6th. How do I learn where Google is picking this up from? There is a clean page title tag on my Web site. I've checked the server, etc., to see what's going on. The text isn't completely unrelated, but it definitely impacted my ranking. Does Google ever have these hiccups when indexing?
Technical SEO | VERBInteractive
-
Ranking above PLACE PAGES
What does it take for results to show up above Place Page results? It seems like Google Local gets a lot of emphasis. Any thoughts?
Technical SEO | musillawfirm