How does a search engine bot navigate past a .PDF link?
-
We have a large number of product pages that contain links to a .pdf of the technical specs for that product. These are all set up to open in a new window when the end user clicks.
If these pages are being crawled, and a bot follows the link for the .pdf, is there any way for that bot to continue to crawl the site, or does it get stuck on that dangling page because it doesn't contain any links back to the site (it's a .pdf) and the "back" button doesn't work because the page opened in a new window?
If this situation effectively stops the bot in its tracks and it can't crawl any further, what's the best way to fix this?
1. Add a rel="nofollow" attribute
2. Don't open the link in a new window so the back button remains functional
3. Both 1 and 2
or
4. Create specs on the page instead of relying on a .pdf
Here's an example page: http://www.ccisolutions.com/StoreFront/product/mackie-cfx12-mkii-compact-mixer - The technical spec .pdf is located under the "Downloads" tab [the content is all on one page in the source code - the tabs are just a design element]
Thoughts and suggestions would be greatly appreciated.
Dana
-
Thanks very much Christopher. This is an excellent explanation. What do you think of Charlie and EGOL's suggestions regarding making sure that there are links embedded in these PDFs pointing either back to the product page or even to the home page?
In your opinion, is this something worth doing? If so, why?
-
Hi Dana,
" ... you are right, one of the fundamental questions I still have is how does a bot behave when it finds an orphaned page like one of these? Does it just revert back to the sitemap and move one? Does it automatically go back to the last non-dead end page and move on from there? What does it do?"
Bots are not really like a single spider crawling around the web that can get trapped when it enters an orphaned page with no back button. When a bot enters a site, it creates a list of all the internal pages linked from the home page. Then it visits each page on that list and keeps adding newly discovered pages to the list. Each time it adds pages, it only adds new, unique pages and skips duplicates. It also keeps track of which pages it has already visited. When every page on the list has been visited once and no new pages are being discovered that aren't already on the list, the whole site has been crawled.
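To make that concrete, here is a minimal sketch of that frontier-plus-seen-set idea in Python (the requests and BeautifulSoup libraries are used purely for illustration; a real crawler like Googlebot is far more sophisticated, but the principle is the same). A page with no outbound links, such as one of your spec PDFs, is simply a leaf in the crawl, not a trap:

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl(start_url, max_pages=500):
    """Toy frontier-based crawler: a queue of discovered URLs plus a 'seen' set."""
    domain = urlparse(start_url).netloc
    seen = {start_url}              # every unique URL ever queued (no duplicates)
    frontier = deque([start_url])   # URLs waiting to be fetched

    while frontier and len(seen) < max_pages:
        url = frontier.popleft()
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue                # unreachable page: skip it, the crawl continues
        if "text/html" not in resp.headers.get("Content-Type", ""):
            continue                # e.g. a PDF with no links: a dead end on this branch only
        for a in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc == domain and link not in seen:
                seen.add(link)      # remembered, so it is only queued once
                frontier.append(link)
    return seen                     # every page discovered in the crawl
```

The "back button" never enters into it; the bot just pops the next URL off its list and keeps going.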
Best,
Christopher -
Hi Don,
Thanks so much for responding. While the answers I have received so far did give me some direction, you are right, one of the fundamental questions I still have is how does a bot behave when it finds an orphaned page like one of these? Does it just revert back to the sitemap and move on? Does it automatically go back to the last non-dead-end page and move on from there? What does it do?
Thanks for chiming in. I'd love it if someone more familiar with how a bot actually crawls links like this on a page would jump in with an answer.
Dana
-
Thanks Charlie. I think this is a good suggestion. I work 9-6 too, and just so happen to be the in-house SEO strategist, so this stuff is what I'm there to do. I don't mind the mundane aspects of SEO because the payoff is usually pretty rewarding! Now I know what I'm doing on Monday (on top of a dozen other things!)
Thanks again!
-
I would spend the time needed to do an assessment of these pages.
** how many of them have external links
** how many of them pull traffic from search or other sites
** how many of them are currently useful (are people looking at them)
I would delete (and redirect the URL of) any page that answers "no" to all three items above. These pages are "dead weight" on your site.
Also, if these are .pdfs of print ads, then they might simply be images in a PDF. (Test this by searching for an exact phrase from one of them in quotes and including site:yourdomain.com in the query.) Keep in mind that Google can read the text in some images embedded in PDFs.
I had a lot of PDFs with images on one of my sites and got hit with a Panda problem. I think that Google saw the .pdfs as thin content. So I used rel=canonical, applied via .htaccess, to assign each of them to the most relevant page. The Panda problem was solved after a couple of months.
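For reference, the rel=canonical-via-.htaccess trick works by sending an HTTP Link header with the PDF, since you obviously can't put a link tag inside the PDF itself. A rough sketch, assuming Apache with mod_headers enabled; the file name and target URL are just placeholders:

```apache
# In .htaccess: send a canonical Link header with the PDF,
# pointing it at the most relevant HTML page.
<Files "spring-2005-print-ad.pdf">
  Header add Link "<http://www.example.com/current-special-offers>; rel=\"canonical\""
</Files>
```

Each PDF needs its own block (or a scripted way of generating them), which is part of why it's tedious and why it can take Google a while to act on.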
Also, keep in mind that .pdfs can be used for conversions. You can embed "add to cart" buttons and links into them and they will function just as on a web page.
If any of these PDFs are pulling in tons of traffic, I would figure out how to put the PDF to better use, or create a webpage (and redirect the PDF to it) to best monetize, convert, or do whatever your business goals dictate.
-
Can a bot navigate via a back button?
I don't think so. They can follow links but they can't "click".
-
Hi Dana
I think your question has been dodged a tad. I was always led to understand that a .pdf, or any page that opens in a new tab and does not link back to the original site (a dangling page), is not a problem. The reason is that crawlers don't really care how a page is opened. Because the crawler forks at every link and crawls each new page from each fork, when it finds an orphan or dangling page it simply stops on that branch. This is not an issue, since the crawler has already forked at every other link.
So the question is really how a SE treats .pdfs, rather than how it treats an orphan page. Maybe somebody who works with crawlers can confirm, or educate us both on how they work.
Don
-
Many thanks to both you and EGOL for excellent answers!
-
Thanks EGOL. Yes, many of these .pdfs could be and are referenced by other sites. Given that there's no link from the .pdf back to our site, we really are missing out on a huge opportunity. I thought this might be the case as I pondered the whole concept of "dangling links" that was discussed in an SEOMoz blog post this week.
I agree about the last point regarding opening in a new window being more of a usability issue than a problem for SEO. I agree with you completely that opening in the same window is way better for the end user.
Can a bot navigate via a back button?
Thanks very much to both you and Charlie for your excellent answers!
-
lol, thank heavens they aren't spammy. However, they aren't particularly helpful either. You see, about 3,000 of them are old .pdf versions of print advertising campaigns, going back as far as 2005. They contain obsolete pricing, products, etc. Unfortunately, instead of being archived off the server, they've been continuously archived in a sub-directory of our main website.
Nearly all of it is indexed. It seems to me the best thing to do for these is to include a statement that the content is an old advertisement and include a link to our current "special offers" page.
What do you think of that as a strategy for at least giving engines and humans a means to navigate to someplace current on the site?
-
I see 6000 pdfs as an amazing opportunity. Get links on those pages and it will funnel a lot of power through your site.
If that was my site, we would be on that job immediately. Could be a huge gain for some easy work.
-
Go back and rework our .pdfs so they at least contain a link back to the homepage?
Yes! Absolutely! And, link them to other relevant pages. If these are reference documents they could be pulling in a lot of links and traffic from other sites.
As well as configure the hyperlinks so they open in the same window instead of a new one?
In my opinion, this is not an SEO issue. This is a usability issue. I would have them open in the same window so the back button is available.
-
Thank you Charlie. In our case, our .pdfs contain no links in them at all. There is nothing to navigate a bot (or a human) out of the .pdf... not even the back button.
Considering that, and EGOL's response below, would the best course of action be to include, at the very least, an active link back to our homepage from all of our .pdf files?
We have as many as 6,000 .pdfs.
Thanks,
Dana
-
Thanks EGOL,
Yes, I understand well that .pdf documents can be indexed. That's not my concern. My concern is that a bot that navigates to one of our many .pdf tech spec documents (which, incidentally, contain no outbound links to anything) would then become trapped and not be able to continue crawling the site. This is particularly true because we have them set up to open in a new window. In the example above, sure, there's a text reference back to the site "www.kingdom.com" - but it isn't a link in the .pdf. There are no links in any of our .pdfs.
So, what is the best way to deal with this? Go back and rework our .pdfs so they at least contain a link back to the homepage? As well as configure the hyperlinks so they open in the same window instead of a new one?
-
.pdf documents are crawled by bots and they accumulate pagerank just like .html pages.
You can include links in them to other documents on the web and bots will crawl those links and pagerank will flow through them.
.pdf documents can be given a "title tag" equivalent by opening their properties and giving the document a title. This title will display in the SERPs. .pdf documents can be hard to beat in the SERPs if they are optimized and have links from a competitive number of other web documents.
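If you ever need to apply both ideas in bulk (a proper title plus an embedded link back to the relevant page), it can be scripted. Here is a rough sketch using the pypdf library purely as an illustration; the file names, title text, URL, and link rectangle are placeholders you would adapt to your own files:

```python
from pypdf import PdfReader, PdfWriter
from pypdf.annotations import Link

reader = PdfReader("mixer-tech-specs.pdf")
writer = PdfWriter()
writer.append(reader)  # copy every page into the new file

# "Title tag" equivalent: this is the title that can show in the SERPs.
writer.add_metadata({"/Title": "CFX12 MKII Compact Mixer - Technical Specifications"})

# Make a rectangle on page 1 clickable, pointing back to the product page.
# The annotation adds no visible text, so place it over text already printed
# in the PDF (e.g. your domain name in the footer).
link = Link(rect=(36, 20, 300, 40), url="http://www.example.com/product-page")
writer.add_annotation(page_number=0, annotation=link)

with open("mixer-tech-specs-linked.pdf", "wb") as out:
    writer.write(out)
```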
Lots of document formats behave this way. Excel, PowerPoint, Word for example.
In my opinion, .pdf documents can trigger a Panda problem for your site if you have a lot of them with trivial or duplicate content (as in print versions of web documents). They can be given rel=canonical through .htaccess to solve the Panda problem but Google often takes a long long time (sometimes months) to recognize the canonical and use that instruction.