Will Google Count Links Loaded from JavaScript Files After the Page Loads?
-
Hi,
I have a simple question: if I want to put an image that links to another site, like a banner ad, on my page, but don't want the link counted by Google, can I simply load the link and banner with a jQuery onload handler from a separate .js file?
The ideal result would be for Google to index a script tag instead of a link.
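For reference, the setup in question looks roughly like this. It's only a sketch: the element ID, URLs, and the `buildBannerHtml` helper are placeholders, not anyone's actual banner code.

```javascript
// banner.js - sketch of injecting a banner link only after the DOM is ready.
// The element ID (#banner-slot) and URLs are hypothetical placeholders.
function buildBannerHtml(url, imgSrc, altText) {
  return '<a href="' + url + '" rel="nofollow">' +
    '<img src="' + imgSrc + '" alt="' + altText + '"></a>';
}

// In the browser, jQuery would inject the markup on DOM ready, so the
// link never appears in the raw, pre-render HTML source:
// $(function () {
//   $('#banner-slot').html(buildBannerHtml(
//     'https://directory.example.com', '/img/banner.png', 'Directory banner'));
// });
```

Whether that keeps the link hidden depends entirely on whether Google renders the page rather than just scraping the raw source.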
-
Good answer. I completely abandoned the banner I was thinking of using. It was from one of those directories that will list your site for free if you show their banner on your site. Their code, of course, had a link back to them with some optimized anchor text. I was looking for a way to display the banner without becoming a link farm for them.
Then I decided that I didn't want that kind of thing on my site at all, even inside a JavaScript onload event, if Google is going to crawl it anyway, so I just didn't add it.
Then I started thinking about user-generated links: how could I let people cite a source that users can click on, without exposing my site to hosting spammy links? I originally used an ASP.NET LinkButton with a ConfirmButtonExtender from the AJAX Control Toolkit that would display the URL and ask users if they wanted to go there; they would click the confirm button and be redirected. The problem was that the target URL still ended up in the head part of the DOM.
I replaced that with a modal popup that calls a JavaScript function when the link button is clicked. That function makes an AJAX call to a web service that gets the link from the database, and the JavaScript then writes an iframe into a div in the modal's panel. The intended result is that the user can see the source without leaving the site, but a lot of sites appear to block the frame using things like X-Frame-Options, so I'm probably going to switch to a different solution that uses the modal without the iframe. I'm thinking of using something like curl to grab content from the page and write it into the modal panel along with a clickable link. All of this happens only after the user clicks the link button, so none of it is in the source code when the page loads.
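A rough sketch of that click-to-fetch flow, with the browser and web service pieces reduced to injected placeholders (the endpoint, element IDs, and helper names below are hypothetical, not the actual ASP.NET implementation):

```javascript
// Sketch: nothing link-like exists in the initial HTML; the URL is fetched
// from the server only after the user clicks. Endpoint and IDs are hypothetical.
function buildIframeHtml(url) {
  return '<iframe src="' + url + '" width="100%" height="400"></iframe>';
}

function showSourceInModal(linkId, fetchUrl, writeToPanel) {
  // fetchUrl stands in for the AJAX call to the web service, e.g.
  // $.getJSON('/api/getSourceLink', { id: linkId }, callback)
  fetchUrl(linkId, function (url) {
    writeToPanel(buildIframeHtml(url));
  });
}
```

In the real page, `writeToPanel` would be something like `function (html) { $('#modalPanel div').html(html); }`. Note that sites sending `X-Frame-Options: DENY` or `SAMEORIGIN` will still refuse to render inside the iframe, which is exactly the blocking problem described above.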
-
I think what we really need to understand is: what is the purpose of hiding the link from Google? If it's to prevent the discovery of a URL, or to prevent the indexation of a certain page (or set of pages), it's easier to achieve the same thing with meta noindex directives, wildcard-based robots.txt rules, or by simply denying Googlebot's user agent access to certain pages entirely.
Is it that important to hide the link, or is it that you want to prevent access to certain URLs from within Google's SERPs? Another option is obviously to block users/sessions referred from Google (specifically) from accessing the pages. There's a lot that can be done, but a bit of context would be cool.
By the way, nofollow does not prevent Google from following links. It just stops PageRank from passing across. I know, it was named badly.
-
What about a form action? Instead of an a element with an href attribute, you add a form element whose action attribute is set to what the href would have been in a link.
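A sketch of what that substitution looks like (the URL and label are placeholders). The idea is that the served markup contains no href at all, though there's no guarantee Google won't still discover the URL in the action attribute:

```javascript
// Instead of:  <a href="https://partner.example.com">Partner site</a>
// render a form whose action holds the target URL:
// <form action="https://partner.example.com" method="get">
//   <button type="submit">Partner site</button>
// </form>
function buildFormLink(url, label) {
  return '<form action="' + url + '" method="get">' +
    '<button type="submit">' + label + '</button></form>';
}
```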
-
Thanks for that answer. You obviously know a lot about this issue. I guess they would be able to tell if the .js file creates an a element with a specific href attribute and then appends that element to a specific div after the page loads.
It sounds like it might be easier just to nofollow those links instead of going to all the trouble of redirecting the .js file whenever Googlebot crawls the page. I fear that could be considered cloaking.
Another possibility would be a confirmation step that requires user interaction before grabbing a URL from the database: the user clicks a link without an href, the JavaScript onclick fires and fetches the URL from the database, the user is asked to click a button if they want to proceed, and then the user is redirected to the external URL. That should keep the external URL out of the script code.
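That flow can be sketched like this. The endpoint and function names are hypothetical, and the browser pieces (`$.getJSON`, `window.confirm`, `window.location`) are passed in as parameters so the external URL never appears in the served HTML or the static script source:

```javascript
// Sketch of the click -> fetch -> confirm -> redirect flow.
// fetchUrl / confirmFn / redirectFn are injected stand-ins for the
// browser APIs named in the comments; the endpoint is hypothetical.
function confirmAndRedirect(linkId, fetchUrl, confirmFn, redirectFn) {
  fetchUrl(linkId, function (url) {          // e.g. $.getJSON('/api/links/' + linkId, ...)
    if (confirmFn('Go to ' + url + '?')) {   // e.g. window.confirm(...)
      redirectFn(url);                       // e.g. window.location.href = url
    }
  });
}
```

Wired up as something like `$('#cite').on('click', function () { confirmAndRedirect(...); });`, the URL only ever exists in the browser after a real click.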
-
Google can crawl JavaScript and its contents, but most of the time it is unlikely to do so. To do this, Google has to do more than a basic source-code scrape: like everyone else seeking to scrape data from inside generated elements, Google has to check the modified source code after all of the scripts have run (the render), rather than the base (unmodified) source code before any scripts fire.
Google's mission is to index the web. There's no doubt that non-rendered crawls (which do not contain the generated HTML output of scripts) can be done in a fraction of the time it takes to get a rendered snapshot of the page code. On average, I have found rendered crawling to take 7x to 10x longer than basic source scraping.
What we have found is that Google is indeed capable of crawling generated text and links, but it won't do this all the time, or for everyone. Those resources are more precious to Google, and it crawls in that manner more sparingly.
If you deployed the link in the manner you have described, my expectation is that Google would not notice or evaluate the link for a month or two (if you're not super popular). Eventually, they would detect the presence of the link, at which point it would be factored in and/or evaluated.
I suppose you could embed the script as a link to a '.js' module, and then use robots.txt to ban Google from crawling that particular JavaScript file. If they chose to obey that directive, the link would pretty much remain hidden from them. But remember, it's only a directive!
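As a sketch, assuming the link-injecting script lived at a path like /scripts/banner.js (hypothetical), the robots.txt rule would look like:

```
User-agent: Googlebot
Disallow: /scripts/banner.js
```

Again, this only asks Googlebot not to fetch the file; it doesn't enforce anything.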
If you wanted to be super harsh, you could block Googlebot (by user agent) from that JS file and, for example, 301 it to the homepage when it tries to access the file (instead of allowing it to open and read the JS). That would be pretty hardcore, but it would stand a higher chance of actually working.
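For example, on an Apache server this could be done with mod_rewrite. This is only a sketch under stated assumptions: an .htaccess file with mod_rewrite enabled, a hypothetical script path, and a simple user-agent match:

```
RewriteEngine On
# If the user agent looks like Googlebot, 301 requests for the JS file
# to the homepage instead of serving the script. Path is hypothetical.
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteRule ^scripts/banner\.js$ / [R=301,L]
```

As noted above, serving different responses by user agent edges toward cloaking territory, so tread carefully.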
Think carefully about this kind of stuff, though. It would be pretty irregular to go to such extremes, and I'm not certain what the consequences of such actions would be.