Will Google Count Links Loaded from JavaScript Files After the Page Loads?
-
Hi,
I have a simple question: I want to put an image with a link to another site, like a banner ad, on my page, but I do not want the link counted by Google. Can I simply load the link and banner using jQuery on page load, from a separate .js file?
The ideal result would be for Google to index a script tag instead of a link.
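For context, here is a rough sketch of what I mean — the URLs and the #banner-slot id are just placeholders:

```javascript
// banner.js -- loaded as a separate file via <script src="/js/banner.js"></script>
// Placeholder URLs and element id, for illustration only.
$(window).on("load", function () {
  $("#banner-slot").append(
    '<a href="https://example-directory.com/"><img src="/images/banner.png" alt="Banner"></a>'
  );
});
```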
-
Good answer. I completely abandoned the banner I was thinking of using. It was from one of those directories that will list your site for free if you show their banner on your site. Their code, of course, had a link back to them with some optimized anchor text. I was looking for a way to display the banner without becoming a link farm for them.
Then I decided that I did not want that kind of thing on my site anyway — even inside a JavaScript onload event — if Google is going to crawl it regardless, so I left it off.
Then I started thinking about user-generated links: how could I let people cite a source in a way the user can click, without exposing my site to hosting spammy links? I originally used an ASP.NET LinkButton with a ConfirmButtonExtender from the AJAX Control Toolkit that would display the URL and ask the user whether they wanted to go there; they would click the confirm button and be redirected. The problem was that the external URL still ended up in the head section of the DOM.
I replaced that with a feature that uses a modal popup: a JavaScript function fires when the link button is clicked, makes an AJAX call to a web service that gets the link from the database, and then writes an iframe into a div in the modal's panel. The result should be that the user can view the source without leaving my site. However, a lot of sites appear to block framing with headers like X-Frame-Options, so I will probably switch to a solution that uses the modal without the iframe — maybe something like cURL to grab content from the target page and write it to the modal panel along with a clickable link. All of this happens only after the user clicks the link button, so none of it is in the source code when the page loads.
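Roughly, the client side of that looks like this; the /LinkService.asmx/GetLink endpoint and the element ids are placeholders for my actual setup:

```javascript
// Fires when the link button is clicked; the external URL does not
// exist anywhere in the page source until this runs.
function showSource(citationId) {
  $.ajax({
    type: "POST",
    url: "/LinkService.asmx/GetLink",   // placeholder ASMX web service
    data: JSON.stringify({ id: citationId }),
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    success: function (response) {
      // Write an iframe into the modal's panel so the user can view the
      // source without leaving the site. Sites that send X-Frame-Options
      // will refuse to render inside the frame.
      $("#modal-panel").empty().append(
        $("<iframe>", { src: response.d, width: "100%", height: 400 })
      );
      $("#modal-popup").show();
    }
  });
}
```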
-
I think what we really need to understand is: what is the purpose of hiding the link from Google? If it's to prevent the discovery of a URL, or to prevent the indexation of a certain page (or set of pages), it's easier to achieve the same thing with meta noindex directives, wildcard-based robots.txt rules, or by simply denying Googlebot's user agent access to certain pages entirely.
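For example, either of these achieves that (the path is illustrative):

```
# robots.txt -- wildcard rule keeping Googlebot out of a set of pages
User-agent: Googlebot
Disallow: /private/*

<!-- or a meta noindex directive in the head of the page itself -->
<meta name="robots" content="noindex">
```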
Is it that important to hide the link, or is it that you want to prevent access to certain URLs from within Google's SERPs? Another option, obviously, is to block users/sessions referred specifically from Google from accessing the pages. There's a lot that can be done, but a bit of context would be cool.
By the way, nofollow does not prevent Google from following links; it just stops PageRank from passing across. I know — it was named badly.
-
What about a form action? Instead of an a element with an href attribute, you add a form element whose action attribute is set to what the href would have been in a link.
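Something like this, with an example URL:

```html
<!-- instead of: <a href="https://example.com/page">Visit source</a> -->
<form action="https://example.com/page" method="get">
  <button type="submit">Visit source</button>
</form>
```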
-
Thanks for that answer. You obviously know a lot about this issue. I guess they would be able to tell if the .js file creates an a element with a specific href attribute and then adds that element to a specific div after the page loads.
It sounds like it might be easier just to nofollow those links instead of going to all the trouble of redirecting the .js file whenever Googlebot crawls the page. I fear that could be considered cloaking.
Another possibility would be a confirmation step that requires user interaction before grabbing a URL from the database. The user clicks a link without an href, the JavaScript onclick fires, the script grabs the URL from the database, the user is asked to click a button if they want to proceed, and then the user is redirected to the external URL. That should keep the external URL out of the script code.
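A minimal sketch of that flow — the /api/getlink endpoint and the element ids are hypothetical:

```javascript
// The "link" element has no href; the external URL never appears in the
// page source or the script file -- it only arrives after the click.
$("#cite-link").on("click", function () {
  $.getJSON("/api/getlink", { id: $(this).data("cite-id") }, function (data) {
    // Ask the user to confirm before redirecting to the external URL.
    if (window.confirm("Go to " + data.url + "?")) {
      window.location.href = data.url;
    }
  });
});
```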
-
Google can crawl JavaScript and its contents, but most of the time they are unlikely to do so. To do this, Google has to do more than a basic source code scrape: like everyone else seeking to scrape data from inside generated elements, Google has to check the modified source code after all of the scripts have run (the render), rather than the base, unmodified source code before any scripts fire.
Google's mission is to index the web, and there's no doubt that non-rendered crawls (which do not contain the generated HTML output of scripts) can be done in a fraction of the time it takes to get a rendered snapshot of the page code. On average, I have found rendered crawling to take 7x to 10x longer than basic source scraping.
What we have found is that Google is indeed capable of crawling generated text and links, but they won't do this all the time, or for everyone. Those resources are more precious to Google, so they crawl in that manner more sparingly.
If you deployed the link in the manner you have described, my expectation is that Google would not notice or evaluate the link for a month or two (if you're not super popular). Eventually they would detect the presence of the link, at which point it would be factored in and/or evaluated.
I suppose you could embed the script as a link to a '.js' module and then use robots.txt to ban Google from crawling that particular JavaScript file. If they chose to obey that directive, the link would pretty much remain hidden from them. But remember, it's only a directive!
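For instance (the file path is illustrative):

```
<!-- the link only exists inside the external script -->
<script src="/js/banner.js"></script>

# robots.txt -- ask Google not to fetch that one file
User-agent: Googlebot
Disallow: /js/banner.js
```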
If you wanted to be super harsh, you could block Googlebot's user agent from that JS file and do something like 301 them to the homepage when they tried to access it (instead of allowing them to open and read the JS file). That would be pretty hardcore, but it would stand a higher chance of actually working.
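On Apache, a sketch of that might look like the following, assuming mod_rewrite is available (the file path is again illustrative):

```apache
# .htaccess -- send Googlebot to the homepage instead of serving the JS file
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteRule ^js/banner\.js$ / [R=301,L]
```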
Think carefully about this kind of stuff, though. It would be pretty irregular to go to such extremes, and I'm not certain what the consequences of such actions would be.