How much JavaScript does Googlebot read?
-
We have a site with certain navigational links that exist solely for the human user. These links help the user experience and lead to pages that we don't need crawled by Googlebot. We render these links with JavaScript, so if you disable JavaScript the links are invisible. Will these links be considered cloaking, even though our intention is not to cloak but to conserve our Google crawl budget for the pages we do want indexed?
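To make the setup concrete, here is a minimal sketch of the kind of client-side rendering the question describes; the URLs and names are illustrative, not taken from the original site. Links built this way only exist after script runs, so a visitor (or crawler) that doesn't execute JavaScript never sees them.

```javascript
// Hypothetical sketch: navigation links produced only by client-side script.
// A crawler that does not execute JavaScript never encounters these anchors.
function renderUserNav(links) {
  // Returns the markup a script would inject into the page after load.
  return links
    .map(({ href, label }) => `<a href="${href}">${label}</a>`)
    .join('\n');
}

// Illustrative user-only links, not from the thread:
console.log(renderUserNav([
  { href: '/account/preferences', label: 'Your preferences' },
  { href: '/recently-viewed', label: 'Recently viewed' },
]));
```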
-
Hi CruiseControl, if you want to see roughly how Google views your website, you can download a tool called Lynx. Lynx is a text-based browser, and what it renders is a reasonable approximation of what a crawler sees when it does not execute JavaScript.
-
Thank you all for your input.
-
I wrote up a reply, then decided to investigate a point and found an informative interview with Matt Cutts from 2010. The relevant quotes are:
Matt Cutts: For a while, we were scanning within JavaScript, and we were looking for links. Google has gotten smarter about JavaScript and can execute some JavaScript. I wouldn't say that we execute all JavaScript, so there are some conditions in which we don't execute JavaScript.
Eric Enge: If someone did choose to do that (JavaScript encoded links or use an iFrame), would that be viewed as a spammy activity or just potentially a waste of their time?
Matt Cutts: I am not sure that it would be viewed as a spammy activity, but the original changes to NoFollow to make PageRank Sculpting less effective are at least partly motivated because the search quality people involved wanted to see the same or similar linkage for users as for search engines. In general, I think you want your users to be going where the search engines go, and that you want the search engines to be going where the users go.
Article link: http://www.stonetemple.com/articles/interview-matt-cutts-012510.shtml
-
There are circumstances where you are allowed to use 'cloaking', as some very influential websites have done; however, in your particular situation a nofollow attribute on the link and a noindex tag on the target page would be the 'normal' procedure.
Personally, I think it is a grey area. You are not using the JavaScript to hide content as such, and provided you are clearly not trying to manipulate the system, there should be no reason why you would be penalised for it.
-
I would say yes, they are cloaked links. I would suggest using plain HTML links for maximum link equity and to avoid angering Googlebot. Serving different content to users with and without JavaScript is a no-no. As for your crawl budget: best practice is to put a nofollow attribute on the link and a noindex meta tag on the target page if you don't want it in the SERPs.
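As a sketch of the nofollow/noindex approach the answers recommend — with illustrative URLs and labels, not anything from the original site — the link itself carries `rel="nofollow"` and the target page declares `noindex` in a robots meta tag:

```javascript
// Hypothetical sketch of the recommended alternative to JS-hidden links:
// a plain HTML link that search engines are asked not to follow.
function buildNoFollowLink(href, label) {
  return `<a href="${href}" rel="nofollow">${label}</a>`;
}

// The target page itself would carry this tag in its <head> so it is
// kept out of the search results even if a crawler reaches it.
const noindexMeta = '<meta name="robots" content="noindex">';

console.log(buildNoFollowLink('/account/preferences', 'Your preferences'));
console.log(noindexMeta);
```

This keeps the same markup in front of users and crawlers, so there is no cloaking question at all; the crawl/index hints are declared openly instead of hiding the links behind script.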