Can Google read onClick links?
-
Can Google read and pass link juice in a link like this?
<a href="#Link123" onClick="window.open('http://www.mycompany.com/example','Link123')"><img src="../../img/example.gif"/></a>
Thanks!
-
Yes. There may be some tricky JS that can fool them, but they have become very good at interpreting JavaScript links.
I should add that every link leaks link juice, even if the linked page does not receive it, such as a nofollow link or a JS link that is broken (or appears broken) to the search engines.
-
As long as the onclick handler doesn't change where the user goes and just tracks clicks, there shouldn't be any problems. As long as the href= is in place, you'll be fine with the JavaScript in the link.
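To illustrate the answers above, a crawlable version of the link from the question keeps the real destination in the href and moves the JavaScript into a click handler. `trackClick` here is a hypothetical tracking function, not part of any particular library:

```html
<!-- The href carries the real destination, so search engines can follow it
     and pass link equity. The onclick only fires tracking and does not
     redirect the user anywhere else. -->
<a href="http://www.mycompany.com/example"
   onclick="trackClick('Link123');">
  <img src="../../img/example.gif" alt="Example" />
</a>
```

The original snippet in the question put the destination only inside `window.open()` with a `#Link123` href, which is exactly the pattern that risks not passing value.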
Related Questions
-
Google Does not find Internal links
Hi guys, I'm in a difficult situation. In Google Webmaster Tools -> internal links, some important pages don't have any links reported from other pages. For example, the home page has just 9,000 internal links, but Google has indexed 29,000 pages, and some unimportant pages have 27,000 internal links (more than the home page). The site is built with Angular v1. Can anyone help me understand why Google cannot find all the internal links?
Technical SEO | cafegardesh0 -
Any SEO-wizards out there who can tell me why Google isn't following the canonicals on some pages?
Hi, I am banging my head against the wall regarding the website of a customer. Under "duplicate title tags" in GSC I can see that Google is indexing a whole bunch of parameter versions of many of the URLs on the site. When I check the rel=canonical tag, everything seems correct. My customer is the biggest sports retailer in Norway. Their webshop has approximately 20,000 products, yet they have more than 400,000 pages indexed by Google. So why is Google indexing pages like this? What is missing in this canonical? https://www.gsport.no/herre/klaer/bukse-shorts?type-bukser-334=regnbukser&order=price&dir=desc Why isn't Google just cutting off the ?type-bukser-334=regnbukser&order=price&dir=desc part of the URL? Can it be the canonical tag itself, or could the problem be somewhere in the CMS? Looking forward to your answers. Sigurd
Technical SEO | Inevo0 -
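For reference, a canonical on a filtered URL of the kind described would normally look like the tag below, placed in the `<head>` of the parameterised page. The clean URL is taken from the question; whether the live tag actually matches it is exactly what needs checking:

```html
<!-- On https://www.gsport.no/herre/klaer/bukse-shorts?type-bukser-334=regnbukser&order=price&dir=desc
     the canonical should point at the clean category URL, with no parameters: -->
<link rel="canonical" href="https://www.gsport.no/herre/klaer/bukse-shorts" />
```

Note that Google treats rel=canonical as a hint, not a directive, so even a correct tag can be ignored if other signals (internal links, sitemaps, hreflang) point at the parameterised URLs.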
"non-WWW" vs "WWW" in Google SERPS and Lost Back Link Connection
A Screaming Frog report indicates that Google is indexing a client's site under both the www and non-www URLs. To me this means that Google is seeing both URLs as different even though the page content is identical. The client has not set up a preferred URL in GWMTs. Google says to do a 301 redirect from the non-preferred domain to the preferred version, but I believe there is a way to do this in .htaccess, an easier solution than canonical tags.
Technical SEO | RosemaryB
https://support.google.com/webmasters/answer/44231?hl=en GWMTs also shows that over the past few months this client has lost more than half of their backlinks. (But there are no penalties, and the client swears they haven't done anything to be blacklisted in this regard.) I'm curious as to whether Google figured out that the entire site was in their index under both "www" and "non-www" and therefore discounted half of the links. Has anyone seen evidence of Google discounting links (both external and internal) due to duplicate content? Thanks for your feedback. Rosemary0 -
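For the record, the .htaccess approach mentioned above is usually a site-wide 301 via Apache mod_rewrite. This is a common sketch, with example.com standing in for the client's actual domain:

```apache
# Redirect all non-www requests to the www host with a permanent (301) redirect.
# Swap the two hostnames to prefer the non-www version instead.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

A 301 like this consolidates the two hostnames into one, which addresses the split-index problem directly rather than relying on canonical hints.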
Can Anybody Understand This ?
Hey guys,
Technical SEO | atakala
These days I'm reading the paper by Sergey Brin and Larry Page, the original Google paper.
And I dont get the Ranking part which is: "Google maintains much more information about web documents than typical search engines. Every hitlist includes position, font, and capitalization information. Additionally, we factor in hits from anchor text and the PageRank of the document. Combining all of this information into a rank is difficult. We designed our ranking function so that no particular factor can have too much influence. First, consider the simplest case -- a single word query. In order to rank a document with a single word query, Google looks at that document's hit list for that word. Google considers each hit to be one of several different types (title, anchor, URL, plain text large font, plain text small font, ...), each of which has its own type-weight. The type-weights make up a vector indexed by type. Google counts the number of hits of each type in the hit list. Then every count is converted into a count-weight. Count-weights increase linearly with counts at first but quickly taper off so that more than a certain count will not help. We take the dot product of the vector of count-weights with the vector of type-weights to compute an IR score for the document. Finally, the IR score is combined with PageRank to give a final rank to the document. For a multi-word search, the situation is more complicated. Now multiple hit lists must be scanned through at once so that hits occurring close together in a document are weighted higher than hits occurring far apart. The hits from the multiple hit lists are matched up so that nearby hits are matched together. For every matched set of hits, a proximity is computed. The proximity is based on how far apart the hits are in the document (or anchor) but is classified into 10 different value "bins" ranging from a phrase match to "not even close". Counts are computed not only for every type of hit but for every type and proximity. Every type and proximity pair has a type-prox-weight. 
The counts are converted into count-weights and we take the dot product of the count-weights and the type-prox-weights to compute an IR score. All of these numbers and matrices can all be displayed with the search results using a special debug mode. These displays have been very helpful in developing the ranking system."
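The single-word scoring in the excerpt can be sketched in a few lines of JavaScript: count hits per type, taper each count into a count-weight, then take the dot product with the type-weights. The weight values and the tapering cap below are invented for illustration; the paper does not publish the real numbers:

```javascript
// Illustrative type-weights: title hits count more than small plain text.
// These values are made up for the example, not taken from the paper.
const typeWeights = { title: 10, anchor: 8, url: 6, largeFont: 4, smallFont: 1 };

// Count-weights "increase linearly with counts at first but quickly taper off
// so that more than a certain count will not help" -- here a crude linear
// ramp capped at 8 hits stands in for whatever curve Google actually used.
function countWeight(count, cap = 8) {
  return Math.min(count, cap);
}

// IR score = dot product of the count-weight vector and the type-weight vector.
function irScore(hitCounts) {
  return Object.entries(hitCounts).reduce(
    (sum, [type, count]) => sum + countWeight(count) * (typeWeights[type] || 0),
    0
  );
}

// e.g. one title hit plus three small-font body hits:
// irScore({ title: 1, smallFont: 3 }) -> 1*10 + 3*1 = 13
```

The paper then combines this IR score with PageRank for the final rank; for multi-word queries the same dot product runs over (type, proximity) pairs instead of types alone.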
Can Silos and Exact Anchor Text In Links Hurt a Site Post Penguin?
Just got a client whose site dropped from a PR of 3 to zero. This happened shortly after the Penguin release, June, 2012. Examining the site, I couldn't find any significant duplicate content, and where I did find duplicate content (9%), a closer look revealed that the duplication was totally coincidental (common expressions). Looking deeper, I found no sign of purchased links or linking patterns that would hint at link schemes, no changes to site structure, no change of hosting environment or IP address. I also looked at other factors, too many to mention here, and found no evidence of black hat tactics or techniques. The site is structured in silos, "services", "about" and "blog". All page titles that fall under services are categorized (silo) under "services", all blog entries are categorized under "blogs", and all pages with company related information are categorized under "about". When exploring the site's links in Site Explorer (SE), I noticed that SE is identifying the "silo" section of links (i.e. services, about, blog, etc.) and labeling it as an anchor text. For example, domain.com/(services)/page-title, where the page title prefix (silo), "/services/", is labeled as an anchor text. The same is true for "blog" and "about". BTW, each silo has its own navigational menu appearing specifically for the content type it represents. Overall, though there's plenty of room for improvement, the site is structured logically. My question is, if Site Explorer is picking up the silo (services) and identifying it as an anchor text, is Google doing the same? That would mean that out of the 15 types of service offerings, all 15 links would show as having the same exact anchor text (services). Can this type of site structure (silo) hurt a website post Penguin?
Technical SEO | UplinkSpyder0
DropDown Menu with 175 links in headers, Can it hurt SEO?
I'm planning to add a dropdown menu in my online store header. The dropdown menu will have about 175 options with 175 internal links to different products. Can it hurt my SEO to have more than 175 internal links in my header? This header will be on every page. Thank you, BigBlaze
Technical SEO | BigBlaze205
I cannot find a way to implement to the 2 Link method as shown in this post: http://searchengineland.com/the-definitive-guide-to-google-authorship-markup-123218
Did Google stop offering the 2-link method of verification for Authorship? See this post below: http://searchengineland.com/the-definitive-guide-to-google-authorship-markup-123218 And see this: http://www.seomoz.org/blog/using-passive-link-building-to-build-links-with-no-budget In both articles the authors talk about how to set up Authorship snippets for posts on blogs where they have no bio page and no email verification, just by linking directly from the content to their Google+ profile and then by linking from the Google+ profile page (in the "Contributor to" section) to the blog home page. But this does not work no matter how many ways I try it. Did Google stop offering this method?
Technical SEO | jeff.interactive0
Google is keeping very old title tags in the SERPs for my site. How can I fix this?
Hi. Around 6 months ago a site I work with changed its brand; one company became two. Despite the title changing when the new site went live, Google still picks up the old title for certain search results relevant to the old brand. When a search result is relevant to the new title, it shows that instead. It's very frustrating, as we are trying to re-brand and do not want the old brand name showing for some very important search results. Thanks in advance for your help. Paul
Technical SEO | pauldoffman0