In order for Google to recognize a hyperlink on your website, does it have to be written in a specific JavaScript?
-
Does it have to read as the following script?
-
Not a problem. I find all too often that if a question is a bit ambiguous, people will ignore it. If there are only a handful of interpretations, I will still try to answer.
-
Thank you, that was extremely insightful and helpful.
-
Just so you are aware, the code sample you supplied is HTML, not JavaScript (or, for that matter, any type of script; scripting languages include JavaScript, Python, Ruby, Perl, etc.).
You may be asking one of two things (I think!):
1) Is there a set HTML format for hyperlinks which Google knows how to read?
Yes, and you can find **information all about** conventional use of the `<a>` (HTML) tag here:
HTML is a static language and is not (unlike many scripting languages) 'object oriented'. You don't define `<a>` yourself, and as such `<a>` is not interpreted based upon your programmed parameters; `<a>` always means the same thing (to a web browser). Sure, stuff like CSS can style links in different ways, and JavaScript can modify `<a>` tags by injecting event-tracking attributes etc. (a common use of jQuery), but fundamentally the usage of `<a>` is (mostly) universally agreed. So yes - links are coded according to conventions, and Google will interpret those widely accepted conventional use-cases, as well as a few more experimental deployments (possibly through error handling in Google's algorithms). In general, you should follow W3C / W3Schools guidelines. There are many forms of link (no-followed links, text links, image links) and all are valid, but yes - they are predetermined.
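As a rough sketch of those conventional forms (the URLs, anchor text, and image path here are made up purely for illustration):

```html
<!-- Text link: the most common, fully crawlable form -->
<a href="https://example.com/pricing">Descriptive anchor text</a>

<!-- No-followed link: the rel attribute tells Google not to pass ranking credit -->
<a href="https://example.com/sponsor" rel="nofollow">Sponsored link</a>

<!-- Image link: the image's alt text serves a role similar to anchor text -->
<a href="https://example.com/">
  <img src="logo.png" alt="Example Company logo">
</a>
```

All three follow the same `<a href="...">` convention, which is exactly why crawlers can interpret them without any site-specific knowledge.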
2) This is the HTML which my JavaScript will output - is it OK?

Yeah, it's fine. If you can handle JS, you can handle HTML (it's way simpler). One thing though: although Google can deploy rendered (JS-enabled) crawling, that involves using headless browsers and such to render the 'modified' source code (so what you see in 'inspect element' is the modified source; what you see in 'view page source' is different - that's the pre-modified, or base, source code).

Generally speaking, this takes around 10x longer than simple DOM / base-source scrapes. As such, if Google were to deploy that tech on every crawl for every page on the web, the efficiency hit to their 'index the web' mission would be colossal. Many studies show that Google will not render JS on all sites (especially ones perceived to be low value), and even on sites where they will use this tech, they won't deploy it all of the time. There really is no substitute for making your links and content readable in the base (un-modified) source code. It's way better for crawlers, and way more efficient for them to work with. Just because Google 'can' do something, it doesn't mean they always will - and it doesn't mean it's a good idea to ignore basic SEO principles!

Hope that helps
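To make the base-source vs rendered-source distinction concrete, here is a minimal sketch (the paths and element names are invented for the example):

```html
<!-- In the base source ("view page source"): visible to every crawl -->
<a href="/pricing">Pricing</a>

<!-- This link only exists after JavaScript runs (what you see in
     "inspect element"), so a crawler must render the page to find it -->
<div id="nav"></div>
<script>
  var link = document.createElement('a');
  link.href = '/contact';
  link.textContent = 'Contact us';
  document.getElementById('nav').appendChild(link);
</script>
```

A crawler that only reads the base source will discover the first link but never the second - which is why links you care about should be in the un-modified HTML.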