Would using JavaScript onclick functions to override the href target be OK?
-
Hi all,
I am currently working on a new search facility for my ecommerce site... it has very quickly dawned on me that this new facility is far better than my standard product pages - from a user point of view - i.e. lots of product attributes for customers to find what they need faster, the ability to compare products, etc. All in all, just better. BUT NO SEO VALUE!!!
I want to use this search facility instead of my category/product pages... however, as they are search pages, I have them set to "robots noindex" and don't think it's wise to change that...
I have spoken to the developers of this software and they suggested I could use some JavaScript in the navigation to change the onclick function to take the user to the search equivalent of the page...
They said this way my normal pages are the ones that are still indexed by Google etc., but the user gets the benefit of using the improved search pages...
This sounds perfect; however, it also sounds a little deceptive... and I know Google has loads of rules about these kinds of things. The last thing I want is to get any kind of penalty or any negative reaction from an SEO point of view... I am only considering this because it will improve the user experience on my website...
Can anyone advise if this is OK, or a "no-no"?
P.S. For those wondering, I use an "off the shelf" cart system and it would cost me an arm and a leg to have these features built into my actual category/product pages.
-
Hello James,
Why do these pages have "no SEO value"? Is it because they are AJAX pages or because you have them noindexed? Or both?
To answer your original question, using an onclick JavaScript event to send a user to a page other than the URL listed in the href attribute is borderline. It goes beyond the risk level I would feel comfortable with on an eCommerce site, but a lot of affiliate sites do this. For instance, all of their links out to merchant sites may go through a directory called /outlink/, so the href attribute might look like .../outlink/link1234 and appear to send the user to another page on their domain, when actually the user gets redirected to the merchant's (e.g. Amazon.com, Best Buy...) website. Sometimes the user is redirected from the /outlink/... URL and sometimes they never even get that far, because the JavaScript sends them to the merchant's URL first.
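As a rough sketch of that affiliate pattern (the path, merchant URL, and link text here are made up for illustration), the markup might look something like this:

```html
<!-- The href shows an internal /outlink/ URL, but the click handler
     sends the user straight to the merchant's site instead.
     "return false" cancels the normal href navigation. -->
<a href="/outlink/link1234"
   onclick="window.location.href = 'https://www.merchant-example.com/product1234'; return false;">
  Check price at the merchant
</a>
```

If the JavaScript doesn't fire (or the visitor has it disabled), the href still works and the /outlink/ URL can perform a server-side redirect to the same destination.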
It is not cloaking unless you are specifically treating Google differently. If Google doesn't understand your site that is their problem. If you have code that essentially says "IF Google, THEN do this. ELSE do that" it is your problem because you are cloaking. Make sense? There is a very distinct line there.
The bottom line is if you want to show users a certain page then you should be showing that page to Google as well. If the problem is the content on that page doesn't appear for Google (e.g. AJAX) then you should look into optimizing that type of content to the best of your ability. For example, look into the use of hashbangs (#!) as in:
https://developers.google.com/webmasters/ajax-crawling/docs/getting-started
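In broad strokes, the scheme described there lets the crawler ask for a static version of an AJAX page: Googlebot rewrites a "#!" URL into an "_escaped_fragment_" query parameter, and your server answers that variant with an HTML snapshot. A minimal sketch, assuming a plain Node.js server (the script name and markup are placeholders):

```javascript
// Googlebot rewrites http://example.com/search#!q=widgets into
// http://example.com/search?_escaped_fragment_=q=widgets; we detect
// that parameter and serve pre-rendered, indexable HTML instead of
// the AJAX-driven page.
const http = require('http');
const url = require('url');

http.createServer(function (req, res) {
  const query = url.parse(req.url, true).query;
  res.writeHead(200, { 'Content-Type': 'text/html' });
  if ('_escaped_fragment_' in query) {
    // Crawler request: return a static HTML snapshot of the content.
    res.end('<html><body>Pre-rendered search results here</body></html>');
  } else {
    // Normal visitors get the AJAX page as usual.
    res.end('<html><body><div id="results"></div>' +
            '<script src="/search-app.js"></script></body></html>');
  }
}).listen(8080);
```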
-
1. Google understands simple JS that is inline with your HTML. So Google understands that a link like `<a href="#" onclick="window.location = 'http://domain.com/'">` is a link to domain.com. You can obfuscate this further and Google might not understand it.
2. I've not seen Google try to parse or execute JS, but that doesn't mean they can't or won't in the future.
3. Google is very unlikely to spider AJAX. Many AJAX pages don't return any user-readable content (most of mine return things like JSON, which is not for end-user consumption) and, as such, are beyond the scope of indexation. Again, as in #2, you might want this content to be shown elsewhere if you want it indexed. https://developers.google.com/webmasters/ajax-crawling/
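To make points 1 and 3 concrete (everything below is a made-up illustration): the first snippet hides the destination behind runtime JS, so a crawler that doesn't execute JavaScript has nothing to follow.

```html
<!-- Obfuscated: the target URL never appears in the markup. -->
<a href="#" onclick="return go('ZG9tYWluLmNvbQ==')">Products</a>
<script>
  function go(encoded) {
    // atob() decodes the base64 string back to "domain.com"
    window.location = 'http://' + atob(encoded) + '/';
    return false; // cancel the default "#" navigation
  }
</script>
```

And the AJAX endpoint behind a search page might answer with nothing but raw JSON, e.g. `{"results": [{"name": "Widget", "price": 9.99}]}` - useful to your page's scripts, but not user-readable content a search engine would index.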
-
OK, I am not keen on this approach. The developers have offered an alternative... but again, I'm not sure about it. They have said they can use AJAX to force their search results/navigation over my current navigation/products on my category/product pages...
This gets rid of having to use JavaScript to send users to a different URL... but up above Alan mentions cloaking, which to my understanding is basically serving anything different to a search engine vs. a person... and that's what this will do... it serves up a different navigation to people... and the products could be listed in a different order, etc. Search engines do not see the AJAX...
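For context, what is being proposed would look roughly like this sketch (the element ID and endpoint are hypothetical): the static navigation stays in the crawlable HTML, and a script swaps it out after the page loads.

```html
<!-- Static navigation: this is what crawlers (and no-JS visitors) see. -->
<div id="navigation">
  <a href="/category/widgets">Widgets</a>
  <a href="/category/gadgets">Gadgets</a>
</div>

<script>
  // After the page loads, fetch the richer search-driven navigation
  // and overwrite the static version. Crawlers that don't run JS
  // never see the replacement.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/search/navigation-fragment.html');
  xhr.onload = function () {
    document.getElementById('navigation').innerHTML = xhr.responseText;
  };
  xhr.send();
</script>
```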
Is this any better? Or just as negative?
-
Are they identical? You say "the search equivalent"; I just wouldn't treat search engines any differently.
-
Even though the content is identical?
It is only the way that content can then be navigated that is different...
-
Well then, yes, I would be concerned. You are serving up different content to users; that is cloaking.
-
Hi Alan,
I think I may have explained it incorrectly - my search page does have the meta tag noindex,follow, and it also has a canonical link back to the main search page (i.e. search.html), so I do not think any of the search results will be indexed. So my concern is not duplicate content; that should not happen...
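For reference, that combination sits in the head of each search results page and would look something like this (the canonical URL is illustrative):

```html
<head>
  <!-- Keep this results page out of the index, but let crawlers
       follow the links it contains. -->
  <meta name="robots" content="noindex,follow">
  <!-- Point all search-result variants back at the main search page. -->
  <link rel="canonical" href="http://www.example.com/search.html">
</head>
```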
My concern is the fact that I am using JavaScript to literally divert customers from one page to another... it's almost like the static pages are there only for the benefit of Google... and that's concerning me...
-
Google can follow JavaScript links, unless you are very good at hiding them.
I would not worry too much about the duplicate content; don't expect the duplicates to rank, but you're not likely to be penalized for them. You can use a canonical tag to point all search results back to the one page.
I would not noindex any pages outright: any links pointing to a noindexed page are pouring their link juice away. If you do want to noindex a page, use the meta tag noindex,follow; this way the search engine will follow the links and the link juice will flow back out to your site.
Read about PageRank and how link juice flows:
http://thatsit.com.au/seo/tutorials/a-simple-explanation-of-pagerank