Honeypot Captcha - rated as "cloaked content"?
-
Hi guys,
In order to get rid of the very old-school captcha on our contact form at troteclaser.com, we would like to use a honeypot captcha.
The idea is to add a field that is hidden from human visitors but likely to be filled in by spam bots. That way we can sort out all those spam contact requests.
More details on "honeypot captchas":
http://haacked.com/archive/2007/09/11/honeypot-captcha.aspx
Any idea if this single cloaked field will have negative SEO impacts? Or is there another alternative to keep out those spam bots?
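For illustration, here is a minimal sketch of what we have in mind. The field name "company_url" and the check itself are placeholders of mine, not taken from the linked article:

```javascript
// Minimal honeypot sketch. The form gets one extra field that humans never
// see because it is hidden via CSS, e.g.:
//
//   <div style="position:absolute; left:-9999px" aria-hidden="true">
//     <label for="company_url">Leave this field empty</label>
//     <input type="text" id="company_url" name="company_url"
//            tabindex="-1" autocomplete="off">
//   </div>
//
// Naive bots fill in every input they find, so a non-empty value marks spam.
// The field name "company_url" is illustrative.
function isLikelySpam(formFields) {
  const trap = formFields["company_url"];
  return typeof trap === "string" && trap.trim().length > 0;
}

// A human leaves the trap empty; a bot fills it in:
console.log(isLikelySpam({ name: "Thomas", company_url: "" }));               // false
console.log(isLikelySpam({ name: "x", company_url: "http://spam.example" })); // true
```

Submissions where the trap field is filled can then be dropped server-side before the message ever reaches the inbox.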
Greets from Austria,
Thomas
-
Just in case anyone stumbles across this topic:
We started using honeypot captchas in 2011 and it has really paid off. Not only because we got rid of the old captchas, but also because they keep out 99.99% of all bot inquiries and spam.
-
Hey Casey,
Thanks for the reply. Will have this tested soon. Really looking forward to getting rid of that captcha.
Regards,
Thomas
-
Hi Thomas,
I've done some studies on this, and you will be fine using this technique; Google won't give you any problems for doing it. Check out my post on the honeypot technique: http://www.seomoz.org/blog/captchas-affect-on-conversion-rates. The technique works quite well, blocking about 98% of spam.
Casey
-
Hi Keri,
Those are users without Java support.
Does that mean that JavaScript is no issue then?
-
Thomas, double-check whether that stat is for users without Java, or users without JavaScript.
-
Good point, thanks.
As 15% of our visitors don't have Java, this won't work out.
Actually, we're trying to get rid of the captcha to increase our conversion rate, which is why the "honeypot" version is very appealing.
-
You won't see any SEO impact; think of all the forms with JS interaction on big sites.
One easy solution is to accept the form via an AJAX POST only. It's very effective, BUT you won't be able to get contact requests from visitors without JavaScript enabled. Maybe a good alternative.
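A rough sketch of the AJAX-only idea (the endpoint and field names are just placeholders):

```javascript
// Sketch of the "AJAX POST only" approach. Because the submit path exists
// only in JavaScript, bots that merely parse the static HTML never find a
// working way to submit; the trade-off is that JS-less visitors can't either.

// Encode fields as application/x-www-form-urlencoded for the POST body.
function encodeForm(fields) {
  return new URLSearchParams(fields).toString();
}

// In the browser this would be wired up roughly like:
//   document.querySelector("#contact").addEventListener("submit", async (e) => {
//     e.preventDefault();
//     await fetch("/contact", {
//       method: "POST",
//       headers: { "Content-Type": "application/x-www-form-urlencoded" },
//       body: encodeForm(Object.fromEntries(new FormData(e.target))),
//     });
//   });

console.log(encodeForm({ name: "Thomas", message: "Hello from Austria" }));
// name=Thomas&message=Hello+from+Austria
```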
Otherwise, you can use reCAPTCHA: http://www.google.com/recaptcha
It's free and easy to set up, works well against bots, and is accessible to everyone!
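And a rough sketch of checking a reCAPTCHA token server-side. The "siteverify" endpoint is Google's real verification URL; the secret key and token here are placeholders that would come from your account and the submitted form:

```javascript
// Google's siteverify API answers with JSON like {"success": true, ...}.
function recaptchaPassed(apiResponse) {
  return Boolean(apiResponse && apiResponse.success === true);
}

// POST the user's token plus your secret key to Google for verification.
async function verifyRecaptcha(secret, token) {
  const res = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ secret, response: token }).toString(),
  });
  return recaptchaPassed(await res.json());
}
```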
Related Questions
-
Duplicate content
I have one client with two domains, with identical products appearing on both domains. How should I handle this?
Technical SEO | Hazel_Key0
-
Does CAPTCHA Block Crawlbots?
Hi, My client's website occasionally asks users to verify their usage by checking the CAPTCHA box. This is only done once per session and is done randomly. Does having CAPTCHA before content loads block crawl bots from properly indexing pages? Does it have any negative impact on SEO? Thanks, Kevin
Technical SEO | kevinpark1910
-
Yoast's Magento Guide "Nofollowing unnecessary link" is that really a good idea?
I have been following Yoast's Magento guide here: https://yoast.com/articles/magento-seo/ Under section 3.2, "Nofollowing unnecessary links", it says: "Another easy step to increase your Magento SEO is to stop linking to your login, checkout, wishlist, and all other non-content pages. The same goes for your RSS feeds, layered navigation, add to wishlist, add to compare etc." I always thought that nofollowing internal links is a bad idea, as it is just throwing link juice out the window. Why would Yoast recommend this? To me they are suggesting link sculpting via nofollow, but that has not worked since 2009!
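For reference, what the guide suggests boils down to adding a rel attribute on those internal links (the URL is illustrative):

```html
<!-- A nofollowed internal link to a non-content page, as the guide suggests -->
<a href="/customer/account/login" rel="nofollow">Log in</a>
```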
Technical SEO | PaddyDisplays0
-
Handling "legitimate" duplicate content in an online shop.
The scenario: an online shop selling consumables for machinery. Consumable range A (CA) contains consumables w, x, y, z. The individual consumables are not a problem; it is the consumables groups I'm having problems with.

The problem: several machines use the same range of consumables. The Machine A (MA) consumables page contains the list (CA) with the contents w, x, y, z, and the Machine B (MB) consumables page contains exactly the same list (CA) with contents w, x, y, z. So Machine A page = Machine B page = Consumables range A page. Some people will search Google for the consumables by the range name (CA), but most will search by individual machine (MA consumables, MB consumables, etc.). If I use canonical tags on the machine consumable pages (MA + MB) pointing to the consumables range page (CA), then I'm never going to rank for the machine pages, which would represent a huge potential loss of search traffic. However, if I don't use canonical tags, then all the pages get slammed as duplicate content. For somebody who owns machine A, a page titled "Machine A consumables" with the list of consumables is exactly what they are looking for, and it makes sense to serve it to them in that format. However, for somebody who owns machine B, it only makes sense for the page to be titled "Machine B consumables", even though the content is exactly the same.

The question: what is the best way to handle this from both a user and a search engine perspective?
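For context, the canonical option under discussion would put the same tag in the head of both machine pages (the URLs are illustrative):

```html
<!-- On both /machine-a/consumables and /machine-b/consumables -->
<link rel="canonical" href="https://example.com/consumables/range-a/">
```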
Technical SEO | Serpstone0
-
Tags and Duplicate Content
Just wondering: for a lot of our sites we use tags as a way of regrouping articles/news/blogs, so all of the info on, say, 'government grants' can be found on one page. These /tag pages often come up with duplicate content errors. Is that a big issue, and how can we minimise it?
Technical SEO | salemtas0
-
Duplicate Content
Hi - We are due to launch a .com version of our site, with the ability to show prices in local currency, whereas our .co.uk site will be solely in £. If the content on both the .com and .co.uk sites is the same (mainly at product level), will we be penalised? What is the best way to get around this?
Technical SEO | swgolf1230
-
Why is this url showing as "not crawled" on opensiteexplorer, but still showing up in Google's index?
The URL below is showing up as "not crawled" on opensiteexplorer.com, but when you google the title tag "Joel Roberts, Our Family Doctors - Doctor in Clearwater, FL", it shows up in the Google index. Can you explain why this is happening? Thank you.
http://doctor.webmd.com/physician_finder/profile.aspx?sponsor=core&pid=14ef09dd-e216-4369-99d3-460aa3c4f1ce
Technical SEO | nicole.healthline0
-
How do I use the Robots.txt "disallow" command properly for folders I don't want indexed?
Today's sitemap webinar made me think about the disallow feature. It seems the opposite of sitemaps, but it also seems both are ignored in varying ways by the engines. I don't need help semantically; I got that part. I just can't seem to find a contemporary answer about what should be blocked using the robots.txt file.

For example, I have folders containing site comps for clients that I really don't want showing up in the SERPs. Is it better to not have these folders on the domain at all? There are also security issues I've heard of that make sense: simply look at a site's robots file to see what they are hiding. It makes it easier to hunt for files when you know the directory the files are contained in. Do I concern myself with this?

Another example is a folder I have for my XML sitemap generator. I imagine Google isn't going to try to index this or count it as content, so do I need to add folders like this to the disallow list?
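For example, a robots.txt blocking folders like the ones described might look like this (the folder names are placeholders):

```
User-agent: *
Disallow: /client-comps/
Disallow: /sitemap-generator/
```

Note that Disallow only blocks crawling, not indexing, and the file is publicly readable; that is exactly the security concern mentioned above.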
Technical SEO | SpringMountain0