Honeypot Captcha - rated as "cloaked content"?
-
Hi guys,
In order to get rid of the very old-school captcha on our contact form at troteclaser.com, we would like to use a honeypot captcha.
The idea is to add a field that is hidden from human visitors but likely to be filled in by spam bots. That way we can sort out all those spam contact requests.
More details on "honeypot captchas":
http://haacked.com/archive/2007/09/11/honeypot-captcha.aspx
Any idea if this single cloaked field will have a negative SEO impact? Or is there another alternative to keep out those spam bots?
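For context, here's roughly the shape of the check we have in mind; a minimal sketch assuming a Node/Express backend (our actual stack differs, and the field name "website" and the route are just placeholders). The extra input would be hidden from humans with CSS, e.g. a wrapper styled with display:none.

```typescript
// Honeypot check sketch: the form carries an extra input named
// "website" (placeholder name) that is hidden from humans via CSS.
// Humans leave it empty; naive bots fill in every field they find.
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false })); // parse form POSTs

app.post("/contact", (req, res) => {
  if (req.body.website) {
    // The hidden field was filled in, so this is almost certainly a bot.
    // Answer with a normal "thanks" page so the bot learns nothing.
    res.status(200).send("Thanks for your message!");
    return;
  }
  // ...forward the legitimate contact request (e-mail, CRM, etc.)...
  res.status(200).send("Thanks for your message!");
});

app.listen(3000);
```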
Greets from Austria,
Thomas
-
Just in case anyone stumbles across this topic:
We started using honeypot captchas in 2011, and it has really paid off. Not only did we get rid of the old captchas, they also keep out 99.99% of all bot and spam inquiries.
-
Hey Casey,
Thanks for the reply. We'll have this tested soon. Really looking forward to getting rid of that captcha.
Regards,
Thomas
-
Hi Thomas,
I've done some studies on this, and you'll be fine using this technique; Google won't give you any problems for it. Check out my post on the honeypot technique: http://www.seomoz.org/blog/captchas-affect-on-conversion-rates. The technique works quite well, blocking about 98% of spam.
Casey
-
Hi Keri,
Those are users without Java support.
Does that mean that JavaScript is no issue then?
-
Thomas, double-check whether that stat is for users without Java or for users without JavaScript.
-
Good point, thanks.
As 15% of our visitors don't have Java, this won't work out.
Actually, we're trying to get rid of the captcha to increase our conversion rate; that's why the "honeypot" version is very appealing.
-
You won't get any SEO impact; think about all the forms with JS interaction on big sites.
One easy solution is to submit the form via an AJAX POST only. It's very effective, BUT you won't be able to get contact requests from visitors without JavaScript enabled. Maybe a good alternative (rough sketch below).
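A minimal sketch of that AJAX-only approach, using the fetch API for brevity (the selector and endpoint are placeholders):

```typescript
// Submit the contact form via AJAX only: bots that blindly POST to
// the page's action URL get nothing, while real browsers go through
// this handler instead of the normal form submission.
const form = document.querySelector("#contact-form") as HTMLFormElement;

form.addEventListener("submit", async (event) => {
  event.preventDefault(); // suppress the default non-AJAX submission
  const response = await fetch("/contact", {
    method: "POST",
    body: new FormData(form), // serializes all the form's fields
  });
  if (response.ok) {
    form.reset(); // e.g. show a "thank you" message here
  }
});
```

The trade-off is exactly the one above: visitors with JavaScript disabled can't submit at all.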
Otherwise, you can use reCAPTCHA: http://www.google.com/recaptcha
It's free and easy to set up, works well against bots, and is accessible to everyone!
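For completeness, a sketch of the server-side half of reCAPTCHA, assuming Google's current siteverify endpoint (the secret key comes from an environment variable here as a placeholder):

```typescript
// Verify the token the reCAPTCHA widget adds to the form (the
// "g-recaptcha-response" field) against Google's verification API.
async function verifyRecaptcha(token: string): Promise<boolean> {
  const params = new URLSearchParams({
    secret: process.env.RECAPTCHA_SECRET ?? "", // your private key
    response: token,
  });
  const res = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    body: params, // sent as application/x-www-form-urlencoded
  });
  const data = (await res.json()) as { success: boolean };
  return data.success; // false => treat the submission as a bot
}
```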