Block Googlebot from submit button
-
Hi,
I have a website where many searches are made by the googlebot on our internal engine. We can make noindex on result page, but we want to stop the bot to call the ajax search button - GET form (because it pass a request to an external API with associate fees).
So, we want to stop crawling the form button, without noindex the search page itself. The "nofollow" tag don't seems to apply on button's submit.
Any suggestion?
-
Hey Olivier,
You could detect the user agent and hide the button. The difference isn't substantial enough to be called cloaking.
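A minimal sketch of the user-agent idea, assuming a Node/Express-style setup (the bot names, route, and template variables are assumptions, not a Moz-endorsed implementation — truly verified Googlebot detection would also do a reverse-DNS check):

```javascript
// Hypothetical sketch: hide the search button when the request
// comes from a known crawler user agent. The bot list here is an
// assumption; extend it for your own traffic.
const CRAWLER_PATTERN = /googlebot|bingbot|yahoo!?\s*slurp|duckduckbot/i;

function isCrawler(userAgent) {
  // Treat a missing user agent as a normal visitor.
  return CRAWLER_PATTERN.test(userAgent || "");
}

// Example use in an Express-style handler (route and template names
// are hypothetical):
// app.get("/search", (req, res) => {
//   const showButton = !isCrawler(req.get("User-Agent"));
//   res.render("search", { showButton });
// });
```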
Or you could make the button not an actual button tag, but another element that traps clicks with a JS event. I'm not sure Google's headless browser is smart enough to automate that. I would try this first, and if it doesn't work, switch to the user-agent detection idea.
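A sketch of that second idea, with the endpoint path and element ids as assumptions: a plain span fires the search from a JS click handler, so there is no form action or href for a crawler to follow.

```javascript
// Hypothetical sketch: replace <button type="submit"> with a neutral
// element and fire the search from a click handler. The /api/search
// endpoint and the element ids are assumptions.
function buildSearchUrl(query, endpoint = "/api/search") {
  return endpoint + "?q=" + encodeURIComponent(query);
}

// Browser-only wiring; guarded so the helper above stays testable.
if (typeof document !== "undefined") {
  const trigger = document.getElementById("search-trigger"); // a <span>, not a <button>
  trigger.addEventListener("click", () => {
    const query = document.getElementById("search-input").value;
    fetch(buildSearchUrl(query))
      .then((res) => res.json())
      .then((data) => renderResults(data)); // renderResults: your app's display function
  });
}
```

One caveat: Googlebot renders pages with headless Chromium and can execute JavaScript, so this reduces the odds of the endpoint being hit but doesn't guarantee it.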
Let us know how it goes!
-Mike
-
You could always do it in a program the bots can't use, or hide it behind a login field, etc.
I'd also offer the following for consumption:
http://moz.com/blog/12-ways-to-keep-your-content-hidden-from-the-search-engines
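Also worth noting: if the search requests go to a dedicated endpoint, disallowing that endpoint in robots.txt stops it being crawled without noindexing the page that hosts the form. A sketch, where the /ajax-search path is an assumption about your setup:

```
User-agent: *
Disallow: /ajax-search
```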
Good luck!
-
Hi Bernard
Are you able to provide a link to the web form containing the submit button?
Peter
Related Questions
-
If I block a URL via the robots.txt - how long will it take for Google to stop indexing that URL?
Intermediate & Advanced SEO | Gabriele_Layoutweb
We used to speak of too many links from same C block as bad, have CDN's like CloudFlare made that concept irrelevant?
Over lunch with our head of development, we were discussing the way CloudFlare and other CDNs help prevent DDoS attacks, and I began to wonder about the origin IP address vs. the reverse-proxy IP address. We used to look for commonalities in the IP (the C block) as a signal search engines might use to discount the value of links, and most link software showed this; Ahrefs still groups common IPs using the C block as the reference point. When I got curious about the real IP, our head of dev pointed out that the visible IP is CloudFlare's. So I ran a site in Ahrefs, and an older site we had developed years ago showed up as follows: Actos-lawsuit.org at 104.28.13.57 and again at 104.28.12.57 (a duplicate C block means the first three sets of numbers are the same; these have a .12 and a .13, so they're not duplicates). Then we looked at our host to see the IP shown there: 104.239.226.120. So this really begs the question: is C block data, or even IP address data, still relevant with regard to links? What do the search engines see when they look up an IP address now? Yes, I have an opinion, but would love to hear yours first!
Intermediate & Advanced SEO | RobertFisher
Robots.txt Blocking - Best Practices
Hi All, We have a web provider who's not willing to remove the wildcard line of code blocking all agents from crawling our client's site (User-agent: *, Disallow: /). They have other lines allowing certain bots to crawl the site, but we're wondering if they're missing out on organic traffic by having this main blocking line. It's also a pain because we're unable to set up Moz Pro, potentially because of this first line. We've researched and haven't found a ton of best practices regarding blocking all bots, then allowing certain ones. What do you think is a best practice for these files? Thanks!
User-agent: *
Disallow: /
User-agent: Googlebot
Disallow:
Crawl-delay: 5
User-agent: Yahoo-slurp
Disallow:
User-agent: bingbot
Disallow:
User-agent: rogerbot
Disallow:
User-agent: *
Crawl-delay: 5
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp
Intermediate & Advanced SEO | ReunionMarketing
I've submitted a disavow file in my www version account, should I submit one in my non-www version account as well?
My non-www version account is my preferred domain. Should I submit in both accounts, or will one account take care of the disavow?
Intermediate & Advanced SEO | ChelseaP
Received "Googlebot found an extremely high number of URLs on your site:" but most of the example URLs are noindexed.
An example URL can be found here: http://symptom.healthline.com/symptomsearch?addterm=Neck%20pain&addterm=Face&addterm=Fatigue&addterm=Shortness%20Of%20Breath A couple of questions: Why is Google reporting an issue with these URLs if they are marked as noindex? What is the best way to fix the issue? Thanks in advance.
Intermediate & Advanced SEO | nicole.healthline
Can I, in Google's good graces, check for Googlebot to turn on/off tracking parameters in URLs?
Basically, we use a number of parameters in our URLs for event tracking. Google could be crawling an infinite number of these URLs. I'm already using the canonical tag to point at the non-tracking versions of those URLs, but that doesn't stop the crawling, though. I want to know if I can do conditional 301s, or just detect the user agent as a way to know when NOT to append those parameters. I'm just trying to follow their guidelines about letting bots crawl without things like session IDs, but they don't tell you HOW to do this. Thanks!
Intermediate & Advanced SEO | KenShafer
Best way to view Global Navigation bar from GoogleBot's perspective
Hi, Links in the global navigation bar of our website do not show up when we look at the Google cache's text-only version of the page. These links use style="display:none;" in the HTML source. But if I use the "User Agent Switcher" add-on in Firefox set to Googlebot, the links in the global nav are displayed. I am wondering what the best way is to find out whether Google can or cannot see the links. Thanks for the help! Supriya.
Intermediate & Advanced SEO | SShiyekar
Is it worth submitting a blog's RSS feed...
to as many RSS feed directories as possible? Or would this have a similar negative impact to submitting a site to loads of "potentially spammy" site directories?
Intermediate & Advanced SEO | PeterAlexLeigh