How to add a disclaimer to a site but keep the content accessible to search robots?
-
Hi,
I have a client with a site regulated by the UK FSA (Financial Services Authority). They have to display a disclaimer which visitors must accept before browsing. This is for real, not like the EU cookie compliance debacle.
Currently the site 302-redirects anyone not already cookied (as having accepted) to a disclaimer page/form. Do you have any suggestions or examples of how to require acceptance while keeping the content accessible to search robots?
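For context, the redirect logic boils down to something like this (a minimal sketch only, assuming a Node/Express stack with cookie-parser purely for illustration; the cookie and route names are made up, not our actual ones):

```javascript
// Minimal sketch of the current behaviour: any visitor without the
// acceptance cookie is 302-redirected to the disclaimer form.
// Googlebot never carries the cookie, so it never gets past this check.
const express = require('express');
const cookieParser = require('cookie-parser');

const app = express();
app.use(cookieParser());

app.use((req, res, next) => {
  if (req.path === '/disclaimer' || req.cookies.disclaimerAccepted === 'yes') {
    return next(); // the disclaimer page itself, or acceptance already recorded
  }
  // remember where the visitor was heading so we can send them back after acceptance
  res.redirect(302, '/disclaimer?return=' + encodeURIComponent(req.originalUrl));
});
```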
I'm not sure just using a jQuery lightbox would meet the FSA's requirements, as it wouldn't be shown if JavaScript was not enabled.
Thanks,
-Jason
-
Joshua, thanks for your suggestions.
The fixed div idea is good, but I'm not sure it will pass FSA compliance.
The Google Search Appliance config article is interesting and provides some ideas, but I'm not sure how to go about implementing it for Googlebot.
I suppose a reverse DNS lookup (http://support.google.com/webmasters/bin/answer.py?hl=en&answer=80553) may provide a solution. I was hoping someone who had implemented something similar might share their experience.
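If I do go down that route, the check Google describes is a reverse DNS lookup on the requesting IP followed by a forward lookup to confirm it. Something like this rough, untested Node sketch (the function name is mine):

```javascript
// Verify that a request claiming to be Googlebot really comes from Google:
// reverse-resolve the IP, check the hostname is under googlebot.com or
// google.com, then forward-resolve that hostname and confirm it maps back
// to the same IP (the double lookup Google recommends).
const dns = require('dns').promises;

async function isVerifiedGooglebot(ip) {
  try {
    const [hostname] = await dns.reverse(ip);
    if (!/\.(googlebot|google)\.com$/.test(hostname)) return false;
    const { address } = await dns.lookup(hostname);
    return address === ip;
  } catch (err) {
    return false; // no PTR record or lookup failure: treat as unverified
  }
}

// e.g. check isVerifiedGooglebot(ip) before skipping the disclaimer redirect
```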
Cheers.
-
That is rough. Maybe this is a legitimate situation for user agent sniffing (albeit fraught with danger)? If you can't rely on JavaScript, then it would seem that any option will have significant downsides.
This may be a hare-brained suggestion, but what about appending a server parameter to all links for those who do not have a cookie set? If the user agent is Google or Bing (or any other search bot), the server could ignore that parameter and send them on their way to the correct page; however, if the user agent is not a search engine, they would be forced to the disclaimer page.
This would allow a user to see the initial content (which may not be allowed?) but not navigate the site; however, it would also allow you to present the same info to both user and crawler while making the user accept the terms.
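Very roughly, the branch could look like the middleware below (a sketch only, in the Express style; the bot pattern and cookie name are invented, and a self-declared user agent is trivially spoofed, so this edges towards cloaking territory):

```javascript
// Rough sketch of the user-agent branch: requests that look like search bots
// are let straight through to the real content, while humans without the
// acceptance cookie are pushed to the disclaimer form.
// Assumes an Express-style setup with cookie-parser already wired up.
const BOT_PATTERN = /googlebot|bingbot|slurp|duckduckbot|baiduspider/i;

function disclaimerGate(req, res, next) {
  if (req.path === '/disclaimer' || req.cookies.disclaimerAccepted === 'yes') {
    return next(); // already accepted, or the disclaimer page itself
  }
  const userAgent = req.get('User-Agent') || '';
  if (BOT_PATTERN.test(userAgent)) {
    return next(); // looks like a crawler: serve the same page a cookied user sees
  }
  // a human without the cookie: force them through the disclaimer form
  res.redirect(302, '/disclaimer?return=' + encodeURIComponent(req.originalUrl));
}

// app.use(disclaimerGate);
```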
Alternatively, serve non-cookied visitors a version of the page where the div containing the disclaimer form expands to fill the whole viewport and is set to position: fixed, which should keep the visitor from scrolling past the div while still rendering the content below the viewport. Thus cookied visitors don't see a form, while non-cookied visitors get the same page content but can't scroll to it until they accept the form (mobile does weird things with position: fixed, so this again might not work, and a savvy user could get around it).
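As a very rough sketch of the markup served only to non-cookied visitors (class names and copy are just placeholders):

```html
<!-- Sketch only: a full-viewport, position: fixed overlay holding the disclaimer
     form. Cookied visitors get the same page without this block. -->
<style>
  .disclaimer-overlay {
    position: fixed;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
    background: #fff;
    overflow: auto;
    z-index: 9999;
  }
</style>
<div class="disclaimer-overlay">
  <h1>Important regulatory information</h1>
  <p>Disclaimer text the regulator requires goes here.</p>
  <form method="post" action="/accept-disclaimer">
    <button type="submit">I accept</button>
  </form>
</div>
<!-- The normal page content follows here in the HTML, so crawlers still see it,
     but a visitor can't scroll past the fixed overlay until they accept. -->
```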
Edit: Just found this article, which looks promising. It is a Google doc on how to allow crawls on a cookied domain (https://developers.google.com/search-appliance/documentation/50/help_gsa/crawl_cookies) and might solve the problem in a more elegant, safe way.
I'd be interested to hear what you come up with. If you could rely on JavaScript, there are many ways to do it.
Cheers!
Related Questions
-
We are migrating a site and are seeing a lot of 301s and 302s already in the old site. Is it OK to leave those as is?
For the 3xx’s, I’m not sure if it’s okay for us to redirect to these, so please advise on that.
Technical SEO | lina_digital
-
Sites for English speaking countries: Duplicate Content - What to do?
Hi, we are planning to launch sites specific to each target market (geographic location), but the products and services are similar in all those markets, as we sell software. So here's the scenario: our target markets are all English-speaking countries, i.e. Britain, the USA and India, and we don't have the option of using ccTLDs like .co.uk, .co.in etc. How should we handle the content, given that the product, its features, the industries it caters to and our services are common irrespective of market? Whether we go with a sub-directory or sub-domain, the content will be in English. So how should we craft the content? Is writing unique content for the same product three times the only option? Regards
Technical SEO | IM_Learner
-
Moving content
I have www.SiteA.com, which contains a number of sections of content, one of which (i.e. www.SiteA.com/sectionA) we would like to move to a new domain, www.SiteB.com. We will definitely ensure that a redirect strategy is in place and that we submit a sitemap for SiteB. Three questions:
1. Is there anything else I am missing from the migration plan?
2. Since we are only moving part of SiteA to SiteB, is there another way of telling Google that we changed address for that section, or are the 301s enough?
3. Currently, Section A (under SiteA) contains a subsection where we were posting an article a day. In the new site (SiteB), we decided to drop this subsection and write content (but not "exactly" the same content) under a new section. During migration, how should we handle the subsection that we have decided to stop writing? Should we:
A. Import the content into SiteB, call it archives, and then redirect all the URLs from the subsection under SiteA to the archives under SiteB? OR
B. Not move the content, but redirect all the pages (365 in total) to where we think the user would be more interested in going on SiteB?
Note: a colleague of mine is worried that, since the subsection has good content, it is necessary to actually move the content to SiteB. But looking at the views for the archives, it accounts for only 1% of the total views of this section. In other words, people only view the article on the day it is written. I hope I was clear 🙂 Your help is appreciated. Thank you
Technical SEO | seo1212
-
Searching in Google using the site:www.example.com specification - is it in a particular order?
Hi Gurus, just a quick searching question. If you do a Google search using the site: specification, e.g. site:www.example.com, is the list returned by Google in an order of something similar to 'Page Authority', or some other order, e.g. page first seen date? Because you are looking at your single site, is Google listing your pages back to you in its perceived order of current 'popularity'? Thanks, Brad
Technical SEO | BM7
-
How to write a robots.txt file to point to your sitemap
Good afternoon from still wet & humid Wetherby, UK... I want to write a robots.txt file that instructs the bots to index everything and gives a specific location for the sitemap. The sitemap URL is: http://business.leedscityregion.gov.uk/CMSPages/GoogleSiteMap.aspx Is this correct:
User-agent: *
Disallow:
SITEMAP: http://business.leedscityregion.gov.uk/CMSPages/GoogleSiteMap.aspx
Any insight welcome 🙂
Technical SEO | Nightwing
-
Getting home page content at top of what robots see
When I look at the text-only cache of nlpca(dot)com's home page (http://webcache.googleusercontent.com/search?q=cache:UIJER7OJFzYJ:www.nlpca.com/&hl=en&gl=us&strip=1), our H1 and body content are at the very bottom. How do we get the H1 and content at the top of what the robots see? Thanks!
Technical SEO | BobGW
-
Robots exclusion
Hi All, I have an issue whereby print versions of my articles are being flagged up as "duplicate" content / page titles. In order to get around this, I feel the easiest way is to just add them to my robots.txt document with a disallow. Here is my URL makeup:
Normal article: www.mysite.com/displayarticle=12345
Print version of my article: www.mysite.com/displayarticle=12345&printversion=yes
I know that having dynamic parameters in my URL is not best practice, to say the least, but I'm stuck with this for the time being... My question is, how do I add just the print versions of articles to my robots file without disallowing the articles too? Can I just add the parameter to the document like so? Disallow: &printversion=yes
I also know that I can add a meta noindex, nofollow tag into the head of my print versions, but I feel a robots.txt disallow will be somewhat easier... Many thanks in advance. Matt
Technical SEO | Horizon
-
Content loc and player loc tags for XML video sitemaps
I need a little help understanding how to create two of the required tags for an XML video sitemap for Google: 1. video:content_loc and 2. video:player_loc. Google explains their video XML sitemap requirements here:
www.google.com/support/webmasters/bin/answer.py?answer=80472
Using the example on this Google Webmaster Help page (where they explain all six of the required tags), here are examples of the two tags I need help with:
<video:content_loc>www.example.com/video123.flv</video:content_loc>
<video:player_loc allow_embed="yes" autoplay="ap=1">www.example.com/videoplayer.swf?video=12...</video:player_loc>
The video I am trying to optimize is located on a page on my site:
www.mountainbikingmaine.com/races/bradbury_hawk.html
This page has an embedded Vimeo video, so I don't have the video file on my domain; it is on Vimeo. Here is the source code from my page that I think provides the information I need to create the two tags that Google requires:
<iframe src="http://player.vimeo.com/video/24580638?title=0&byline=0&portrait=0" width="400" height="533" frameborder="0"></iframe>
<a href="http://vimeo.com/24580638">Bradbury Mountain Maine Hawk Migration Count</a> from <a href="http://vimeo.com/user3219915">dan sexton</a>
Using this source from my site, can you suggest what to put in the two tags? Thanks! Dan
Technical SEO | dsexton1