What's the best practice for implementing a "content disclaimer" that doesn't block search robots?
-
Our client needs a content disclaimer on their site. It's a simple "If you agree to these rules, click YES; if not, click NO" prompt, and clicking NO pushes you back to the home page.
I have this gut feeling that this may cause an upset with the search robots.
Any advice?
R/
John
-
Hi John. I've seen some websites use a simple disclaimer box that is "lightboxed" on top of the content. When you click Yes, the lightbox disappears and the content is shown as normal. To a search engine, the page looks like a perfectly normal website.
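A minimal sketch of that overlay pattern (all names here are illustrative, not from any particular library): the full page content stays in the served HTML, and a small script only toggles the overlay's visibility, so a crawler that ignores JavaScript still receives everything.

```javascript
// Sketch: the disclaimer is an overlay on top of the normal page.
// The content is always present in the served HTML; only the
// overlay's visibility changes, so crawlers see the full page.

// Show the overlay unless the visitor has already agreed
// (agreement is remembered in a cookie).
function shouldShowDisclaimer(cookieHeader) {
  return !/(?:^|;\s*)disclaimerAccepted=yes(?:\s*;|$)/.test(cookieHeader || "");
}

// Called from the "YES" button: remember the agreement and hide
// the overlay. `doc` is the page's `document` object.
function acceptDisclaimer(doc) {
  doc.cookie = "disclaimerAccepted=yes; path=/; max-age=31536000";
  doc.getElementById("disclaimer-overlay").style.display = "none";
}
```

The "NO" button can simply set `location.href = "/"` to send the visitor back to the home page; since that redirect only ever happens on a click, it never affects a crawler.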
However, if your "click Yes or click No" prompt only takes the end user through to the actual content on another page AFTER they click Yes, then this would be a huge issue with search engines.
I'd recommend using the "User Agent Switcher" extension in Firefox to view your site as Googlebot. That should tell you whether it's seeing the entire site or just a portion of it:
https://addons.mozilla.org/en-US/firefox/addon/user-agent-switcher/
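If you'd rather check from the command line than the browser, the same comparison can be sketched in a few lines of Node (the URL and user-agent strings below are just examples, not necessarily the ones Google currently uses): fetch the page once as a normal browser and once as Googlebot, then compare the two responses.

```javascript
// Build request options that impersonate a given crawler, so the
// HTML served to browsers can be compared with what Googlebot gets.
// These user-agent strings are illustrative examples.
const USER_AGENTS = {
  browser: "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
  googlebot: "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
};

function fetchOptionsFor(agent) {
  if (!(agent in USER_AGENTS)) throw new Error("unknown agent: " + agent);
  return { headers: { "User-Agent": USER_AGENTS[agent] } };
}

// Usage (Node 18+, which provides a global fetch):
// const [normal, bot] = await Promise.all(
//   ["browser", "googlebot"].map((a) =>
//     fetch("https://example.com/", fetchOptionsFor(a)).then((r) => r.text())
//   )
// );
// A large difference between `normal` and `bot` means the
// disclaimer is hiding content from crawlers.
```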
Related Questions
-
Could a JavaScript auto-scroll make some content "hidden"?
Hello everybody, sorry for my English (I'm French), I will try to do my best... We've got an e-commerce website: kumulusvape.fr. On each category page, to improve our conversion rate, we added a JavaScript that automatically scrolls the page down to the product list. You can see an example here: http://www.kumulusvape.fr/44-e-liquide-savourea-smookies The script scrolls and makes some content "hidden". It's not really a scroll, just a change of page position. Do you think our h1 and our category content could be considered "hidden" by Google? Thank you very much for your help.
On-Page Optimization | KumulusVape
-
Is there a limit to the number of duplicate pages pointing to a rel='canonical' primary?
We have a situation on twiends where a number of our 'dead' user pages have generated links for us over the years. Our options are to 404 them, 301 them to the home page, or just serve back the home page with a canonical tag. We've been 404'ing them for years, but I understand that we lose all the link juice from doing this. Correct me if I'm wrong? Our next plan would be to 301 them to the home page. Probably the best solution, but our concern is that if a user page is only temporarily down (under review, etc.) it could be permanently removed from the index, or at least cached for a very long time. A final plan is to just serve back the home page on the old URL, with a canonical tag pointing to the home page URL. This is quick, retains most of the link juice, and allows the URL to become active again in future. The problem is that there could be 100,000's of these.
Q1) Is it a problem (for Google) to have 100,000's of URLs pointing to a primary with a rel=canonical tag?
Q2) How long does it take a canonical duplicate page to become unique in the index again if the tag is removed? Will Google recrawl it and add it back into the index? Do we need to use WMT to speed this process up? Thanks
On-Page Optimization | dsumter
-
Timeline on Moz's About Page
There has been a lot of talk about improving “About” pages on websites as of late. Moz actually has a really interesting About page, which includes a timeline. Are there any recommended WordPress plugins that can achieve a similar timeline effect?
On-Page Optimization | VicMarcusNWI
-
Login webpage blocked by robots
Hi, the SEOmoz crawl diagnostics show that this page: www.tarifakitesurfcamp.com/wp-login.php is blocked (noindex, nofollow). Is there any problem with that?
On-Page Optimization | juanmiguelcr
-
Is my blog simply duplicate content of my authors' profiles?
www.example.com/blog is the full list of blog posts by various writers. The list contains the title of each article and the first paragraph from the article. In addition to /blog being indexed, each author's contribution list is being indexed separately. It's not a profile, really, just a list of articles in the same title & paragraph format of the /blog page. So if /blog a list of 10 articles written by two writers, I have three pages: /blog/author1 is a list of 4 articles /blog/author2 is a list of 6 different articles /blog is a list of 10 articles (the 4+6 from the two writers) Is this going to be considered duplicate content?
On-Page Optimization | Brocberry
-
User experience regarding duplicate content and managing this content with Google
Hi, long title I know! We are moving to Magento and have chosen to allocate a specific colour to each category, using corresponding tabbed navigation for user experience. All products within each of the coloured tabs then inherit the respective colour, giving the products a category identity within the store. This layout has had positive feedback from our "testers". As a lot of our products are seasonal and can be represented in different categories, there is a significant amount of duplicate content. At the moment I see our options as being:
Alter the site structure so that the category is not shown in the URL, thereby eliminating our duplicate products. The downside is that the colour co-ordination of the categories would not work at product level, as it's the category path that assigns the colour.
Create canonical links for every duplicate. Can this be damaging?
Keep the duplicates and do nothing; let Google decide the most important version of a product.
Any guidance would be appreciated!
On-Page Optimization | LadyApollo
-
How do you block development servers with robots.txt?
When we create client websites the URLs are client.oursite.com. Google is indexing these sites and attaching them to our domain. How can we stop it with robots.txt? I've heard you need to have the robots file on both the main site and the dev sites... A code sample would be groovy. Thanks, TR
On-Page Optimization | DisMedia
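For what it's worth, robots.txt is evaluated per hostname, so the usual approach (a sketch, not tested against any particular server setup) is to serve a blocking robots.txt on the dev subdomains only, while the main site keeps its normal file:

```
# robots.txt served only at client.oursite.com/robots.txt
# (the main site's robots.txt at oursite.com is unaffected)
User-agent: *
Disallow: /
```

Many setups generate this conditionally, e.g. the web server returns this blocking file whenever the request's Host header is a dev subdomain. Note that robots.txt only blocks crawling; pages that are already indexed may also need a noindex directive or HTTP authentication on the dev hosts to drop out of the index.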