Cloaking? Best Practices for Crawling Content Behind a Login Box
-
Hi,
I'm helping out a client who publishes sale information (fashion sales, etc.).
In order for visitors to view the sale details (date, percentage off, etc.), they need to register for the site.
If I allow Googlebot to crawl the content (by identifying the user agent) but serve up a registration lightbox to anyone who isn't Google, would this be considered cloaking?
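To make that concrete, here's a rough sketch of the pattern I'm describing (a hypothetical Express/TypeScript handler with placeholder content; as I understand it, this kind of user-agent switching is exactly what Google's cloaking guidelines warn about):

```typescript
import express from "express";

const app = express();

// Placeholder markup; in reality these would be rendered pages.
const fullSaleDetails = '<h1>Sale: 40% off</h1><p>Runs May 1-7.</p>';
const registrationLightbox = '<div class="lightbox">Register to see the sale details.</div>';

app.get("/sales/:id", (req, res) => {
  const ua = req.get("user-agent") ?? "";

  if (/googlebot/i.test(ua)) {
    // Googlebot gets the full sale details...
    res.send(fullSaleDetails);
  } else {
    // ...everyone else gets the registration lightbox instead.
    res.send(registrationLightbox);
  }
});

app.listen(3000);
```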
Does anyone know what the best practice for this is? Any help would be greatly appreciated.
Thank you,
Nopadon
-
Can I say I admire your inventiveness? You go to some lengths to avoid registering, and really, apart from the fact that most people don't know how to do a reverse image search, it probably reflects people's attitude toward those sorts of lightbox registration forms.
-
I'm going to respond from a human point of view and not a technical point of view.
I've been searching for houses recently on Craigslist. There are a couple of real estate agents who post ads on CL with a link to their site. When you click the link, you get a lightbox requiring you to fill out a lead form before you can see the details of the house. I do one of two things:
-
I open IE in private browsing mode and paste in the URL. Something in private browsing prevents the lightbox script from running, and I can see the house details just fine.
-
If the house address is not provided in the CL ad, I'll copy the image URL of one of the CL photos and put it into a Google reverse image search. I'll find a different website that has posted the same house and use that site instead, since it doesn't require me to register. (I realize this may not apply in your scenario above.)
I agree with what the other people say about not wanting to provide one thing to Google and another to users, and I wanted to add that people will try to find ways around the registration. I don't have a solution for you, sadly.
-
Heya there,
Thanks for asking your question here.
My first point would be that human visitors don't like being hit with forms the moment they arrive on a site, so I would suggest you don't do this.
My alternative strategy would be to provide a home page of good content describing the data that is available on your site, and then provide a button for people to register if they want to.
Don't detect the user agent and serve alternative content: however good your intentions are, that could be considered cloaking. Google is against you serving Google different content than you serve humans, so don't do it.
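For what it's worth, there is a sanctioned route if the full details are already in the HTML and the lightbox merely hides them: Google's paywalled-content structured data, which tells Google the gating is deliberate rather than cloaking. A minimal sketch (TypeScript for illustration; the `.gated-sale-details` class is my own placeholder, not anything Google requires):

```typescript
// Sketch only: one page, identical for bots and humans. The JSON-LD
// declares that part of the page sits behind a registration gate,
// pointing at it with a CSS selector.
const paywallJsonLd = {
  "@context": "https://schema.org",
  "@type": "Article",
  isAccessibleForFree: false,
  hasPart: {
    "@type": "WebPageElement",
    isAccessibleForFree: false,
    // Assumption: whatever class wraps your gated sale details.
    cssSelector: ".gated-sale-details",
  },
};

const page = `<!doctype html>
<html>
  <head>
    <script type="application/ld+json">${JSON.stringify(paywallJsonLd)}</script>
  </head>
  <body>
    <p>Free teaser: a big fashion sale is running this week.</p>
    <div class="gated-sale-details">
      Full details (dates, percentage off) revealed once the visitor registers.
    </div>
  </body>
</html>`;

console.log(page);
```

The key difference from user-agent sniffing is that nothing varies based on who is asking; the markup simply tells Google the content is gated on purpose.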
Do things differently