Log in, sign up, user registration and robots
-
Hi all,
We have an accommodation site that asks users to register only when they want to book a room, in the last step. Though this is the ideal situation when you have tons of users, we currently get around 1,500 - 2,000 per day, and in our tests we found that if we ask for a registration (a simple, one-click Facebook sign-up) we can email everyone, and through good customer service we are increasing our sales.
That is why we would like to ask users to register right after the home page, i.e. on Home/accommodation and all the rest. I am not sure how to keep that content visible to robots.
Will the authentication process block Google from crawling it? Is there maybe something we can do? We are not completely sure how to proceed, so any tip would be appreciated.
Thank you all for answering.
-
For implementing early user registration without hindering SEO, consider dynamic rendering: serve crawlers a fully rendered version of each page so search visibility is maintained while you capture user details upfront.
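A minimal sketch of the user-agent check at the heart of dynamic rendering, written here in Flask (the two render_* helpers are hypothetical placeholders for however you produce each variant):

from flask import Flask, request

app = Flask(__name__)

# Tokens identifying the crawlers that should receive static HTML.
BOT_TOKENS = ("googlebot", "bingbot", "duckduckbot")

def is_crawler(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(token in ua for token in BOT_TOKENS)

@app.route("/rooms/<slug>")
def room_page(slug):
    if is_crawler(request.headers.get("User-Agent", "")):
        # Crawlers get the fully rendered page, no registration gate.
        return render_full_page(slug)   # hypothetical helper
    # Human visitors get the version with the registration prompt.
    return render_gated_page(slug)      # hypothetical helper

Note that what the crawler sees should match what a registered user sees; otherwise this drifts into cloaking territory.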
-
The registration process on most websites is pretty straightforward: you enter your email, create a password, and you're done.
-
Yes, it is better to ask users to register right after the home page, but getting it right will take some time, which is the main reason you should try a few different tactics.
-
Just to give you an update: our IT team solved this with CSS. The content is still in the code, but a CSS login overlay appears on top of it that doesn't really let you see much more until you log in.
It is working.
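For anyone curious, a minimal sketch of that kind of overlay (class names invented for illustration): the room content stays in the delivered HTML for crawlers, and a fixed-position layer covers it for visitors who have not logged in.

<div class="room-details">
  ...full room description, present in the HTML for crawlers...
</div>

<div class="login-overlay">
  <form action="/login" method="post">...login form...</form>
</div>

<style>
  /* Cover the viewport so the content underneath is unreadable
     until the user logs in and the overlay is removed. */
  .login-overlay {
    position: fixed;
    inset: 0;
    background: rgba(255, 255, 255, 0.95);
    display: flex;
    align-items: center;
    justify-content: center;
  }
</style>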
-
Correct. If you have a wall, Googlebot won't index what's behind it unless you make some sort of exception for it (and even then, Google frowns on walled-off content). SEM had a great article on this covering Google's rules for walled news content (it may not apply to you, but it's interesting nonetheless).
I would put your wall behind your content, not in front.
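If you do wall content off, Google's current paywalled-content guidelines ask you to flag the gated section with structured data so the difference between the crawler view and the visitor view is not treated as cloaking. A sketch (the headline and CSS selector are placeholders):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Room listing example",
  "isAccessibleForFree": "False",
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": "False",
    "cssSelector": ".gated-content"
  }
}
</script>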
-
Thank you, Highland, for the answer.
So, just to confirm: I understand there is no way for robots to get past authentication requirements, right? This is our main concern, as 30% of our SEO results land directly on room pages and we would not like to lose those.
We have already run the A/B tests and checked the conversion rates, and though we know we are losing some users and pushing the bounce rate higher, the sales rates are much higher (about 30%).
We are also working on improving the product and the site, but that is a completely different matter. It seems that deciding where to ask users to register is the really important thing, then.
-
You can go this route easily enough; it just requires a deliberate decision as to which content is public and which is behind your login wall. Put another way, you're going to need some public pages that explain your site, how the process works, etc. Once you've established what is necessary from a user and SEO perspective, then you can wall off your content behind a login.
You also need to experiment some with your funnel. If you present your wall on the first page after the home page, is that going to drive conversions (registrations in this case) up or down? Maybe your users are reading 3-4 pages before registering. Where is the sweet spot? A/B test. Funnel test. Be careful that you don't conclude "Hey, registrations increased sales, so we need everyone to register!" because you might hurt sales down the road if fewer people register.
Related Questions
-
Google Search Console: user-declared canonical is actually the hreflang tag
Hey, we recently launched a US version of a UK-based ecommerce website on the us.example.com subdomain. Both websites are on Shopify, so canonical tags are handled automatically, and we have implemented hreflang tags across both websites. Suddenly our rankings in the UK have dropped, and after looking in Search Console for the UK site I've found that a lot of pages are no longer indexed in Google because the user-declared canonical is the hreflang URL for the US site. Below is an example for the product page https://www.example.com/products/pac-man-arcade-cabinet, which is also the canonical URL:
<link rel="alternate" href="https://www.example.com/products/pac-man-arcade-cabinet" hreflang="en-gb" /> (UK hreflang tag)
<link rel="alternate" href="https://us.example.com/products/pac-man-arcade-cabinet" hreflang="en-us" /> (US hreflang tag)
In Google Search Console the user-declared canonical is https://us.example.com/products/pac-man-arcade-cabinet, but it should be https://www.example.com/products/pac-man-arcade-cabinet. The UK website has been set to target the United Kingdom in Search Console and the US website to target the United States. We also do not have access to the robots.txt file, unfortunately. Any help or insight would be greatly appreciated.
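For reference, the pattern each product page would normally carry is a self-referencing canonical alongside both hreflang annotations; on the UK page that would be:

<link rel="canonical" href="https://www.example.com/products/pac-man-arcade-cabinet" />
<link rel="alternate" href="https://www.example.com/products/pac-man-arcade-cabinet" hreflang="en-gb" />
<link rel="alternate" href="https://us.example.com/products/pac-man-arcade-cabinet" hreflang="en-us" />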
-
Robots file set up
The robots file looks like it has been set up in a very messy way.
I understand the # will comment out a line - does this mean the sitemap would not be picked up?
Disallow: /js/ - should this be allowed instead, like /*.js$?
Disallow: /media/wysiwyg/ - this seems to be causing alerts in Webmaster Tools, as it cannot access the images within.
Can anyone help me clean this up, please? Here is the file:

#Sitemap: https://examplesite.com/sitemap.xml

## Crawlers Setup
User-agent: *
Crawl-delay: 10

## Allowable Index
# Mind that Allow is not an official standard
Allow: /index.php/blog/
Allow: /catalog/seo_sitemap/category/
Allow: /catalogsearch/result/
Allow: /media/catalog/

## Directories
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /errors/
Disallow: /includes/
Disallow: /js/
Disallow: /lib/
Disallow: /magento/
Disallow: /media/
Disallow: /media/captcha/
Disallow: /media/catalog/
#Disallow: /media/css/
#Disallow: /media/css_secure/
Disallow: /media/customer/
Disallow: /media/dhl/
Disallow: /media/downloadable/
Disallow: /media/import/
#Disallow: /media/js/
Disallow: /media/pdf/
Disallow: /media/sales/
Disallow: /media/tmp/
Disallow: /media/wysiwyg/
Disallow: /media/xmlconnect/
Disallow: /pkginfo/
Disallow: /report/
Disallow: /scripts/
Disallow: /shell/
#Disallow: /skin/
Disallow: /stats/
Disallow: /var/

## Paths (clean URLs)
Disallow: /index.php/
Disallow: /catalog/product_compare/
Disallow: /catalog/category/view/
Disallow: /catalog/product/view/
Disallow: /catalog/product/gallery/
Disallow: */catalog/product/upload/
Disallow: /catalogsearch/
Disallow: /checkout/
Disallow: /control/
Disallow: /contacts/
Disallow: /customer/
Disallow: /customize/
Disallow: /newsletter/
Disallow: /poll/
Disallow: /review/
Disallow: /sendfriend/
Disallow: /tag/
Disallow: /wishlist/

## Files
Disallow: /cron.php
Disallow: /cron.sh
Disallow: /error_log
Disallow: /install.php
Disallow: /LICENSE.html
Disallow: /LICENSE.txt
Disallow: /LICENSE_AFL.txt
Disallow: /STATUS.txt
Disallow: /get.php # Magento 1.5+

## Paths (no clean URLs)
#Disallow: /*.js$
#Disallow: /*.css$
Disallow: /*.php$
Disallow: /*?SID=
Disallow: /rss*
Disallow: /*PHPSESSID
Disallow: /:
Disallow: /😘

User-agent: Fatbot
Disallow: /

User-agent: TwengaBot-2.0
Disallow: /
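As a starting point for the cleanup, a trimmed sketch addressing the three questions above (assuming Magento 1.x defaults; paths kept from the file above, not a drop-in replacement):

Sitemap: https://examplesite.com/sitemap.xml

User-agent: *
Crawl-delay: 10
Disallow: /app/
Disallow: /var/
Disallow: /includes/
Disallow: /checkout/
Disallow: /customer/
Disallow: /catalogsearch/
Disallow: /wishlist/
Disallow: /*?SID=

The Sitemap line only counts once the leading # is removed, and dropping the Disallow rules for /js/, /skin/ and /media/wysiwyg/ lets crawlers render pages and fetch the images Webmaster Tools is alerting about.
-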
Duplicate content on user queries
Our website supports a unique business industry where our users come to us looking for something very specific (a very specific product name) to find out where they can get it. The problem we're facing is that the products are constantly changing due to the industry. One month a product might be found on our website, the next it might be removed completely... and then it might come back again a couple of months later. All things that are completely out of our control - and we have no way of receiving any sort of warning when these things might happen.

Because of this, we're seeing a lot of duplicate content issues arise. For example: Product A is not active today, so www.mysite.com/search/productA will return no results... Product B is also not active today, so www.mysite.com/search/productB will also return no results. As per Moz Analytics, these show up as duplicate content because both pages say "No results were found for {your searched term}."

Unfortunately, it's a bit difficult to return a 204 in these situations (and I don't know if a 204 would help anyway) or a 404, because, for a faster user experience, we simultaneously render different sections of the page. At the very beginning of the page load we start rendering the faster content (template-type content), which means "returning a 200 code, we got the query successfully & we're loading the page"... and the unique content results finish loading last, since they take the longest.

I'm still very new to the SEO world, so I would greatly appreciate any ideas or suggestions that might help with this... I'm stuck. 😛 Thanks in advance!
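One common pattern, sketched below in Flask (assuming the product lookup can run before the response starts streaming; lookup_product is a hypothetical data-layer call), is to keep the 200 response but send a noindex header on empty result pages, so the identical "no results" pages drop out of the index:

from flask import Flask, make_response, render_template

app = Flask(__name__)

@app.route("/search/<product_name>")
def search(product_name):
    results = lookup_product(product_name)  # hypothetical data-layer call
    response = make_response(render_template("search.html", results=results))
    if not results:
        # Headers must be sent before the body is streamed, so this
        # only works if the lookup happens up front.
        response.headers["X-Robots-Tag"] = "noindex"
    return response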
-
I have a mobile version and a standard version of my website. I'd like to show users some pages on the non-mobile site but keep Googlebot Mobile out. Is that OK?
On the mobile version, not all the content of the normal site is available to users. Since we didn't want Googlebot Mobile to index the non-mobile site, all the non-existent pages were returned with a 404 error. But now we'd like to show mobile users these pages and send them to the normal site. If we allow users to see these pages, is it OK to block Googlebot Mobile so these non-mobile pages are not indexed by it, or will that create issues with Google?
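A robots.txt sketch of the idea, with /desktop-only/ standing in for wherever those pages live (Googlebot-Mobile being the mobile crawler's user-agent token of that era):

# Keep Google's mobile crawler out of the desktop-only pages
# while leaving them open to users and other crawlers.
User-agent: Googlebot-Mobile
Disallow: /desktop-only/

User-agent: *
Disallow: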
-
RegEx help needed for robots.txt potential conflict
I've created a robots.txt file for a new Magento install, starting from an existing example posted on the Magento help forums, but there is one thing I can't decipher. It seems that I am both allowing and disallowing access to the same expression for pagination. My robots.txt file (like a lot of other Magento robots.txt files, it seems) includes both:

Allow: /*?p=
Disallow: /*?p=*&

I've searched for help on regex and I can't see what the "&" does, but it seems to me that I'm allowing crawler access to all pagination URLs, and then possibly disallowing access to all pagination URLs that include anything other than just the page number? Can anyone shed any light on this, to ensure I am allowing suitable access to a shop? Thanks in advance for any assistance.
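Assuming the second rule is indeed the common Magento pattern Disallow: /*?p=*&, the "&" has no special meaning in robots.txt (only * and $ do); it is a literal ampersand, so the Disallow only matches pagination URLs carrying a second query parameter. A small self-contained Python sketch of Google-style wildcard matching makes the difference visible:

import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Google-style robots.txt matching: '*' matches any run of
    characters, a trailing '$' anchors the match to the end of the
    URL, and everything else is a literal prefix match."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("*"))
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None

print(robots_pattern_matches("/*?p=", "/shoes?p=2"))            # True: Allow matches plain pagination
print(robots_pattern_matches("/*?p=*&", "/shoes?p=2"))          # False: no extra parameter, not disallowed
print(robots_pattern_matches("/*?p=*&", "/shoes?p=2&dir=asc"))  # True: pagination plus another parameter

So plain paginated URLs stay crawlable, while pagination combined with sorting or filtering parameters is blocked.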
-
User Created Subdomain Help
Have I searched the FAQ: Yes. My issue is unique because of the way our website works, and I hope that someone can provide some guidance on this. Our website, http://breezi.com, is a website builder where users can build their own website. When a user builds a site, it is created on a subdomain, for example: http://mike.breezi.com. Now that I have explained how our site works, here is the problem: Google Webmaster Tools and Bing Webmaster Tools are indexing ALL the user-created websites under our domain, and thus it is our impression that any content created on those subdomains could confuse the search engines into thinking that the user-created websites and content are relevant to our main site. So, we would like to know if there is a way to let search engines know that the user-created sites and content are not related to our main site, http://breezi.com. Thanks for any help and advice.
-
How to add a disclaimer to a site but keep the content accessible to search robots?
Hi, I have a client with a site regulated by the UK FSA (Financial Services Authority). They have to display a disclaimer which visitors must accept before browsing. This is for real, not like the EU cookie compliance debacle 🙂 Currently the site 302-redirects anyone not already cookied (as having accepted) to a disclaimer page/form. Do you have any suggestions or examples of how to require acceptance while keeping the content accessible to search robots? I'm not sure just using a jQuery lightbox would meet the FSA's requirements, as it wouldn't be shown if JS were not enabled. Thanks, -Jason
-
Using robots.txt to deal with duplicate content
I have two sites with duplicate content issues. One is a WordPress blog. The other is a store (Pinnacle Cart). I cannot edit the canonical tag on either site. In this case, should I use robots.txt to eliminate the duplicate content?