Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies.
Googlebot on paywall made with cookies and local storage
-
My question is about paywalls made with cookies and local storage. We are changing a website with free content to an open paywall with a limit of five article views per week.
The paywall works with cookies and local storage: article views are stored in local storage, but you have to have cookies enabled in order to read the free articles. If you don't have cookies enabled, we serve an error page instead (otherwise the paywall would be easy to bypass).
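For illustration only, here is a minimal client-side sketch of the kind of metered paywall described above, written in TypeScript. The weekly limit, the storage key, the week-boundary logic, and the cookie check are hypothetical assumptions, not the site's actual implementation.

```typescript
// Minimal sketch of a weekly metered paywall enforced in the browser.
// All names and thresholds below are illustrative assumptions.

const WEEKLY_LIMIT = 5;
const STORAGE_KEY = "articleViews";

interface ViewRecord {
  weekStart: number; // epoch ms of the start of the current tracking week
  count: number;     // article views recorded in that week
}

function startOfWeek(now: Date): number {
  const d = new Date(now);
  d.setHours(0, 0, 0, 0);
  d.setDate(d.getDate() - d.getDay()); // roll back to Sunday
  return d.getTime();
}

// Returns "show" to render the article, "paywall" when the limit is hit,
// or "error" when cookies are disabled and the counter can't be trusted.
function recordArticleView(): "show" | "paywall" | "error" {
  if (!navigator.cookieEnabled) return "error";

  const week = startOfWeek(new Date());
  let record: ViewRecord = { weekStart: week, count: 0 };

  const raw = localStorage.getItem(STORAGE_KEY);
  if (raw) {
    const parsed = JSON.parse(raw) as ViewRecord;
    if (parsed.weekStart === week) record = parsed; // same week: keep the count
  }

  if (record.count >= WEEKLY_LIMIT) return "paywall";

  record.count += 1;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(record));
  return "show";
}
```

Because everything here runs in the visitor's browser, a client that never persists cookies or local storage data would simply never accumulate views, which is consistent with the behaviour reported for Googlebot further down the thread.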
Can you tell us how this affects SEO? We would still like Google to index all the article pages it indexes now.
Would it be cloaking if we treated Googlebot differently, so that it could still index the page even without cookies enabled?
-
Thank you for your answer!
Yes, that is exactly the case.
We have been testing this, and it seems that Googlebot doesn't hit the wall at all with the normal settings on. Based on these results, we don't appear to need to treat Googlebot differently, because it doesn't seem to store any cookie or local storage data.
Tech-savvy users can bypass the paywall by other means as well, so that's not a big concern for us.
-
To make sure I'm understanding your question correctly: you want Google to crawl and index all your content, but you want visitors to go through an open paywall that shows five free articles and then blocks further access.
Yes, it would be treated as cloaking, but you have a legitimate reason for doing so, and intent matters a great deal. You could check for a search engine user-agent string such as "Googlebot" and serve the full content when it matches. This would ensure that your content is still crawled and indexed.
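As a rough illustration of that approach, here is a server-side sketch in TypeScript, assuming an Express-style Node server. The route, the crawler pattern, and the renderArticle helper are hypothetical stand-ins, not the site's actual code.

```typescript
import express from "express";

const app = express();

// Illustrative crawler check based purely on the User-Agent request header.
const CRAWLER_PATTERN = /googlebot/i;

app.get("/articles/:slug", (req, res) => {
  const userAgent = req.get("User-Agent") ?? "";
  const isCrawler = CRAWLER_PATTERN.test(userAgent);

  // Crawlers get the full article without the client-side metering script,
  // so the content can be crawled and indexed; regular visitors get the
  // same article plus the script that enforces the weekly limit.
  res.send(renderArticle(req.params.slug, { includePaywallScript: !isCrawler }));
});

// Hypothetical rendering helper, included only to keep the sketch self-contained.
function renderArticle(slug: string, opts: { includePaywallScript: boolean }): string {
  const script = opts.includePaywallScript
    ? '<script src="/paywall.js"></script>'
    : "";
  return `<html><body><article data-slug="${slug}">Full article text</article>${script}</body></html>`;
}

app.listen(3000);
```

Note that this only keys off the User-Agent string, which is exactly the weakness described in the next sentence: anyone can send that header.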
The only downside is that any tech-savvy individual can spoof the request header by setting their user-agent to "Googlebot" and bypass your paywall.
Related Questions
-
How to get local search volumes?
Hi Guys, I want to get search volumes for "carpet cleaning" for certain areas in Sydney, Australia. I'm using this process:
1. Choose 'Search for new keyword and ad group ideas'.
2. Enter the main keywords for your product / service.
3. Remove any default country targeting.
4. Specify your chosen location(s) by targeting specific cities / regions.
5. Click 'Get ideas'.
The problem is that none of the areas, even popular ones (like North Sydney, Surry Hills, Newtown, Manly), are appearing in the Google keyword tool: no matches. Are there any other tools or sources of data I can use to get accurate search volumes for these areas? Any recommendations would be very much appreciated. Cheers
Intermediate & Advanced SEO | wozniak650
-
What's the best possible URL structure for a local search engine?
Hi Mozzers, I'm working at AskMe.com, which is a local search engine in India, i.e. if you're standing somewhere and looking for pizza joints nearby, we pick up your current location and share the list of pizza outlets nearby along with ratings, reviews, etc. about these outlets. Right now, our URL structure looks like www.askme.com/delhi/pizza-outlets for the city-specific category pages (here, "Delhi" is the city name and "Pizza Outlets" is the category) and www.askme.com/delhi/pizza-outlets/in/saket for a category page in a particular area (here "Saket") within a city. The URL looks a little different if you're searching for something which is not a category (queries that are mapped to a category get 301-redirected to the category page): it looks like www.askme.com/delhi/search/pizza-huts/in/saket if you're searching for pizza huts in Saket, Delhi, as "pizza huts" is neither a category nor mapped to any category. We're also dealing in ads and deals, along with our very own e-commerce brand AskMeBazaar.com, to provide a better user experience and a one-stop shop for our customers. Now we're working on a URL restructuring project, and my question to you all SEO rockstars is: what would be the best possible URL structure for us? Assume we have kick-ass developers who can manage any given URL structure on the backend.
Intermediate & Advanced SEO | _nitman0
-
Would you rate-control Googlebot? How much crawling is too much crawling?
One of our sites is very large - over 500M pages. Google has indexed 1/8th of the site, and they tend to crawl between 800k and 1M pages per day. A few times a year, Google will significantly increase their crawl rate, overnight hitting 2M pages per day or more. This creates big problems for us, because at 1M pages per day Google is consuming 70% of our API capacity, and the API overall is at 90% capacity. At 2M pages per day, 20% of our page requests are 500 errors. I've lobbied for an investment / overhaul of the API configuration to allow for more Google bandwidth without compromising user experience. My tech team counters that it's a wasted investment, as Google will crawl to our capacity, whatever that capacity is. Questions for Enterprise SEOs:
* Is there any validity to the tech team's claim? I thought Google's crawl rate was based on a combination of PageRank and the frequency of page updates. That suggests there is some upper limit - which we perhaps haven't reached - but which would stabilize once reached.
* We've asked Google to rate-limit our crawl rate in the past. Is that harmful? I've always looked at a robust crawl rate as a good problem to have. Is 1.5M Googlebot API calls a day desirable, or something any reasonable Enterprise SEO would seek to throttle back?
* What about setting a longer refresh rate in the sitemaps? Would that reduce the daily crawl demand? We could increase it to a month, but at 500M pages Google could still have a ball at the 2M pages/day rate.
Thanks
Intermediate & Advanced SEO | lzhao0
-
Why would our server return a 301 status code when Googlebot visits from one IP, but a 200 from a different IP?
I have begun a daily process of analyzing a site's Web server log files and have noticed something that seems odd. There are several IP addresses from which Googlebot crawls that our server returns a 301 status code for every request, consistently, day after day. In nearly all cases, these are not URLs that should 301. When Googlebot visits from other IP addresses, the exact same pages are returned with a 200 status code. Is this normal? If so, why? If not, why not? I am concerned that our server returning an inaccurate status code is interfering with the site being effectively crawled as quickly and as often as it might be if this weren't happening. Thanks guys!
Intermediate & Advanced SEO | danatanseo0
-
How to optimize for local when client has a Regus office?
Anyone know how to optimize for local when a client has a Regus office? I heard it doesn't work so well because the offices are temporary and so many businesses have used the same exact address over and over. True? Any way around it? Thanks!!
Intermediate & Advanced SEO | BBuck0
-
Can MadCap Flare WebHelp be made SEO Friendly?
A team member is porting over documentation from a .org wiki that will be placed on the company's root domain. The problem with MadCap is that it uses frames as well as javascript navigation. Has anyone encountered this problem before? I'm unfamiliar with the software and the project is pretty far into the pipeline at this point (I'm new at the company as well). Any advice on work-arounds or alternatives would be greatly appreciated.
Intermediate & Advanced SEO | AnthonyYoung1
-
Robots.txt is blocking WordPress pages from Googlebot?
I have a robots.txt file on my server which I did not develop; it was created by the web designer at the company before me. There is also a WordPress plugin that generates a robots.txt file. How do I unblock all the WordPress pages from Googlebot?
Intermediate & Advanced SEO | ENSO0
-
800 Number vs. Local Phone
I have a client with multiple locations throughout the US. They are currently using different 800 numbers on their site for their different locations. As they try to optimize their local presence by submitting to local directories, we are trying to determine two things:
1. Does having a local number reroute to an 800 number devalue the significance of it being a local number? (I've never heard of this, but someone told them it did.)
2. Locality and consistency are important. Assuming they can't remove the 800 numbers from the site, are they better off keeping the 800 numbers on their site and using local numbers everywhere else online, OR just using the 800 numbers for all of their local listings?
Intermediate & Advanced SEO | Caleone0