How to allow Googlebot past a paywall
-
Does anyone know of any ways or ideas to allow Google/Bing etc. to index your content, but have it behind a paywall for users?
-
Thanks Mark,
I have been researching this idea from Google, but it is only for Google News and not Google Web Search.
Also, users would be able to jump the paywall by returning to Google News and searching for more links through to the site.
-
Google has a program called First Click Free: basically, you need to allow Googlebot, along with users, to view the first full article they land on. So if you have multi-page articles, you need to give them access to the entire article. After that, though, the rest of the content can sit behind the paywall.
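In practice the "first click" is usually granted by checking the HTTP Referer header. A minimal sketch of that logic, assuming a hypothetical `SEARCH_REFERRERS` allowlist and an `is_subscriber` flag (the names are illustrative, not Google's or any real API):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of search hosts whose visitors get the first article free.
SEARCH_REFERRERS = {"www.google.com", "news.google.com", "www.bing.com"}

def grant_first_click(referer, is_subscriber):
    """Decide whether to show the full article instead of the paywall."""
    if is_subscriber:
        return True   # paying users always get full access
    if not referer:
        return False  # direct visits hit the paywall
    host = urlparse(referer).netloc.lower()
    return host in SEARCH_REFERRERS  # first click from a search results page is free
```

A subsequent click within the site then hits the paywall, which matches the program's "first click" rule. Note that the Referer header can be spoofed, so a scheme like this is leaky by design.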
You can read more about it here - http://support.google.com/webmasters/bin/answer.py?hl=en&answer=74536
And here are the technical guidelines for implementation - http://support.google.com/news/publisher/bin/answer.py?hl=en&answer=40543
Hope this helps,
Mark
-
Not possible. Google's not going to index something that is not accessible to everyone.
Related Questions
-
Googlebots and cache
Our site checks whether visitors are resident in the same country or live abroad. If it recognises that the visitor comes from abroad, the content is made more appropriate for them. Basically, instead of encouraging the visitor to come and visit a showroom, it tells them that we export worldwide. It does this by IP checking. So far so good! But I noticed that if I look at cached pages in Google's results, that the cached pages are all export pages. I've also used Google Webmaster Tools (Search Console) and rendered pages as Google - and they also render export pages. Does anybody have a solution to this?
Is it a problem? Can Google see the proper (local, as in UK) version of the site?
Technical SEO | pulcinella2uk
-
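This is the expected outcome: Googlebot crawls almost entirely from US IP addresses, so an IP-based check hands it (and its cache) the export variant. A minimal sketch of the selection logic the question describes, with hypothetical names (`choose_variant`, the country codes) and the real GeoIP lookup left out:

```python
def choose_variant(visitor_country, home_country="GB"):
    """Pick the page variant from a country code (as a GeoIP lookup would return)."""
    if visitor_country == home_country:
        return "local"   # invite the visitor to the showroom
    return "export"      # "we export worldwide" messaging
```

Since Googlebot resolves to US addresses, `choose_variant("US")` is what gets crawled and cached. Serving Googlebot something different from other visitors on the same IPs would be cloaking, so the usual fix is hreflang/geotargeting signals rather than special-casing the bot.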
Googlebot cannot access your site
Hello, I have a website http://www.fivestarstoneinc.com/ and earlier today I got an email from Webmaster Tools saying "Googlebot cannot access your site". Wondering what the problem could be and how to fix it.
Technical SEO | Rank-and-Grow
-
One server, two domains - robots.txt allow for one domain but not other?
Hello, I would like to create a single server with two domains pointing to it. Ex: domain1.com -> myserver.com/, domain2.com -> myserver.com/subfolder. The goal is to create two separate sites on one server. I would like the second domain (/subfolder) to be fully indexed / SEO friendly and have its robots.txt file allow search bots to crawl. However, the first domain (server root) I would like to keep non-indexed, with its robots.txt file disallowing all bots / indexing. Does anyone have any suggestions for the best way to tackle this one? Thanks!
Technical SEO | Dave100
-
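Since crawlers always fetch robots.txt from the root of whichever hostname they are on, each domain has to answer /robots.txt with its own file. A minimal sketch of per-host dispatch (the hostnames and the `robots_txt_for` helper are illustrative; in practice this would be a rewrite rule or a tiny view in your web framework):

```python
# Map each hostname to the robots.txt body it should serve.
ROBOTS = {
    "domain2.com": "User-agent: *\nDisallow:\n",    # subfolder site: allow everything
    "domain1.com": "User-agent: *\nDisallow: /\n",  # server root: block everything
}

def robots_txt_for(host):
    """Return the robots.txt body for the requesting Host header."""
    host = host.lower()
    if host.startswith("www."):
        host = host[4:]
    # Unknown hosts fall back to the restrictive file.
    return ROBOTS.get(host, "User-agent: *\nDisallow: /\n")
```

On Apache or nginx this is usually a conditional rewrite of /robots.txt to a per-host file rather than application code, but the dispatch idea is the same.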
SEOMoz Crawler vs Googlebot Question
I read somewhere that SEOMoz’s crawler marks a page in its Crawl Diagnostics as duplicate content if it doesn’t have more than 5% unique content.(I can’t find that statistic anywhere on SEOMoz to confirm though). We are an eCommerce site, so many of our pages share the same sidebar, header, and footer links. The pages flagged by SEOMoz as duplicates have these same links, but they have unique URLs and category names. Because they’re not actual duplicates of each other, canonical tags aren’t the answer. Also because inventory might automatically come back in stock, we can’t use 301 redirects on these “duplicate” pages. It seems like it’s the sidebar, header, and footer links that are what’s causing these pages to be flagged as duplicates. Does the SEOMoz crawler mimic the way Googlebot works? Also, is Googlebot smart enough not to count the sidebar and header/footer links when looking for duplicate content?
Technical SEO | ElDude
-
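I can't confirm Moz's exact threshold either, but the underlying idea — how much of a page's text survives after subtracting what it shares with another page — can be sketched like this (the function name and the use of `difflib` are my own illustration, not Moz's or Google's actual algorithm):

```python
import difflib

def unique_content_ratio(page_text, other_text):
    """Fraction of page_text that is NOT matched anywhere in other_text."""
    matcher = difflib.SequenceMatcher(None, page_text, other_text)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 1 - matched / max(len(page_text), 1)
```

Two category pages sharing the same header, sidebar, and footer but differing only in a short title would score very low on a naive diff like this, which is why template-heavy pages trip duplicate flags; Google is generally better at discounting boilerplate navigation than a plain text comparison.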
Why is either Rogerbot or (if it is the case) Googlebots not recognizing keyword usage in my body text?
I have a client that does liposuction as one of their main services. They have been ranked in the top 1-5 for their keyword "sarasota liposuction" (with different variations of the words) for a long time, and suddenly have dropped about 10-12 places down to #15 in the engine. I went to investigate this and came to the "on-page analysis" tool in SEOmoz Pro, where oddly enough it says there is no mention of the target keyword in the body content (on-page analysis tool screenshot attached). I didn't quite understand why it would not recognize the obvious keywords in the body text, so I went back to the page and inspected further. The keywords have an odd featured link that points to an internally hosted keyword glossary with definitions of terms that people might not know. These definitions pop up in a lightbox upon clicking the keyword (lightbox screenshots attached). I have no idea why Google would not recognize these words, as the text sits between the link tags; yet if there is something wrong with the code syntax etc., it might possibly hinder the engine from seeing the body text of the link. Any help would be greatly appreciated! Thank you so much!
Technical SEO | jbster13
-
Does no preferred domain allow interlinking spammers to double their output?
Doing research on a new client's links, I found 151 linking root domains, all from the same interlinking scam. Here are the duplicated domains for one site (not my client) in this scheme: DrainageHouston.com, patiosandponds.net, patioshouston.net, houstonlandscape.org, drainagehouston.com, houstonoutdoorlighting.com. I have attached an image showing these in OSE, with each having a www and a non-www version linking to the site. Note: I found this by checking other domains they owned that they did not know had sites on them. They literally were all cloned with other domain names. We took the three additional sites and did 301 redirects from those to the main site. Since there were only three additional sites and only about 30 pages each, I do not see a problem with the redirects. So the question is: by doing this without a preferred domain and a 301 in .htaccess from non-www to www, is he able to double his dubious enterprise?
Technical SEO | RobertFisher
-
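For reference, the usual non-www to www 301 in .htaccess looks like this (example.com is a placeholder; mod_rewrite must be enabled):

```apache
RewriteEngine On
# Redirect bare-domain requests to the www host with a permanent 301.
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

With that in place, both hostnames of each linking domain collapse to one canonical host, so the apparent link count in tools like OSE is at least halved — though Google already consolidates most www/non-www duplication on its own, so the "doubling" is mostly cosmetic.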
How to block URLs with specific components from Googlebot
Hello, I have around 100,000 error pages showing in Google Webmaster Tools. I want to block specific components like com_fireboard, com_seyret, com_profiler etc. A few examples: http://www.toycollector.com/videos/generatersslinks/index.php?option=com_fireboard&Itemid=824&func=view&catid=123&id=16494 http://www.toycollector.com/index.php?option=com_content&view=article&id=6932:tomica-limited-nissan-skyline-r34--nissan-skyline-gt-r-r34-vspec&catid=231&Itemid=634 I tried blocking using robots.txt, just using this:
Disallow: /com_fireboard/
Disallow: /com_seyret/
But it's not working. Can anyone suggest how to solve this problem? Many thanks, Shradda
Technical SEO | TheMartingale
-
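Those rules fail because the component names appear in the query string, not as path prefixes, so `Disallow: /com_fireboard/` never matches the example URLs. Googlebot (though not every crawler) supports `*` wildcards in robots.txt, so a sketch of rules for URLs like the ones above might look like:

```
User-agent: Googlebot
Disallow: /*option=com_fireboard
Disallow: /*option=com_seyret
Disallow: /*option=com_profiler
```

Worth verifying against real URLs in Webmaster Tools' robots.txt tester before deploying, since wildcard matching is a Google extension rather than part of the original robots.txt standard.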
Blocking URLs with specific parameters from Googlebot
Hi, I've discovered that Googlebots are voting on products listed on our website and as a result are creating negative ratings by placing votes from 1 to 5 for every product. The voting function is handled using JavaScript, as shown below, and the script prevents multiple votes, so most products end up with a vote of 1, which translates to "poor". How do I go about using robots.txt to block a URL with specific parameters only? I'm worried that I might end up blocking the whole product listing, which would result in de-listing from Google and the loss of many highly ranked pages. DON'T want to block: http://www.mysite.com/product.php?productid=1234 WANT to block: http://www.mysite.com/product.php?mode=vote&productid=1234&vote=2 JavaScript button code: onclick="javascript: document.voteform.submit();" Thanks in advance for any advice given. Regards, Asim
Technical SEO | aethereal
-
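A plain prefix rule is enough here, because every vote URL starts with the same path-plus-parameter. A sketch that uses Python's standard robotparser to sanity-check the proposed rule against both URLs from the question (the rule itself is the suggestion; mysite.com is the question's placeholder):

```python
import urllib.robotparser

# Proposed robots.txt: block only the vote action, leave product pages crawlable.
RULES = """\
User-agent: *
Disallow: /product.php?mode=vote
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(RULES.splitlines())

vote_url = "http://www.mysite.com/product.php?mode=vote&productid=1234&vote=2"
product_url = "http://www.mysite.com/product.php?productid=1234"

print(parser.can_fetch("*", vote_url))     # vote URLs are disallowed
print(parser.can_fetch("*", product_url))  # product pages stay crawlable
```

The Disallow value is matched as a prefix against the path plus query string, so only URLs beginning with `/product.php?mode=vote` are blocked, while `/product.php?productid=...` pages remain crawlable.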