Premium Content
-
Hey Guys
I'm working on a site that publishes hundreds of new pieces of content a day, and part of that content is only available to all users for 30 days. After 30 days the content is only accessible to premium users.
At that point, the page removes the content and replaces it with a log in / sign up option. Each page keeps the same URL and article title.
I have two concerns about this method:
- Is it healthy for the site to be removing tons of content from live pages and replacing it with a log in option?
- Should I worry about Panda for creating tons of pages with unique URLs but very similar source/content (the log in module and the text explaining that the article is only available to premium users)? The site is pretty big, so Google has some tolerance for what we can get away with.
Should I add a noindex attribute to those pages after 30 days, even though it can take months until Google actually removes them from the index?
Is there a proper way to implement this kind of feature on sites that add a log in requirement after a period of time (First Click Free is not an option)?
Thanks Guys and I appreciate any help!
-
Could you show the beginning of the article after the 30 days are up? Similar to what New Scientist does: http://www.newscientist.com/article/mg22229720.600-memory-implants-chips-to-fix-broken-brains.html
-
I don't think "forcing" users to create an account or sign up like this is the best option.
First, I think you may run into an issue down the line when Google comes back to reindex a page that is now completely different or empty. As a solution, I would use JavaScript or another method to load the form you want the user to fill out as an overlay, with no way to close it and view the content beneath. Once they see the popup, have a line at the bottom of the box that says something light like:
"Oops! Looks like your 30 trial subscription has run out. This content is only available to premium users. To sign up for a premium account, please fill out the form fields above. If you do not wish to upgrade your account at this time, click here to return to the home page."
Another option would be to allow restricted access to certain features, so that people actually want to sign up because it provides more value.
"The site is pretty big so google has some tolerance of things we can get away with it."
I doubt that. Google is not going to play favorites with a site due to the number of pages or unique URLs. Did someone at Google tell you this? If not, then no, no, no. Chances are, you are going to have more non-registered or non-premium users trying to access your content than registered ones, which increases the number of pages that carry the deleted or non-viewable content. Also, if you have that many pages, why risk losing them from the index?
-
One thing you can do is add an unavailable_after robots meta tag to the pages. This tells Google to drop the pages from the index after a certain date. I don't think it is unhealthy; to be honest, it is the way a lot of sites work.
As for the thin content, you could send a 410 status code when someone opens a page after it has gone behind the paywall, while still displaying the login page that tells them what is happening. Google should take note of that.
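To make both of those suggestions concrete, here is a rough sketch in TypeScript with Express. The stack, route, and helpers (lookUpArticle, isPremiumUser) are all assumptions for illustration, since the thread doesn't say what the site runs on; Google accepts the unavailable_after directive either as a robots meta tag in the page head or, as shown here, via the X-Robots-Tag HTTP header.

```typescript
import express from "express";

const app = express();
const FREE_WINDOW_MS = 30 * 24 * 60 * 60 * 1000; // the 30-day free window

// Hypothetical article record; in practice this would come from your CMS.
interface Article {
  title: string;
  publishedAt: Date;
  body: string;
}

// Stubs for illustration only.
function lookUpArticle(slug: string): Article {
  return { title: slug, publishedAt: new Date("2014-05-01"), body: "<p>...</p>" };
}
function isPremiumUser(req: express.Request): boolean {
  return Boolean(req.headers.cookie?.includes("premium=1"));
}

app.get("/articles/:slug", (req, res) => {
  const article = lookUpArticle(req.params.slug);
  const expiresAt = new Date(article.publishedAt.getTime() + FREE_WINDOW_MS);

  // Equivalent of <meta name="robots" content="unavailable_after: ...">,
  // sent as a header: asks Google to drop the page from the index after that date.
  res.set("X-Robots-Tag", `unavailable_after: ${expiresAt.toUTCString()}`);

  if (Date.now() > expiresAt.getTime() && !isPremiumUser(req)) {
    // 410 signals the free version is permanently gone, while the body still
    // shows human visitors the title, the explanation, and the login prompt.
    return res
      .status(410)
      .send(`<h1>${article.title}</h1>
             <p>This article is now available to premium subscribers only.
                Please log in or sign up to keep reading.</p>`);
  }

  res.send(article.body);
});

app.listen(3000);
```

Once the 410 is being served the unavailable_after header is largely redundant for that page, but it gives Google an explicit drop date for articles still inside the free window.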
Here is a video that was posted yesterday on another question; around the 35-minute mark, Matt Cutts addresses this question.
Related Questions
-
Pages with Duplicate Content
When I crawl my site through Moz, it shows lots of pages with duplicate content. The thing is, all of those pages are pagination pages. How should I solve this issue?
Technical SEO | 100offdeal
-
Content in Accordion doesn't rank as well as Content in Text box?
Does content rank better in a full-view text layout rather than in a clickable accordion? I read somewhere that because users need to click into an accordion, it may not rank as well, as it may be considered hidden on the page. Is this true? Accordion example (see the features section): https://www.workday.com/en-us/applications/student.html
Technical SEO | DigitalCRO
-
Not ranking - Scraped content
Hi, I have a problem with a website that I have never come up against before. The website is https://www.enallaktikidrasi.com. It has a bunch of excellent articles, good enough on-page SEO, and a medium backlink profile. However, it ranks for very, very few keywords. The major problem is that there are original articles that, when searched by their title, won't appear in the top 100 results, but they will appear on other websites that scrape them (even when those sites link back to our original article!). Also, the website has good rankings in Bing and Yahoo but not in Google. There are keywords ranking #1 in Bing but nowhere in the top 10 pages in Google. I am guessing at 3 issues:
1. Majestic shows a very low trust score (just 13). However, the website has not had any kind of penalty in the last 3 years.
2. There are many scrapers. The odd thing is that scrapers with no real value (with almost zero backlink profile) outrank our content.
3. We ran Sucuri on the website as there was a large bot attack. Is there a correlation between the bot attack and the Google results? (But why not in Bing and Yahoo too?)
It seems like Google underestimates the website when indexing it for some reason. Moreover, some of the articles are really the best around, but the keywords they target are not even within the first 30 pages... Any help? Thanks.
Technical SEO | alex33andros
-
Https Duplicate Content
My previous host was using shared SSL, and my site was also working over https, which I didn't notice previously. Now I have moved to a new server, where I don't have any SSL and my websites don't work over https. The problem is that I have found Google has indexed one of my blogs, http://www.codefear.com, with the https version too. My blog traffic is continuously dropping, I think due to this duplicate content. Now there are two results, one with the http version and another with the https version. I searched the internet and found 3 possible solutions:
1. No-index the https version
2. Use rel=canonical
3. Redirect the https versions with a 301 redirect
Now I don't know which solution is best for me, as the https version no longer works. One more thing: I don't know how to implement any of the solutions. My blog runs on WordPress. Please help me overcome this problem, and after solving this duplicate issue, do I need to send a reconsideration request to Google? Thank you
Technical SEO | RaviAhuja
-
Does Google know what footer content is?
We plan to do away with fixed footer content and make, for the most part, the content in the traditional footer area unique, just like the 'main' part of the content. This begs the question: does Google know what is footer content as opposed to main on-page content?
Technical SEO | NeilD
-
Duplicate content with same URL?
SEOmoz is saying that I have duplicate content on:
http://www.XXXX.com/content.asp?ID=ID
http://www.XXXX.com/CONTENT.ASP?ID=ID
The only difference I see in the URLs is that "content.asp" is capitalized in the second one. Should I be worried about this, or is this an issue with the SEOmoz crawl? Thanks for any help. Mike
Technical SEO | Mike.Goracke
-
Duplicate Content Errors
OK, I'm an old fat-client developer who is new to SEO, so I apologize if this is obvious. I have 4 errors in one of my campaigns: two are duplicate content and two are duplicate title. Here is the duplicate title error:
Rare Currency And Old Paper Money Values and Information.
http://www.antiquebanknotes.com/
Rare Currency And Old Paper Money Values and Information.
http://www.antiquebanknotes.com/Default.aspx
So, my question is... what do I need to do to make this right? They are the same page. In my page load for Default.aspx I have this:
this.Title = "Rare Currency And Old Paper Money Values and Information.";
And it occurs only once...
Technical SEO | Banknotes
-
Complex duplicate content question
We run a network of three local websites covering three places in close proximity. Each site has a lot of unique content (mainly news), but there is a business directory that is shared across all three sites. My plan is for the search engines to only index the directory listings for businesses actually located in the place each site is focused on, i.e. listing pages for businesses in Alderley Edge are only indexed on alderleyedge.com and businesses in Prestbury only get indexed on prestbury.com, but all businesses have a listing page on each site. What would be the most effective way to do this? I have been using rel=canonical, but Google does not always seem to honour this. Will using meta noindex tags where appropriate be the way to go? Or would changing the URL structure to include the place name and using robots.txt be a better option? As an aside, my current URL structure is along the lines of: http://dev.alderleyedge.com/directory/listing/138/the-grill-on-the-edge Would changing this have any SEO benefit? Thanks Martin
Technical SEO | mreeves