Getting Pages Requiring Login Indexed
-
Somehow certain newspapers' webpages show up in the index even though they require a login. My client has a whole section of the site that requires a login (registration is free), and we'd love to get that content indexed. The developer offered to remove the login requirement for specific user agents (e.g., Googlebot et al.). I'm afraid this might get us penalized.
Any insight?
-
My guess: it's possible, but it would be an uphill battle. The reason being Google would likely see the page as a duplicate of all the other pages on your site with a login form. Not only does Google tend to drop duplicate pages from its index (especially if they have duplicate title tags - more leeway is given the more unique elements you can place on a page), but now you face a situation where you have lots of duplicate or "thin" pages, which is juicy meat for a Panda-like penalty. Generally, you want to keep these pages out of the index, so it's a catch-22.
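As an aside, keeping pages like that out of the index usually comes down to a noindex directive. A minimal sketch (Flask purely for illustration, with made-up route and template names):

```python
from flask import Flask, make_response, render_template

app = Flask(__name__)

@app.route("/login")
def login():
    # Render the login form as usual...
    resp = make_response(render_template("login.html"))
    # ...but tell crawlers not to index it, so dozens of near-identical
    # login pages don't end up competing with each other in the index.
    resp.headers["X-Robots-Tag"] = "noindex, follow"
    return resp
```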
-
That makes sense. I am looking into whether any portion of our content can be made public in a way that would still comply with industry regulations. I am betting against it.
Does anyone know whether a page requiring login like this could feasibly rank with a strong backlink profile or a lot of quality social mentions?
-
The reason Google likes the "first click free" method is because they want the user to have a good result. They don't want users to click on a search result, then see something else on that page entirely, such as a login form.
So technically, showing one set of pages to Google and another to users is considered cloaking. It's very likely that Google will figure out what's happening - whether through manual review, human search quality raters, bounce rate, etc. - and take action against your site.
Of course, there's no guarantee this will happen, and you could argue that the cloaking wasn't done to deceive users, but the risk is high enough to warrant major consideration.
Are there any other options for displaying even part of the content, other than "first click free"? For example, can you display a snippet or a few paragraphs of the information, then require a login to see the rest? This would at least give Google something to index.
Unfortunately, most other methods for getting anything indexed without actually showing it to users would likely be considered blackhat.
Cyrus
-
Should have read the target:
"Subscription designation, snippets only: If First Click Free isn't a feasible option for you, we will display the "subscription" tag next to the publication name of all sources that greet our users with a subscription or registration form. This signals to our users that they may be required to register or subscribe on your site in order to access the article. This setting will only apply to Google News results.
If you prefer this option, please display a snippet of your article that is at least 80 words long and includes either an excerpt or a summary of the specific article. Since we do not permit "cloaking" -- the practice of showing Googlebot a full version of your article while showing users the subscription or registration version -- we will only crawl and display your content based on the article snippets you provide. If you currently cloak for Googlebot-news but not for Googlebot, you do not need to make any changes; Google News crawls with Googlebot and automatically uses the 80-word snippet.
NOTE: If you cloak for Googlebot, your site may be subject to Google Webmaster penalties. Please review Webmaster Guidelines to learn about best practices."
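For what it's worth, a rough sketch of building a snippet that clears that 80-word minimum might look something like this (purely illustrative; the function name and sentence-boundary handling are my own assumptions):

```python
def build_snippet(article_text: str, min_words: int = 80) -> str:
    """Return an excerpt of at least `min_words` words, extended to the
    end of the current sentence where possible, for use as the public
    snippet of a subscription article."""
    words = article_text.split()
    if len(words) <= min_words:
        return article_text  # short articles are shown in full

    excerpt = " ".join(words[:min_words])
    # Extend to the end of the current sentence so the snippet
    # doesn't cut off mid-thought.
    remainder = " ".join(words[min_words:])
    period = remainder.find(".")
    if period != -1:
        excerpt += " " + remainder[: period + 1]
    return excerpt
```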
-
"In order to successfully crawl your site, Google needs to be able to crawl your content without filling out a registration form. The easiest way to do this is to configure your webservers not to serve the registration page to our crawlers (when the user-agent is "Googlebot") so that Googlebot can crawl these pages successfully. You can choose to allow Googlebot access to some restricted pages but not others. More information about technical requirements."
-http://support.google.com/webmasters/bin/answer.py?hl=en&answer=74536
Any harm in doing this while not implementing the rest of First Click Free??
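For context, the kind of user-agent gating described in that quote might look roughly like this bare-bones sketch (Flask-style and purely illustrative; the template names and session check are made up, and per the warnings elsewhere in this thread, doing this outside of a program like First Click Free risks being treated as cloaking):

```python
from flask import Flask, request, render_template

app = Flask(__name__)

def is_google_crawler() -> bool:
    # Naive user-agent check. A production setup would also verify the
    # requesting IP (Google recommends a reverse DNS lookup), since
    # user-agent strings are trivially spoofed.
    return "Googlebot" in request.headers.get("User-Agent", "")

def is_logged_in() -> bool:
    # Stand-in for whatever session/auth check the site really uses.
    return "session_token" in request.cookies

@app.route("/articles/<slug>")
def article(slug):
    if is_google_crawler() or is_logged_in():
        # Crawler (or registered reader): serve the full article.
        return render_template("article.html", slug=slug)
    # Everyone else is greeted with the registration/login page.
    return render_template("register.html", next=request.path)
```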
-
What would you guys think about programming the login requirement behavior in such a way that only Google can't execute it--so Google wouldn't know that it is the only one getting through?
Not sure whether this is technically possible, but if it were, would it be theoretically likely to incur a penalty? Or is it foolish for other reasons?
-
Good idea--I'll have to determine precisely what I can and cannot show publicly and see if there isn't something I can do to leverage that.
I've heard about staying away from agent-specific content, but I wonder what the data show and whether there have been any successful attempts?
-
First click free unfortunately won't work for us.
How might I go about determining how adult content sites handle this issue?
-
Have you considered allowing only a certain proportion of each page to show to all visitors, including search engines? This way your pages will have some specific content that can be indexed and help you rank in the SERPs.
I have seen it done where publications behind a paywall only allow the first paragraph or two to show - just enough to get them ranked appropriately, but not enough to stop users wanting to register to access the full articles when they find them through the SERPs, other sites, or directly.
However, for this to work it all depends on what the regulations you mention require - would a proportion of the content being shown to everyone be OK?
I would definitely stay away from serving up different content to different users if I were you, as this is likely to end up causing you trouble in the search engines.
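If it helps, a very rough sketch of that teaser approach (plain Python, names invented for illustration) - the key point being that everyone, crawlers included, receives exactly the same teaser:

```python
def split_for_teaser(body: str, teaser_paragraphs: int = 2):
    """Split an article into a public teaser and a login-gated remainder.

    The same teaser is rendered for every visitor and crawler alike,
    so there is no user-agent detection involved.
    """
    paragraphs = [p for p in body.split("\n\n") if p.strip()]
    teaser = paragraphs[:teaser_paragraphs]
    gated = paragraphs[teaser_paragraphs:]
    return teaser, gated


article_body = "First paragraph...\n\nSecond paragraph...\n\nThe rest of the article..."
teaser, gated = split_for_teaser(article_body)
# The page template would render `teaser` unconditionally and only
# render `gated` once the reader has logged in, showing a registration
# prompt in its place otherwise.
```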
-
I believe newspapers use a feature called "first click free" that enables this to work. I don't know whether that will work with your industry regulations or not, however. You may also want to look at how sites that deal with adult content, such as liquor sites, restrict viewing yet still allow indexing.
-
Understood. The login requirement is necessary for compliance with industry regulations. My question is whether I will be penalized for serving agent-specific content and/or whether there is a better way to get these pages into the index.
-
Search engines aren't good at completing online forms (such as a login), and thus any content behind them may remain hidden, so the developer's option sounds like a good solution.
You may want to read:
http://www.seomoz.org/beginners-guide-to-seo/why-search-engine-marketing-is-necessary