Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Getting Pages Requiring Login Indexed
-
Somehow, certain newspapers' webpages show up in the index yet require a login. My client has a whole section of the site that requires a login (registration is free), and we'd love to get that content indexed. The developer offered to remove the login requirement for specific user agents (e.g. Googlebot). I'm afraid this might get us penalized.
Any insight?
-
My guess: It's possible, but it would be an uphill battle. The reason being Google would likely see the page as a duplicate of all the other pages on your site with a login form. Not only does Google tend to drop duplicate pages from its index (especially if they have duplicate title tags - more leeway is given the more unique elements you can place on a page), but now you face a situation where you have lots of duplicate or "thin" pages, which is juicy meat for a Panda-like penalty. Generally, you want to keep these pages out of the index, so it's a catch-22.
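To see how bad the duplicate-title problem already is, a quick audit script can count repeated title tags across a crawl. This is a minimal sketch; the URLs and titles below are hypothetical examples, and in practice you'd feed it the (url, title) pairs from your own crawl export.

```python
from collections import Counter

def find_duplicate_titles(pages):
    """Return the set of title tags that appear on more than one URL."""
    counts = Counter(title for _url, title in pages)
    return {title for title, n in counts.items() if n > 1}

# Hypothetical crawl data: every gated page shares one generic title.
pages = [
    ("/members/report-1", "Members Area | Example.com"),
    ("/members/report-2", "Members Area | Example.com"),
    ("/about", "About Us | Example.com"),
]
```

If most gated URLs come back with the same title, that's the duplication signal described above.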
-
That makes sense. I am looking into whether any portion of our content can be made public in a way that would still comply with industry regulations. I am betting against it.
Does anyone know whether a page requiring login like this could feasibly rank with a strong backlink profile or a lot of quality social mentions?
-
The reason Google likes the "first click free" method is because they want the user to have a good result. They don't want users to click on a search result, then see something else on that page entirely, such as a login form.
So technically, showing one set of pages to Google and another to users is considered cloaking. It's very likely that Google will figure out what's happening - whether through manual review, human search quality raters, bounce-rate signals, etc. - and take appropriate action against your site.
Of course, there's no guarantee this will happen, and you could argue that the cloaking wasn't done to deceive users, but the risk is high enough to warrant major consideration.
Are there any other options for displaying even part of the content, other than "first-click-free"? For example, can you display a snippet or few paragraphs of the information, then require login to see the rest? This at least would give Google something to index.
Unfortunately, most other methods for getting anything indexed without actually showing it to users would likely be considered blackhat.
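The snippet approach above - show everyone the same teaser, gate the rest behind login - can be sketched in a few lines. This is an illustration, not anyone's actual implementation; the word count is an arbitrary assumption.

```python
def render_article(body, logged_in, teaser_words=100):
    """Serve the full article to logged-in users; everyone else,
    crawlers included, gets the same truncated teaser. Because all
    anonymous visitors see identical content, no user-agent
    cloaking is involved."""
    if logged_in:
        return body
    words = body.split()
    if len(words) <= teaser_words:
        return body
    return " ".join(words[:teaser_words]) + " …"
```

The key design point is that the branch is on login state, never on user agent.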
Cyrus
-
Should have read the target:
"Subscription designation, snippets only: If First Click Free isn't a feasible option for you, we will display the "subscription" tag next to the publication name of all sources that greet our users with a subscription or registration form. This signals to our users that they may be required to register or subscribe on your site in order to access the article. This setting will only apply to Google News results.
If you prefer this option, please display a snippet of your article that is at least 80 words long and includes either an excerpt or a summary of the specific article. Since we do not permit "cloaking" -- the practice of showing Googlebot a full version of your article while showing users the subscription or registration version -- we will only crawl and display your content based on the article snippets you provide. If you currently cloak for Googlebot-news but not for Googlebot, you do not need to make any changes; Google News crawls with Googlebot and automatically uses the 80-word snippet.
NOTE: If you cloak for Googlebot, your site may be subject to Google Webmaster penalties. Please review Webmaster Guidelines to learn about best practices."
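Enforcing the 80-word minimum from the quoted guidelines is easy to automate. A minimal sketch (function name and error handling are my own, not anything Google specifies):

```python
MIN_SNIPPET_WORDS = 80  # minimum from the quoted Google News guidelines

def build_snippet(article_text, target_words=MIN_SNIPPET_WORDS):
    """Take the first `target_words` words of the article as the
    public snippet; fail loudly if the article is too short to
    meet the minimum."""
    words = article_text.split()
    if len(words) < target_words:
        raise ValueError(
            "article too short for a %d-word snippet" % target_words
        )
    return " ".join(words[:target_words])
```

Running this over each article before publishing would catch any page whose snippet falls below the threshold.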
-
"In order to successfully crawl your site, Google needs to be able to crawl your content without filling out a registration form. The easiest way to do this is to configure your webservers not to serve the registration page to our crawlers (when the user-agent is "Googlebot") so that Googlebot can crawl these pages successfully. You can choose to allow Googlebot access to some restricted pages but not others. More information about technical requirements."
- http://support.google.com/webmasters/bin/answer.py?hl=en&answer=74536
Any harm in doing this while not implementing the rest of First Click Free?
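For concreteness, the user-agent whitelisting that Google's help page describes would look something like this. To be clear, this is a sketch of the mechanism only - outside of a First Click Free arrangement, this is exactly the behavior the guidelines call cloaking, so it's shown as an illustration, not a recommendation.

```python
ALLOWED_BOTS = ("Googlebot",)  # per the quoted help page

def requires_login(user_agent, has_session):
    """Decide whether to show the registration wall. Logged-in
    users and whitelisted crawlers pass through; everyone else
    gets the login form."""
    if has_session:
        return False
    return not any(bot in user_agent for bot in ALLOWED_BOTS)
```

Note that matching on the user-agent string alone is also spoofable; anyone sending a Googlebot user agent would slip past the wall.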
-
What would you guys think about programming the login requirement behavior in such a way that only Google can't execute it--so Google wouldn't know that it is the only one getting through?
Not sure whether this is technically possible, but if it were, would it be theoretically likely to incur a penalty? Or is it foolish for other reasons?
-
Good idea--I'll have to determine precisely what I can and cannot show publicly and see if there isn't something I can do to leverage that.
I've heard about staying away from agent-specific content, but I wonder what the data say - have there been any successful attempts?
-
First click free unfortunately won't work for us.
How might I go about determining how adult content sites handle this issue?
-
Have you considered allowing only a certain proportion of each page to show to all visitors, including search engines? That way your pages will have some specific content that can be indexed and help you rank in the SERPs.
I have seen it done where publications behind a paywall only allow the first paragraph or two to show - just enough to get them ranked appropriately, but not enough to stop users from wanting to register to access the full articles when they find them through the SERPs, other sites, or directly.
However, for this to work, it all depends on what the regulations you mention require - would showing a proportion of the content to everyone be OK?
I would definitely stay away from serving up different content to different users if I were you, as this is likely to end up causing you trouble in the search engines.
-
I believe newspapers use a feature called "first click free" that enables this to work. I don't know whether that will work with your industry regulations, however. You may also want to look at how sites that deal with age-restricted content, such as liquor sites, restrict viewing yet still allow indexing.
-
Understood. The login requirement is necessary for compliance with industry regulations. My question is whether I will be penalized for serving agent-specific content and/or whether there is a better way to get these pages into the index.
-
Search engines aren't good at completing online forms (such as a login), and thus any content behind them may remain hidden, so the developer's option sounds like a good solution.
You may want to read:
http://www.seomoz.org/beginners-guide-to-seo/why-search-engine-marketing-is-necessary