Cloaking for better user experience and deeper indexing - grey or black?
-
I'm working on a directory that has around 800 results (image-rich results) in the top-level view. This will likely grow over time, so it needs to support thousands.
The main issue is that it is built in AJAX, so paginated pages are dynamically generated and look like duplicate content to search engines.
If we limit the results, then not all of the individual directory listing pages can be found.
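To illustrate the duplicate-content issue, here is a minimal sketch of what the AJAX setup effectively does (Python/Flask, with a hypothetical route and API endpoint, purely for illustration): every paginated URL returns the same server-rendered shell, and the listings only appear after client-side JavaScript runs.

```python
from flask import Flask

app = Flask(__name__)

# The same HTML shell is returned for /directory?page=1, ?page=2, and so on.
# The listings themselves are fetched client-side (from a hypothetical
# /api/listings?page=N endpoint), so a crawler that doesn't execute the
# JavaScript sees near-identical markup for every paginated URL.
SHELL = """
<div id="results"></div>
<script>
  // fetch('/api/listings?page=' + currentPage) and render into #results
</script>
"""

@app.route("/directory")
def directory():
    return SHELL

if __name__ == "__main__":
    app.run()
```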
I have an idea that gives both users and search engines what they want, but it uses cloaking. Is it grey hat or black hat?
I've read http://moz.com/blog/white-hat-cloaking-it-exists-its-permitted-its-useful, and none of the examples there quite apply.
To let users browse through the results (without a single page that is slow to load), we include pagination links, but these links are not shown to search engines.
This is a positive user experience.
For search engines, we display all results on a single page (since there is no limit on the number of links, so long as they are not spammy).
This requires cloaking, but it ultimately serves the same content in slightly different ways.
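As a rough sketch of the idea (Flask again; the hard-coded crawler signatures and in-memory listings below are illustrative placeholders, not how I'd do bot detection in production), everyone gets the same listings and only the chunking differs:

```python
from flask import Flask, request

app = Flask(__name__)

# Placeholder data standing in for the ~800 (and growing) directory listings.
LISTINGS = [f"<li>Listing {i}</li>" for i in range(1, 801)]
PAGE_SIZE = 40

# Illustrative only: real crawler detection would use a maintained list and
# reverse-DNS verification, not a hard-coded substring match.
BOT_SIGNATURES = ("googlebot", "bingbot", "slurp")

def is_crawler(user_agent: str) -> bool:
    ua = (user_agent or "").lower()
    return any(bot in ua for bot in BOT_SIGNATURES)

@app.route("/directory")
def directory():
    if is_crawler(request.headers.get("User-Agent", "")):
        # Search engines get every listing on one page, so each individual
        # listing page is discoverable from a single crawlable URL.
        return "<ul>" + "".join(LISTINGS) + "</ul>"

    # Human visitors get a fast, paginated view with pagination links.
    page = max(int(request.args.get("page", 1)), 1)
    start = (page - 1) * PAGE_SIZE
    items = LISTINGS[start:start + PAGE_SIZE]
    nav = f'<a href="/directory?page={page + 1}">Next</a>'
    return "<ul>" + "".join(items) + "</ul>" + nav
```

The content is identical either way; the only difference is whether it arrives in one response or twenty.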
1. Where on the scale of white to black is this?
2. Would you do this for a client's site?
3. Would you do it for your own site?
-
I wish I could place this precisely on the scale for you. In my opinion, this is white hat. You have no intent of manipulating search results here; this is purely a usability issue, and this is the obvious fix.
-
Yes, I certainly would.
-
Yes, I certainly would.