Trying to determine if either of these is considered cloaking
-
Option 1) In the browser, we use JavaScript to determine whether you meet the redirect conditions (the referrer is not mydomain.com and there is no bypass query string). If so, we direct your browser to the subdomain.mydomain.com URL. Googlebot would presumably get the original page.
Option 2) In the browser, we use JavaScript to determine whether you meet the redirect conditions. If so, we trigger different CSS that hides certain components of the page and use JavaScript to load in extra ads. Googlebot would get the unaltered page. (Both options are sketched below.)
In both scenarios the page content does not change; only the presentation differs. The idea is that, under certain conditions, users are redirected to a page with more ads. The ads on the redirected page are not severe enough to trigger an above-the-fold penalty. That said, would either option be considered cloaking by Google?
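For concreteness, here is roughly what the client-side logic looks like in both options. This is a sketch only; the domain names, the bypass query parameter, the CSS class, and the ad-loading helper are all placeholders:

```javascript
// Sketch of the client-side check described above. The domains, the
// bypass parameter, the CSS class, and loadExtraAds() are placeholders.
function meetsRedirectConditions() {
  var fromOurSite = document.referrer.indexOf('mydomain.com') !== -1;
  var hasBypass = window.location.search.indexOf('bypass=1') !== -1;
  return !fromOurSite && !hasBypass;
}

if (meetsRedirectConditions()) {
  // Option 1: send the browser to the subdomain version of this page.
  window.location.href =
    'https://subdomain.mydomain.com' + window.location.pathname;

  // Option 2 (instead of the redirect above): toggle CSS that hides
  // certain components, then load the extra ad units with JavaScript.
  // document.body.classList.add('ad-heavy-view');
  // loadExtraAds(); // hypothetical helper
}
```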
-
Matt Cutts has discussed this pretty well. Cloaking specifically means showing Google something different from what you show the user. Hidden content is not cloaking per se; it is a different issue altogether.
If I had to choose between the two options, I would choose the second. The 302 redirect can be problematic. You should assume that Google is going to find BOTH of these and execute the JavaScript appropriately. Just don't make anything on your site behave differently specifically for Google.
LivingSocial and Groupon both do JavaScript redirects and are not suffering the consequences, so I think you should be fine too.
-
Related Questions
-
Captcha wall to access content and cloaking penalty
Hello, to protect our website against scraping, visitors are redirected to a reCAPTCHA page after two pages visited. For SEO purposes, Googlebot is not included in that restriction, so it could be seen as cloaking. What is the best practice in SEO to avoid a penalty for cloaking in that case? I have thought about adding paywall JSON schema (NewsArticle), but the content is accessible for free, so it's not really a paywall, more of a captcha protection wall. What do you recommend? Thanks!
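For reference, one commonly recommended safeguard in this situation is to exempt the crawler only after verifying it, via a reverse DNS lookup plus a confirming forward lookup, rather than trusting the user-agent string alone. A minimal Node.js sketch of that check (the function name is my own; treat this as an illustration, not a drop-in implementation):

```javascript
// Sketch: confirm a request claiming to be Googlebot really is Google,
// via reverse DNS plus a confirming forward lookup. Error handling,
// IPv6, and caching are omitted; illustration only.
const dns = require('dns').promises;

async function isVerifiedGooglebot(ip, userAgent) {
  if (!/Googlebot/i.test(userAgent)) return false;

  // Reverse lookup: Google's crawler IPs resolve to googlebot.com
  // or google.com hostnames.
  const hostnames = await dns.reverse(ip);
  const host = hostnames.find(h => /\.(googlebot|google)\.com$/.test(h));
  if (!host) return false;

  // Forward-confirm: the hostname must resolve back to the same IP.
  const addresses = await dns.resolve4(host);
  return addresses.includes(ip);
}
```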
Intermediate & Advanced SEO | | clementjaunault
-
Trying to get Google to stop indexing an old site!
Howdy, I have a small dilemma. We built a new site for a client, but the old site is still ranking/indexed and we can't seem to get rid of it. We set up a 301 from the old site to the new one, as we have done many times before, but even though the old site is no longer live and the hosting package has been cancelled, the old site is still indexed. (The new site is at a completely different host.) We never had access to the old site, so we weren't able to request URL removal through GSC. Any guidance on how to get rid of the old site would be very appreciated. BTW, it's been about 60 days since we took these steps. Thanks, Kirk
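One thing worth confirming, given that the old hosting was cancelled: a 301 only helps if the old domain still serves it. A quick Node sketch to see what an old URL actually returns (the URL below is a placeholder; `curl -I` against the old URLs works just as well):

```javascript
// Sketch: fetch an old URL without following redirects, to confirm the
// 301 is actually still being served now that the old hosting is gone.
// Requires Node 18+ (built-in fetch); the URL below is a placeholder.
async function checkRedirect(url) {
  const res = await fetch(url, { redirect: 'manual' });
  console.log('status:', res.status);
  console.log('location:', res.headers.get('location'));
}

checkRedirect('http://old-site.example.com/some-page');
```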
Intermediate & Advanced SEO | | kbates
-
How complex is it, and what should I consider, when moving from a .aspx web developer to my own wordpress.org website?
Basically, my current web developer is not providing me with what a modern website needs to fully utilize online marketing and SEO in terms of blogging, social media widgets, e-commerce and so on. Because of this I have thought of moving to a wordpress.org website run and built by myself. Is this a good idea? What is the best way to migrate and preserve existing authority (redirects etc.)? Are there any potential risks or problems I could encounter that aren't immediately obvious? Many thanks! Tom
Intermediate & Advanced SEO | | CoGri
-
Why are these pages considered duplicate content?
I have a duplicate content warning in our PRO account (well, several really), but I can't figure out WHY these pages are considered duplicate content. They have different H1 headers, different sidebar links, and while a couple are relatively scant as far as content (so I might believe those could be seen as duplicate), the others seem to have a substantial amount of content that is different. It is a little perplexing. Can anyone help me figure this out? Here are some of the pages that are showing as duplicate:
http://www.downpour.com/catalogsearch/advanced/byNarrator/narrator/Seth+Green/?bioid=5554
http://www.downpour.com/catalogsearch/advanced/byAuthor/author/Solomon+Northup/?bioid=11758
http://www.downpour.com/catalogsearch/advanced/byNarrator/?mediatype=audio+books&bioid=3665
http://www.downpour.com/catalogsearch/advanced/byAuthor/author/Marcus+Rediker/?bioid=10145
http://www.downpour.com/catalogsearch/advanced/byNarrator/narrator/Robin+Miles/?bioid=2075
Intermediate & Advanced SEO | | DownPour
-
When I try creating a sitemap, it doesn't crawl my entire site.
We just launched a new Ruby app (it used to be a WordPress blog) at http://www.thesquarefoot.com. We have not had time to create an auto-generated sitemap, so I went to a few different websites with free sitemap generation tools. Most of them index up to 100 or 500 URLs. Our site has over 1,000 individual listings and 3 landing pages, so when I put our URL into a sitemap creator, it should be finding all of these pages. However, that is not happening; only 4 pages seem to be seen by the crawlers:
http://www.thesquarefoot.com/
http://www.thesquarefoot.com/users/sign_in
http://www.thesquarefoot.com/search
http://www.thesquarefoot.com/renters/sign_up
This worries me that when Google comes to crawl our site, these are the only pages it will see as well. Our robots.txt is blank, so there should be nothing stopping the crawlers from going through the entire site. Here is an example of one of the 1,000s of pages not being crawled:
http://www.thesquarefoot.com/listings/Houston/TX/77098/Central_Houston/3910_Kirby_Dr/Suite_204
Any help would be much appreciated!
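As a stopgap until the app can auto-generate one, a sitemap can also be built directly from the list of listing URLs, which sidesteps the crawler-based generators entirely. A minimal Node sketch (the urls array is a placeholder for however the 1,000+ listings are enumerated):

```javascript
// Sketch: build a sitemap.xml from a list of known URLs, as a stopgap
// until an auto-generated sitemap ships with the app. The urls array
// is a placeholder for however the listing pages are enumerated.
const fs = require('fs');

const urls = [
  'http://www.thesquarefoot.com/',
  'http://www.thesquarefoot.com/search',
  // ...plus every listing URL, e.g. pulled from the database
];

const entries = urls
  .map(u => `  <url><loc>${u}</loc></url>`)
  .join('\n');

const xml = `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${entries}
</urlset>`;

fs.writeFileSync('sitemap.xml', xml);
```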
Intermediate & Advanced SEO | | TheSquareFoot
-
I'm pulling my hair out trying to figure out why Google stopped crawling. Any help is appreciated.
This is going to be kind of long, simply because there is a background to the domain name that is not typical for anybody, really, and I'm not sure whether we could have been penalized or ranked lower because of it. So I'm going to give the full picture, in the hope that some nice soul who has more knowledge in this than me sees something and can point me in the right direction.
Our site has been around for a few years. At one point the domain was seized by Homeland Security ICE, and then they had to give it back in December, which sparked a lot of the SOPA/PIPA stuff; we became the poster child, so to speak. The site had previously been up since 2008, but due to that whole mess it was down for 13 months on the dreaded seized server, with a scary warning graphic and site title, which obviously caused a bunch of 404 errors and who knows what other damage to the page rank and incoming links we'd had before that. We had a lot of incoming links from high-quality sites. Upon getting the domain back, we were advised to pretty much scrap all the old content that was on the site and start fresh, which we did.
Googlebot started crawling slowly, but then as we got back into the swing of things, people started linking to us, some with high page rank; we were getting indexed quite frequently and ranking high in search results in our niche. Then something happened on March 4th. We had arguably our best day of Google traffic (we'd been linked by places like Huffington Post for content in our niche), and the next day it was literally a freefall. Darn near nothing. I've attached a screenshot from Webmaster Tools so you can see how drastic it was.
I went crazy trying to figure out what was wrong, searching obsessively through Webmaster Tools for any indication of a problem. Searching the site on Google (site:dajaz1.com), what comes up is page 2, page 3, page 45, page 46. It's also taken to indexing our category and tag pages, and even our search pages. I've now set those all to noindex,follow, but when I look at where the Googlebots are on the site, they're on the categories, pages, author pages, and tags. Some of our links are still getting indexed, but doing a search just for our site name, we're ranking below many of the media sites that have written about our legal issues, when a month ago we were at least the top result for our own name.
I've racked my brain trying to figure out the issue. I've disabled plugins. I'm on Fetch as Googlebot all the time, making sure our stuff is at least coming back as 200 (we had 2 days where we were getting 403 errors due to a Super Cache issue, but once that was fixed, Googlebot returned like it never left). I've literally watched 1,000 videos, read 100 forums, added SEO plugins, and tried to optimize the site to the point where I'm worried I'm overdoing it, and still they've barely begun to crawl. As you can see, there is some activity in the last 2-3 days, but even after submitting a new sitemap once I changed the theme out of desperation, it's only indexed 16. I've looked for errors all through Webmaster Tools and I can't find anything to tell me why that happened, how to fix it, or how to get Googlebot to like us again.
I'm pulling my hair out here. The links we have incoming are high-quality links from Huffington Post, Spin, Complex, etc. Those haven't slowed down at all, and we link out to sites we trust that are high quality as well. I've got interns working on how they write titles, I've gone through and attempted to fix duplicate pages and titles, and I've been re-writing meta description tags. What am I missing? Eternally grateful for any help provided. [Attached screenshot: jnzb6.png]
Intermediate & Advanced SEO | | malady
-
Any experience regarding what % is considered duplicate?
Some sites (including one or two I work with) have a legitimate reason to have duplicate content, such as product descriptions. One way to deal with duplicate content is to add other, unique content to the page. It would be helpful to have guidelines regarding what percentage of the content on a page should be unique. For example, if you have a page with 1,000 words of duplicate content, how many words of unique content should you add for the page to be considered OK? I realize that a) Google will never reveal this and b) it probably varies a fair bit based on the particular website. However... does anyone have any experience in this area? (Example: you added 300 words of unique content to all 250 pages on your site, each of which previously had 100 words of duplicate content, and that worked to improve your rankings.) Any input would be appreciated! Note: just to be clear, I am NOT talking about "spinning" duplicate content to make it "unique". I am talking about adding unique content to a page that has legitimate duplicate content.
Intermediate & Advanced SEO | | AdamThompson
-
Need to migrate multiple URLs and trying to save link juice
I have an interesting problem, SEOmozers, and wanted to see if I could get some good ideas as to what I should do for the greatest benefit. I have an ecommerce website that sells tire sensors. We just converted the old site to a new platform and payment processor, so the site has changed completely from the original, offering virtually the same products as before. You can find it at www.tire-sensors.com. We're ranked #1 for the keyword "tire sensors" in Google. We sell sensors for Ford, Honda, Toyota, etc., and tire-sensors.com has all of those listed. Before I came along, the company I'm working for also had individual "mini ecommerce" sites, each carrying only one brand of sensors, with a URL to match that maker. Example: www.fordtiresensors.com is our site, sells only the Ford parts from our main site, and ranks #1 in Google for "ford tire sensors". I don't have analytics on these old sites, but Google Keyword Tool says "ford tire sensors" gets 880 local searches a month, and other brand-specific tire sensor terms are receiving traffic as well. We have many other sites doing the same thing: www.suzukitiresensors.com (ranked #2 for "suzuki tire sensors") sells only our Suzuki collection from the main site's inventory, etc. We need to get rid of the old sites because we want to shut down the payment gateway and various other things those sites are using, and move to one consolidated system (aka www.tire-sensors.com). Would simply making each maker-specific URL (i.e. fordtiresensors.com) 301 redirect to our main site (www.tire-sensors.com) give us the most benefit, rankings, traffic, etc.? Or would that be detrimental to what we're trying to do: capturing the tire sensors market for all car manufacturers? Suggestions? Thanks a lot in advance! Jordan
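To make the proposed setup concrete: the usual approach is a host-level 301 from each maker-specific domain to the matching section of the main site, normally configured in the web server. Purely as an illustration, a minimal Express sketch (the hostnames and target paths are placeholders, not the company's actual URL structure):

```javascript
// Sketch: host-level 301s from the old single-brand domains to the
// main site. Normally this lives in the web server config; shown in
// Express purely as an illustration. Hostnames/paths are placeholders.
const express = require('express');
const app = express();

// Map each old domain to the matching section of the main site.
const domainMap = {
  'www.fordtiresensors.com': 'https://www.tire-sensors.com/ford',
  'www.suzukitiresensors.com': 'https://www.tire-sensors.com/suzuki',
};

app.use((req, res, next) => {
  const target = domainMap[req.hostname];
  if (target) {
    // A 301 passes the most link equity of the redirect options.
    return res.redirect(301, target + req.url);
  }
  next();
});

app.listen(80);
```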
Intermediate & Advanced SEO | | JordanGodbey