Is showing pre-loaded content cloaking?
-
Hi everyone, another quick question. We have a number of resources for our users that load dynamically as the user scrolls down the page (like Facebook's Timeline), with the aim of improving page load time. Would it be considered cloaking if we had Googlebot index a version of the page with all of the content that would load for a user who scrolled to the bottom?
-
We have a ton of text; as the user scrolls down the page, we would load the text/resource they want to see. With lazy loading we can reduce page load time by 50%-75%.
I was originally wondering if we could show the entirety of the text to the crawlers, even though the average user would step through the content via lazy loading. We want to speed up the site to improve UX, but we don't want to do anything that might hit us with a penalty or be seen as black hat/cloaking.
Thanks, Syed!
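For what it's worth, the lowest-risk pattern I know of is to keep every word in the initial HTML and defer only the rendering work, so there is nothing different to show Googlebot in the first place. A rough modern sketch (the class names are illustrative, not from this thread, and note that content hidden by default can be discounted, so test before committing):

```js
// Sketch: the full text is already in the server-rendered HTML, so
// crawlers and visitors receive the same document. Sections start
// collapsed via CSS (e.g., .deferred-section { display: none; }) and
// are revealed as the user scrolls near them.
const revealer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.classList.add('is-visible'); // CSS shows the section
      revealer.unobserve(entry.target);         // reveal each section once
    }
  }
}, { rootMargin: '200px 0px' }); // begin slightly before it enters the viewport

document.querySelectorAll('.deferred-section')
  .forEach((section) => revealer.observe(section));
```

Because both crawler and visitor get the same HTML, there is no user-agent branching anywhere, and the cloaking question never comes up.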
-
Why do you want to use lazy load for text? Text loads super fast and is also critical for SEO.
I haven't tested this out myself (it's under consideration), but if I were you I'd never lazy load the text. I'd only do it for the slow-loading, less-SEO-critical parts of the page, like Facebook widgets, images, etc.
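If anyone wants to try that split in practice, here's a minimal sketch of the widgets-and-images-only approach (the data-src convention is an assumption on my part, not something tested in this thread):

```js
// Sketch: leave all text in the HTML and defer only heavy assets.
// Images/iframes ship with a data-src attribute (a naming convention,
// not a browser feature) and get their real src near the viewport.
const lazyLoader = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const el = entry.target;
    el.src = el.dataset.src;  // kick off the real network request now
    lazyLoader.unobserve(el); // load each asset only once
  }
}, { rootMargin: '300px 0px' });

document.querySelectorAll('img[data-src], iframe[data-src]')
  .forEach((el) => lazyLoader.observe(el));
```

Newer browsers can do the image half of this natively with loading="lazy" on img and iframe tags, no script required.
-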
We're going to use lazy loading with different sections of text. I'm interested to hear if people have experience with this.
-
I believe you are referring to Lazy Load? I'd love to get some opinions on this as well.
Related Questions
-
Is Syndicated (Duplicate) Content considered Fresh Content?
Hi all, I've been asking quite a few questions lately and sincerely appreciate your feedback. My co-workers and I have been discussing content as an avenue outside of SEO. There are a lot of syndicated content programs/plugins out there (in many cases duplicate content) - would this be considered fresh content on an individual domain? An example may clarify what I'm after: domain1.com is a lawyer in Seattle; domain2.com is a lawyer in New York. Both need content on their websites relating to being a lawyer for Google to understand what each domain is about. Fresh content is also a factor within Google's algorithm (source: http://moz.com/blog/google-fresh-factor), so fresh content is needed on their domains. But if that content is duplicate, does it still hold the same value?
Question: Is fresh content (adding new / updating existing content) still considered "fresh" even if it's duplicated across multiple domains?
Purpose: domain1.com may benefit from a resource for his/her local clientele, as would domain2.com, and both sets of customers would be reading the "duplicate content" for the first time. Therefore both lawyers would be seen as an authority and improve how their websites rank. We aren't interested in ranking the individual article and are aware of canonical URLs. We aren't implementing this as a strategy - just as a means to really understand content marketing outside of SEO.
Conclusion: If duplicate content is still considered fresh content on an individual domain, then couldn't duplicate content (which obviously won't rank) still help SEO across a domain? This may sound controversial, and I'd welcome an open-ended discussion with linked sources / case studies. This conversation may tie into another Q&A I posted: http://moz.com/community/q/does-duplicate-content-actually-penalize-a-domain.
TL;DR: Is duplicate content (the same article across multiple domains) considered fresh content on an individual domain? Thanks so much, Cole
White Hat / Black Hat SEO | ColeLusby
-
Cloaking/Malicious Code
Does anybody have any experience with software for identifying this sort of thing? I was informed by a team we are working with that our website may have been compromised and I wanted to know what programs people have used to identify cloaking attempts and/or bad code. Thanks everybody!
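No specific product recommendation, but one cheap first check you can script yourself is to fetch a page as a normal browser and again as Googlebot, then diff the responses - a rough sketch, assuming Node 18+ (for the global fetch) run as an ES module; the URL is a placeholder:

```js
// Rough sketch: request the same URL with two user agents and compare.
// A large difference is a starting point for investigation, not proof -
// legitimate personalization can also vary the HTML.
const URL_TO_CHECK = 'https://www.example.com/'; // placeholder

const BROWSER_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)';
const GOOGLEBOT_UA =
  'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)';

async function fetchAs(userAgent) {
  const res = await fetch(URL_TO_CHECK, {
    headers: { 'User-Agent': userAgent },
  });
  return res.text();
}

const [asBrowser, asGooglebot] = await Promise.all([
  fetchAs(BROWSER_UA),
  fetchAs(GOOGLEBOT_UA),
]);

console.log('browser bytes:  ', asBrowser.length);
console.log('googlebot bytes:', asGooglebot.length);
console.log('identical:', asBrowser === asGooglebot);
```

Keep in mind that sophisticated cloaking keys off Googlebot's IP ranges rather than the user-agent string, so a clean diff here doesn't fully clear the site; "Fetch as Google" in Webmaster Tools is the more authoritative check.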
White Hat / Black Hat SEO | HashtagHustler
-
Search Results Showing Additional info/Links
Did I miss something? I was looking at search result listings this morning and noticed that Walmart has additional information at the bottom of their (non-paid, I think) search results. Please see the attached image; you'll notice links to "Item Description - Product Warranty and Service - Specifications - Gifting Plans". How are they doing this? I just noticed the same on one of our competitors' listings, so it's not just Walmart, and the links are item-specific. (I have updated the image.)
[attached image: Z0yqKtO.jpg]
White Hat / Black Hat SEO | BWallacejr
-
Duplicate content or not, if you're using abstracts from external sources you link to?
I was wondering if a page (a blog post, for example) that offers links to external web pages along with abstracts from those pages would be considered a duplicate content page and therefore penalized by Google. For example, I have a page with very little original content (just two or three sentences that summarize or frame the topic) followed by five references to different external sources. Each reference contains a title, which is a link, and a short abstract, which is basically the first few sentences copied from the page it links to. So, except for a few sentences at the beginning, everything is copied from other pages. Such a page would be very helpful for people interested in the topic, as the sources it links to have been analyzed, handpicked, and placed there to enhance the user experience. But will this format be considered duplicate or near-duplicate content?
White Hat / Black Hat SEO | romanbond
-
Is there such a thing as white hat cloaking?
We are near the end of a site redesign, and we've come to find out it's built in JavaScript and isn't search-engine friendly. Our IT team's fix is to show crawlable content to Googlebot and other crawlers based on the user agent. I told them this is cloaking, and I'm not comfortable with it. They said that, after doing research, if the content is pretty much the same, it is an acceptable way to cloak. About 90% of the content will be the same between the "regular user" version and the content served to Googlebot. Does anyone have any experience with this? Are there any recent articles or best practices on it? Thanks!
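For what it's worth, what the IT team is proposing looks roughly like this in practice - a hypothetical Express-style sketch, where renderToHtml is a placeholder for whatever pre-rendering step would actually be used (e.g., a headless-browser snapshot), not a real library call:

```js
// Sketch of user-agent-based serving (Express-style middleware).
// The bot list is illustrative, not exhaustive.
const BOT_PATTERN = /googlebot|bingbot|baiduspider|yandex/i;

function serveCrawlableHtml(req, res, next) {
  const userAgent = req.headers['user-agent'] || '';
  if (!BOT_PATTERN.test(userAgent)) {
    return next(); // regular visitors get the normal JavaScript app
  }
  // Known crawlers get a server-rendered snapshot. Its content must
  // match what users see after the JavaScript runs - otherwise this
  // crosses from a rendering workaround into cloaking.
  renderToHtml(req.originalUrl)
    .then((html) => res.send(html))
    .catch(next);
}
```

The usual rule of thumb is that this only stays defensible if the snapshot is genuinely equivalent to the user-facing page; the moment content diverges by user agent, it's cloaking by definition, so the "about 90% the same" figure would worry me.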
White Hat / Black Hat SEO | CHECOM
-
Will aggregating external content hurt my domain's SERP performance?
Hi, We operate a website that helps parents find babysitters. As a small add-on we currently run a small blog on childcare and parenting. We are now thinking of introducing a new category to our blog called "best articles to read today". The idea is that we "re-blog" selected articles from other blogs that we believe are relevant for our audience. We have obtained permission from a number of bloggers to fully feature their articles on our blog. Our main aim is to become a destination site for parents. This obviously creates issues with regard to duplicated content. The question I have is: will including this duplicated content on our domain harm our domain's general SERP performance? And if so, how can this effect be avoided? It isn't important to us that these "featured" articles rank in SERPs, so we could potentially add a noindex tag to them or point rel="canonical" at the original author. Any thoughts, anyone? Thx! Daan
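Either of the options you mention is a one-line addition to the head of each re-blogged article - a quick sketch (the canonical URL below is a placeholder for the original blogger's post):

```html
<!-- Option 1: keep the re-blogged copy out of the index entirely -->
<meta name="robots" content="noindex, follow">

<!-- Option 2: a cross-domain canonical crediting the original article -->
<link rel="canonical" href="http://original-blog.example.com/the-article/">
```

Google treats cross-domain rel="canonical" as a hint rather than a directive, so noindex is the stronger guarantee if you truly don't need these pages to rank.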
White Hat / Black Hat SEO | daan.loening
-
Webmaster Tools Showing Bad Links Removed Over 60 Days Ago
Hello, One of my clients received the notorious message from Google about unnatural links late last March. We've removed several hundred (if not thousands of) links and resubmitted for reconsideration several times, only to keep getting responses stating that we still have unnatural links. Looking through "Links to your site" in Google Webmaster Tools, there are several hundred sites/pages listed from which we removed our link over 60 days ago. If you visit each one, it contains nothing, visible or hidden, regarding our website or address. I was wondering if this (outdated/inaccurate) list is the same one Google's employees use to analyze the current status of bad links, and if so, how long it will take to reflect up-to-date information. In other words, even though we've removed the bad links, how long do we need to wait before we can expect a clean resubmission for reconsideration? Any help or advice would be greatly appreciated.
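One thing to consider if it's available to you by now: for links you truly can't get taken down, Google's disavow tool accepts a plain-text file uploaded through Webmaster Tools. The format looks like this (the domains and date are placeholders):

```
# Contacted the webmaster of spamdomain1.example on 6/1/2012 - no response
domain:spamdomain1.example

# Individual URLs can also be listed, one per line
http://www.spamdomain2.example/page-with-our-link.html
```

The common advice is that reconsideration reviewers want to see documented removal attempts plus a disavow file covering the leftovers.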
White Hat / Black Hat SEO | Bromtec
-
Why doesn't Google find different domains - same content?
I have been slowly working to remove near-duplicate content from my own website for different locales, yet Google seems to be doing nothing to combat the duplicate content of one of my competitors, who shows up all over Southern California. For example, these two listings are essentially identical:
Your Local #1 Rancho Bernardo Pest Control Experts | 858-352 ... (www.pestcontrolranchobernardo.com) - "Pest Control Rancho Bernardo Pros specializes in the eradication of all household pests including ants, roaches, etc. Call Today @ 858-352-7728."
Your Local #1 Oceanside Pest Control Experts | 760-486-2807 (www.pestcontrol-oceanside.info) - "Pest Control Oceanside Pros specializes in the eradication of all household pests including ants, roaches, etc. Call Today @ 760-486-2807."
The competitor is getting high page-one listings with massively duplicated content across domains. Will Google catch this black hat workmanship? Meanwhile, he's sucking up my business. Do the competitor's results also speak to the possibility that Google does in fact rank based on the name of the URL - something that gets debated all the time? Thanks for your insights. Gerry
White Hat / Black Hat SEO | GerryWeitz