Lazy Loading of Blog Posts and Crawl Depths
-
Hi Moz Fans,
We are looking at our blog and improving the content as much as we can for SEO purposes, but we have drawn a bit of a blank on the implications of lazy loading and issues with crawl depth.
We initially introduced lazy loading onto the blog home page to improve site speed, and it works well with infinite scroll, but we were wondering whether this could cause any issues for SEO.
A lot of the resources online seem to conflict and some are very outdated, so some clarification on what is best in terms of lazy loading and crawl depth for blogs would be fantastic!
I hope someone can help and give us some up-to-date insights. If you need any more information, I'll reply ASAP.
-
This is fantastic - Thank you!
-
Lazy load and infinite scroll are absolutely not the same thing, as far as search crawlers are concerned.
Lazy-loaded content, if it exists in the DOM of the page, will be indexed, but its importance will likely be reduced (any content that requires user interaction to see is given reduced ranking value).
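For illustration, here's a minimal sketch of the crawler-friendly version of lazy loading, where the post text ships in the initial HTML and only the image bytes are deferred (the class name, data attribute, and rootMargin value are my own hypothetical choices, not anything from your setup):

```typescript
// Minimal sketch: the post text is already in the initial HTML, so crawlers
// can index it; only the image bytes are deferred until the element nears
// the viewport. Assumes images carry a hypothetical .lazy class and
// data-src attribute.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src ?? "";   // swap in the real image URL
    img.classList.remove("lazy");
    obs.unobserve(img);                // each image only needs loading once
  }
}, { rootMargin: "200px" });           // start loading slightly before visible

document.querySelectorAll<HTMLImageElement>("img.lazy[data-src]")
  .forEach((img) => observer.observe(img));
```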
But because infinite scroll is unmanageable for the crawler (it's not going to stay on one page and keep crawling for hours as every blog post rolls into view), Google's John Mueller has said the crawler will simply stop at the bottom of the initial page load.
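If you want to keep the infinite-scroll experience, the workaround most people recommend (this is a rough sketch under my own assumptions, not code from Google; the endpoint, query parameter, and element IDs are hypothetical) is to back the scroll with real paginated URLs, so the crawler can reach page 2, 3, and so on through ordinary links even though users never click them:

```typescript
// Minimal sketch of infinite scroll backed by real paginated URLs.
// Each batch corresponds to a crawlable page like /blog/page/2/, and plain
// pagination links in the markup give the crawler a path to the older
// posts that doesn't depend on scrolling.
let nextPage = 2;

async function loadNextPage(): Promise<void> {
  const res = await fetch(`/blog/page/${nextPage}/?partial=1`);
  if (!res.ok) return;                       // no more pages to load
  const html = await res.text();
  document.querySelector("#post-list")?.insertAdjacentHTML("beforeend", html);
  // Update the address bar so each scroll position maps to a real URL
  history.pushState({ page: nextPage }, "", `/blog/page/${nextPage}/`);
  nextPage += 1;
}

const sentinel = document.querySelector("#scroll-sentinel");
if (sentinel) {
  new IntersectionObserver((entries) => {
    if (entries.some((e) => e.isIntersecting)) void loadNextPage();
  }).observe(sentinel);
}
```

The key design point is that every chunk of content the scroll reveals also lives at a plain, linkable URL the crawler can fetch on its own.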
This webinar/discussion on crawl and rendering from just last week included Google's John Mueller and a Google engineer, and it will give you exactly the info you're looking for, right from the horse's mouth, Victoria.
To consider, though: the blog's index page shouldn't be the primary source for the blog's content anyway. The individual permalinked post URLs are what should be crawled and ranked for the individual post content, and the XML sitemap should be the primary source for Google's discovery of those URLs.
Obviously, linking from authoritative pages will help the posts, but that's going to change every time the blog index page updates anyway.
Also, did you know that you can submit the blog's RSS feed as a sitemap in addition to the XML sitemap? It's the fastest way I've found of getting new blog posts crawled and indexed.
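If it helps, here's a tiny sketch of what those post-level sitemap entries look like when generated from a post list (the Post shape and the URLs are placeholder assumptions, not anything specific to your site):

```typescript
// Minimal sketch: emit a <url> entry for each permalinked post so the XML
// sitemap, not the scrolling index page, is Google's discovery source.
interface Post { url: string; updatedAt: string; } // ISO 8601 date

function buildSitemap(posts: Post[]): string {
  const urls = posts.map((p) =>
    `  <url>\n    <loc>${p.url}</loc>\n    <lastmod>${p.updatedAt}</lastmod>\n  </url>`
  ).join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${urls}\n</urlset>`;
}

// Example usage with a placeholder post
console.log(buildSitemap([
  { url: "https://example.com/blog/my-first-post/", updatedAt: "2019-05-01" },
]));
```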
Hope that helps!
Paul
-
I'm afraid I don't have any insight into how Google crawls lazy-loaded pages.
Which works better for your users, pagination or lazy loading? I wouldn't worry about lazy loading and Google. If you're worried about getting pages indexed, then I would make sure you've got a sitemap that works correctly.
-
Great, thank you
Do you have any insight into crawl depth too?
At what point would Google stop crawling the page with lazy loading? Is it best to use pagination as opposed to infinite scroll?
-
With lazy loading, the content can often still be seen in the page source (or in the rendered DOM), and that's what Google uses, so you should be fine with this approach; it's becoming common practice now.
-
Yes, it's similar to the BBC page: content loads when the user needs it, so to speak.
It improved the site's loading speed, but do you know at what point Google would stop indexing the content on our site?
How do we ensure that the posts are being crawled, and is pagination the best way to go?
-
I'd have to say I'm not too familiar with the method you're using, but I take it the idea is that elements of the page load as you scroll, like the BBC site?
If it decreases the load time of the site, that's good for both direct and indirect SEO. But the key thing is: can Google see the contents of the page or not? Use the Fetch as Google tool in Search Console to see whether the rendered page contains the content.
Also, Google will not hang around on your site; if it doesn't serve the content within a reasonable amount of time, it will bounce off to the next page or the next site to crawl. It's harsh, but it's a fact.