Lazy Loading of Blog Posts and Crawl Depths
-
Hi Moz Fans,
We are looking at our blog and improving the content as much as we can for SEO purposes, but we have hit a bit of a blank in terms of lazy loading implications and issues with crawl depths.
We initially introduced lazy loading onto the blog home page to increase site speed, and it works well with infinite scroll, but we were wondering whether this would cause any issues regarding SEO.
A lot of the resources online seem to be conflicting and some are very outdated, so some clarification on what is best in terms of lazy loading and crawl depths for blogs would be fantastic!
I hope someone can help and give us some up-to-date insights. If you need any more information, I'll reply ASAP!
-
This is fantastic - Thank you!
-
Lazy load and infinite scroll are absolutely not the same thing, as far as search crawlers are concerned.
Lazy-loaded content, if it exists in the DOM of the page, will be indexed, but its importance will likely be reduced (any content that requires user interaction to see is given less ranking value).
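One concrete way this plays out with images: older lazy-load scripts park the real URL in a `data-src` attribute and leave `src` empty until JavaScript swaps it in, so a crawler that doesn't run the script never sees the URL, while native `loading="lazy"` keeps it in the initial markup. A quick audit sketch (Python stdlib only; the markup here is hypothetical):

```python
from html.parser import HTMLParser

class LazyImageAudit(HTMLParser):
    """Flag <img> tags whose real URL lives only in data-src,
    i.e. invisible to a crawler that doesn't run the lazy-load JS."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        if "data-src" in a and not a.get("src"):
            self.flagged.append(a["data-src"])

html = """
<img src="hero.jpg" loading="lazy" alt="fine: URL is in the initial markup">
<img data-src="post-thumb.jpg" alt="risky: URL only appears after JS runs">
"""
audit = LazyImageAudit()
audit.feed(html)
print(audit.flagged)  # → ['post-thumb.jpg']
```

Anything the audit flags only reaches Google if its renderer executes the lazy-load script; the native attribute avoids that dependency entirely.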
But because infinite scroll is unmanageable for the crawler (it's not going to stay on one page and keep crawling for hours as every blog post rolls into view), Google's John Mueller has said the crawler will simply stop at the bottom of the initial page load.
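The usual fix is to back infinite scroll with real paginated URLs (`/blog/page/2/` and so on), so every post stays reachable through plain links without any scripted scrolling. A minimal sketch of the idea in Python, with hypothetical slugs and base URL:

```python
def paginate(post_slugs, per_page=10, base="https://example.com/blog/"):
    """Split posts into crawlable page URLs so every post is reachable
    without scrolling (page 1 is the blog index itself)."""
    pages = []
    for i in range(0, len(post_slugs), per_page):
        n = i // per_page + 1
        url = base if n == 1 else f"{base}page/{n}/"
        pages.append((url, post_slugs[i:i + per_page]))
    return pages

slugs = [f"post-{k}" for k in range(1, 26)]  # 25 hypothetical posts
for url, chunk in paginate(slugs, per_page=10):
    print(url, len(chunk))
# → https://example.com/blog/ 10
#   https://example.com/blog/page/2/ 10
#   https://example.com/blog/page/3/ 5
```

The infinite scroll can then load page 2's content into page 1 for users while the crawler follows the plain `page/2/` link instead.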
This webinar/discussion on crawl and rendering from just last week included G's John Mueller and a Google engineer and will give you exactly the info you're looking for, right from the horse's mouth, Victoria.
To consider, though: the blog's index page shouldn't be the primary source for the blog's content anyway. The individual permalinked post URLs are what should be crawled and ranking for the individual post content, and the XML sitemap should be the primary source for Google's discovery of those URLs. Linking from authoritative pages will obviously help the posts, but that's going to change every time the blog index page updates anyway.

Also, did you know that you can submit the blog's RSS feed as a sitemap in addition to the XML sitemap? It's the fastest way I've found of getting new blog posts crawled/indexed.
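On the RSS-feed-as-sitemap point, a minimal sketch of such a feed built with Python's stdlib `xml.etree` (titles, URLs, and the date here are hypothetical placeholders; in practice this would come straight from the CMS):

```python
import xml.etree.ElementTree as ET
from email.utils import format_datetime
from datetime import datetime, timezone

# Hypothetical posts -- in practice pulled from the blog's database/CMS.
posts = [
    ("Lazy Loading and SEO", "https://example.com/blog/lazy-loading-seo/"),
    ("Crawl Depth Explained", "https://example.com/blog/crawl-depth/"),
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Blog"
ET.SubElement(channel, "link").text = "https://example.com/blog/"
ET.SubElement(channel, "description").text = "Latest posts"

for title, url in posts:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = url
    # RFC 822 pubDate, which RSS 2.0 expects
    ET.SubElement(item, "pubDate").text = format_datetime(
        datetime(2020, 5, 1, tzinfo=timezone.utc))

feed = ET.tostring(rss, encoding="unicode")
```

The feed URL gets submitted in Search Console's Sitemaps report just like a regular sitemap; because a feed only carries the newest posts, it complements rather than replaces the full XML sitemap.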
Hope that helps!
Paul
-
I'm afraid I don't have an insight into how Google crawls with lazy loading.
Which works better for your user, pagination or lazy loading? I wouldn't worry about lazy loading and Google. If you're worried about getting pages indexed then I would make sure you've got a sitemap that works correctly.
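One rough way to confirm "a sitemap that works correctly" is to diff the sitemap's `<loc>` entries against the canonical post URLs your CMS knows about. A sketch (stdlib only; all URLs are hypothetical):

```python
import xml.etree.ElementTree as ET

sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/blog/post-a/</loc></url>
  <url><loc>https://example.com/blog/post-b/</loc></url>
</urlset>"""

# Hypothetical: the canonical post URLs straight from the CMS.
expected = {
    "https://example.com/blog/post-a/",
    "https://example.com/blog/post-b/",
    "https://example.com/blog/post-c/",
}

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
listed = {loc.text for loc in
          ET.fromstring(sitemap_xml).findall("sm:url/sm:loc", ns)}
missing = expected - listed
print(sorted(missing))  # → ['https://example.com/blog/post-c/']
```

Any URL in `missing` is a post Google has to discover through internal links alone, which is exactly the gap a working sitemap should close.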
-
Great, thank you
Do you have any insight into crawl depth too?
At what point would Google stop crawling the page with lazy loading? Is it best to use pagination as opposed to infinite scroll?
-
With lazy loading, the content can actually still be present in the page's source code. That's what Google uses, so you should be fine using it; it's becoming common practice now.
-
Yes, it's similar to the BBC page and loads when it is needed by the user so to speak.
It improved the site's loading speed, but do you know at what point Google would stop indexing the content on our site?
How do we ensure that the posts are being crawled and is pagination the best way to go?
-
I'd have to say I'm not too familiar with the method you're using, but I take it the idea is that elements of the page load as you scroll, like the BBC?
If it decreases the load time of the site, that is good for both direct and indirect SEO. But the key thing is: can Google see the contents of the page or not? Use Google Search Console and fetch the page to see if it contains the content.
Also, Google will not hang around on your site, if it doesn't serve the content within a reasonable amount of time it will bounce off to the next page, or the next site to crawl. It's harsh, but it's a fact.
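A crude offline version of that fetch-and-check: grab the raw HTML the server returns and test whether key phrases from your posts appear in it at all, before any JavaScript runs. A sketch (the markup and phrases below are hypothetical):

```python
def content_in_source(html: str, phrases: list[str]) -> dict[str, bool]:
    """Check whether key phrases are present in the raw HTML --
    a rough proxy for what a crawler sees before any JS executes."""
    lowered = html.lower()
    return {p: p.lower() in lowered for p in phrases}

# Hypothetical raw response for a lazy-loading blog index:
raw_html = ("<html><body><h2>Lazy Loading and SEO</h2>"
            "<div id='feed'></div></body></html>")
print(content_in_source(raw_html,
                        ["Lazy Loading and SEO", "Crawl Depth Explained"]))
# → {'Lazy Loading and SEO': True, 'Crawl Depth Explained': False}
```

A phrase that comes back `False` here may still be indexed once Google renders the page, but it is depending entirely on that second, slower rendering pass.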