Back in the day, we lazyloaded images by not setting their src attribute, but by introducing a fake attribute holding the path to the desired image. Using JavaScript, we could detect when an image was near the user's viewport and load it. By now, however, native lazyloading of images has been available for a while.
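The classic pattern can be sketched as follows; the `data-src` attribute name and the 200px margin are illustrative choices, not a standard:

```javascript
// Pre-native lazyloading: the real path lives in a fake attribute
// (data-src here), and JavaScript swaps it in near the viewport.
function swapToRealSrc(img) {
  const realSrc = img.getAttribute('data-src');
  if (realSrc) {
    img.setAttribute('src', realSrc);
    img.removeAttribute('data-src');
  }
  return img;
}

// Browser-only wiring: observe every <img data-src> and load it
// shortly before it scrolls into view.
if (typeof window !== 'undefined' && 'IntersectionObserver' in window) {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        swapToRealSrc(entry.target);
        obs.unobserve(entry.target); // load once, then stop watching
      }
    }
  }, { rootMargin: '200px' }); // start loading 200px ahead of the viewport
  document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
}
```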
Is native lazyloading a success?
When you don't have the resources to optimize your site to the fullest, by all means use native lazyloading. When building on WordPress, Shopware or any other platform, there will be plugins to implement native image lazyloading. However, native lazyloading also has its shortcomings:
- Even a year after its introduction, not all browsers support native lazyloading;
- There are browser differences, with Chrome being too eager and loading too many images in advance;
- Native lazyloading only supports images and iframes, not background images or scripts.
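Because support is incomplete, sites typically feature-detect before relying on the native attribute. A minimal sketch:

```javascript
// Native lazyloading support check: in supporting browsers, the
// `loading` property exists on HTMLImageElement.prototype.
function supportsNativeLazyLoading(imgPrototype) {
  return 'loading' in imgPrototype;
}

if (typeof window !== 'undefined') {
  const useNative = supportsNativeLazyLoading(HTMLImageElement.prototype);
  // useNative === true:  the browser handles <img loading="lazy"> itself.
  // useNative === false: hand control to a lazyloading library instead.
}
```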
As lazyloading images doesn't seem to impact the crawlability of your images by Googlebot, let's look at native lazyloading from other perspectives:
Lazyloading from a developer perspective
I think serving HTML based on a client hint is a good solution, because:
- if you want to roll out lazyloading in browsers without native support, you would have to omit the src attribute altogether and only swap the fake attribute back to an src attribute once native lazyloading is supported; otherwise, you let a library take over control;
However, at the same time, I see disadvantages:
- Doing a server-side check implies (development) work on both the client and the server;
- The server has to create and store different HTML caches;
Conclusion: native lazyloading doesn't mean less work for developers.
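The hybrid rollout described above might look like this sketch; the `data-src` attribute and the fallback bundle path are hypothetical:

```javascript
// Hybrid setup: markup ships with data-src only; we either hand
// images to the browser (native) or to a fallback library.
function upgradeForNativeLazyLoading(img) {
  img.setAttribute('loading', 'lazy');
  img.setAttribute('src', img.getAttribute('data-src'));
  img.removeAttribute('data-src');
}

if (typeof document !== 'undefined') {
  if ('loading' in HTMLImageElement.prototype) {
    // Native path: restore real src attributes and let the browser decide.
    document.querySelectorAll('img[data-src]').forEach(upgradeForNativeLazyLoading);
  } else {
    // No native support: dynamically load a lazyloading library.
    const script = document.createElement('script');
    script.src = '/js/lazyload-fallback.js'; // hypothetical bundle path
    document.head.appendChild(script);
  }
}
```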
Lazyloading from a UX perspective
Native lazyloading means you aren't in control anymore. As its source code reveals, Chrome's lazyloading implementation is based not just on how near an image is to the current scroll position, but also on the connection speed.
And that's a good thing, as browsers are often smarter when it comes to performance, or at least have more resources and people to evaluate behaviour. Nevertheless, there are browser differences: Chrome is more eager than Firefox when it comes to lazyloading.
Besides connection speed, users can have Data Saver enabled. In that case, you don't want to be too eager, to avoid consuming their bandwidth. When using a library, it is up to you which strategy to follow under which circumstances, and some libraries already take such circumstances into account.
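Such a strategy can be sketched on top of the Network Information API (`navigator.connection`), which exposes `saveData` and `effectiveType` in Chromium-based browsers; the margins below are illustrative defaults, not recommendations:

```javascript
// Pick how eagerly to preload, based on network hints.
function lazyLoadMargin(connection) {
  if (!connection) return '400px';       // no hints available: moderate default
  if (connection.saveData) return '0px'; // Data Saver on: only load in view
  if (connection.effectiveType === '2g' || connection.effectiveType === 'slow-2g') {
    return '100px';                      // slow network: stay conservative
  }
  return '800px';                        // fast network: preload well ahead
}

if (typeof navigator !== 'undefined') {
  const margin = lazyLoadMargin(navigator.connection);
  // ...pass `margin` as rootMargin to an IntersectionObserver...
}
```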
Partial lazyloading support
Besides lazyloading inline images and iframes, there are other resources that could benefit from lazyloading, and thus improve performance and user experience.
Often, you want product images or hero images to be shown early on. While I would consider a product image critical, hero images often serve a cosmetic or decorative purpose and aren't critical. You could skip such resources on slow connections or when Data Saver has been enabled.
This means you would still need a library to make your website and its loading behaviour adaptive.
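As a sketch of such adaptive behaviour, one could gate decorative images on the connection; the `kind` labels and the checks are illustrative, not a library API:

```javascript
// Decide whether to load an image at all: product images are
// critical (always load), hero images are decorative (skippable
// when the connection is constrained).
function shouldLoadImage(kind, connection) {
  if (kind === 'product') return true; // critical content, always load
  const constrained = !!connection && (
    connection.saveData ||
    connection.effectiveType === '2g' ||
    connection.effectiveType === 'slow-2g'
  );
  return !constrained; // skip decorative images on constrained connections
}
```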
When it comes to performance, most developers know that avoiding render-blocking resources is good practice. There is no need to put your Google Maps script in the head of your document, negatively impacting Largest Contentful Paint, when the map is displayed in the footer.
However, even then, the resources will be downloaded and executed, blocking the main thread and impacting metrics such as Time to Interactive (TTI) and its Lighthouse v6 replacement, Total Blocking Time (TBT).
Scripts for Google Maps, AddThis or just a sticky element often aren't critical, and sometimes not even resource-heavy. But when they aren't in the viewport yet, why not lazyload them? If a user never reaches the Google Maps embed or the social sharing icons, there is no need to serve such assets at all.
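Lazyloading a script with an IntersectionObserver might look like this; the `#map` selector and the Maps URL are illustrative:

```javascript
// Inject a third-party script at most once.
function injectScriptOnce(doc, src) {
  if (doc.querySelector(`script[src="${src}"]`)) return false; // already loaded
  const script = doc.createElement('script');
  script.src = src;
  script.async = true;
  doc.head.appendChild(script);
  return true;
}

// Browser-only wiring: only fetch the Maps script once its
// container scrolls near the viewport.
if (typeof window !== 'undefined' && 'IntersectionObserver' in window) {
  const mapContainer = document.querySelector('#map'); // hypothetical footer map
  if (mapContainer) {
    const observer = new IntersectionObserver((entries, obs) => {
      if (entries.some((e) => e.isIntersecting)) {
        injectScriptOnce(document, 'https://maps.googleapis.com/maps/api/js');
        obs.disconnect();
      }
    }, { rootMargin: '300px' });
    observer.observe(mapContainer);
  }
}
```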
Moreover, as the above examples aren't critical resources (or can at least be replaced by a plain Google Maps hyperlink, or no enriched functionality at all), a developer can even choose to omit such resources altogether, depending on the circumstances:
Within the websites my agency builds, we try to let users overrule preferences such as dark mode, reduced motion (and thus animations), and the amount of resources downloaded. Not all users are aware of the saveData flag (called Lite mode within Chromium as of April 23, 2019), but they might still want a lean experience, or simply want to use fewer resources and leave a smaller footprint.
On the other hand, when users have saveData enabled, they might want to disable it for just one specific website. For this reason, we enable users to overrule their preferences on-site, where preferences are saved using localStorage. An example can be seen on this website as well, via the button positioned at the bottom right of your screen.
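The override logic can be sketched as follows; the localStorage key `prefers-save-data` is an illustrative name, not necessarily what any production code uses:

```javascript
// The effective data-saving preference is the browser hint unless
// the user has explicitly overridden it on-site.
function effectiveSaveData(browserSaveData, storedOverride) {
  if (storedOverride === 'on') return true;
  if (storedOverride === 'off') return false;
  return browserSaveData; // no override stored: follow the browser hint
}

if (typeof window !== 'undefined') {
  const override = window.localStorage.getItem('prefers-save-data'); // hypothetical key
  const saveData = effectiveSaveData(
    !!(navigator.connection && navigator.connection.saveData),
    override
  );
  // ...use `saveData` to pick the loading strategy, animations, etc...
}
```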
To intervene in an optimal way, it is easiest to implement this type of adaptive design on the client, instead of using cookies and letting the server serve different HTML with different attributes. The server-side approach would also impact the setup and effectiveness of server-side caching, as it requires creating and storing different (full-page) caches.
Moreover, this way I know our websites are still in control of lazyloading behaviour, being more adaptive and delivering the same experience across different browsers. That also improves the trustworthiness of your data, such as bounce rates, when analyzing the effect of implementing lazyloading or changing thresholds.
For the time being, native lazyloading isn't a one-size-fits-all solution for me yet.