Chapter 10

Learning outcomes:

1. Previous inefficient procedures
2. How to make them efficient

## Same offsets

All the lazy loading examples we've seen so far involved images placed at considerable distances from one another. In these cases, our algorithm indeed did a fairly good job of loading the images.

But what if the images are placed close to each other? As an example, consider the following.

Image 1
Image 2
Image 3

Would it still be justified to assign a separate scroll listener to each image? Clearly not!

As all the images are at the same offset position, there is absolutely no point in adding separate listeners for them. Instead, we shall use a single listener that tracks the offset of these images against the scroll position and loads all of them together.

But before doing this we'll have to determine which images are at the same offset position. Only then can we proceed any further. How can we do this?

As a quick hint, you can start off by comparing the first image's offset against the value `-Infinity`; if it is greater, you should push this offset into an `offsets` array. The value to be compared with the second offset then becomes this new value, not `-Infinity`, and so on.

If you are really willing to solve this problem on your own with a detailed guide, you should attempt the exercise on the next page. Below, we just roughly outline the code and how it finally gives us an `offsets` array where all values are unique and correspond to indexes of images stored in another `indexes` array.

```
var offsets = [-Infinity];
var indexes = [];

function populateIndexes(offset, index, i) {
    if (offset > offsets[i]) {
        // current offset greater than the previous offset
        offsets.splice(i + 1, 0, offset);
        // create a new array with the index
        indexes.splice(i, 0, [index]);
        return true;
    }
    else if (offset === offsets[i]) {
        // same offset, already present in offsets
        // hence only update the indexes array
        indexes[i - 1].push(index);
    }
    else {
        // offset less than the previous one
        // hence iterate backwards until it is no longer less
        while (offset < offsets[i]) {
            i--;
        }
        return populateIndexes(offset, index, i);
    }
}
```

First, we calculate each image's offset and send it to the function `populateIndexes()` as an argument, along with the image's index (`index`) and a pointer variable (`i`). This function does the job of comparing the provided offset value with the previous values, checking whether it is equal to or less than any of them.

At the end of the loop, we are ultimately left with two arrays: `offsets` and `indexes`. `offsets` holds the offsets of all the lazy images in sorted order, whereas `indexes` holds the corresponding indexes of the images.

For example, if we have four lazy images with the offsets 100, 200, 100, 300, in this very order in the HTML, at the end `offsets` will be `[100, 200, 300]` and `indexes` will correspondingly be `[[0, 2], [1], [3]]`. The first element of `indexes` holds the indexes of the images at the first offset in `offsets`, i.e. 100.
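To check this yourself, here is a minimal runnable sketch (Node.js) that repeats the `populateIndexes()` routine from above and feeds it the four offsets from the example; the small driver loop with the pointer `p` mirrors how the function is described as being called:

```javascript
var offsets = [-Infinity];
var indexes = [];

// same routine as shown above
function populateIndexes(offset, index, i) {
    if (offset > offsets[i]) {
        offsets.splice(i + 1, 0, offset);
        indexes.splice(i, 0, [index]);
        return true;
    } else if (offset === offsets[i]) {
        indexes[i - 1].push(index);
    } else {
        while (offset < offsets[i]) i--;
        return populateIndexes(offset, index, i);
    }
}

// four lazy images with offsets 100, 200, 100, 300 (in HTML order)
var imageOffsets = [100, 200, 100, 300];
for (var index = 0, p = 0; index < imageOffsets.length; index++) {
    if (populateIndexes(imageOffsets[index], index, p)) p++;
}
offsets.shift(); // drop the -Infinity sentinel

console.log(offsets); // [ 100, 200, 300 ]
console.log(indexes); // [ [ 0, 2 ], [ 1 ], [ 3 ] ]
```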

Ultimately, we can now use this `offsets` array and assign a scroll listener for each of its values, just like we did in the previous chapters. Once the condition in a listener is met, we will loop through the corresponding elements in `indexes` and lazy load all the respective images one by one.

But for now, you don't have to worry about writing code to make this efficiency step work with the lazy loading algorithm, because there are more steps to follow. We will assemble a single piece of code at the end of this chapter, merging all the efficiency procedures to make our algorithm as efficient as it could be.

## Benchmark distance

If you look closely at the section above, you might notice one thing: it only deals with cases where the images are at exactly the same offset, not one pixel above, not one pixel below.

However, you might already know that even images that are 30 or 40 pixels away from an image can't rightly be considered far away. For two images to be far away from each other, they should be at least some considerable distance apart, which you get to decide in the code.

Which of the following values would you choose to denote the least amount of distance between two considerably distanced lazy images?

• 30
• 50
• 100
• 200

Well, even if you chose 100, technically it is not wrong, since it is entirely your choice. We have used 200 because, if you notice closely, even 100 pixels doesn't really amount to a considerable distance.

You can try this on your own too: place two images on a web page, with a top margin of 100px on the second image. The images should be positioned as blocks. Once you do this, take a second or two to look at the images. You'll come to the conclusion that even 100px doesn't really represent a considerable distance.

This means that we shall further reduce the `offsets` array to one where each value is at least the benchmark distance away from the previous value that was kept.

For example, if you choose 200px as the benchmark distance, then the `offsets` array `[30, 70, 130, 400, 550]` shall reduce down to `[30, 400]` in this step of making our algorithm efficient. Hence, instead of the previous `offsets` array, we shall now use this new array. Remember that the code that reduces the `offsets` array here shall also update `indexes` accordingly.

```
var a = 0;
var b = 0;
var o = 0;
var considerable = 200; // the considerable (benchmark) distance

while (a < (offsets.length - 1)) {
    o = offsets[a] + considerable;
    b = 1;
    // merge every following offset within the benchmark distance
    while (offsets[a + b] <= o) {
        indexes[a] = indexes[a].concat(indexes[a + 1]);
        indexes.splice(a + 1, 1);
        b++;
    }
    a++;
    offsets.splice(a, (b - 1));
}
```
To go through all the details of this procedure, follow along with the second section on the next page.
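To see the reduction in action, here is a runnable sketch (Node.js) of the code above; it assumes one image per offset, so `indexes` starts as `[[0], [1], [2], [3], [4]]`:

```javascript
var offsets = [30, 70, 130, 400, 550];
var indexes = [[0], [1], [2], [3], [4]];
var considerable = 200; // the benchmark distance

var a = 0;
var b = 0;
var o = 0;

while (a < (offsets.length - 1)) {
    o = offsets[a] + considerable;
    b = 1;
    // merge every following offset that lies within the benchmark distance
    while (offsets[a + b] <= o) {
        indexes[a] = indexes[a].concat(indexes[a + 1]);
        indexes.splice(a + 1, 1);
        b++;
    }
    a++;
    offsets.splice(a, (b - 1));
}

console.log(offsets); // [ 30, 400 ]
console.log(indexes); // [ [ 0, 1, 2 ], [ 3, 4 ] ]
```

Note how the merged `indexes` entries keep track of every image that now loads together at the surviving offset.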

## Multiple Listeners

The third problem in our lazy loading algorithm is as follows: multiple scroll listeners are assigned for the multiple offset values. So what exactly is this problem? Let's discuss it.

Suppose that after all the processing done in sections 1 and 2, you are left with the arrays `offsets = [100, 370, 660]` and `indexes = [[0, 1, 2], [3], [4, 5, 6]]`. Even if you were to use this `offsets` array, you'd still be assigning three scroll listeners altogether, one for each element in `offsets`.

Now, as soon as you scroll even just one pixel on the web page, you'll be calling all three listeners one after another! In this case it won't hurt at all, as three concurrent calculations aren't much of a jank, but what if the web page had 100 images, all at different and considerably distanced offsets?

These days, when we have detailed blog articles of over 10,000 words, it isn't surprising to see articles with over 100 images!

In this case, each pixel scrolled will call 100 listeners one by one! Imagine scrolling 1000px quickly - you would be invoking 100,000 listener calls! Clearly something you don't want.

So how can we solve this problem?

Well we can take advantage of the sorted `offsets` array and use just a single listener to track the scroll against the offsets.

The way we will implement this is that we will match the scroll position, given by `window.pageYOffset`, against the first element of `offsets`. If it is less than the first element, it will obviously be less than all elements further down the array, and hence we won't need to load anything.

However, if it is greater than the first element, we will further need to check whether it is greater than the second element as well. We will have to continue doing this until we find a value where the scroll is less than the next value, or we reach the end of the array.

As an example illustrating this idea, suppose you have the arrays `offsets = [11, 340, 670]` and `indexes = [[0, 1], [2], [3, 4]]`, and that you start off with `pageYOffset` equal to `0`.

You scroll through 10px and your listener accordingly gets called about 10 times, where in each call `pageYOffset` is less than the first offset (i.e. 11px), hence nothing happens. Then you scroll 1px further down, and the first offset (`11`) gets matched by `pageYOffset`, leading to all the images with indexes in `indexes[0]` getting loaded into view.

If you keep proceeding this way, going from the top of the page towards the bottom, you will always match only the first element in `offsets`, never going any further than that.

However, if your web page has some id link that directly takes you to some position at the bottom of the page, then the steps we discussed above will come into action. They will check exactly after which offset the value of `pageYOffset` lies, and accordingly load all the images for that very offset.

Following is the code to accomplish this task of switching from multiple listeners to just one single listener.

```
// offsets and indexes processed from steps 1 and 2 above
function lazyLoad() {
    var i = -1;
    while (window.pageYOffset >= offsets[i + 1]) i++;
    if (i === -1) { return; }

    // remove the respective elements from both arrays
    offsets.splice(i, 1);
    var curIndexes = indexes.splice(i, 1)[0]; // image indexes to lazy load

    // iterate over all the elements and lazy load each
    for (var a = 0, len = curIndexes.length; a < len; a++) {
        lazyImages[curIndexes[a]].src = lazyImages[curIndexes[a]].dataset.src;
    }
}

// add the function to the following SINGLE scroll listener
window.onscroll = function() {
    lazyLoad();
}

// call the function initially to load images above-the-fold
// without the need to scroll
lazyLoad();
```

Let's explain what is happening in this mess of code.

1. The reason we create a separate function, `lazyLoad()`, here is to be able to call it globally on page load, to take care of any images above-the-fold. If we were not to call this function globally, above-the-fold images couldn't get lazy loaded without the need to scroll.
2. Moving into the function, we first create a variable `i` that serves to provide us with the index of the desired offset (discussed above). Using the `while` loop, we iterate over the `offsets` array until we find a value that is greater than `pageYOffset`, or we reach the end of the array.
3. If we didn't find a match and `i` remained `-1`, we exit the function using `return`.
4. If we found a match, we remove the corresponding offset from `offsets` and the corresponding array of indexes from `indexes`. Using this array of indexes, we iterate over all its elements and load the respective images into view. This completes the idea behind lazy loading: to load images at some point.

So as you can see in this way we can eliminate the usage of multiple listeners and consequently make our lazy loader way more efficient. Over to the last rectification.
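To see the scanning idea outside the browser, here is a small Node.js simulation of the id-link jump scenario described above. Note that `window` here is a stub object (an assumption for this simulation), with `pageYOffset` set to 400, i.e. between the second and third offsets:

```javascript
// stub for the browser's window object (assumption for this simulation)
var window = { pageYOffset: 400 };

var offsets = [11, 340, 670];
var indexes = [[0, 1], [2], [3, 4]];

// scan forward until the next offset exceeds the scroll position
var i = -1;
while (window.pageYOffset >= offsets[i + 1]) i++;

// i is now 1: the scroll lies after offset 340 but before 670
var curIndexes = [];
if (i !== -1) {
    offsets.splice(i, 1);
    curIndexes = indexes.splice(i, 1)[0]; // image indexes to load
}

console.log(curIndexes); // [ 2 ]
console.log(offsets);    // [ 11, 670 ]
```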

## Quick firing

The last thing left to rectify in our lazy loading algorithm is the direct call of the scroll handler on every single pixel scrolled. So what does this mean?

Imagine that you start at the top of a web page with multiple lazy images and scroll swiftly across it to the position 800px. Also suppose that in this drive you skip many images above.

Now, even if you were to apply all three measures above, the lazy loading code would regardless load all the images that you had skipped. You might not have wanted them loaded, but the algorithm will load them!

Not only this, but the quick scrolls will also call the scroll listener too frequently, making them a potential candidate for performance degradation on low-tier devices.

In plain words, there's simply no stopping our algorithm. It doesn't wait for some time before continuing with its calculations. Rather, it performs the calculations at the very speed at which the scroll event fires, without considering whether certain images remain in the viewport for some time before loading them.

This signifies the fact that the direct firing of the scroll listener is truly an inefficiency. Can you think of anything that can be used to prevent this?

What can we use to get the lazy loading algorithm to come into action only after some time has passed since the last call of the scroll event?

• The function `setInterval()`
• The function `delayFunc()`
• The function `setTimeout()`

Well, we would have to go to our old friend `setTimeout()`! If you don't know about this function, consider reading JavaScript Timers.

The whole lazy loading algorithm will go inside the timeout's function, which will in turn go inside the scroll handler. As a consequence, every scroll event will first have to go through a timer before the underlying lazy loading code can execute.

Talking more about the working of this procedure: in the scroll handler, we'll create a timeout of 300ms with each call of the scroll event. If within this period of time another scroll event is dispatched by the user, the previous timeout will be cancelled and a new one will be created for another 300ms.

Here's the code for the timed scroll event:

```
var time = 300; // 300ms delay
var timeoutFunc = null;

window.onscroll = function() {
    clearTimeout(timeoutFunc);
    timeoutFunc = setTimeout(function() {
        // lazy loading algorithm goes here
    }, time);
}
```

The variable `time` holds the time delay for the timeout (in milliseconds), whereas `timeoutFunc` is given to hold the id of the `setTimeout()` function. In line 5, we clear the previous timeout in the stack, and then in line 6, create a new one to be executed after another 300ms.

In this way, we make sure that whenever scrolling is done, the lazy loading algorithm will only be called once 300ms has passed since the last firing of the event.
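To convince yourself of this debouncing behaviour, here is a small Node.js simulation: five rapid "scroll events" are fired in a row, yet the inner function runs only once, 300ms after the last one. The handler name `onScrollHandler` and the counter `runs` are purely for demonstration:

```javascript
var time = 300; // 300ms delay
var timeoutFunc = null;
var runs = 0;   // counts how many times the lazy code actually executes

function onScrollHandler() {
    clearTimeout(timeoutFunc);
    timeoutFunc = setTimeout(function() {
        runs++; // the lazy loading algorithm would go here
    }, time);
}

// simulate five scroll events fired in quick succession
for (var k = 0; k < 5; k++) onScrollHandler();

setTimeout(function() {
    console.log(runs); // 1
}, 400);
```

Each call cancels the pending timeout before scheduling a new one, so only the last scheduled function ever gets to run.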

## Final code

And this marks the completion of all the efficiency steps we could take to improve our scroll-based lazy loading algorithm. Following is the complete code after we've applied all the above measures to our previous algorithm.

Don't panic at the length of this code - it's literally a mixture of everything you've understood and done yourself; there's nothing new!

```
var lazyConts = document.getElementsByClassName("lazy");
var lazyImages = document.getElementsByClassName("lazy_img");
var offsets = [-Infinity];
var indexes = [];
var considerable = 200; // the benchmark distance
var time = 300;
var timeoutFunc = null;

function populateIndexes(offset, index, i) {
    if (offset > offsets[i]) {
        offsets.splice(i + 1, 0, offset);
        indexes.splice(i, 0, [index]);
        return true;
    }
    else if (offset === offsets[i]) { indexes[i - 1].push(index); }
    else {
        while (offset < offsets[i]) i--;
        return populateIndexes(offset, index, i);
    }
}

function lazyLoad() {
    var i = -1;
    while (window.pageYOffset >= offsets[i + 1]) i++;
    if (i === -1) return;
    offsets.splice(i, 1);
    var curIndexes = indexes.splice(i, 1)[0];
    for (var a = 0, len = curIndexes.length; a < len; a++) {
        lazyImages[curIndexes[a]].src = lazyImages[curIndexes[a]].dataset.src;
    }
}

var offset = 0;
for (var i = 0, p = 0, len = lazyConts.length; i < len; i++) {
    (function(i) {
        offset = lazyConts[i].getBoundingClientRect().top + window.pageYOffset - window.innerHeight;
        populateIndexes(offset, i, p) ? p++ : 0;

        // .loader code from previous chapter
        // reloader code from previous chapter

        lazyImages[i].onerror = null;
    })(i);
}

offsets.shift();

// merge all offsets within the benchmark distance (section 2)
var a = 0;
var b = 0;
var o = 0;
while (a < (offsets.length - 1)) {
    o = offsets[a] + considerable;
    b = 1;
    while (offsets[a + b] <= o) {
        indexes[a] = indexes[a].concat(indexes[a + 1]);
        indexes.splice(a + 1, 1);
        b++;
    }
    a++;
    offsets.splice(a, (b - 1));
}

// SINGLE debounced scroll listener
window.onscroll = function() {
    clearTimeout(timeoutFunc);
    timeoutFunc = setTimeout(lazyLoad, time);
}

// load any above-the-fold images right away
lazyLoad();
```

It's a sign-off for now, but we'll come back in the next chapter looking at a new technology to implement lazy loading: `IntersectionObserver()`.