JavaScript execution flow, both in the browser and in Node.js, is based on an event loop.

Understanding how the event loop works is important for optimizations, and sometimes for choosing the right architecture.

In this chapter we first cover the theoretical details of how things work, and then see practical applications of that knowledge.
The event loop concept is very simple. There's an endless loop, where the JavaScript engine waits for tasks, executes them and then sleeps, waiting for more tasks.
The general algorithm of the engine:

1. While there are tasks: execute them, starting with the oldest one.
2. Sleep until a task appears, then go to step 1.
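Conceptually, that loop can be sketched in pseudocode (the `taskQueue`, `execute` and `sleepUntilTaskArrives` names are illustrative, not a real engine API):

```
// conceptual pseudocode, not real engine internals
while (true) {
  if (taskQueue.isEmpty()) {
    sleepUntilTaskArrives(); // consume close to zero CPU
  }
  let task = taskQueue.dequeueOldest(); // first come, first served
  execute(task); // run the script/handler to completion
}
```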
That's a formalization for what we see when browsing a page. The JavaScript engine does nothing most of the time, it only runs if a script/handler/event activates.
Examples of tasks:

- When an external script `<script src="...">` loads, the task is to execute it.
- When a user moves their mouse, the task is to dispatch the `mousemove` event and execute handlers.
- When the time is due for a scheduled `setTimeout`, the task is to run its callback.
- ...and so on.

Tasks are set -- the engine handles them -- then waits for more tasks (while sleeping and consuming close to zero CPU).
It may happen that a task comes while the engine is busy, then it's enqueued.
The tasks form a queue, the so-called "macrotask queue" (V8 term):
For instance, while the engine is busy executing a `script`, a user may move their mouse causing `mousemove`, a `setTimeout` may be due, and so on. These tasks form a queue, as illustrated in the picture above.
Tasks from the queue are processed on a "first come – first served" basis. When the browser engine is done with the `script`, it handles the `mousemove` event, then the `setTimeout` handler, and so on.
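The first-come-first-served order is easy to observe with zero-delay timers (a runnable sketch):

```js
// Two zero-delay timers: both become macrotasks,
// processed strictly in the order they were enqueued.
const order = [];

setTimeout(() => order.push("first task"));
setTimeout(() => order.push("second task"));

order.push("current script"); // the running script finishes first

setTimeout(() => {
  // by now both earlier tasks have run, in FIFO order
  console.log(order.join(" -> "));
  // current script -> first task -> second task
});
```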
So far, quite simple, right?
Two more details:

1. Rendering never happens while the engine executes a task. It doesn't matter if the task takes a long time: changes to the DOM are painted only after the task is complete.
2. If a task takes too long, the browser can't do other tasks, such as processing user events. So after some time, it raises an alert like "Page Unresponsive", suggesting killing the task with the whole page.
That was the theory. Now let's see how we can apply that knowledge.
Let's say we have a CPU-hungry task.
For example, syntax-highlighting (used to colorize code examples on this page) is quite CPU-heavy. To highlight the code, it performs the analysis, creates many colored elements, adds them to the document -- for a large amount of text that takes a lot of time.
While the engine is busy with syntax highlighting, it can't do other DOM-related stuff, process user events, etc. It may even cause the browser to "hiccup" or even "hang" for a bit, which is unacceptable.
We can avoid problems by splitting the big task into pieces. Highlight the first 100 lines, then schedule `setTimeout` (with zero delay) for the next 100 lines, and so on.

To demonstrate this approach, for the sake of simplicity, instead of text-highlighting let's take a function that counts from `1` to `1000000000`.
If you run the code below, the engine will "hang" for some time. For server-side JS that's clearly noticeable, and if you are running it in-browser, then try to click other buttons on the page -- you'll see that no other events get handled until the counting finishes.
```js
let i = 0;

let start = Date.now();

function count() {
  // do a heavy job
  for (let j = 0; j < 1e9; j++) {
    i++;
  }

  alert("Done in " + (Date.now() - start) + 'ms');
}

count();
```
The browser may even show a "the script takes too long" warning.
Let's split the job using nested `setTimeout` calls:
```js
let i = 0;

let start = Date.now();

function count() {
  // do a piece of the heavy job (*)
  do {
    i++;
  } while (i % 1e6 != 0);

  if (i == 1e9) {
    alert("Done in " + (Date.now() - start) + 'ms');
  } else {
    setTimeout(count); // schedule the new call (**)
  }
}

count();
```
Now the browser interface is fully functional during the "counting" process.
A single run of `count` does a part of the job `(*)`, and then re-schedules itself `(**)` if needed:

1. First run counts: `i=1...1000000`.
2. Second run counts: `i=1000001..2000000`.
3. ...and so on.

Now, if a new side task (e.g. an `onclick` event) appears while the engine is busy executing part 1, it gets queued and then executes when part 1 is finished, before the next part. Periodic returns to the event loop between `count` executions provide just enough "air" for the JavaScript engine to do something else, to react to other user actions.
The notable thing is that both variants -- with and without splitting the job by `setTimeout` -- are comparable in speed. There's not much difference in the overall counting time.

To make them even closer, let's make an improvement.

We'll move the scheduling to the beginning of `count()`:
```js
let i = 0;

let start = Date.now();

function count() {
  // move the scheduling to the beginning
  if (i < 1e9 - 1e6) {
    setTimeout(count); // schedule the new call
  }

  do {
    i++;
  } while (i % 1e6 != 0);

  if (i == 1e9) {
    alert("Done in " + (Date.now() - start) + 'ms');
  }
}

count();
```
Now when we start `count()` and see that we'll need to `count()` more, we schedule that immediately, before doing the job.

If you run it, it's easy to notice that it takes significantly less time.

Why?

That's simple: as you remember, there's an in-browser minimal delay of 4ms between nested `setTimeout` calls (it kicks in after five levels of nesting). Even if we set `0`, it's `4ms` (or a bit more). So the earlier we schedule it -- the faster it runs.
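A runnable sketch of the effect: chain zero-delay timers and record the gap between calls. In browsers the gaps grow to ~4ms after five levels of nesting; Node.js does not apply this clamp, so there the gaps stay near zero:

```js
const gaps = [];
let prev = Date.now();
let hops = 0;

function hop() {
  const now = Date.now();
  gaps.push(now - prev); // time since the previous nested call
  prev = now;

  if (++hops < 10) {
    setTimeout(hop); // zero-delay, but browsers clamp nested calls to ~4ms
  } else {
    console.log(gaps); // in a browser: small gaps first, then ~4ms each
  }
}

setTimeout(hop);
```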
Finally, we've split a CPU-hungry task into parts - now it doesn't block the user interface. And its overall execution time isn't much longer.
Another benefit of splitting heavy tasks for browser scripts is that we can show progress indication.
As mentioned earlier, changes to DOM are painted only after the currently running task is completed, irrespective of how long it takes.
On one hand, that's great, because our function may create many elements, add them one-by-one to the document and change their styles -- the visitor won't see any "intermediate", unfinished state. An important thing, right?
Here's the demo: the changes to `i` won't show up until the function finishes, so we'll see only the last value:
```html
<div id="progress"></div>

<script>
  function count() {
    for (let i = 0; i < 1e6; i++) {
      i++;
      progress.innerHTML = i;
    }
  }

  count();
</script>
```
...But we also may want to show something during the task, e.g. a progress bar.
If we split the heavy task into pieces using `setTimeout`, then changes are painted out in-between them.
This looks prettier:
```html
<div id="progress"></div>

<script>
  let i = 0;

  function count() {
    // do a piece of the heavy job (*)
    do {
      i++;
      progress.innerHTML = i;
    } while (i % 1e3 != 0);

    if (i < 1e7) {
      setTimeout(count);
    }
  }

  count();
</script>
```
Now the `<div>` shows increasing values of `i`, a kind of progress bar.
In an event handler we may decide to postpone some actions until the event has bubbled up and been handled on all levels. We can do that by wrapping the code in a zero-delay `setTimeout`.

In the chapter info:dispatch-events we saw an example: the custom event `menu-open` is dispatched in a `setTimeout`, so that it happens after the "click" event is fully handled.
```js
menu.onclick = function() {
  // ...

  // create a custom event with the clicked menu item data
  let customEvent = new CustomEvent("menu-open", {
    bubbles: true
  });

  // dispatch the custom event asynchronously
  setTimeout(() => menu.dispatchEvent(customEvent));
};
```
Along with macrotasks, described in this chapter, there are microtasks, mentioned in the chapter info:microtask-queue.

Microtasks come solely from our code. They are usually created by promises: an execution of a `.then/catch/finally` handler becomes a microtask. Microtasks are used "under the cover" of `await` as well, as it's another form of promise handling.

There's also a special function `queueMicrotask(func)` that queues `func` for execution in the microtask queue.
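The `await` case is easy to see in a runnable sketch: the code after `await` resumes as a microtask, after the rest of the current synchronous code has finished:

```js
const log = [];

async function f() {
  log.push("before await");
  await null;                 // suspends f(); the rest becomes a microtask
  log.push("after await");
}

f();
log.push("sync code done");   // runs before the microtask resumes f()

queueMicrotask(() => {
  console.log(log.join(" -> "));
  // before await -> sync code done -> after await
});
```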
Immediately after every macrotask, the engine executes all tasks from microtask queue, prior to running any other macrotasks or rendering or anything else.
For instance, take a look:
```js
setTimeout(() => alert("timeout"));

Promise.resolve()
  .then(() => alert("promise"));

alert("code");
```
What's going to be the order here?
1. `code` shows first, because it's a regular synchronous call.
2. `promise` shows second, because `.then` passes through the microtask queue, and runs after the current code.
3. `timeout` shows last, because it's a macrotask.

The richer event loop picture looks like this (order is from top to bottom, that is: the script first, then microtasks, rendering and so on):
All microtasks are completed before any other event handling or rendering or any other macrotask takes place.
That's important, as it guarantees that the application environment is basically the same (no mouse coordinate changes, no new network data, etc) between microtasks.
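This "drain the whole microtask queue first" rule also applies to microtasks scheduled from within other microtasks. A runnable sketch:

```js
const log = [];

setTimeout(() => log.push("macrotask")); // waits until microtasks drain

queueMicrotask(() => {
  log.push("micro 1");
  // a microtask scheduled from inside a microtask still runs
  // before the pending macrotask
  queueMicrotask(() => log.push("micro 2, nested"));
});

setTimeout(() => {
  console.log(log.join(" -> "));
  // micro 1 -> micro 2, nested -> macrotask
});
```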
If we'd like to execute a function asynchronously (after the current code), but before changes are rendered or new events handled, we can schedule it with `queueMicrotask`.

Here's an example with a "counting progress bar", similar to the one shown previously, but `queueMicrotask` is used instead of `setTimeout`. You can see that it renders at the very end. Just like the synchronous code:
```html
<div id="progress"></div>

<script>
  let i = 0;

  function count() {
    // do a piece of the heavy job (*)
    do {
      i++;
      progress.innerHTML = i;
    } while (i % 1e3 != 0);

    if (i < 1e6) {
      queueMicrotask(count);
    }
  }

  count();
</script>
```
A more detailed event loop algorithm (though still simplified compared to the specification):

1. Dequeue and run the oldest task from the macrotask queue (e.g. "execute script").
2. Execute all microtasks: while the microtask queue isn't empty, dequeue and run the oldest microtask.
3. Render changes if there are any.
4. If the macrotask queue is empty, wait till a macrotask appears.
5. Go to step 1.

To schedule a new macrotask:

- Use zero-delay `setTimeout(f)`.

That may be used to split a big calculation-heavy task into pieces, for the browser to be able to react to user events and show progress between them.

Also, it's used in event handlers to schedule an action after the event is fully handled (bubbling done).

To schedule a new microtask:

- Use `queueMicrotask(f)`.
- Also, microtasks are created by promise handlers, which go through the microtask queue as well.

There's no UI or network event handling between microtasks: they run immediately one after another.

So one may want to use `queueMicrotask` to execute a function asynchronously, but within the same environment state.
For long heavy calculations that shouldn't block the event loop, we can use [Web Workers](https://html.spec.whatwg.org/multipage/workers.html).
That's a way to run code in another, parallel thread.
Web Workers can exchange messages with the main process, but they have their own variables and their own event loop.

Web Workers do not have access to the DOM, so they are useful mainly for calculations, to use multiple CPU cores simultaneously.