The Optimizing Compiler
Let's continue our journey from the previous article - part 1.
Up until this point, our code runs okay, but not great. What makes JavaScript really fast is that green arrow: anything we can get across the green arrow into TurboFan, the optimizing compiler, is good.
Three things the engine does to help you out:
Speculative Optimization
Hidden classes for dynamic lookups
Function inlining (a rough sketch follows this list)
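Function inlining doesn't get its own demo later in this article, so here's a rough sketch of the idea; the square and sumOfSquares functions below are made up for illustration. When a hot function keeps calling a small function, TurboFan can copy the callee's body into the caller and skip the call overhead.

function square(x) {
  return x * x;
}

function sumOfSquares(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    // Once this loop gets hot, TurboFan can inline square here,
    // effectively turning the call into `total += i * i;`.
    total += square(i);
  }
  return total;
}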
What makes the optimizing compiler faster is, in large part, the removal of the things that make the interpreter slower.
It turns out that JavaScript is hard. JavaScript is dynamic.
Experiments
// benchmark.js
const { performance } = require('perf_hooks');
// SETUP
let iterations = 1e7;
const a = 1;
const b = 2;
const add = (x, y) => x + y;
// SETUP
performance.mark('start');
// EXERCISE
while (iterations--) {
add(a, b);
}
// EXERCISE
performance.mark('end');
performance.measure('My Special Benchmark', 'start', 'end');
const [ measure ] = performance.getEntriesByName('My Special Benchmark');
console.log(measure);
Output: a PerformanceMeasure entry with the benchmark's name, start time, and measured duration.
I would argue that we're running the same function, add, with the same values, a lot of times. That feels like a good candidate for the optimizing compiler.
How do we know whether it's going to the optimizing compiler? It turns out we can check with this command:
node --trace-opt benchmark.js
--trace-opt: trace optimization.
In the trace output you can see that add was marked for optimized recompilation, so we know it went to TurboFan. It got optimized.
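The trace output can be pretty noisy. Assuming a Unix-like shell, one way to narrow it down to the lines that mention our function is to pipe it through grep:
node --trace-opt benchmark.js | grep add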
Now, what if we never optimize our function? How slow will it be?
Let's add the following line right below the // EXERCISE comment in benchmark.js:
%NeverOptimizeFunction(add);
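For context, here's roughly what the exercise block looks like after that change; everything else in benchmark.js stays the same.

// EXERCISE
%NeverOptimizeFunction(add); // ask V8 to never send add to TurboFan
while (iterations--) {
  add(a, b);
}
// EXERCISE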
Then run the following command and see the result we get:
node --allow-natives-syntax benchmark.js
The duration now comes in at around 44.19ms. That's much slower.
Another Experiment
// add.js
function add(x, y) {
  return x + y;
}

add(1, 2);                        // first call: runs in the interpreter, gathers feedback
%OptimizeFunctionOnNextCall(add); // tell V8 to optimize add on its next call
add(3, 4);                        // second call: add gets optimized
In the terminal
node --allow-natives-syntax --trace-opt add.js
Here you can see our add function got optimized, even though we had only called it once before. We forced the optimization on the next call (the second call).
A function is usually NOT optimized the first time it is run, because going to the optimizing compiler ISN'T FREE. It takes some time.
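If you want to check a function's optimization state programmatically, V8 also exposes a %GetOptimizationStatus intrinsic behind --allow-natives-syntax. The value it returns is an internal bitfield whose layout changes between V8 versions, so treat this sketch as a debugging aid, not a stable API:

// status.js - run with: node --allow-natives-syntax status.js
function add(x, y) {
  return x + y;
}

add(1, 2);
%OptimizeFunctionOnNextCall(add);
add(3, 4);

// Prints a V8-internal bitfield describing add's optimization state.
console.log(%GetOptimizationStatus(add));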
Now, what if we change the function arguments?
add(1, 2);
%OptimizeFunctionOnNextCall(add);
add(3, '4'); // this time the second argument is a string
Let's run the following command:
node --allow-natives-syntax --trace-opt --trace-deopt add.js
We de-optimized the function after just one call with a different kind of argument. What happened?
V8 uses a system called speculative optimization.
How does this work?
function add(x, y) {
return x + y;
}
We use the interpreter because it's ready to go right away, but it doesn't know anything about our code yet, so it isn't as fast as TurboFan (the compiler) is. The optimizing compiler is SLOW to get started.
The interpreter needs some information before it knows what work it can either optimize or skip out on altogether. So the interpreter starts gathering feedback about what it sees as the function is used.
JavaScript is not a typed language, so the engine doesn't know it's going to get two numbers every time. The interpreter does a bunch of hard work up front, the slow way. Eventually, it generates feedback objects that go to the optimizing compiler, with information about HOW the add function is being called: with two numbers. We're not guaranteed that every time, but we are pretty certain numbers are going to be passed in every time. Now it becomes a candidate for being optimized.
What if a string slips in there?
All the assumptions the optimization was built on are wrong. We go back to the bytecode.
The optimizing compiler optimizes for what it's seen. If it sees something new, that's problematic.
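To watch this happen without forcing anything, here's a minimal sketch; the file name and iteration count are mine, and the exact threshold at which V8 decides a function is hot is internal and changes between versions. We let add get hot on its own, then slip a string in:

// feedback.js - run with: node --trace-opt --trace-deopt feedback.js
function add(x, y) {
  return x + y;
}

// Call add with numbers many times so the feedback says
// "this is always called with two numbers" and TurboFan kicks in.
for (let i = 0; i < 1e6; i++) {
  add(i, i + 1);
}

// Now a string slips in: the "two numbers" assumption no longer holds,
// so the optimized code should bail out and fall back to bytecode.
add(3, '4');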
Monomorphism, Polymorphism, and Megamorphism.
The Mechanisms of Speculative Optimization.
Monomorphism: it's the same thing every time.
"This is all I know and all that I've seen. I can get incredibly fast at this one thing."
For instance, we only ever pass in objects that have a single a property whose value is a number, like { a: 2024 }. (There's a sketch of all three cases after these definitions.)
Polymorphism
"I've seen a few shapes before. Let me just check which one this is, and then I'll go do the fast thing."
Megamorphism
"I've seen a lot of things. I'm not particularly specialized. I won't optimize!"
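Here's a minimal sketch of what those three situations look like at a single call site; the object shapes are made up for illustration.

// A property access like obj.a is monomorphic if it only ever sees one shape,
// polymorphic if it sees a few, and megamorphic once it has seen many.
function readA(obj) {
  return obj.a;
}

// Monomorphic: every object has exactly the same shape, { a: number }.
readA({ a: 1 });
readA({ a: 2024 });

// Polymorphic: a few different shapes show up at the same call site.
readA({ a: 1, b: 2 });
readA({ a: 1, c: 3 });

// Megamorphic: lots of different shapes - the engine stops specializing.
for (let i = 0; i < 100; i++) {
  readA({ a: 1, ['extra' + i]: i });
}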
Some Key Takeaways
TurboFan is able to optimize your code in substantial ways if you pass it consistent values
Initialize your properties at creation
Try not to modify them after the fact (there's a short sketch of this at the end of the article)
Maybe just use TypeScript or Flow so you don't have to worry about these things.
The easiest way to reduce parse, compile, and execution times is to SHIP LESS CODE to the browser.
Use the User Timing API (performance.mark and performance.measure, as in the benchmark above) to figure out where the biggest amount of hurt is.
Consider using a type system so that you don't have to think about all of this stuff.
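To close, here's a minimal sketch of the "initialize your properties at creation, and try not to modify them after the fact" advice; the point-style objects are hypothetical.

// Both functions produce objects with x and y, but they take different
// hidden-class transition paths, so a call site that sees both kinds
// is no longer dealing with a single shape.
function makePoint(x, y) {
  return { x, y };   // all properties set at creation: one consistent shape
}

function makePointLater(x) {
  const p = { x };   // starts out as { x }
  p.y = 0;           // transitions to { x, y } after the fact
  return p;
}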