How to measure the execution time of a function in javascript (in nanoseconds)?
I have a list of functions that perform mathematical operations. I need to measure their execution time in nanoseconds, with a random number as the argument. The problem is that when I repeat the measurement several times, the results vary too much.
Consider a specific function:
var func = function(x) {
    return x*x
}
Measuring a single call:

var func = function(x) {
    return x*x
}
var s_time = performance.now()
func(Math.random())
var dur = performance.now() - s_time
console.log(dur)
Averaging over many iterations:

var func = function(x) {
    return x*x
}
var iterations = 1000000
var s_time = performance.now()
for (var i = 0; i < iterations; i++) { func(Math.random()) }
var dur = (performance.now() - s_time) / iterations * 1000 * 1000
document.write(iterations + " iterations: " + dur.toFixed(4) + ' ns/iteration<br>')
The exact execution time of a single function call is quite hard to obtain.
First, performance.now() returns a fractional number of milliseconds, so in principle it could offer nanosecond precision. In practice, though, the standard only requires a resolution of about 5 microseconds, and browsers deliberately coarsen it further to mitigate timing attacks (e.g. Spectre).
So the correct approach is to call the function many times and take the average.
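You can check the actual timer granularity in your environment with a quick sketch like this (the step you observe depends on the browser and its settings):

// Observe the smallest steps performance.now() makes in this browser
var prev = performance.now()
var steps = new Set()
for (var i = 0; i < 1000000 && steps.size < 5; i++) {
    var t = performance.now()
    if (t !== prev) {
        steps.add(+(t - prev).toFixed(6))   // distinct deltas, in milliseconds
        prev = t
    }
}
console.log('observed timer steps (ms):', [...steps].sort(function(a, b) { return a - b }))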
Also, in Node.js there is process.hrtime (and its newer form process.hrtime.bigint()), which reports time in nanoseconds.
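In Node.js a minimal sketch looks like this (for a function this small the result is still dominated by call and timer overhead, so the averaging advice applies there too):

// Node.js only: monotonic nanosecond timestamps via process.hrtime.bigint()
var func = function(x) { return x*x }

var start = process.hrtime.bigint()
func(Math.random())
var end = process.hrtime.bigint()
console.log('single call: ' + (end - start) + ' ns')   // BigInt difference, in nanoseconds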
Neither method, applied naively, will give a meaningful result. Why? Because a lot of interesting things happen while the code is actually executed.
The engine applies an enormous number of optimizations, and a function that has been called 10 times can end up as completely different code than one called 100 times. The same goes for argument types: passing integers versus fractional numbers, for example.
For a single JavaScript function you wrote, the engine may generate several compiled versions, and those versions can have quite different performance.
Switching between them happens on the fly, and in general you do not know when it occurs.
So measuring the speed of "a function" makes little sense by itself, since under the hood there are several of them. If you are interested in the details, look up JIT, AOT, and V8 optimizations.
What matters here is that there are "cold" versions, which are slower but more reliable and are used right away, and "hot" versions, which the compiler switches to when it sees that the code is called many times under stable conditions. The "hot" versions are faster.
When the loop starts, the "cold" version runs first; at some point optimization kicks in and execution switches to a faster version. There may be several such switches.
Knowing this, you can warm the code up first and then measure: on warmed-up code the same compiled version will most likely run throughout the measurement.
Or, as you did, run it a million times (ten thousand is probably enough), so that the first runs of the cold code barely affect the average. You can tell this is happening by watching how the per-iteration time changes as you increase the iteration count: at some point it stops improving.
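A rough sketch of the warm-up-then-average approach (the iteration counts are arbitrary, and the Math.random() overhead is included in the result):

var func = function(x) { return x*x }

// Warm-up phase: give the engine a chance to switch to the optimized ("hot") version
for (var i = 0; i < 10000; i++) { func(Math.random()) }

// Measurement phase: average over many calls
var iterations = 1000000
var s_time = performance.now()
for (var i = 0; i < iterations; i++) { func(Math.random()) }
var dur = (performance.now() - s_time) / iterations * 1000 * 1000   // ms -> ns per call
console.log(iterations + ' iterations: ' + dur.toFixed(4) + ' ns/iteration')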
And here is the most important point: even if you measure this speed, what will you do with that knowledge? In a real program the same code may run at a completely different speed than the one you measured. Micro-benchmarks like this are mostly useful as general education.
In practice it makes sense to measure functions that take much longer AND where that actually causes problems. Look up the term "premature optimization".
In that case it is usually clear what creates the delays, and timing a single call is not difficult.
If a function runs so fast that you cannot even figure out exactly how fast, you most likely do not need to know.
For practical tasks the browser has a profiler; if you need to find out what is slow in a particular piece of code, that is the best place to start.
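If you want to instrument a specific piece of code yourself and then look at it in the DevTools Performance panel, the standard User Timing API (performance.mark / performance.measure) is a reasonable starting point; a minimal sketch:

var func = function(x) { return x*x }

performance.mark('calc-start')
for (var i = 0; i < 1000000; i++) { func(Math.random()) }
performance.mark('calc-end')
// The named measure shows up in the profiler's timings / user timing track
performance.measure('calc', 'calc-start', 'calc-end')
console.log(performance.getEntriesByName('calc')[0].duration.toFixed(2) + ' ms total')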