Why can nodejs + redis performance be so slow?
I pasted this code into one of my controllers to test how fast a result comes back. The value is set at the application startup stage. Then I make the request from the browser, look at the console, and see 6-12 ms. The server runs on my home machine in VirtualBox, but I don't think that affects the speed. I suspect the catch is in asynchrony, since for some reason asynchronous requests always turn out several times slower than synchronous ones.
const st2 = Date.now();
// client is a connected node_redis client; the key was SET at startup
client.get('string key', (err, val) => {
  console.log(val);
  console.log(Date.now() - st2); // prints 6-12 ms
});
At the time of the request, Redis is sleeping and needs to wake up, and if requests to Redis are sent rarely, waking up takes longer. Although in theory there should be an interrupt on the request, which should make it wake up quickly enough.
And console.log can genuinely take a long time; it is better not to call it between the measurement points.
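A cleaner way to measure is to record the timestamp inside the callback first and print only afterwards. A minimal sketch; to keep it self-contained, the real redis client is replaced by a hypothetical stand-in fakeGet that calls back asynchronously via setImmediate, the way client.get would:

```javascript
// Stand-in for client.get: invokes the callback on the next event-loop turn.
// This is an assumption for a self-contained demo, not the real redis client.
function fakeGet(key, cb) {
  setImmediate(() => cb(null, 'value of ' + key));
}

const timings = [];
const start = process.hrtime.bigint();

fakeGet('string key', (err, val) => {
  // Record the elapsed time FIRST, before any console.log.
  timings.push(Number(process.hrtime.bigint() - start) / 1e6); // milliseconds
  // Now it is safe to print: the measurement is already taken.
  console.log('got:', val);
  console.log('elapsed ms:', timings[0]);
});
```

With a real client you would call client.get the same way and push timings into the array, printing the whole array once the run is over.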
And of course, after Redis replies, the node.js script itself still has to wake up :)
But normally a wakeup should take up to 1 ms, so two wakeups up to 2 ms; 6-12 ms is really too much. The wakeup is probably slow because the script and Redis are idle most of the time. Either the interrupts do not work at all, or they work but, because of the long inactivity, the system still decides it is better not to wake the script up that quickly.
Try testing under load. If the ping drops to 1 ms, everything is fine. Yes, 1 ms may seem like a lot, but requests are usually grouped into batches, and even very complex requests are unlikely to send more than two, at most three, such batches, which gives a total of up to 3 ms of ping: not so scary.
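A sketch of such a load test: fire many requests back to back and look at the average per-request latency. Again the redis client is replaced by a hypothetical fakeGet stand-in so the snippet runs on its own; with node_redis you would substitute client.get:

```javascript
// Stand-in async call; replace with client.get('string key', cb) for a real test.
function fakeGet(key, cb) {
  setImmediate(() => cb(null, 'val'));
}

const N = 10000;
let done = 0;
let avgMs = null;
const start = process.hrtime.bigint();

function next() {
  fakeGet('string key', () => {
    if (++done < N) return next();
    const totalMs = Number(process.hrtime.bigint() - start) / 1e6;
    avgMs = totalMs / N;
    console.log('avg latency ms:', avgMs.toFixed(4));
  });
}
next();
```

Under sustained load the average per-request latency should come out far below the 6-12 ms seen for a single cold request, because nothing gets a chance to go idle between requests.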
Someone might wonder why waking up takes so long. Because each wakeup costs CPU time, and the system can, for example, perform at most 50 thousand wakeups per second while loading one core at 100%. Now imagine 10 scripts and 10 Redis instances, each doing something every 1 ms: that is 20 thousand wakeups per second, which loads one core at 40%, and the total ping is still 2 ms. For a 1 ms ping you would have to do 40 thousand wakeups, which is 80% load.
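The arithmetic above in one runnable snippet. The 50 thousand wakeups per second per fully loaded core is the illustrative figure assumed in the text, not a measured constant:

```javascript
const maxWakeupsPerCore = 50000;   // assumed capacity at 100% of one core
const processes = 10 + 10;         // 10 scripts + 10 Redis instances
const wakeupsPerSecEach = 1000;    // one wakeup every 1 ms

const totalWakeups = processes * wakeupsPerSecEach;          // 20000/s
const loadAt2msPing = totalWakeups / maxWakeupsPerCore;      // 0.4, i.e. 40%
const loadAt1msPing = 2 * totalWakeups / maxWakeupsPerCore;  // 0.8, i.e. 80%

console.log(`${totalWakeups} wakeups/s -> ${loadAt2msPing * 100}% of a core, ~2 ms ping`);
console.log(`${2 * totalWakeups} wakeups/s -> ${loadAt1msPing * 100}% of a core, ~1 ms ping`);
```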
Interrupts solve this problem best of all: the script wakes up only when data has arrived on a socket or, say, it is listening to the keyboard and the user has pressed a key. As a result, if 10 scripts are doing nothing and listening to nothing, the processor is not loaded at all. And if they are working, 80% load is not so scary, because a server has more than one core.
P.S. By the way, your virtual machine can also have an effect. I would advise testing on the host directly, because this is exactly the kind of thing a virtual machine can really influence.
P.S. In node.js, console.log blocks the script, at least when output goes to a file. That is, think of it as something like a call to fs.appendFileSync(). When outputting to a terminal it also seems to be slow, although that is worth checking. And it is not even guaranteed that the function itself is fast, although that should have a weaker effect.
Redis really will handle 100k requests per second without any problems. But a request does not reach Redis instantly, and after execution the result does not reach the script instantly either. That is, each individual request still takes about 1 ms, but you can have 100 thousand such requests in flight per second. It is like having a 100 Mbps Internet connection: opening a site still takes 10-1000 ms because of ping, but you can open 10 such sites at once and use the channel far more efficiently.
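The same point as a sketch: each request carries a fixed round-trip latency, simulated here with a hypothetical fakeGet that resolves after a 10 ms setTimeout (a stand-in, not the real redis client), yet firing many requests in parallel takes barely longer than firing one:

```javascript
// Simulated round trip with a fixed ~10 ms "ping" (assumption for the demo).
function fakeGet(key) {
  return new Promise((resolve) => setTimeout(() => resolve('val'), 10));
}

async function main() {
  let t = Date.now();
  await fakeGet('k');   // one request: about one round trip
  console.log('1 request:', Date.now() - t, 'ms');

  t = Date.now();
  // 100 requests in flight at once: still roughly one round trip in total,
  // because the waits overlap instead of adding up.
  await Promise.all(Array.from({ length: 100 }, (_, i) => fakeGet('k' + i)));
  console.log('100 parallel requests:', Date.now() - t, 'ms');
}
main();
```

Latency per request stays the same; it is the number of requests in flight that determines throughput.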