Asynchronous programming
Roman, 2020-04-15 03:49:43

How does an asynchronous program (event loop) understand that a response has come from the server?

How does an asynchronous program (event loop) know that a function (method) has received a response from the server and that control needs to be handed over? Does it use another thread for this, or does the event loop check the readiness of tasks at some point in its cycle?


1 answer
Dmitry Belyaev, 2020-04-15
@Bloodie_lie

To fully understand asynchrony, you have to descend gradually to the lowest level, all the way down to the hardware. But it's worth starting from the very top: the level of our application.
So, we write in our favorite high-level language: JS/Rust/C#/Scala/Python or whatever. In the modern world we most likely have some abstraction for working with asynchronous APIs, provided either by the language's standard library or by third-party libraries. It can be primitive and callback-based, or more advanced like Future/Promise/Task or something similar. Sometimes the language provides syntax like async/await to make these abstractions easier to work with, and sometimes the asynchronous work is hidden from us by the language runtime, as with goroutines in Go. But in any case, somewhere under the hood there will be an event loop, and sometimes more than one, since nothing stops us from combining multithreading with asynchronous calls.
The event loop itself is nothing more than an ordinary while(true) or any other infinite loop. Inside this loop our program has access to a queue (if you don't know that data structure, look it up) that holds the results of already completed tasks. The program takes the next result, finds the callback/Promise/Future/Task waiting for it, and starts executing the waiting code. Again, there may be several queues and they may be processed differently, but that is not important. What matters is that our main thread (or threads) knows nothing about how the asynchronous tasks are actually performed. It only checks whether a result is in the queue: if there is one, it processes it; if not, it decides either to exit the loop (ending the thread, and sometimes the whole process) or to sleep until new results appear.
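The dispatch loop described above can be sketched in a few lines of Python. This is a minimal illustration, not any real runtime's implementation; the names `results`, `callbacks`, and `event_loop`, and the timeout values, are invented for the example.

```python
import queue

results = queue.Queue()   # completed-task results land here
callbacks = {}            # task id -> callback waiting for that result
received = []             # just to observe what the callbacks did

def event_loop():
    """An 'event loop' is literally an infinite loop draining a queue."""
    while True:
        try:
            # Sleep (up to a short timeout) until a result appears.
            task_id, value = results.get(timeout=0.1)
        except queue.Empty:
            if not callbacks:   # nothing left to wait for: exit the loop
                break
            continue
        # Find the waiting code and hand control to it.
        callbacks.pop(task_id)(value)

# Usage: register a callback for task 1, then pretend some background
# machinery has already delivered its result into the queue.
callbacks[1] = lambda value: received.append(value)
results.put((1, "response"))
event_loop()
print(received)
```

Note the main loop never knows *how* the result was produced; it only sees results appearing in the queue, exactly as the answer describes.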
But where do the results in the queue come from? You need to understand that an asynchronous program is almost always multi-threaded: results of operations get into the queue from background threads, which simply block while waiting for the resource they need (or for many resources at once, when using system APIs like epoll or kqueue). As a rule, such background threads spend most of their time waiting, which means they consume no CPU and are skipped by the OS scheduler. This simple model really does save a lot of resources compared to one where each of many threads performs a single task and waits on its own request.
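A minimal sketch of that producer side, assuming the same queue idea as above (the names `background_fetch` and the fake URL are invented, and `time.sleep` stands in for a blocking network read):

```python
import queue
import threading
import time

results = queue.Queue()   # the event loop's inbox

def background_fetch(task_id, url):
    # The background thread simply blocks here; while it waits it uses
    # no CPU and the OS scheduler skips over it.
    time.sleep(0.1)                              # stand-in for a blocking read
    results.put((task_id, f"data from {url}"))   # deliver the result

threading.Thread(target=background_fetch,
                 args=(1, "example.com"), daemon=True).start()

# Meanwhile the main thread is free to do other work; when it is ready,
# it picks the finished result out of the queue:
task_id, data = results.get()   # sleeps until the worker delivers
print(task_id, data)
```

The thread-safe queue is the only point of contact between the worker and the main loop, which is why the main loop can stay ignorant of how the work was done.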
It's important to note that in today's world even mid-level languages like C or C++, let alone high-level ones, don't implement asynchrony themselves. Firstly, different operating systems expose different APIs for it. Secondly, those APIs can handle different kinds of resources on different systems (all the major operating systems can work with the network, but beyond the network you can also work asynchronously with user input, the disk, and peripheral devices such as scanners, webcams, and other USB gadgets). The most popular such library (IMHO) is the cross-platform libuv, although in Rust it is customary to use mio (or abstractions over it, like tokio), C# has similar mechanisms in .NET Core, and in Go it is hardwired

right into the ~1.5 MB of runtime that Go puts into every binary
(the GC lives there too, which is a lot of code either way and arguably deserves to be moved into a dynamic library)

OK. We have more or less figured out the application code. But what happens in the OS kernel? After all, as noted above, we even have APIs to wait on a whole batch of requests at once. It's simple: OS kernels became asynchronous before it was mainstream, unless of course we are dealing with a real-time OS (but we run Windows/Linux/macOS/FreeBSD, not the OS of a Boeing flight computer, where that is critical). Look: when something happens on a peripheral (say, the disk has read the requested data, data has arrived over the network, or the user moved the mouse), an interrupt is generated. The CPU really does interrupt its current work and goes to see what happened; more precisely, it calls the handler provided by the OS. But the OS has its own main work to do, so it tries to finish the handler as quickly as possible and simply dumps the data into RAM, to be sorted out later when its turn comes. Doesn't that remind you of anything? It is very similar to what happened in the event loop, only instead of background threads, the "results" get into the queue from interrupts. Some time later the OS hands the data to the device driver, and so on until it reaches our application. That's it, no magic.
