When is it better to use lock-free data structures, and when mutex-based ones?
I need to implement a multiple-writer, single-reader queue. A quick search immediately turned up boost::lockfree::queue, but I have never had to use lock-free algorithms before, so I'm curious: in what cases are they preferable?
This depends heavily on the task. In the general case, lock-free algorithms let you pass messages without blocking the reader/writer threads and achieve a latency of under 1 µs per message. If that is your technical requirement, then boost::lockfree is probably the only portable option.
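For reference, here is a minimal sketch of how boost::lockfree::queue might be wired up for a multiple-writer, single-reader setup; the element type, capacity, and message counts are illustrative assumptions, not anything prescribed by the library.

```cpp
#include <boost/lockfree/queue.hpp>
#include <thread>
#include <vector>

int main() {
    // Queue with 1024 preallocated nodes; element type must be trivially
    // copyable/destructible (see the limitation discussed below).
    boost::lockfree::queue<int> queue(1024);

    // Multiple writers.
    std::vector<std::thread> producers;
    for (int p = 0; p < 4; ++p) {
        producers.emplace_back([&queue, p] {
            for (int i = 0; i < 1000; ++i) {
                // push returns false if a new node cannot be obtained; retry.
                while (!queue.push(p * 1000 + i)) {
                }
            }
        });
    }

    // Single reader, busy-polling until all messages have arrived.
    std::thread consumer([&queue] {
        int value;
        int received = 0;
        while (received < 4000) {
            if (queue.pop(value)) {
                ++received;  // process value here
            }
        }
    });

    for (auto& t : producers) t.join();
    consumer.join();
}
```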
However, if what you care about is throughput (rather than per-message latency measured in microseconds), an ordinary blocking queue is in no way inferior in terms of messages per second.
Another use case is when you need to release the writing thread as quickly as possible (a logger is the typical example); there a non-blocking queue is the best fit.
Also note that boost::lockfree, like every other implementation, provides no way to put the reader thread to sleep: it has to poll continuously, wasting CPU. If you synchronize it artificially, for example with a mutex, the lock-free queue degenerates into an ordinary blocking one. Another drawback is that the existing implementation severely restricts the element types: only simple data that can be copied bitwise, with no non-trivial constructors, which greatly limits your design options.
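To illustrate the point about the reader thread: the consumer has nothing to block on, so it either spins (pinning a core at 100%) or backs off (adding wake-up latency). A hypothetical consumer loop, where `handle` is just a placeholder for real processing:

```cpp
#include <atomic>
#include <boost/lockfree/queue.hpp>
#include <thread>

// Busy-polling keeps latency minimal but burns a core; yielding or sleeping
// frees the CPU at the cost of reintroducing latency.
void consume(boost::lockfree::queue<int>& queue,
             std::atomic<bool>& running,
             void (*handle)(int)) {
    int value;
    while (running.load(std::memory_order_relaxed)) {
        if (queue.pop(value)) {
            handle(value);              // placeholder for real processing
        } else {
            std::this_thread::yield();  // trade latency for CPU; omit to spin
        }
    }
}
```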
My recommendation: unless you have very specific data-transfer requirements, use a normal queue.
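By way of comparison, a "normal queue" here usually means something along the lines of the following mutex-plus-condition-variable sketch (the class and member names are illustrative); it accepts any movable type and lets the reader sleep until data arrives:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal blocking queue: throughput is comparable to lock-free variants,
// the consumer sleeps when idle, and T can be any movable type.
template <typename T>
class BlockingQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cv_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<T> queue_;
};
```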
In reality it is all quite complicated, and often hard to judge without trying both implementations. Under heavy contention, lock-free structures can behave very badly, because they keep spinning in their internal retry loops trying to commit a write while other threads keep getting in the way. Again, it all depends on how you use them. For example, pulling 5-10 elements out of a queue per operation is unlikely to pay off: in the best case that is 10-20 lock-free operations, each with its cache-line traffic, and even a spinlock-based mutex would be cheaper here.
It seems to me that lock-free algorithms really pay off when the critical section completes in less time than a context switch takes. Say, if the critical section is a write to a file, you can use the good old mutex without any spinlocks, because the write will obviously take longer than the context switch. But if it is access to a buffer being scanned, or to a queue, then lock-free will be much more efficient.
Also, with a large number of threads and a high level of contention, lock-free is preferable.
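For the "critical section shorter than a context switch" case mentioned above, a spinlock is the usual middle ground between a full mutex and a fully lock-free structure. A minimal test-and-set sketch using std::atomic_flag (not taken from the answers above, just an illustration):

```cpp
#include <atomic>

// Minimal test-and-set spinlock: appropriate only when the critical section
// is very short; otherwise a blocking std::mutex wastes less CPU.
class SpinLock {
public:
    void lock() {
        while (flag_.test_and_set(std::memory_order_acquire)) {
            // busy-wait; a pause/yield here can help under heavy contention
        }
    }

    void unlock() {
        flag_.clear(std::memory_order_release);
    }

private:
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
};
```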