Practices for regulating access to cached (rarely changed) data in a multithreaded environment?
The title did not have room to describe the specific conditions under which this has to be done, so here they are.
The essence is as follows:
There is a server which, given the tasks assigned to it, is very simple and is written on top of an ordinary HttpListener. The average load it handles without critical delays is about 1000 requests per second, while the maximum load required by the customer is only about 100 rps, so there is a margin of "strength". Request processing is asynchronous. To produce a response to a particular request, it is very often necessary to access data that originally lives in the database and that can be called shared, since it (or part of it) is consumed unchanged when generating responses to different requests. Since the structure and types of all this data are strictly defined and do not change, on the first load from the database it is packed into specific entities represented as POCO classes. Naturally, these entities are cached by the server in a plain ConcurrentDictionary with a sliding expiration for its elements. Responses are no longer than 300-500 bytes and are cached as well, since the lion's share of requests are repeats that merely come from different clients. All of this has worked fast and reliably for 5 years.
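For reference, a minimal sketch of the kind of cache described above (the class and member names here are invented for illustration, not the real code): a ConcurrentDictionary whose entries remember when they were last touched, so a periodic sweep can evict anything outside the sliding window.

using System;
using System.Collections.Concurrent;

// Hypothetical sketch: ConcurrentDictionary cache with sliding expiration.
sealed class CacheEntry<T>
{
    public T Value;
    public DateTime LastAccessUtc;
}

sealed class SlidingCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, CacheEntry<TValue>> _items =
        new ConcurrentDictionary<TKey, CacheEntry<TValue>>();
    private readonly TimeSpan _lifetime;

    public SlidingCache(TimeSpan lifetime) { _lifetime = lifetime; }

    public TValue GetOrAdd(TKey key, Func<TKey, TValue> loadFromDb)
    {
        var entry = _items.GetOrAdd(key, k => new CacheEntry<TValue>
        {
            Value = loadFromDb(k),
            LastAccessUtc = DateTime.UtcNow
        });
        entry.LastAccessUtc = DateTime.UtcNow;   // slide the expiration window on every hit
        return entry.Value;
    }

    // Called periodically (e.g. from a timer) to drop entries not touched within the window.
    public void Sweep()
    {
        var cutoff = DateTime.UtcNow - _lifetime;
        foreach (var pair in _items)
            if (pair.Value.LastAccessUtc < cutoff)
                _items.TryRemove(pair.Key, out var removed);
    }
}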
Not long ago it became necessary, during requests, not only to read this "shared" data but also to change it occasionally. The entities representing this data live in the dictionaries (the cache), so when a request takes an entity it does not take a copy but a reference to it. Each class representing a particular entity has read-only fields and properties, and there are only two ways to change field values: an update method and a delete method. With those methods everything is clear: to prevent conflicting update and delete operations, asynchronous locks are taken in write mode.
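To make the write side concrete, here is a simplified sketch of what I mean by an asynchronous lock taken in write mode (the entity and its members are invented for illustration; a SemaphoreSlim plays the role of the async lock):

using System.Threading;
using System.Threading.Tasks;

// Hypothetical entity: only the update path takes the async "write" lock;
// readers just read the properties directly, with no coordination.
sealed class ProductEntity
{
    private readonly SemaphoreSlim _writeLock = new SemaphoreSlim(1, 1);

    public string Name { get; private set; }
    public decimal Price { get; private set; }

    public async Task UpdateAsync(string name, decimal price)
    {
        await _writeLock.WaitAsync().ConfigureAwait(false);
        try
        {
            // Writers are serialized here, but nothing synchronizes them with readers.
            Name = name;
            Price = price;
        }
        finally
        {
            _writeLock.Release();
        }
    }
}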
What is not clear is what to do with the fields and properties themselves. For example, if an update or deletion happens at the moment a particular property is being read, should I wrap every field/property access in a read-mode lock? Or maybe, given the new requirements, the whole caching architecture needs a complete rethink?
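The straightforward option I am considering would look roughly like this (a simplified illustration only, with ReaderWriterLockSlim standing in for the read-mode lock; it cannot be held across await, which is part of my doubt):

using System.Threading;

// Illustration of per-property read locking; the open question is whether this
// overhead and boilerplate on every property is really the right approach.
sealed class AccountEntity
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private decimal _balance;

    public decimal Balance
    {
        get
        {
            _lock.EnterReadLock();
            try { return _balance; }
            finally { _lock.ExitReadLock(); }
        }
    }

    public void Update(decimal balance)
    {
        _lock.EnterWriteLock();
        try { _balance = balance; }
        finally { _lock.ExitWriteLock(); }
    }
}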
There is a related problem: the fields of some entities are lists (of type IList<T> and the like). When the data, and in particular the contents of such a list, is updated (elements inserted and/or deleted) while someone is enumerating it, you know what happens (see the sketch below).
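Here is a small self-contained repro of the kind of failure I mean: the unsynchronized reader may get a "Collection was modified" InvalidOperationException, or simply observe inconsistent data.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class ListRaceDemo
{
    static void Main()
    {
        var items = new List<int> { 1, 2, 3, 4, 5 };

        // Writer keeps appending while the reader enumerates, with no coordination at all.
        var writer = Task.Run(() =>
        {
            for (int i = 0; i < 100000; i++)
                items.Add(i);
        });

        try
        {
            foreach (var item in items)
                Console.Write(item + " ");
        }
        catch (InvalidOperationException ex)
        {
            // Typically: "Collection was modified; enumeration operation may not execute."
            Console.WriteLine(Environment.NewLine + "Enumeration failed: " + ex.Message);
        }

        writer.Wait();
    }
}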
To add a dose of reality: for some reason the customer is extremely conservative, so I cannot use any third-party libraries, only the .NET Framework 4.7 (to which the solution was ported a year ago) with its standard set of libraries; until now nothing more was needed to solve these problems.
Any comments or references to practices (other than "go Google it") are welcome, as well as accounts of your own experience, so that I can put it all together and turn it into a concise solution. Thank you.