PHP
zaartix, 2018-11-23 10:58:22

PHP7. Two-level caching + fork?

I'm trying to figure out how to update the cache without blocking. Right now there are two cache levels: Redis plus a file cache. When the Redis entry expires, I start recalculating it, but first I copy the value from the second cache level into Redis so that new requests don't keep hitting the expired entry while the fresh value is being computed.
As a result, only the one session that triggered the update slows down, not all of them. Now the question: how do I remove the delay for that session as well? The idea is that this session would also return the value from the second cache level, while the recalculation is somehow forked off so that it doesn't affect the current session and simply writes the fresh value to the cache when it finishes. Does PHP 7 have any tools for this?
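The two-level fallback described above can be sketched in a few lines. Plain arrays stand in for Redis and the file cache, and the key name and `$compute` callback are made up for the example; the point is that the stale level-2 value is written to level 1 *before* the recomputation, so concurrent sessions stop falling through:

```php
<?php
// Two in-memory arrays stand in for the real stores:
// $l1 would be Redis, $l2 the file cache.
$l1 = [];                      // level 1: fast, short TTL
$l2 = ['stats' => 'stale-v1']; // level 2: slower, longer TTL

function cacheGet(array &$l1, array &$l2, string $key, callable $compute): string
{
    if (isset($l1[$key])) {
        return $l1[$key];      // fresh level-1 hit, nothing to do
    }
    if (isset($l2[$key])) {
        // Seed level 1 with the stale value first, so concurrent
        // requests get served while this session recomputes.
        $l1[$key] = $l2[$key];
    }
    // Recompute and write the fresh value to both levels;
    // this is the step the question wants to fork off.
    $fresh = $compute();
    $l1[$key] = $fresh;
    $l2[$key] = $fresh;
    return $fresh;
}

$value = cacheGet($l1, $l2, 'stats', function () { return 'fresh-v2'; });
```

As written, the session that hits the expired entry still waits on `$compute()`, which is exactly the remaining delay the question asks about.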


3 answers
rPman, 2018-11-24
@zaartix

Change the development paradigm; in the vast majority of cases you can rework your application with minimal cost.
The outdated classical model, where for each request the application behind the web server re-collects data from the database, renders templates, generates a response, and so on, carries the cost of repeating that data collection on every request, which is why caches have to be bolted on.
Instead, run a long-lived web server written in PHP itself (or even a WebSocket server, or both; ReactPHP is a mature technology, with the goodies of Node.js, asynchronous I/O, etc.). Then your data always lives in RAM under your control: you manage caching and locking at the application-logic level, up to keeping everything in variables in memory rather than using a database as a cache layer. Of course, don't expose this server to the network directly; let your main web server act as a proxy that handles users at the request level, including authorization, while your ReactPHP application handles the logic.
Among the drawbacks of the approach: the resulting speed bonus will significantly delay the need to switch to a multi-process implementation, since by default this is a single-threaded application (nothing stops you from running several backends, of course, but then you also have to manage locks with that in mind). For some reason many people try not to think about this when designing the application, while everything is going well, and then it hurts.
The classical approach, by contrast, gives you multithreading and even a clustered setup out of the box, almost at the administrator level.
upd. corrected Redis to React in this answer (stupidly mixed up the terms)
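The paradigm this answer describes, one long-running process that owns its data in RAM, can be illustrated without any framework: state lives in ordinary variables that survive between requests because the process never exits. The handler below is a plain closure with made-up names; with ReactPHP it would be passed to an `HttpServer` running behind the proxying web server:

```php
<?php
// In a long-running server the "cache" is just process memory:
// this closure is created once and handles every request,
// so $data persists between requests with no external store.
$data = ['counter' => 0];      // lives for the process lifetime

$handler = function (string $request) use (&$data): string {
    // Application-level "lock management" is trivial here:
    // a single-threaded event loop serializes all access.
    if ($request === 'hit') {
        $data['counter']++;
    }
    return (string) $data['counter'];
};

// Two simulated requests; the state carries over between them.
$first  = $handler('hit');
$second = $handler('hit');
```

The single-threaded caveat from the answer applies directly: the moment you run several such backends, `$data` is no longer one shared copy and you need cross-process coordination again.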

Alexey Arkh, 2018-11-23
@AlekseyArh

Roughly the following scheme:
1) User arrives -> no cache -> user waits while the data is generated -> data is returned to the user -> data is written to the cache
2) User arrives -> cache exists -> user receives the cached data
3) User arrives -> cache is outdated -> user receives the outdated data -> a flag is set in the cache marking that an update has started -> the cache is updated
In other words, if it is not critical how fresh the data the user receives is, don't make them wait for the update: immediately return what you have, then start the refresh.
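The third branch of that scheme (serve stale, set an "updating" flag, refresh in the background) can be sketched as follows. An array stands in for Redis, and the `:expires_at` / `:updating` key suffixes are invented for the example:

```php
<?php
$cache = [
    'stats'            => 'stale-value',
    'stats:expires_at' => time() - 10,   // already expired
];

function getStale(array &$cache, string $key, callable $refresh): string
{
    $value = $cache[$key];   // serve whatever is there, even if stale
    if ($cache[$key . ':expires_at'] <= time()
        && empty($cache[$key . ':updating'])) {
        // The flag ensures only one session triggers the refresh.
        $cache[$key . ':updating'] = true;
        // In real code this part would run after the response is
        // sent (fastcgi_finish_request, a queue, or a forked
        // worker); it is inlined here to keep the sketch short.
        $cache[$key]                 = $refresh();
        $cache[$key . ':expires_at'] = time() + 60;
        unset($cache[$key . ':updating']);
    }
    return $value;
}

$v = getStale($cache, 'stats', function () { return 'fresh-value'; });
```

The caller always gets an answer immediately (`stale-value` here), while the cache itself ends up holding the fresh value for the next request.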

Konstantin, 2019-10-09
@armenka29

Redis has queues: when the cache needs updating, the main script pushes a message onto the queue.
A second script runs as a daemon and listens to the queue all the time; on receiving a message, it updates the cache.
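A sketch of that split: the web script enqueues a job and returns immediately, and the daemon dequeues and refreshes. An `SplQueue` stands in for the Redis list so the sketch is self-contained; with the phpredis extension the enqueue would be `$redis->lPush(...)` and the daemon loop a blocking `$redis->brPop(...)`:

```php
<?php
// SplQueue stands in for a Redis list; in production the web
// script and the daemon would talk to the same Redis server.
$queue = new SplQueue();
$cache = ['stats' => 'stale-value'];

// Web request side: staleness is detected, a refresh job is
// enqueued, and the stale value is returned with no waiting.
$queue->enqueue('refresh:stats');

// Daemon side: normally an endless loop around a blocking pop;
// a single drain pass is shown here.
while (!$queue->isEmpty()) {
    $job = $queue->dequeue();
    if ($job === 'refresh:stats') {
        $cache['stats'] = 'fresh-value';  // recompute and store
    }
}
```

The design choice is the same as in the scheme above: the user-facing request never performs the expensive recomputation itself.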
