Highload
driverx18, 2019-07-22 21:15:03

What to do in case of "cold cache" during rush hour?

There is a query that takes 8-12 seconds to run, and its result is stored in memcache. During peak hours, when 50+ people hit the site every second, this gets very bad: while the first user's request is still executing the query, the other 49 arriving at the same moment also execute it and load the database. I read an article on Highload, but I still don't understand their solution: why do they create a second key?
Can anyone advise how to handle this situation?


3 answer(s)
Roman Mirilaczvili, 2019-07-22
@2ord

According to the article, the fallback cache serves data while the heavy query is executing and the main cache is being refreshed.

Gomonov, 2019-07-22
@Gomonov

First, the article implicitly assumes (though it doesn't say so) that requests keep arriving within the fallback key's TTL; otherwise both caches could expire at once. The logic is this: the main cache is invalidated every hour (however you choose to invalidate it). Suppose it holds X, and so does the fallback, while the database already contains Y. The essence of the approach: at the moment the main cache is invalidated, the first request re-writes the main cache with X taken from the fallback, so the remaining requests keep receiving that X for a while longer, until the first request finishes. Once its heavy query completes, both the main and fallback caches receive Y. You get the following chain of responses over time: X X X X X X | main cache reset | (first request starts the long query, which will eventually return Y) X X X X | long-running query completes and returns a response | Y Y Y Y
PS thanks for the link to the article
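The two-key scheme described above can be sketched roughly as follows. This is a minimal illustration, not the article's actual code: the in-memory `Cache` class stands in for memcached, and the key names, TTL values, and `run_query` callback are all assumptions.

```python
import time

# Minimal in-memory stand-in for memcached, purely for illustration.
class Cache:
    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.time() >= expires_at:
            del self._data[key]
            return None
        return value

    def set(self, key, value, ttl):
        self._data[key] = (value, time.time() + ttl)

cache = Cache()
MAIN_TTL = 3600      # main key: invalidated every hour
FALLBACK_TTL = 7200  # fallback key deliberately outlives the main one

def get_report(run_query):
    value = cache.get("report:main")
    if value is not None:
        return value
    # Main key expired: immediately re-publish the stale fallback value (X)
    # under the main key, so concurrent requests get served X instead of
    # all piling onto the database at once.
    stale = cache.get("report:fallback")
    if stale is not None:
        cache.set("report:main", stale, MAIN_TTL)
    # Only this request runs the heavy query, then refreshes both keys with Y.
    fresh = run_query()
    cache.set("report:main", fresh, MAIN_TTL)
    cache.set("report:fallback", fresh, FALLBACK_TTL)
    return fresh
```

Note that the window where several requests race past the `get` check before the stale value is re-published is not fully closed here; in practice you would use an atomic `add` on the real memcached to elect a single refresher.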

Igor Vorotnev, 2019-07-23
@HeadOnFire

Working with the cache should be built in one of two ways:
1. Prime cache - the resource-intensive query runs on a schedule or event you define and updates the cache; the user always receives data from the cache only.
2. Serve stale - the resource-intensive query and the cache update run as a parallel process, while users continue to receive the previous (albeit less fresh) data. Again, the user only ever gets data from the cache.
