Backend
Fengol, 2019-02-23 20:13:33

What is the right way to cache server responses, and with what tools?

I'm far from backend development, but for self-development I started a home project and deliberately picked a hard problem. The difficulty is that the data is stored in the database in a structure very different from the one the client needs. A large amount of data is stored in normalized form, while the client needs it as very complex trees or even graphs for visualization. So it is simply pointless to rebuild those structures as JSON on every request; it is easier to serve a cached version. But I don't know how to cache properly or where to store the cache. The server will be on Node.js, though I haven't decided on the exact framework yet. The database for now is MongoDB + Mongoose.
I hope that's enough information. I'll repeat once more: I genuinely don't know what to do or how, so please don't answer just in terms of the keywords from my question, but explain how it is done in real projects. Maybe I need nginx, maybe other databases, maybe a Node.js server is enough, or maybe a combination of them. Please explain in detail.


2 answers
sim3x, 2019-02-23
@sim3x

The easiest option is to cache at the nginx level.
Next come memcached / redis.
Beyond that, it all depends on your stack and the details.
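As a rough illustration of the cache-aside pattern this answer points at, here is a minimal sketch with an in-memory Map and a TTL; in a real project the Map would be replaced by a memcached/Redis client, and `buildTree`, the cache key, and the TTL are made-up stand-ins for the expensive Mongo-to-tree transformation from the question:

```javascript
// Minimal cache-aside sketch (in-memory; swap the Map for a Redis client in production).
const cache = new Map();

function cached(key, ttlMs, build) {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value; // serve the cached JSON
  const value = build();                                 // expensive tree/graph build
  cache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}

// Hypothetical expensive builder standing in for the Mongo aggregation + tree assembly.
let builds = 0;
function buildTree() {
  builds++;
  return JSON.stringify({ nodes: [1, 2, 3], edges: [[1, 2], [2, 3]] });
}

const a = cached('tree:v1', 60_000, buildTree);
const b = cached('tree:v1', 60_000, buildTree); // second call hits the cache
```

The same shape works whether the store is a Map, Redis, or nginx's proxy cache: the only real decisions are the key, the TTL, and when to invalidate.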

rPman, 2019-02-24
@rPman

Take a good look at exactly what data changes and how often.
For example, it often turns out that in a project most of the data the client needs consists of reference-book (lookup) values; in effect, only the ids of rows in the lookup tables need to be transferred from the server to the client, and those tables change very rarely, or at least they are not the main stream of changes.
This means you should send a stream from the server to the client to synchronize the lookup tables (i.e. the websocket channel carries messages like add/remove/modify table_name id value), and transfer all objects as sets of ids. Even a normalized table describing a graph with thousands of rows is not scary, since there will be many repetitions and empty cells.
Be careful: even a plain set of ids serialized as JSON is not efficient (because of the field names, which would also need to be encoded). You may be better off with a binary serializer that does not transmit field names at all, such as Google Protocol Buffers; it supports many languages and can be used on the web.
P.S. Lookup tables can initially be delivered to the client as static files if they rarely change, or as a sequence built from their change log (one file for a date in the past plus several files with change-history dumps; how often to regenerate the files is a question of statistics). In that case the files will be cached by the browser's default cache.
Otherwise you will have to keep the data in separate browser storage (with a version, of course, and request the change history since that date when connecting). This is a bit less convenient, but gives you more control over cache management.
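The add/remove/modify message scheme described above can be sketched like this; the exact message shape and the table contents here are assumptions for illustration, not something fixed by the answer:

```javascript
// Client-side lookup table kept in sync by replaying change-log messages.
const lookup = new Map(); // rows of one lookup table, keyed by id

// Assumed message shape: { op, table, id, value }.
function apply(msg) {
  switch (msg.op) {
    case 'add':
    case 'modify':
      lookup.set(msg.id, msg.value);
      break;
    case 'remove':
      lookup.delete(msg.id);
      break;
  }
}

// Replaying the log (from a websocket, or from dumped history files)
// rebuilds the table deterministically.
[
  { op: 'add', table: 'countries', id: 1, value: 'Iceland' },
  { op: 'add', table: 'countries', id: 2, value: 'Norvay' },
  { op: 'modify', table: 'countries', id: 2, value: 'Norway' },
  { op: 'remove', table: 'countries', id: 1 },
].forEach(msg => apply(msg));
```

Because the log is replayable, the same code path serves both the initial bootstrap (a dump plus history files) and live websocket updates.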
