Python
Max Payne, 2019-06-05 20:27:40

Is it possible to write data to two databases with a guarantee?

So: we have the peewee library with the psycopg2 driver for working with PostgreSQL, and the redis library for working with (surprise!) Redis.
As the program runs, a buffer gradually fills up for writing to PostgreSQL, and a pipeline fills up for writing to Redis. Then, inside peewee's transaction context manager (.atomic()?), the buffered data is uploaded to PostgreSQL with bulk_create, after which pipeline.execute() is called for Redis.
How can you guarantee that even in an emergency - a power outage or the process being killed - the data is either written to both databases or to neither?
Yes, both libraries have some form of transactions, but does that mean one should be nested inside the other? What would that look like, for example?
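For illustration, the nesting the question describes can be sketched with stdlib stand-ins (sqlite3 playing PostgreSQL/peewee, a dict plus a tiny pipeline class playing redis-py; every name here is illustrative, not the real API). In the real code the `with db:` block would be peewee's `db.atomic()`, the insert would be `Model.bulk_create()`, and the pipeline would come from `redis.Redis().pipeline()`. The sketch also shows why the nesting alone is not a guarantee:

```python
import sqlite3

class FakeRedisPipeline:
    """Buffers commands like redis-py's pipeline; applies them on execute()."""
    def __init__(self, store):
        self.store = store      # a dict standing in for Redis
        self.commands = []
    def set(self, key, value):
        self.commands.append((key, value))
    def execute(self):
        for key, value in self.commands:
            self.store[key] = value
        self.commands.clear()

def flush(db, cache, rows, kv_pairs, crash_before_commit=False):
    """Write rows to the SQL database and kv_pairs to the cache.

    The pipeline is executed inside the SQL transaction, so if
    execute() raises, the SQL transaction rolls back. The reverse
    failure is still possible: a crash after execute() but before
    the SQL COMMIT leaves the cache written and the database not.
    """
    pipe = FakeRedisPipeline(cache)
    try:
        with db:                                  # like peewee's db.atomic()
            db.executemany("INSERT INTO events(payload) VALUES (?)",
                           [(r,) for r in rows])  # like bulk_create()
            for k, v in kv_pairs:
                pipe.set(k, v)
            pipe.execute()                        # cache is written here...
            if crash_before_commit:               # ...a crash here loses
                raise RuntimeError("power outage")  # the SQL COMMIT only
    except RuntimeError:
        pass  # simulated crash: SQL rolled back, cache kept its data
```

Running `flush` with `crash_before_commit=True` leaves the two stores diverged, which is exactly the gap the answer below addresses.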


1 answer
Melkij, 2019-06-05
@melkij

"a power outage or the process being killed"

Separately, pay close attention to the Redis settings. With the default settings it does not fsync data on every write: in AOF log mode the default fsync policy is once per second. That means you can lose up to a second's worth of data if the OS crashes.
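The policy in question is the appendfsync directive in redis.conf (the values below are the documented options; everysec is the shipped default):

```
# redis.conf -- AOF durability settings
appendonly yes          # enable the append-only file (off by default)
appendfsync everysec    # default: fsync once per second; up to ~1 s of loss
# appendfsync always    # fsync on every write: durable, but much slower
# appendfsync no        # let the OS decide when to flush
```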
Redis does not support the two-phase commit protocol, and you cannot make a durable fsync to two places atomically.
So just don't do it.
What you can do is rework the logic so that, after a failure, one of the databases can be brought back into a consistent state using the data of the leading database.
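One way to read the "leading database" advice is: treat PostgreSQL as the source of truth and rebuild the Redis side from it after a crash. A minimal stdlib sketch of that idea, with sqlite3 standing in for PostgreSQL and a dict for Redis (the `kv` table and all names are assumptions for illustration):

```python
import sqlite3

def rebuild_cache(db, cache):
    """Repopulate the cache from the authoritative SQL data.

    In real code this would run at startup/recovery, issuing
    pipeline.set(key, value) against Redis for each row instead
    of assigning into a dict.
    """
    cache.clear()  # drop whatever partial state survived the crash
    for key, value in db.execute("SELECT key, value FROM kv"):
        cache[key] = value
```

With this shape, losing the Redis write (or up to a second of its AOF) is recoverable: the cache is derived data, and only the PostgreSQL commit has to be durable.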
