What is the best way to organize automatic application recovery after a failure?
I am developing an application that downloads data from a remote server into a local MS SQL database.
The estimated volume is about 10 GB.
The application generates links, accesses the server, and collects data for one day.
I now need to organize recovery after a failure.
What is the best way to set up something like this? Sorry if this is a noob question.
Classic solution:
1. The table where the quotes are stored has an index on the date/time column.
2. The download runs as a loop of separate transactions. No internal state is kept between transactions; at the end of each transaction the program (conceptually) returns to its initial state, and this is what makes recovery after a failure possible.
3. Within a transaction, first select the record with the maximum timestamp (a lookup on an indexed field is near-instant).
4. Round that value down to the start of the period (minute/hour/day/year).
5. Request the quotes for that period.
6. Merge the fetched data into the table so that overlapping records are not duplicated (SQL offers many ways to do this).
7. Close the transaction.
That's it.
Result: a full guarantee that the database contains no duplicate records under any circumstances. The script/program can be run manually, automatically, or on a schedule at any time; even several copies running simultaneously are harmless (if pointless).
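The loop above can be sketched roughly as follows. This is a minimal illustration, not the asker's actual code: it uses sqlite3 instead of MS SQL for self-containment, a made-up `quotes(ts, price)` schema, and a hypothetical `fetch_quotes()` standing in for the remote-server request; `INSERT OR IGNORE` plays the role of the deduplicating merge from step 6.

```python
import sqlite3

# Assumed epoch for the very first run, when the table is still empty.
EPOCH = "2024-01-01 00:00:00"

def fetch_quotes(since: str) -> list[tuple[str, float]]:
    # Hypothetical server call: returns (timestamp, price) rows from `since` on.
    data = [
        ("2024-01-01 10:00:00", 1.10),
        ("2024-01-01 11:00:00", 1.12),
        ("2024-01-02 09:00:00", 1.15),
    ]
    return [row for row in data if row[0] >= since]

def sync_once(conn: sqlite3.Connection) -> int:
    """One idempotent transaction: resume from the last stored timestamp."""
    cur = conn.cursor()
    # Step 3: find the maximum timestamp (fast thanks to the index on ts).
    last = cur.execute("SELECT MAX(ts) FROM quotes").fetchone()[0] or EPOCH
    # Step 4: round it down to the start of the day.
    day_start = last[:10] + " 00:00:00"
    # Steps 5-7: re-fetch the whole period and merge it in one transaction;
    # the primary key on ts plus INSERT OR IGNORE makes overlaps harmless.
    with conn:
        cur.executemany(
            "INSERT OR IGNORE INTO quotes (ts, price) VALUES (?, ?)",
            fetch_quotes(day_start),
        )
    return cur.execute("SELECT COUNT(*) FROM quotes").fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quotes (ts TEXT PRIMARY KEY, price REAL)")  # steps 1-2
print(sync_once(conn))  # → 3: first run loads everything
print(sync_once(conn))  # → 3: re-running after a "failure" adds no duplicates
```

Because the only persistent state is the data itself, killing the process at any point and re-running the script simply re-fetches the last partial period, which is exactly the recovery property the answer describes.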
Wouldn't a plain try/catch work, saving failed requests into a separate list and then retrying them later?
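For comparison, the try/catch-with-retry-list idea from this comment might look like the sketch below (all names here are invented for illustration; `download_day()` stands in for one day's request). One caveat: the retry list lives only in the process's memory, so unlike the timestamp-based approach above, it is lost if the whole process crashes.

```python
def download_day(day: str, fail_days: set[str]) -> str:
    # Hypothetical request for one day's data; fail_days simulates network errors.
    if day in fail_days:
        raise ConnectionError(f"failed on {day}")
    return f"data for {day}"

def download_all(days: list[str], fail_days: set[str]) -> tuple[list[str], list[str]]:
    results, failed = [], []
    for day in days:
        try:
            results.append(download_day(day, fail_days))
        except ConnectionError:
            failed.append(day)  # remember the failed request for a later retry pass
    return results, failed

results, failed = download_all(["d1", "d2", "d3"], fail_days={"d2"})
print(failed)  # → ['d2']: the retry list, held only in memory
```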