Data archiving
Den S, 2019-08-20 20:26:07

Efficient Firebird DB compression?

Hello.
I have a Firebird 2.x-2.5 database: about 20 files of 800-2000 MB each, mostly monthly copies of the same database. Consecutive copies differ from each other by only 1-50 MB; within a year a file is basically 60-90% identical to the others, because most of the content (people's names, addresses and other details) rarely changes.
Compressing one or two files at a time with an ordinary archiver works well enough, but packing 10-20 files of 800-2000 MB each into a solid archive with a ~3 GB dictionary won't work within my 4 GB RAM limit.
Is there software that can compare dozens of such huge 800-2000 MB files by content, find the 10-20% that actually differs, and archive only that difference? Or is that too exotic, and I just need to upgrade to 16 GB of RAM and run 7-Zip x64 with an 8 GB dictionary?
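As a quick sanity check of how much really repeats between two monthly copies, something like the following (my own sketch, not part of the original question; the 1 MiB block size and SHA-1 choice are arbitrary) hashes fixed-size blocks of both files without loading either one into memory:

import hashlib
import sys

BLOCK = 1024 * 1024  # compare in 1 MiB blocks so memory use stays tiny

def block_hashes(path):
    # Stream the file from disk and yield one hash per fixed-size block.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            yield hashlib.sha1(chunk).digest()

def overlap(path_a, path_b):
    # Rough share of blocks in file B that also occur somewhere in file A.
    seen = set(block_hashes(path_a))
    blocks_b = list(block_hashes(path_b))
    same = sum(1 for h in blocks_b if h in seen)
    return same / len(blocks_b) if blocks_b else 1.0

if __name__ == "__main__":
    print(f"{overlap(sys.argv[1], sys.argv[2]):.0%} of blocks repeat")

Fixed-position blocks only catch data that has not shifted inside the file, so the figure is a lower bound; binary diff tools that look for moved matches (as in the answer below) will usually find more overlap.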


1 answer
rPman, 2019-08-21

Try bzdiff, or the more memory-efficient vbindiff (or plain diff for text dumps).
Update: the original utility consumes a lot of RAM, so look for less demanding versions.
You can store the original copy plus a series of successive patches. The drawback is that restoring the latest version means applying all the patches in sequence, which is a very slow operation (in effect the whole file is rewritten for every patch applied).
I ran my backups this way for a long time. So that I never have to apply hundreds of patches after a year, I keep only the last N patches and regularly prepare the (N+1)-th base version: apply the oldest patch, save the result as the new base, and delete the old one, as sketched below.
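A minimal sketch of that scheme (my own illustration, not the answerer's actual scripts; bsdiff/bspatch and the file layout are assumptions, and any binary diff tool such as xdelta3 would fit the same shape):

import shutil
import subprocess
from pathlib import Path

def make_patch(previous_copy: Path, new_copy: Path, patch: Path) -> None:
    # Binary delta between two monthly copies; bsdiff must be installed.
    subprocess.run(["bsdiff", str(previous_copy), str(new_copy), str(patch)], check=True)

def restore_latest(base: Path, patches: list[Path], result: Path) -> None:
    # Rebuild the newest copy by applying every patch in chronological order.
    # Each bspatch call rewrites the whole file, which is why long chains are slow.
    current = base
    for i, patch in enumerate(patches):
        step = result.with_suffix(f".step{i}")
        subprocess.run(["bspatch", str(current), str(step), str(patch)], check=True)
        if current != base:
            current.unlink()  # drop the previous intermediate copy
        current = step
    if current == base:
        shutil.copyfile(base, result)  # no patches: the base already is the latest copy
    else:
        current.rename(result)

def fold_oldest_patch(base: Path, oldest_patch: Path) -> None:
    # Advance the base: apply the oldest patch, keep the result as the new base,
    # then delete the old base and the patch so the chain never grows past N.
    new_base = base.with_suffix(".newbase")
    subprocess.run(["bspatch", str(base), str(new_base), str(oldest_patch)], check=True)
    new_base.replace(base)
    oldest_patch.unlink()

For copies that differ by 1-50 MB, as in the question, the delta should come out roughly the size of the actual change. Note that bsdiff itself needs several times the old file's size in RAM (the caveat mentioned above), so on a 4 GB machine a streaming tool like xdelta3 may be the more realistic choice.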
