What software should I use, and how, for a full web server backup on Ubuntu?
There is a local web server based on Ubuntu. The task is to set up a backup system that will let me quickly restore a production server from scratch if all of its hard drives fail. I am considering the following option: a remote dedicated server that makes a full backup of the system every night. If the production server fails, I would like to be able to bring it back up in the simplest way possible.
The problem is this: backing up and restoring the data is easy, but installing and configuring the system so that the new server works with that data exactly like the old one is the hard part.
So a turnkey backup solution is needed. Ideally, I would like to boot the new hardware from a live system on a flash drive, connect to the backup server, and start the recovery process; in other words, make recovery as simple as possible.
Please share your experience of setting up and using such backup systems.
You are approaching the problem from slightly the wrong direction.
Backing up entire disk partitions of the machine is not a good idea. It is not only disks that fail: the system itself can break during an update (you are not going to refuse security updates, after all), vulnerabilities can appear, ransomware can encrypt your volumes, and who knows what else can happen. On top of that, a binary image backup may simply fail to boot for some reason, and then you will be stuck painfully digging a working service out of a full-system dump.
The right approach is to back up the database and the partition with user-uploaded static files separately; there is no need to back up the source code or the system itself. The system should be as standard and clean as possible, and the backend should be deployed in Docker containers by running a compose file (a minimal sketch of a backup script for this setup follows the list of advantages below).
There are many advantages:
- a transparent, understandable and declaratively described configuration,
- no dependencies on the host system,
- you can bring up any number of staging servers without pain,
- it is convenient to develop and run on developers' machines,
- it is convenient to migrate seamlessly to new hardware as soon as SMART warnings start pouring in, rather than after the SSD has already died,
- the space for backups is spent much more efficiently: only data (database and files) are backed up,
- the entire configuration fits well into a version control system, which makes it easier to roll back and troubleshoot problems,
- you can easily bring up a spare or temporary instance of the service anywhere before you take the production one down for some reason,
- you can quickly (with one command) deploy a temporary server on any machine until you set up a permanent one to replace the dead one.
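As a rough illustration of that separation, here is a minimal sketch of a nightly backup script. It assumes a Postgres container named db in the compose project, user uploads under /srv/app/uploads, and a backup host backup.example.com reachable over key-based SSH; all of these names are placeholders for your own setup.

```bash
#!/usr/bin/env bash
# Nightly backup sketch: dump the database, archive user uploads, ship both
# to the backup host. Container name, DB credentials, paths and host are
# assumptions -- replace them with your own.
set -euo pipefail

STAMP=$(date +%F)
BACKUP_DIR=/var/backups/app
mkdir -p "$BACKUP_DIR"

# Run from the compose project directory so "docker compose" finds the file.
cd /srv/app

# 1. Dump the database from the running "db" container (Postgres assumed).
docker compose exec -T db pg_dump -U app appdb | gzip > "$BACKUP_DIR/db-$STAMP.sql.gz"

# 2. Archive user-uploaded static files.
tar -czf "$BACKUP_DIR/uploads-$STAMP.tar.gz" -C /srv/app uploads

# 3. Push everything to the backup server over SSH (key-based auth assumed).
rsync -a "$BACKUP_DIR/" backup.example.com:/backups/webserver/

# 4. Keep two weeks of local copies.
find "$BACKUP_DIR" -type f -mtime +14 -delete
```

Run it nightly from cron and you get exactly the "only data" backup described above.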
If you want to make everyone's life as easy as possible, write orchestration scripts (with comments) that bring the stack up, take it down, deploy it, back it up, and restore it from backups. In a couple of years, when you have forgotten how everything works, these scripts and comments will save you a lot of time.
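The restore side of such an orchestration script could look roughly like this, assuming the compose project lives in a git repository and backups come from the host used in the previous sketch; again, every name here is illustrative.

```bash
#!/usr/bin/env bash
# "Restore from scratch" sketch: fetch the config, pull the latest backups,
# restore the data and bring the stack up. Repository URL, host names, DB
# credentials and paths are all assumptions.
set -euo pipefail

APP_DIR=/srv/app
BACKUP_HOST=backup.example.com

# 1. Fetch the declaratively described configuration (compose file, scripts).
git clone https://example.com/infra/webapp.git "$APP_DIR"
cd "$APP_DIR"

# 2. Pull the latest backups from the backup server.
rsync -a "$BACKUP_HOST:/backups/webserver/" /var/backups/app/
LATEST_DB=$(ls -t /var/backups/app/db-*.sql.gz | head -n 1)
LATEST_UPLOADS=$(ls -t /var/backups/app/uploads-*.tar.gz | head -n 1)

# 3. Unpack user statics next to the compose project.
tar -xzf "$LATEST_UPLOADS" -C "$APP_DIR"

# 4. Start the database alone, wait for it, load the dump, then start the rest.
docker compose up -d db
until docker compose exec -T db pg_isready -U app >/dev/null 2>&1; do sleep 2; done
gunzip -c "$LATEST_DB" | docker compose exec -T db psql -U app appdb
docker compose up -d
```

On fresh hardware this is the whole "boot, connect, restore" procedure from the question, compressed into one script.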
Do not forget that besides the main configuration there are also SSL certificates, which have a habit of "unexpectedly" expiring, and the Nginx configuration, which would also be worth putting into its own container, and so on. Then there are the IP addresses of internal DNS servers and various gateways, which can change; a dumb disk-image backup will only make that worse. And on a server raised from such a backup the certificate has most likely long since expired, so your site will not simply start working.
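For the certificate expiry point, a small cron check is often enough. A possible sketch, assuming the site answers on example.com:443 (hostname and warning threshold are illustrative):

```bash
#!/usr/bin/env bash
# Certificate expiry check for cron. Hostname and warning threshold are
# illustrative -- point it at your real domain.
set -euo pipefail

HOST=example.com
WARN_DAYS=14

# Fetch the live certificate and fail if it expires within WARN_DAYS
# (openssl's -checkend takes seconds).
if ! echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
    | openssl x509 -noout -checkend $((WARN_DAYS * 86400)); then
  echo "Certificate for $HOST expires within $WARN_DAYS days" >&2
  exit 1
fi
```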
In short, you cannot just image the server's hard drive and expect to be safe from problems under all circumstances.