linux
mrstrictly, 2013-07-01 21:58:49

How to restore a home server from a backup with a minimum of effort?

Hello,
I'm a developer, and many of my hobbies revolve around computers. In particular, I run a small server on old hardware at home for my various projects. What distinguishes probably any personal home server is the abundance of little things scattered across it, services of every kind: MySQL and PostgreSQL, or RoR and Django applications, living side by side on one home machine is normal there, but it's something you won't find in grown-up production, where each piece of iron has its own specialization and quickly bringing up a new node is a routine procedure.
Some of the data on my server is, as you'd expect, of real value to me, and it would be a pity to lose it.
That's why I make backups. I back up with Duplicity and store the backups on a NAS. Periodically I run small drills for myself: checking the restore procedure and the integrity of the backups, the ability to restore databases from dumps, the presence of configs, and so on. I do this whenever I happen to have a bit of free time. And now to the heart of the matter: manually restoring even a small part of my little zoo from a backup is a very dreary procedure which, when time is short, can stretch (I'm talking about a full restore) across several evenings...
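Part of such a drill can be scripted. Below is a minimal sketch of a "did the must-have files come back?" check; the `RESTORED` path and the file list are placeholders for your own restore target and critical paths (the demo setup lines simulate a partial restore so the sketch runs standalone):

```shell
# Restore-drill sketch: after restoring a backup into $RESTORED, verify
# that the must-have files actually came back. RESTORED and the file
# list below are hypothetical -- point them at your own setup.
RESTORED=${RESTORED:-/tmp/restore-drill}

# Demo setup so the sketch runs standalone (simulates a partial restore
# where one expected file is missing):
mkdir -p "$RESTORED/etc/nginx" "$RESTORED/var/backups"
touch "$RESTORED/etc/nginx/nginx.conf" "$RESTORED/var/backups/mysql.sql.gz"

missing=0
for f in etc/nginx/nginx.conf var/backups/mysql.sql.gz etc/fstab; do
    if [ ! -e "$RESTORED/$f" ]; then
        echo "MISSING: $f"
        missing=$((missing + 1))
    fi
done
echo "missing files: $missing"
```

Run after each restore drill; a non-zero count tells you immediately which part of the zoo didn't survive the round-trip.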

So, the initial data:

  • a home server running Linux, say the ubiquitous Ubuntu Server;
  • second-rate consumer hardware, ready to fail at any moment or to crumble bit by bit over several weeks (I'm talking primarily about hard drives);
  • replacement hardware that is also second-rate but completely different from the first, down to the processor architecture (for example, there used to be an old laptop with an Intel Core2Duo and a mechanical hard drive, and now there is a single-board minicomputer with a quad-core ARM Cortex-A7 and an SSD);
  • an abundance of services (web server, DBMSes, specific versions of Python and Ruby, code repositories, etc., etc.) and small pieces of software and data scattered around the server.


What I would like: by following, where necessary, some simple discipline or checklist, to have a backup copy from which recovery, even in a completely different environment (at most the OS version will stay the same, but not the platform), takes a minimum of manual labor. At the same time, setting up and running such a backup system should itself take minimal effort (I'm not lazy, it's a matter of priorities :)).

The most appealing option I've come up with is to build my own deb packages with their configurations, data, and properly declared dependencies; group them into a repository and keep the repository on the NAS; rebuild the packages on a schedule, including a "restore everything" meta-package. Very elegant, but, I suspect, terribly laborious.
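For a sense of the labor involved, here is a sketch of one such package: a minimal binary-package layout for a hypothetical `myhome-blog` service, built with `dpkg-deb` when it's available. All names, paths, and dependencies are illustrative:

```shell
# Sketch of the "own deb per service" idea: minimal layout for one
# hypothetical package. Everything here (name, deps, config) is made up
# for illustration.
PKG=/tmp/myhome-blog_1.0_all
mkdir -p "$PKG/DEBIAN" "$PKG/etc/nginx/sites-available"

# Control file: this is where the "properly declared dependencies" live.
cat > "$PKG/DEBIAN/control" <<'EOF'
Package: myhome-blog
Version: 1.0
Architecture: all
Maintainer: me <me@example.org>
Depends: nginx, python
Description: Config and data for the home blog service
EOF

# The payload: configs (and data) the restore should bring back.
echo "server { }" > "$PKG/etc/nginx/sites-available/blog"

# Build the .deb (skipped gracefully if dpkg-deb is not installed here):
if command -v dpkg-deb >/dev/null; then
    dpkg-deb --build "$PKG"
fi
```

Installing the package on a fresh box would then pull in nginx and python via the dependency chain; a meta-package whose `Depends:` lists every service package gives the one-shot "restore everything" install.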

Please advise materials on the topic of backup and recovery, specific to my task, and, if available, ready-made solutions.
Thank you!


4 answers
@sledopit, 2013-07-01

So, Puppet. Or Chef and other configuration management tools.
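The configuration-management route in one picture: keep small manifests in version control, and rebuilding a box becomes `puppet apply` instead of manual setup. A minimal sketch (the nginx service is just an example; any resource names here are illustrative):

```shell
# Tiny example Puppet manifest describing one service declaratively.
# The nginx example is hypothetical -- each of the server's services
# would get a resource like this.
cat > /tmp/web.pp <<'EOF'
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  require => Package['nginx'],
}
EOF

# On the new machine (needs the puppet agent installed):
#   puppet apply /tmp/web.pp
```

With manifests for the whole zoo checked into a repo, the "backup" of the configuration is the repo itself; only the data still needs Duplicity.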

Puma Thailand, 2013-07-01
@opium

You bring everything up in OpenVZ containers; then backup is simply a vzdump of the container, and you restore it via vzrestore.
You can also install Proxmox and do it all through the web interface.
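The commands behind that answer, sketched with a placeholder container ID and NAS path (the exact dump filename format varies between vzdump versions, so treat the name below as illustrative):

```shell
# OpenVZ backup/restore sketch. Requires an OpenVZ kernel and root;
# CTID 101 and the NAS mount point are placeholders.
CTID=101
DUMPDIR=/mnt/nas/vzdumps

# Dump the container (vzdump can briefly suspend or snapshot it):
#   vzdump --compress --dumpdir "$DUMPDIR" $CTID
# On the replacement host, restore under the same (or a new) CTID:
#   vzrestore "$DUMPDIR/vzdump-$CTID.tgz" $CTID

# The dump file the restore step would expect:
echo "$DUMPDIR/vzdump-$CTID.tgz"
```

The appeal for a heterogeneous home zoo is that each service lives in its own container, so one dump/restore pair moves a whole service, configs and data included.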

Sergey Cherepanov, 2013-07-02
@fear86

Another option is LXC containers.
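With LXC, one simple approach is a cold backup: stop the container and tar its rootfs. The sketch below assumes the stock `/var/lib/lxc` layout, but runs against a throwaway directory so the tar invocation itself is demonstrable:

```shell
# Cold-backup sketch for an LXC container: archive its rootfs.
# LXC_PATH and the container name are placeholders; the mkdir/echo lines
# only fake a tiny rootfs so the example is self-contained.
LXC_PATH=${LXC_PATH:-/tmp/lxc-demo}
NAME=web
mkdir -p "$LXC_PATH/$NAME/rootfs/etc"
echo 'demo' > "$LXC_PATH/$NAME/rootfs/etc/marker"

# --numeric-owner keeps uid/gid ownership stable across hosts
tar --numeric-owner -czf "/tmp/$NAME-backup.tar.gz" -C "$LXC_PATH/$NAME" rootfs

# Restore on the new machine (as root, container stopped):
#   mkdir -p /var/lib/lxc/web
#   tar --numeric-owner -xzf web-backup.tar.gz -C /var/lib/lxc/web
```

Unlike OpenVZ, stock LXC on a 2013-era distro has no single vzdump-style tool, so the tarball plus the container's config file is the whole backup.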

thunderspb, 2013-07-02
@thunderspb

The simplest, of course, is virtual machines, or, as advised above, Puppet and other configuration management programs.
There is also such a thing as Stage4: it's simply an archive of the installed system. I've seen descriptions somewhere of how to do it for Debian and CentOS.
Here is an example for Gentoo: www.gentoo-wiki.info/HOWTO_Custom_Stage4
Deploying the system to an identical server takes about 10 minutes. To the very same server the snapshot was taken from, about 5 minutes, because the MAC addresses of the network cards don't change :) That's with an 80 GB hard drive split into 6 partitions.
Of course, you could write a script to auto-deploy all of this to a target server; back then I hadn't figured out what to do with the network cards, but you can simply have the script auto-raise and configure the interfaces.
Bringing the server up goes even faster if you don't slice the disk into many partitions :)
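The essence of a Stage4 archive is one tar of the root filesystem that excludes pseudo-filesystems and the archive itself. A sketch, demonstrated on a scratch tree (on a real box you would run it as root with `ROOT=/` and extend the exclude list to taste):

```shell
# Stage4-style full-system archive, sketched on a scratch directory.
# ROOT and the file contents are demo placeholders; on real hardware
# use ROOT=/ and run as root so ownership and permissions survive.
ROOT=${ROOT:-/tmp/stage4-demo}
mkdir -p "$ROOT/etc" "$ROOT/proc" "$ROOT/home"
echo 'LABEL=root / ext4 defaults 0 1' > "$ROOT/etc/fstab"
echo 'kernel junk' > "$ROOT/proc/version"

# -p preserves permissions; --numeric-owner keeps uid/gid stable;
# /proc, /sys, /dev must be excluded and recreated on restore.
tar -czpf /tmp/stage4.tar.gz \
    --numeric-owner \
    --exclude='./proc' --exclude='./sys' --exclude='./dev' \
    -C "$ROOT" .

# Restore = boot any live CD, partition and mount the new disk, then:
#   tar -xzpf stage4.tar.gz --numeric-owner -C /mnt/newroot
```

After extraction you still recreate the excluded mount points, reinstall the bootloader, and fix the network interface config, which is exactly the MAC-address wrinkle mentioned above.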
