How to build a fault-tolerant server (for availability, low load)?
Good afternoon!
I need to solve a technical problem, can you help?
There is a domain, example.com, which will host client-side scripts.
It needs a main server where the scripts (files) will live.
You need a regular backup of all scripts.
You need a mirror on another server so that during maintenance or outages users can keep working without interruption.
This gives the following tasks to solve:
1. File synchronization. When we upload files to the main server via FTP, they should appear on the slave server after a short delay. The volume is small, 10 MB per day at most.
2. Domain setup (via RR-DNS or something else) so that if the master server goes down, requests go to the slave.
3. Organize a backup of the file system
4. Ideally, script files should reach the master not via FTP but from a Bitbucket repository.
I've never really done any admin work, so thanks for any answer; I have very little knowledge in this area.
In general, the right answers depend heavily on the project; I'll write down what I consider common sense and always applicable:
1. Don't upload via FTP. Pull from Bitbucket after receiving a webhook from them.
Keep in mind that, strictly speaking, the hook may not arrive, so keep a deploy log and a way to trigger the pull manually from outside. There are good ready-made tools, including ones that can deploy: Jenkins, TeamCity, PHPCI. But you can also put together your own simple script; it takes an hour at most if you skip the niceties.
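A minimal sketch of the "pull after a webhook" idea described above. The repository slug, branch, and checkout path are placeholders, and the payload fields follow the shape of Bitbucket Cloud's `repo:push` webhook; every decision is assumed to be logged elsewhere so a missed hook can be noticed and the pull re-run by hand.

```python
# Sketch: decide whether a Bitbucket push payload should trigger a deploy,
# and perform the deploy as a plain git fetch + hard reset.
# REPO, BRANCH and the workdir are hypothetical placeholders.
import subprocess

REPO = "myteam/client-scripts"   # assumption: your Bitbucket repo slug
BRANCH = "master"                # assumption: the branch you deploy from

def should_deploy(payload: dict) -> bool:
    """True if the push payload touches the deployed branch of our repo."""
    if payload.get("repository", {}).get("full_name") != REPO:
        return False
    changes = payload.get("push", {}).get("changes", [])
    return any((c.get("new") or {}).get("name") == BRANCH for c in changes)

def deploy(workdir: str = "/var/www/example.com") -> None:
    # The same two commands you would run by hand if the hook never arrived.
    subprocess.run(["git", "fetch", "origin"], cwd=workdir, check=True)
    subprocess.run(["git", "reset", "--hard", f"origin/{BRANCH}"],
                   cwd=workdir, check=True)
```

`deploy()` being idempotent is the point: the manual fallback and the hook-driven path run exactly the same code.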
Amazon Route 53 as DNS, plus two Amazon health checks.
Two A records on the domain, pointing to the two servers, each tied to its own health check.
Set the TTL low, 1-5 minutes. Unfortunately, there will still be visitors whose provider caches longer.
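One concrete way to express the two-health-checked-A-records setup above is Route 53's failover routing policy. The snippet below builds the change-batch JSON you would feed to `aws route53 change-resource-record-sets --change-batch file://...`; the IPs and health-check IDs are placeholders.

```python
# Sketch: build a Route 53 change batch with a PRIMARY/SECONDARY failover
# pair for example.com. IP addresses and health-check IDs are placeholders.
import json

def failover_record(name, ip, role, health_check_id, ttl=60):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": role,       # unique label per record
            "Failover": role.upper(),    # "PRIMARY" or "SECONDARY"
            "TTL": ttl,                  # keep low: 60-300 seconds
            "ResourceRecords": [{"Value": ip}],
            "HealthCheckId": health_check_id,
        },
    }

batch = {"Changes": [
    failover_record("example.com.", "203.0.113.10", "primary", "hc-id-1"),
    failover_record("example.com.", "203.0.113.20", "secondary", "hc-id-2"),
]}
print(json.dumps(batch, indent=2))
```

With failover routing, Route 53 answers with the primary while its health check passes and switches answers to the secondary when it fails; the low TTL bounds (but cannot eliminate) how long stale caches point at the dead server.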
1-2. If you want fault tolerance, don't split into master and slave: the servers should be independent of each other and equivalent.
2. If N hours of downtime for individual clients is acceptable, the DNS setup above will do.
If downtime is completely unacceptable, unfortunately there are no simple solutions :)
You can look at Hetzner's failover IP (two bare-metal servers behind one IP, with fast switching), but that doesn't answer the question of what to do if Hetzner's DC itself goes down.
* And yes, normally it's better to keep the servers with different hosting providers.
3. You don't need to back up the entire system, imho. You need to back up user data, if you have any.
For SQL, Galera Cluster (master-master) has worked well, though there is a nuance with transactions if you use them. Files can be pulled across with some kind of rsync, or just put straight into S3 and forgotten forever (if you don't have many of them; check the pricing).
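The "rsync or S3" file backup above can be sketched as a small daily job: archive the script directory with a dated name, push it wherever you like, and prune old copies. Paths, the bucket name, and the retention window are placeholders.

```python
# Sketch: dated backup archive plus simple retention pruning.
# Source path, destination and KEEP are hypothetical placeholders.
import subprocess
from datetime import date

KEEP = 7  # how many daily archives to retain

def archive_name(d: date) -> str:
    return f"scripts-{d.isoformat()}.tar.gz"

def backups_to_delete(names, keep=KEEP):
    """Given existing archive names, return the ones past the retention.

    ISO dates sort lexicographically, so a reverse sort puts the newest
    first and everything after index `keep` is safe to delete.
    """
    return sorted(names, reverse=True)[keep:]

def run_backup(src="/var/www/example.com", dest="/backup"):
    out = f"{dest}/{archive_name(date.today())}"
    subprocess.run(["tar", "-czf", out, src], check=True)
    # To push the same archive to S3 instead (AWS CLI assumed installed):
    # subprocess.run(["aws", "s3", "cp", out, "s3://my-backup-bucket/"],
    #                check=True)
```

Run it from cron once a day; the pruning logic is a pure function, so the retention policy is easy to test separately from the archiving itself.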
If fault tolerance is needed within a single DC, look at heartbeat:
install and configure heartbeat on two servers in one DC; if problems arise, it will move the main IP from one server to the other. Warn support that the IP will move between servers (especially relevant for physical servers rather than VPS/VDS).
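For reference, a minimal classic heartbeat (v1-style) configuration of the kind described above; node names, the interface, and the floating IP are placeholders and must be adapted to your setup.

```
# /etc/ha.d/ha.cf -- identical on both nodes (names/interface are placeholders)
logfacility local0
keepalive 2          # heartbeat interval, seconds
deadtime 15          # declare the peer dead after 15 s of silence
bcast eth0           # interface the nodes share
auto_failback on
node web1 web2       # must match `uname -n` on each server

# /etc/ha.d/haresources -- same file on both nodes
# web1 is the preferred owner of the floating IP
web1 203.0.113.100
```

When web1 stops responding for `deadtime` seconds, web2 takes over the floating IP; with `auto_failback on` the IP returns to web1 once it is healthy again.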