code distribution system
Experienced comrades, please share what you use to update the code on your web servers, given that the code is stored on local disks.
The story is this:
1. Used NFS. Downside: everything is tied to a single server.
2. Switched to a NAS over CIFS. Downsides: problems transferring large files over the network, and the NAS was occasionally unavailable.
3. Tried a SAN with Red Hat GFS. Downside: complex setup and the need to monitor node status.
As a result, we decided to store the code on the local disks of the web servers. Pros: fault tolerance, no bottlenecks.
Possible solutions:
1. Code in SVN/git -> a script that runs an update via SSH on each server (rough sketch below).
2. Puppet/Chef.
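For option 1, roughly what I have in mind is a minimal sketch like the following; the host names and checkout path are hypothetical, and for SVN it would be svn update instead of git pull:

#!/bin/sh
# Rough sketch of option 1: push an update to every web server over SSH.
# Host names and the checkout path are hypothetical.
set -e

HOSTS="web1 web2 web3"      # web servers to update (hypothetical)
APP_DIR=/var/www/app        # code checkout on each server (hypothetical)

for host in $HOSTS; do
    echo "Updating $host ..."
    ssh "$host" "cd $APP_DIR && git pull"   # or: svn update
done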
Thanks in advance for good advice :)
I used to do this: a commit hook on a specific branch of the repository, which deploys the code from that branch to production, runs the database schema migrations, and tells Apache to re-read its configs.
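A minimal sketch of what such a hook might look like, assuming a git post-receive hook on a bare repository; the branch name, paths, migration command and Apache reload command are assumptions, not the author's actual setup:

#!/bin/sh
# Hypothetical post-receive hook: deploy one branch, migrate the DB, reload Apache.
set -e

BRANCH=production               # branch that triggers a deploy (hypothetical)
DEPLOY_DIR=/var/www/app         # production checkout (hypothetical)

while read oldrev newrev refname; do
    [ "$refname" = "refs/heads/$BRANCH" ] || continue
    GIT_WORK_TREE="$DEPLOY_DIR" git checkout -f "$BRANCH"    # deploy the branch
    (cd "$DEPLOY_DIR" && ./manage.py migrate)                # DB schema migration (hypothetical command)
    sudo apachectl graceful                                  # make Apache re-read its configs
done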
At some point I wrote a Makefile that copies the current local state of the repository with rsync, plus all sorts of little extras (for example, installing dependencies into a virtualenv when needed). It is more convenient, because sometimes you want to push code straight from the editor to the staging server without committing it.
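The core of such a Makefile target is basically one rsync call; the host, paths and exclude patterns below are assumptions:

# Push the local working copy to the staging server (hypothetical host and paths).
rsync -az --delete \
    --exclude '.git' --exclude '*.pyc' \
    ./ deploy@staging.example.com:/var/www/app/

# Optionally refresh dependencies inside the server's virtualenv afterwards:
ssh deploy@staging.example.com '/var/www/app/env/bin/pip install -r /var/www/app/requirements.txt'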
We use Mercurial; once a minute the servers run hg pull && hg update.
This scheme is very convenient because if something gets fixed directly in production (anything can happen), you can commit it from there back to the shared repository.
Most likely, in your case it would be SVN instead of Mercurial.
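That once-a-minute update is usually just a crontab entry on each web server; the checkout path here is an assumption:

# m h dom mon dow   command
* * * * * cd /var/www/app && hg pull -q && hg update -q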
Why not use Capistrano (or any other deployment tool) and call rsync from Capistrano to sync the code itself?