How to properly scale a project on Amazon AWS?
There is a project on Amazon AWS which consists of:
- an Amazon RDS database
- an EC2 instance (c4.xlarge)
The instance hosts two projects: one in PHP (a self-written framework) and one in Node.js (an API).
nginx serves as the web server.
Development is done in Git (a separate repository for each project).
The deployment (release) process is as follows: I connect to the server via SSH, run git pull RemoteBranchName master in each project's folder, and then rebuild the project with make update.
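For illustration, the manual release described above could be collected into a single script roughly like this. The host name and project paths are assumptions; DRY_RUN=1 (the default here) only prints the commands instead of running them over SSH:

```shell
#!/bin/sh
# Sketch of the manual release: pull and rebuild each project over ssh.
# HOST and the directories are hypothetical; adjust to your setup.
HOST="${HOST:-deploy@example-ec2-host}"
DRY_RUN="${DRY_RUN:-1}"   # set DRY_RUN=0 to actually run the commands

run_remote() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "ssh $HOST: $1"          # show what would be executed
  else
    ssh "$HOST" "$1"              # execute on the server
  fi
}

for dir in /var/www/php-project /var/www/node-api; do
  run_remote "cd $dir && git pull origin master && make update"
done
```

Even before touching Auto Scaling, a script like this makes the release repeatable and removes the step of typing commands by hand.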
The task is to handle the increase in load that is planned for the near future.
What is the right way to do this?
I thought about the following options:
1) Just increase the instance type to a more powerful one. Everything is clear here.
2) Using Load Balancers and Auto Scaling Groups
And there are a lot of questions here.
If you use Auto Scaling Groups, you need to create a Launch Configuration based on an AMI image.
But then it turns out that after every release you have to create a new image with the updated project code, which is not very convenient.
As an alternative, I thought the AMI used for the Launch Configuration could include a startup script that pulls the latest version of the project and builds it.
OK, but that takes time. So while a new instance is spinning up under load, some requests will go unserved, since the old instances can no longer handle the load and the new one is not ready yet.
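The pull-on-boot variant could look roughly like this as a user-data script in the Launch Configuration. This is only a sketch: the repository remotes, project paths, and the assumption that the base AMI already contains a clone of each repository are all hypothetical:

```shell
#!/bin/bash
# User-data sketch: each new instance updates and rebuilds the code on
# first boot. Assumes the base AMI already has both repos checked out.
set -e

cd /var/www/php-project
git pull origin master
make update

cd /var/www/node-api
git pull origin master
make update

# Start nginx only after the build finishes, so the load balancer's
# health check does not route traffic to a half-ready instance.
systemctl start nginx
```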
What is the best way to solve this problem?
Can you recommend something useful to read on this topic?
In general, that is the best practice: yes, use a Load Balancer and Auto Scaling.
But the AMI can be prepared in advance, or, as they say in the West in this context, you "bake the image". There is a very handy tool from HashiCorp for this: Packer. With it, you can build a ready-made AMI containing a fresh version of your project, and then create a new ASG from that AMI.
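As a minimal sketch of such baking, a Packer template (HCL2) might look like this. The region, instance type, base AMI, repository URL, and build command are all assumptions to adapt:

```hcl
# Build with: packer build app.pkr.hcl
# Produces a timestamped AMI with the current release baked in.
source "amazon-ebs" "app" {
  region        = "us-east-1"
  instance_type = "t3.small"
  source_ami    = "ami-0123456789abcdef0"   # hypothetical base AMI
  ssh_username  = "ec2-user"
  ami_name      = "myapp-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.app"]

  provisioner "shell" {
    inline = [
      "git clone https://example.com/php-project.git /var/www/php-project",
      "cd /var/www/php-project && make update",
    ]
  }
}
```

The instance then boots with the code already built, so scale-out adds capacity immediately instead of waiting for a pull-and-build on startup.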
Making a release:
Keep the old ASG at 100% and gradually increase the capacity of the new one until it also reaches 100% of the instance count. Traffic now goes both to the old instances and to the new ones (with fresh code). Watch the logs: is everything all right, does the new version behave properly? If so, reduce the old ASG's instance count to zero. The release is done.
If something is wrong with the new version, you can easily roll back to the old one. Even after a while (an hour, a day, a week), you can make a clean, controlled rollback to any of your releases, since you keep all the ASGs: scale up the capacity of the release you need and scale down the one you don't. This is zero-downtime deployment. Incidentally, it is also a convenient way to do A/B testing.
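Under these assumptions, the capacity shift comes down to two Auto Scaling calls; the group names and sizes here are hypothetical:

```shell
# Bring the new release's ASG up to full capacity:
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name myapp-release-42 --desired-capacity 4

# Watch logs and metrics; once the new version looks healthy,
# scale the old release's ASG down to zero (keep the group around
# so you can roll back later by scaling it up again):
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name myapp-release-41 --desired-capacity 0
```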
P.S. Creating ASGs can also be done conveniently: Terraform.
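A minimal Terraform sketch of one per-release ASG, assuming hypothetical resource names, AMI ID, and subnet (attaching the group to your load balancer's target group is omitted for brevity):

```hcl
# One launch template + ASG per release, so old and new releases
# coexist and capacity can be shifted between them.
resource "aws_launch_template" "release_42" {
  name_prefix   = "myapp-release-42-"
  image_id      = "ami-0fedcba9876543210"   # the AMI baked by Packer
  instance_type = "c4.xlarge"
}

resource "aws_autoscaling_group" "release_42" {
  name                = "myapp-release-42"
  min_size            = 0
  max_size            = 10
  desired_capacity    = 4
  vpc_zone_identifier = ["subnet-aaaa1111"]  # hypothetical subnet

  launch_template {
    id      = aws_launch_template.release_42.id
    version = "$Latest"
  }
}
```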
So where is the bottleneck?
If the database is the bottleneck, cluster it and optimize it.
If the database is fine but the backend is slow, put some kind of load balancer in front of several backends that all talk to the same database.
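Since nginx is already in place, the "balancer plus several backends" option can be sketched with its upstream module; the backend addresses and port are hypothetical:

```nginx
# nginx as a simple load balancer: round-robin over identical
# backends, all pointing at the same database.
upstream api_backends {
    server 10.0.1.10:3000;   # hypothetical backend instances
    server 10.0.1.11:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://api_backends;
        proxy_set_header Host $host;
    }
}
```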
In general, find out what suffers the most under load.