Flask
Konstantin, 2015-03-13 10:19:16

The web application has been written. What's next?

I've written something like a CMS in Flask. There are several customers, and several "sites".
I can't work out how to deploy all of this properly on a single VPS.
Most recommendations are to put nginx in front, facing the outside world, and have it pass requests on to individual application instances running in their own virtual environments; or rather, not to the applications themselves, but to instances of a WSGI server such as gunicorn or uwsgi, which then passes the request on to the application.

Why is it so complicated? Or is there some rationale behind it?

After all, with such a setup you have to edit the configs of every program in the chain (nginx, plus one for each WSGI server instance, plus each application's own config). Or are there tools to automate this mess? That's the first problem.
Then there's the duplication of the application and its environment, both on disk and in memory, for every instance. That's the second.
And when you update the application, you have to update every instance. That's the third.
And so on.

Or are there simply no alternatives?

PS Don't judge too harshly, this is my first time doing this.



2 answers
bromzh, 2015-03-13
@Lord_Prizrak

    _______                         ________
   |       |                       |        |
   |   n   | -> site1.com ->|  |-->| uwsgi1 |-->|   |--> app1 for site1
   |   g   |                |  |   |________|   |   |
-->|   i   | -> site2.com ->|->|    ________    |-->|--> app2 for site2
   |   n   |                |  |   |        |   |   |
   |   x   | -> site3.com ->|  |-->| uwsgi2 |-->|   |--> app3 for site3
   |_______|                       |________|

This is the approximate general structure for deploying several Python WSGI applications.
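For reference, each "app" box in the diagram is just a WSGI callable. A minimal sketch (the function and the response text here are invented for illustration; a Flask app object implements this same interface):

```python
# Minimal WSGI application: the kind of callable each uwsgi worker loads.
def application(environ, start_response):
    body = b"Hello from app1\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]  # a WSGI response is an iterable of bytes
```

uwsgi would then be pointed at it with something like `module = app:application` in its config.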
1) Nginx is put in front mainly for:
a) serving static files
b) load balancing
It is fast and reliable, it serves static files better than uwsgi, and on top of that you can set up things like HTTPS on it. However, nginx cannot run Python applications itself. For that, it proxies the request to a WSGI-compatible server.
2) All of the Python applications are run inside the WSGI server. Uwsgi is quite flexible to configure; see the docs. One of its nicest features is emperor mode: uwsgi can scan a directory for configs and automatically pick up the Python applications. Usually you create one directory and symlink each application's config into it.
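A sketch of what that emperor-mode layout might look like (all paths, app names, and values below are invented for illustration):

```ini
; /etc/uwsgi/vassals/ is the directory the emperor scans;
; each symlinked .ini becomes one running app (a "vassal"):
;
;   uwsgi --emperor /etc/uwsgi/vassals
;
;   /etc/uwsgi/vassals/site1.ini -> /srv/site1/uwsgi.ini
;   /etc/uwsgi/vassals/site2.ini -> /srv/site2/uwsgi.ini

; Example per-app config, /srv/site1/uwsgi.ini:
[uwsgi]
chdir  = /srv/site1
module = app:application
venv   = /srv/site1/venv
socket = /tmp/uwsgi/site1.sock
```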
3) Uwsgi can listen either on a regular TCP socket or on a unix socket. Whichever you choose, you will need to specify it in the nginx config.
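The two options differ only in the `socket` line of the uwsgi config and the matching `server` line in nginx (the port and path below are examples):

```ini
; TCP socket:
;   socket = 127.0.0.1:8001        -> nginx: server 127.0.0.1:8001;
; Unix socket:
;   socket = /tmp/uwsgi/app1.sock  -> nginx: server unix:/tmp/uwsgi/app1.sock;
```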
4) Uwsgi is best run under supervisord. It restarts the application if it crashes, makes it easy to configure launching several similar daemons, can redirect stdout/stderr, set environment variables, and so on. Again, see the docs. In its config you specify how to run uwsgi and which uwsgi config (or directory of configs) it should read.
5) If the server has N cores, it makes sense to run N-1 uwsgi processes on different ports / with different socket files. Nginx can then load-balance between them. You can start the process group either through supervisor or via the settings in the uwsgi config itself, whichever is more convenient. The practical difference is that in the first case, when one uwsgi crashes, the rest keep running; in the second case, all the uwsgi processes will (most likely) be restarted.
6) There is no need to describe each uwsgi server separately in the nginx config; there is an upstream block for the group.
7) As far as I understand, if there is just one Python application, it is better to run several uwsgi instances through supervisor; if there are many applications, run several uwsgi instances in emperor mode.
I don't remember the config syntax exactly, but it should look something like this:
# supervisor config:
[program:uwsgi]
numprocs = 3  ; for a 4-core server
command = uwsgi --emperor /path/to/conf/dir --socket /tmp/uwsgi/uwsgi-%(process_num).sock

Or like this:
# uwsgi config: /path/to/conf/default.ini
[uwsgi]
socket = /tmp/sockets/uwsgi-%(vassal_name).sock

# supervisor config
[program:uwsgi]
command = uwsgi --emperor /path/to/conf/dir --vassals-include /path/to/conf/default.ini

In any case, the whole thing is then easy to hook up in nginx:
upstream backend {
    server localhost:8001;  # for TCP sockets
    server localhost:8002;

    server unix:/tmp/uwsgi/uwsgi-1.sock; # for unix sockets
    server unix:/tmp/uwsgi/uwsgi-2.sock;
}
# And then you just proxy to it
# (note: listen and server_name belong in the server block, not inside location):
server {
    listen       80;
    server_name  site1.com;
    location / {
        proxy_pass http://backend;
    }
}

server {
    listen       80;
    server_name  site2.com;
    location / {
        proxy_pass http://backend;
    }
}

PS If the number of Python applications is comparable to the number of processors, it might be better to set it up as one uwsgi instance per application. But I don't know for sure whether that makes sense; read the uwsgi and nginx docs carefully.

un1t, 2015-03-13
@un1t

In short, you can do without all this and expose your application directly to the outside. But there really are reasons for this architecture: it lets you serve more requests and, oddly enough, reduces memory consumption. I'm too lazy to write more; there is plenty of information on this topic.
When updating the application, you only need to restart uwsgi (or gunicorn), and that's it.
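For the restart step, uwsgi's touch-reload option makes an update a one-command affair (the path below is an example):

```ini
; in the app's uwsgi config:
[uwsgi]
touch-reload = /srv/site1/reload.txt
```

After deploying new code, `touch /srv/site1/reload.txt` gracefully reloads the workers without dropping in-flight requests.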
