uWSGI workers + TensorFlow: how to set up model initialization for each worker?
I ran into a problem with Flask and TensorFlow in a REST API service.
Given:
Suppose we have a web service written in Flask and served by uWSGI. The service also uses a Keras model running on the TensorFlow backend.
The service has a global variable (an initialized model) that must be set up when each worker starts.
Problem: the model is currently initialized once, at service startup. For example, in wsgi.py, after importing the application, the model is initialized like this:
# wsgi.py
from web.app import application
from keras_backend import KerasModel

# Runs once at import time, i.e. in the uWSGI master before workers are forked
application.keras_model = KerasModel(model_path='models/keras-weights.hdf5')
[uwsgi]
socket = 0.0.0.0:8000
protocol = http
chdir = /opt/project
module = wsgi:application
threads = 1
workers = 1
listen = 128
harakiri = 60
single-interpreter = true
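The heart of the issue can be shown without uWSGI at all: anything built at import time lives in the parent process, and fork()ed workers only inherit a copy of it, which is exactly what breaks per-worker TensorFlow state. A toy sketch (the hypothetical build_model stands in for the KerasModel load; Unix-only, since it uses os.fork):

```python
import os

def build_model():
    # Stand-in for the expensive KerasModel(...) initialization.
    return {"built_in_pid": os.getpid()}

# "Preload" in the parent process, the way wsgi.py does at import time.
model = build_model()

child = os.fork()
if child == 0:
    # Child process (a "worker"): it only inherited a copy of the
    # parent's model; nothing was initialized in this process.
    assert model["built_in_pid"] != os.getpid()
    os._exit(0)

os.waitpid(child, 0)
```

The child's assertion passes because the model records the parent's pid: the worker never ran the initialization itself, it just got a memory copy of the result.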
I found the answer to my question.
If anyone runs into this problem in the future, it can be solved with the lazy-apps = true option.
With this option, uWSGI loads the application in each worker after forking, instead of once in the master, so every worker initializes its own model.
The ini file now looks like this:
[uwsgi]
socket = 0.0.0.0:8000
protocol = http
chdir = /opt/project
module = wsgi:application
cheaper = 2
processes = 16
listen = 128
harakiri = 60
master = true
reaper = true
enable-threads = true
single-interpreter = true
py-autoreload = 1
lazy-apps = true
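Besides lazy-apps, uWSGI also exposes a @postfork hook (in its uwsgidecorators module) that runs in every worker right after it is forked. That allows re-initializing only the model per worker while keeping the rest of the app preloaded in the master. A hedged sketch, reusing the web.app and keras_backend names from the question; init_model is a hypothetical helper, not part of any library:

```python
def init_model(app, model_factory, model_path='models/keras-weights.hdf5'):
    # model_factory stands in for the KerasModel class from the question;
    # attach a freshly built model to the Flask app object.
    app.keras_model = model_factory(model_path=model_path)

try:
    # uwsgidecorators is only importable when running under uWSGI.
    from uwsgidecorators import postfork
    from web.app import application
    from keras_backend import KerasModel

    @postfork
    def _load_model_in_worker():
        # Runs in every worker right after uWSGI forks it, so each
        # worker builds its own TensorFlow graph/session.
        init_model(application, KerasModel)
except ImportError:
    # Not running under uWSGI (e.g. a local dev server).
    pass
```

The trade-off versus lazy-apps: postfork keeps the copy-on-write memory savings of preloading the application, at the cost of having to list explicitly which objects must be rebuilt per worker.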