Python
last7, 2015-01-08 21:58:11

How to work with the database in applications on Tornado?

I have a task to implement a real-time application. I started digging in this direction and learned a lot, but one question remains open: how should I work with data, for example in a Tornado application?
I have come across answers in several categories:
1) Use MongoDB (with motor).
2) Use asynchronous wrappers around standard libraries such as psycopg2, which allow you to run queries asynchronously.
3) Do not use asynchrony at all, but keep the queries small and fast.
Which is better?
For a chat, for example, option 3 is in principle suitable, since no complex queries are expected there. But if you need something more serious, a JOIN or a large selection, what should you do?


1 answer
Sergey Martynov, 2015-01-12
@smart

There seem to be several questions hidden in your question. I will try to separate them and answer each.
1. Should you write a completely asynchronous application, or try to make one that works synchronously but fast? The answer depends on the specifics of the application. However, if you need to make several independent requests to different sources, there is already a clear speedup from running those requests simultaneously. The effect is fully apparent when the requests go to external services hosted on other machines, especially services that are slow from the network's point of view (for example, APIs of third-party services that work over HTTP).
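To make the "independent requests run simultaneously" point concrete, here is a minimal sketch. It uses the standard-library asyncio event loop purely to illustrate the principle (in Tornado you would use its own coroutines and `tornado.gen.multi` for the same effect); the `fake_fetch` helper and its delays are hypothetical stand-ins for slow external calls:

```python
import asyncio
import time

async def fake_fetch(name, delay):
    # Stand-in for a slow external call (e.g. a third-party HTTP API).
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.monotonic()
    # The three calls are independent, so they run concurrently:
    # total wall-clock time is roughly the slowest one, not the sum.
    results = await asyncio.gather(
        fake_fetch("users", 0.2),
        fake_fetch("orders", 0.2),
        fake_fetch("stats", 0.2),
    )
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))  # elapsed is about 0.2 s, not 0.6 s
```

Run sequentially, the same three calls would take about 0.6 s; run concurrently they take about 0.2 s, which is exactly the win the answer describes for independent requests.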
It is important to understand that in many cases asynchrony does not speed up the application's response to an individual external request: if the application needs to make a request to mongodb and then, based on the received data, a request to mysql, you still cannot respond before both queries have run. In that case you are really only saving memory (and a little CPU), because instead of 40 parallel processes serving requests you can run, say, 4 (one per core).
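The dependent-queries case can be sketched the same way. Again this is an asyncio illustration with made-up `query_mongo` / `query_mysql` stubs, not real driver calls; the point is that the second await cannot start until the first finishes, so asynchrony frees the process to serve other clients while waiting but does not shorten this one response:

```python
import asyncio
import time

async def query_mongo():
    # Pretend mongodb query returning a document we need first.
    await asyncio.sleep(0.1)
    return {"user_id": 42}

async def query_mysql(user_id):
    # Pretend mysql query that depends on the mongodb result.
    await asyncio.sleep(0.1)
    return {"id": user_id, "name": "alice"}

async def handler():
    start = time.monotonic()
    doc = await query_mongo()
    row = await query_mysql(doc["user_id"])  # must wait for doc
    return row, time.monotonic() - start

row, elapsed = asyncio.run(handler())
print(row, round(elapsed, 2))  # about 0.2 s: the waits cannot overlap
```

The total is the sum of the two delays, exactly as with synchronous code; the benefit is only that one process can interleave many such handlers instead of blocking.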
In my opinion, it is risky and inconvenient to combine asynchronous and synchronous approaches within one process; it is more correct and more convenient either to write synchronous code or to use honest asynchrony everywhere (this is what I like about nodejs). Of course, your project can then consist of both asynchronously running processes and synchronously executing scripts. So I recommend looking at the specifics of the project as a whole and "cutting" it into the logical parts where the synchronous approach is easier to apply and those where the asynchronous approach is justified.
2. Which DBMS should you use? A lot has been written about this, but again, it depends on the task. In recent years NoSQL databases have developed very well and offer interesting possibilities. For example, if you are writing a chat with bells and whistles, a publish-subscribe mechanism may come in handy (it exists natively in redis, and is also implemented on top of mongo and others). If the scale of the project is large and you need scalability and reliability, look at riak and aerospike (they scale well in multi-server configurations). And so on; if you like, write more about the project and we can think it through together.
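To show what the publish-subscribe mechanism buys a chat, here is a toy in-process version of the pattern. This is only an illustration of the idea, not redis itself: in redis the same roles are played by the PUBLISH and SUBSCRIBE commands, with channels shared across processes and machines. The `PubSub` class and channel names below are invented for the example:

```python
from collections import defaultdict

class PubSub:
    """Toy in-process publish-subscribe bus, illustrating the pattern
    that redis provides natively (and across processes) via
    PUBLISH/SUBSCRIBE."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        # Register a callback to be invoked for every message on channel.
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        # Deliver the message to every subscriber of this channel.
        for callback in self._subscribers[channel]:
            callback(message)

bus = PubSub()
received = []
bus.subscribe("chat:room1", received.append)
bus.publish("chat:room1", "hello")
bus.publish("chat:room2", "ignored")  # no subscribers on this channel
print(received)
```

In a real chat each open websocket would subscribe to its room's channel, and any process that accepts a message would publish it, so the sender does not need to know who is listening.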
3. How do you work with a given DBMS from python specifically? I am not a great specialist in python, but as far as I understand there are libraries for asynchronous work both with SQL DBMSs (at least for Postgres) and with NoSQL (at least for Mongo and for redis).
