Mobile development
@defin, 2017-09-05 20:57:14

How should the backend of a mobile application be properly organized?

There is a mobile application (currently in development). We send it a message, and when the message is read, the application tells the server so. The backend needs to process this request and save it to the database. The application reports the message id and the application id, and the server then runs two queries:
UPDATE messages SET read = read + 1 WHERE id = <message id>;
INSERT INTO log (id_mes, id_pril, date) VALUES (<message id>, <application id>, NOW());
and returns a conditional OK in response. The application waits for the response for up to 10 seconds in the background.
At peak times, up to 10,000 such requests per second can arrive (the designed load), and they come in waves: we send, say, 100,000 messages, people start reading them, and read notifications rain down on the server. Since every application waits up to 10 seconds for a response, the answer does not have to be returned immediately; some queueing is acceptable. All of this needs to be handled and stored correctly.
How should this be built? Several servers for processing and one for the database? Load balancing? If so, which way should I look, and what server configurations are needed?
Or should saving to the database be reconsidered altogether? I.e. allow only inserts, and then have a separate bot process them and perform the updates...
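The insert-only idea from the question could look roughly like this sketch (all names here are hypothetical, and an in-memory slice stands in for the real log table): the request handler only appends to the log and returns OK, while a background job periodically folds the log into per-message read counts for bulk UPDATEs.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// LogEntry mirrors the log table from the question: message id,
// application id, and a timestamp.
type LogEntry struct {
	MsgID, AppID int
	Date         time.Time
}

// Store buffers log entries; the slice stands in for INSERT INTO log (...).
type Store struct {
	mu  sync.Mutex
	log []LogEntry
}

// Record is the hot path: a single append, no UPDATE, then return "OK".
func (s *Store) Record(msgID, appID int) string {
	s.mu.Lock()
	s.log = append(s.log, LogEntry{msgID, appID, time.Now()})
	s.mu.Unlock()
	return "OK"
}

// Aggregate is what the separate "bot" would run periodically: it folds
// the log into per-message read counts, which then become bulk UPDATEs
// on the messages table.
func (s *Store) Aggregate() map[int]int {
	s.mu.Lock()
	defer s.mu.Unlock()
	counts := make(map[int]int)
	for _, e := range s.log {
		counts[e.MsgID]++
	}
	return counts
}

func main() {
	s := &Store{}
	s.Record(1, 100)
	s.Record(1, 101)
	s.Record(2, 100)
	c := s.Aggregate()
	fmt.Println(c[1], c[2]) // 2 1
}
```

The point of the split is that the client-facing path does the cheapest possible write, and all contention on the messages rows moves into the background job, which can run at whatever interval the 10-second response budget allows.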


3 answers
thingInSelf, 2017-09-05
@thingInSelf

At peak times, up to 10,000 such requests per second can arrive (the designed load), and they come in waves: we send, say, 100,000 messages, people start reading them, and read notifications rain down on the server.

I happen to be testing high loads right now.
The Linux networking stack handles this poorly. To receive 30,000 requests per second (on a virtual machine), you have to dedicate an entire CPU core at 100%, and that is before you actually process the data and send a response. Sending the reply loads a core almost as much as receiving, just slightly less.
On real hardware rather than a VM, people say you can get roughly 10 times more, but that means a dedicated server.
So 100,000 per second will be very difficult.
If the load comes in bursts, a cloud with autoscaling will save you.
Otherwise, you will have to dig seriously into how to optimize and how to parallelize.

Alexander Trakhimenok, 2017-09-05
@astec

Look towards cloud solutions.
For example, the Google App Engine Datastore can scale almost limitlessly, and on top of that the task queues are free. You can enqueue a "message read" event and update the database in batches. The client receives a response within 100-500 milliseconds, and 100,000 messages can be processed in anywhere from a couple of seconds to a minute, depending on the limit on the number of instances.
I'm using the App Engine Go standard environment for my debt tracking app https://DebtsTracker.io/ and I'm very happy with it.
I chose Go because its instances start very quickly.
I chose the Standard environment rather than Flexible so that I don't have to manage scaling myself, although Flexible can now scale automatically as well.

Igor Kalashnikov, 2017-09-07
@zo0m

You can write not to the database but to something like Redis and, for example, run bulk queries once a minute. That saves time on round trips.
The UPDATE can be replaced by counting the number of reads from the log table.
In general, it is best to build a couple of PoCs, take something like JMeter, and measure.
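To make the "bulk queries once a minute" step concrete, here is a small sketch (the function name is hypothetical; the table and column names come from the question) of collapsing a minute's worth of buffered receipts into a single multi-row INSERT, one round trip instead of one per receipt:

```go
package main

import (
	"fmt"
	"strings"
)

// bulkInsert renders one multi-row INSERT for buffered (message id,
// application id) receipts, using the log schema from the question.
func bulkInsert(rows [][2]int) string {
	if len(rows) == 0 {
		return ""
	}
	vals := make([]string, 0, len(rows))
	for _, r := range rows {
		vals = append(vals, fmt.Sprintf("(%d, %d, NOW())", r[0], r[1]))
	}
	return "INSERT INTO log (id_mes, id_pril, date) VALUES " +
		strings.Join(vals, ", ") + ";"
}

func main() {
	// Three receipts drained from the Redis buffer become one statement.
	fmt.Println(bulkInsert([][2]int{{1, 100}, {1, 101}, {2, 100}}))
}
```

With the receipts all landing in the log this way, the read counter on messages no longer needs its own UPDATE per receipt, which is exactly the "count reads from the log table" rewrite suggested above. In production the values should of course go through parameterized queries rather than string formatting; the sketch only shows the batching shape.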
