How will the server cope with more than 20 SQL queries per game?
The situation is this: it's a game portal. The game itself runs on node.js; node then sends data to a PHP server, which in turn calculates the result and writes it to the database.
Here is how it works:
Users finish a game, node.js sends an array of users with their game points to the PHP script, PHP determines the winners, and now we need to make a bunch of queries to the database (a rough sketch follows the list below):
1) get the game data and make sure we received valid data
2) write to the log table that such-and-such a game was played
3) run 4 queries per player:
- fetch the player's data
- update the rating, etc.
- record that they played the game (at 20:21 the user won or lost, etc.)
- save the updated data
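A rough sketch of that per-game processing in PHP (every table, column and field name here is invented purely for illustration):

```php
<?php
// Illustrative only: what the PHP result handler might do per game.
function processGameResult(PDO $pdo, array $game): void
{
    // 1) verify the game data
    $stmt = $pdo->prepare('SELECT id FROM games WHERE id = ?');
    $stmt->execute([$game['id']]);
    if (!$stmt->fetch()) { return; }

    // 2) log that the game was played
    $pdo->prepare('INSERT INTO game_log (game_id, played_at) VALUES (?, NOW())')
        ->execute([$game['id']]);

    // 3) four queries per player: read, update rating, log outcome, save
    foreach ($game['players'] as $p) {
        $pdo->prepare('SELECT rating FROM players WHERE id = ?')->execute([$p['id']]);
        $pdo->prepare('UPDATE players SET rating = rating + ? WHERE id = ?')
            ->execute([$p['delta'], $p['id']]);
        $pdo->prepare('INSERT INTO player_log (player_id, outcome) VALUES (?, ?)')
            ->execute([$p['id'], $p['outcome']]);
        $pdo->prepare('UPDATE players SET games_played = games_played + 1 WHERE id = ?')
            ->execute([$p['id']]);
    }
}
```

For a 4-player game that is 2 + 4 × 4 = 18 queries, i.e. the roughly 20 mentioned below.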
A game has 2 to 4 players. There are 10 games in total, and each game can be played by about 1000 people.
By my estimate, processing the result of one game with 4 players takes about 20 database queries.
Please tell me: if traffic grows, how will the server react to this kind of operation?
For example, if 100 matches are running simultaneously across the 10 games, the server will need to make 2000 database queries. What will that lead to? And would a web cluster solve the problem under load?
To solve this kind of problem you should use a task queue.
You create tasks ("do this and that") and push them into the queue.
N workers then process that queue.
This way the load on the server is even rather than avalanche-like.
Depending on importance, tasks can be given different priorities, and so on.
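A minimal sketch of that pattern, assuming a Redis-backed list as the queue (the key name `game_results` and the payload shape are invented):

```php
<?php
// Producer side (the web-facing PHP script): instead of doing the
// heavy DB work inline, push the task into a Redis list and return.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// $result would come from node.js; the shape is illustrative only
$result = ['game_id' => 7, 'players' => [['id' => 1, 'score' => 42]]];
$redis->rPush('game_results', json_encode($result));

// Worker side (a separate long-running PHP process; run N of these):
while (true) {
    $item = $redis->blPop(['game_results'], 0); // 0 = block until a task arrives
    $task = json_decode($item[1], true);
    // ... run the ~20 queries for this game at a controlled pace ...
}
```

With N such workers you decide exactly how many games are hitting the database at any moment, no matter how many finish at once.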
This holds, of course, only if the queries are optimized and lightweight. 2000 simple insert/update queries are one thing; 2000 queries with a pile of left joins, subqueries and temporary tables are quite another.
The input data is rather abstract: which database, which server, what else this PHP script does. A lot of uncertainty.
But in short: it shouldn't be too bad.
First, a lot depends on how the database is designed (I mean the storage tables). Look inside any modern CMS and you will see a pack of 20-30 SQL queries just to build one page, and yet such sites handle thousands of requests. (Caching helps, of course, but still.)
Second, you can optimize the queries: for example, fetch the data for all the players at once in a single query instead of one query per player.
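For instance, one `IN (...)` query for all players of a game instead of a SELECT per player (a sketch using PDO; the `players` table and credentials are hypothetical):

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass'); // placeholders

$playerIds = [101, 102, 103, 104];
$placeholders = implode(',', array_fill(0, count($playerIds), '?'));

// One round trip to the database instead of four
$stmt = $pdo->prepare("SELECT id, rating FROM players WHERE id IN ($placeholders)");
$stmt->execute($playerIds);
$players = $stmt->fetchAll(PDO::FETCH_ASSOC);
```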
And the uneven load can be smoothed out like this: node writes the results to a file, while PHP runs as a daemon (or is triggered at regular, or dynamically adjusted, intervals) and processes the data from those files.
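A rough sketch of such a daemon, assuming node drops one JSON file per finished game into a spool directory (the path and file format are made up):

```php
<?php
// Long-running PHP process: scan a spool directory, process each
// result file, then delete it so it is not processed twice.
$spool = '/var/spool/game-results'; // hypothetical location

while (true) {
    foreach (glob("$spool/*.json") as $file) {
        $result = json_decode(file_get_contents($file), true);
        // ... run the DB queries for this game result ...
        unlink($file);
    }
    sleep(1); // the pause between scans is what keeps the DB load even
}
```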
Temporary tables, by the way, can be created implicitly by the DBMS while executing a query.
But in general, 2000 queries of a few milliseconds each will add a delay, though nothing critical.
At roughly 1 ms per query that is a second or two of total work; on a multi-core server most likely nothing will even be noticeable.
Try using ab to fire 100 concurrent requests at a page that performs dummy queries (something like `ab -n 1000 -c 100 http://your-server/test-page`, where the URL is whatever your test endpoint is) and see how it behaves. It takes ten minutes to check, while here we are reading tea leaves.
Nobody will give you an exact answer. You need to write your own tool that simulates the load (taking your specifics into account) and run it with different settings. That is the only way.
DBMSs come in two flavors:
- locking model: reads return values without any complications, and the execution logic is very simple; but while any record is being changed, other operations on it are blocked and resume only once the change completes.
- transactional (MVCC) model: records can be read and modified concurrently. Changed records exist as temporary copies that wait for the change to be committed or rolled back; until the commit, all other readers see only the data as it was before the change.
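Under the transactional model, the per-player updates can be wrapped in one transaction so other readers see either the old state or the fully committed new one (a minimal PDO sketch; table names and credentials are hypothetical):

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass'); // placeholders
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $pdo->beginTransaction();
    // Until commit, other connections still see the old rating
    $pdo->prepare('UPDATE players SET rating = rating + ? WHERE id = ?')
        ->execute([25, 101]);
    $pdo->prepare('INSERT INTO game_log (player_id, outcome) VALUES (?, ?)')
        ->execute([101, 'win']);
    $pdo->commit(); // both changes become visible atomically
} catch (Exception $e) {
    $pdo->rollBack(); // discard the temporary copies on failure
    throw $e;
}
```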