Do I need to compress data during client-server transfer? How should the server work?
I am writing a program in C++.
Data from the client is transmitted to the server 2-3 times per second. The server calculates and sends a response.
Transfer via Win API sockets.
There are about a thousand clients. Server one.
I calculated that a client sends, on average, about 1 KB of data per request.
There are two questions:
1. Is 1 KB of data per request a lot or a little by today's standards? I can compress the data, but is it necessary?
I can compress the data down to about 300 bytes.
Or send only the data that has changed.
Question 2:
Is this server logic reasonable:
The server loops over all connected sockets, looking for messages. If a socket has one, the server reads it, computes the result, responds, and moves on to the next socket.
Another option: the server collects data from everyone in one loop, computes everything in a second loop, and sends the responses to everyone in a third loop.
Which way is right?
1) Whether to compress or not is up to you. Maybe you have a gigabit LAN and server/client performance is critical. Or the opposite — GPRS.
2) The right way — a separate thread per connection, with all the logic in it. IMHO.
1. If the server is powerful, it is better to compress the data: you save the time spent copying from buffer to buffer, and there is a high probability that everything will fit in one frame rather than being segmented;
2. Usually a new thread is created for each connection, and it works with that client (processes incoming data and sends the response). Although I would advise switching to an asynchronous model over time; if the project is not expected to grow, you don't have to bother.
"A separate thread per connection with all the logic" is not the right approach; the author would do better to look at asynchronous sockets — Java has Netty for this. For plain C++ I can't name one offhand, but fully asynchronous servers exist there too.
The point of what I write next is to:
1) avoid a huge number of OS context switches caused by hundreds or thousands of threads;
2) avoid extra locks in the game logic.
The way Netty works as a network framework, you can listen for the data-received event on a socket and either process the data immediately in one of the framework's few threads, without blocking (external interaction — databases and the like — blocks and takes a long time, so it must not happen there), or put it in a queue (a lock on a data structure such as ArrayBlockingQueue) and process it in an external worker thread. You can also process with multiple workers, but for that to make sense it is better to split different sources across different workers: for example, worker 1 handles even GameIds and worker 2 handles odd ones, where every Game has a GameId assigned when the game starts.
On compression specifically: if you can compress quickly, then compress. Transmitting a delta over UDP is dangerous because of packet loss and reordering; over TCP, packet order is guaranteed and losses are retransmitted, so delta compression is fine there. I mention the delta separately because it is its own specific kind of "compression". In shooters, for example, you can simply transmit the full current coordinates — it is more reliable and simpler. If your logic is more complicated and a lot of data is transmitted, then it is worth thinking about.
But in any case it will cost CPU on the server. Try lz4 or something else fast; there are many specialized compression algorithms, but I don't know which would suit you and be fast enough.
If you have a shooter or a networked action game, this architecture may suit you:
-- the client sends input N times per second (3 times per second in your case, but shooters do 30);
-- the server computes game ticks (collisions, hits, damage, movement) U times per second;
-- the server sends responses to clients Y times per second (in a separate task and loop, so all clients most likely get a snapshot computed from the same game tick). In shooters I do this 30 times per second; Y can differ from N.
But in any case, when the simulation runs on the server, interpolation and prediction (extrapolation) on the clients can still be useful (because at any given moment there is no fresh data from the server, yet something has to keep moving), as can a "time machine" (rollback when the prediction turned out wrong).
Your question says nothing about what kind of calculations you are doing — per-user, or something global whose result every client needs without exception. The server logic depends directly on this.
As for compression, the answer is simple. It doesn't matter whether you send a lot or a little; decide based on whether you need to save traffic. If you pack the data, you pay in CPU time. Simply calculate how long packing and unpacking will take, whether you still meet your timing, and whether the traffic savings are a win — if the client's connection is poor, compression may even save you time overall.
As for polling threads — you have already been answered. Iterating over 1000 threads hoping one of them has something is wrong: you lose time on context switches. A server with that many connections needs an event model.