Limiting process resources in Linux?
We have decided to launch our own game hosting service and are close to finalizing the architecture. Some questions we cannot solve on our own, so we appeal to the collective mind of Habr.
The main question so far is this: there is a physical server running Debian, and it needs to host a certain number of client game servers (one server = one process), with each process flexibly limited in resources. Googling turned up the nice and cpulimit utilities, but deeper googling revealed numerous problems with them, and it is not entirely clear how to manage them dynamically. Say there are 5 clients, so we (roughly) give each client 20% of CPU time and 20% of RAM. Then another client is added, and these quotas need to change somehow without restarting the processes.
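For illustration, here is a minimal sketch of the "recompute and reapply" idea using the utilities mentioned above. The PID list is a hypothetical placeholder; both renice and cpulimit act on already-running processes, so no restart is needed when the quota changes.

```shell
# Sketch only: recompute each client's CPU share when the client count
# changes, then apply it to the running processes without restarting them.
# PIDS is a hypothetical list of game-server process IDs.
PIDS="1234 5678"                             # placeholder PIDs
NCLIENTS=$(echo $PIDS | wc -w)
LIMIT=$((100 / NCLIENTS))                    # equal share of one core, in percent
for pid in $PIDS; do
    kill -0 "$pid" 2>/dev/null || continue   # skip PIDs that are not alive
    renice -n 10 -p "$pid"                   # soft limit: lower scheduling priority
    cpulimit -p "$pid" -l "$LIMIT" &         # hard cap at LIMIT percent CPU
done
```

Note that cpulimit works by repeatedly stopping and resuming the process, so it caps average usage rather than enforcing a strict scheduler-level quota.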
There was also an idea to build our own super-lightweight distribution and run several virtual machines on qemu, but then it is not entirely clear how to manage the game-server process inside the guest machine: restarting it, for example. We would rather not restart an entire virtual machine either; there will be a monitoring system that tries to restart fallen instances. We could add the game-server startup script to autorun, but it is not clear what to do when it crashes, or how, say, to read its logs from the client machine.
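The "restart on crash without rebooting anything" part can be sketched as a simple watchdog loop. The server command and log name here are hypothetical stand-ins, and the loop is bounded only so the sketch terminates; a real watchdog would loop forever.

```shell
# Watchdog sketch: rerun the server whenever it exits and append its
# output to a log file. SERVER and LOG are hypothetical placeholders.
SERVER="/bin/true"                 # stand-in for the game-server binary
LOG="server.log"
RESTARTS=0
while [ "$RESTARTS" -lt 3 ]; do    # a real watchdog would use: while true
    "$SERVER" >>"$LOG" 2>&1        # run the server, capture stdout and stderr
    RESTARTS=$((RESTARTS + 1))
    echo "$(date): server exited, restart #$RESTARTS" >>"$LOG"
done
```

Because the log file lives outside the server process, admins can read crash history at any time without logging into a guest machine.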
In short, we have some gaps in Unix administration that we cannot fill on our own. Any ideas or suggestions would be welcome, thanks!
On the virtual machines, alongside the game server, you can install your own agent that will monitor and manage the game server.
If resource consumption changes relatively smoothly, say you can roughly guarantee that the frequency of extrema on the usage chart will not exceed 1 Hz, you can use a moving average (AVG): a simple script in Bash or Perl that periodically inspects processes by their resource consumption and adjusts them.
If sharper spikes in resource consumption are possible, there are no effective solutions. Virtualization will not help: in a number of common cases it is quite possible to hang the host machine from inside a guest. As a test, try resizing a large batch of images with ImageMagick's convert using a wildcard. That very experiment once hung a well-known hosting provider, even though it was run inside Virtuozzo.
Offtopic. FreeBSD cannot manage resource consumption in real time at all; we have already been through that stage, so don't even look in its direction. With Debian and Ubuntu, my colleagues and I succeeded.
Solution: try to predict the load and provision resources for the worst case. As practice shows, this is a genuinely effective approach. If financing becomes the bottleneck, reconsider the business model. Google, by the way, did exactly that ...
Thank you all for the answers. Unfortunately, we could not find a solution adequate to our tasks, so we will go with the least desirable option: leaving everything unrestricted (according to the server application developers, the probability of memory leaks, for example, is minimal). There will simply be active monitoring that warns admins when resources are running out.
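The "monitor and warn instead of limit" approach they settled on can be sketched in a few lines. The 512 MB threshold and the plain echo are assumptions; a real setup would pick its own threshold and send the warning to admins (mail, chat bot, etc.).

```shell
# Sketch of "warn instead of limit": read available memory from /proc/meminfo
# and emit a warning below an assumed threshold.
MEM_AVAIL_KB=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
MEM_AVAIL_KB=${MEM_AVAIL_KB:-0}    # fall back to 0 on old kernels without the field
THRESHOLD_KB=$((512 * 1024))       # assumed threshold: warn below 512 MB
if [ "$MEM_AVAIL_KB" -lt "$THRESHOLD_KB" ]; then
    echo "WARNING: only $((MEM_AVAIL_KB / 1024)) MB of memory available"
fi
```

Run from cron every minute or so, this gives admins the early warning described above without imposing any limits on the game servers themselves.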