High-load systems: what are the design principles?
Hello everyone. What are the principles for designing high-load systems?
Some books use CGI scripts in Python, call Python from PHP scripts, and so on. Is that generally safe under heavy load, when there are many concurrent users?
How should network services on sockets be built for high load?
Which languages are best suited for this, and which should be avoided?
Thanks
Let's say there are two types of tasks:
- CPU-bound - various algorithms, math, encoding/decoding/encryption... in a word, everything that loads the processor.
- I/O-bound - when there are many input/output operations; roughly 90% of tasks in web and server development fall here.
For CPU-bound tasks, use languages like C, Rust, D, Go, etc. - in a word, languages that compile to efficient machine code.
For I/O-bound tasks - Go, Node.js, Erlang, Java... In principle, the language hardly matters; the main thing is that calls are non-blocking and there are no locks.
There are also task queues, horizontal scaling, etc. The architecture and algorithms used in the system often have a much stronger influence than programming languages.
No restrictions, just common sense. We are unlikely to write complex math in Node.js, but far fewer tasks involve that. Also, nobody says the system must be written strictly in one language: today microservices are fashionable, and each can be implemented in its own language, with its own database, ideally suited to its specific task.
Don't forget about algorithms either. They must also be optimal. For example, take a simple task: clustering markers on a map. Imagine a million objects in the database that need to be displayed on the map. Since doing this on the client is problematic, we must do it on the server and return to the client exactly as much data as it needs.
At such volumes, even in C, if our algorithm has O(N^2) complexity there is not much to be done - it will be slow regardless. But with an O(N log N) algorithm, it may well be feasible even in PHP/Python/Ruby. For example, I have this algorithm implemented in Java, and not in the most efficient way. It copes.
Development speed also matters (Ruby/Python/Node are good in this regard), as does the cost of support (supporting C is much more expensive than Go, for example, although you can always write everything so badly that it is easier to throw it out than maintain it), and the cost of developers... Say, finding cheap, strong Go or Rust developers will be very problematic.
Also, don't forget that servers are not that expensive anymore. Sometimes it is easier for a business to pay for another dozen servers than to rewrite everything in C++.
Actually, the main rule of high-load systems is load testing, then optimization.
I will add that to improve performance you can use microservices written in C/C++ and controlled by a manager process. The manager is contacted through a socket or through a database, handing it tasks to execute.
All page rendering can be left to PHP (for example). The result is a synchronous system with a semi-automatic division of CPU load, which can be made automatic by creating balancing rules that distribute the CPU load across the different modules in different situations.