How do I properly increase the fault tolerance of a web application?
Hello! I have a web application hosted on a single Ubuntu 18.04 server. The application is written in Node.js and works with PostgreSQL and Redis. All requests to the server go through Cloudflare.
The essence of the question is how to increase the fault tolerance and availability of the application by hosting it on several servers. Two things are unclear to me:
1) How to distribute requests between the servers;
2) How to work with PostgreSQL: keep a single database, or somehow synchronize several databases so that each server has its own copy? If a single database, then from what I have heard, the network round trip noticeably slows down responses from the database, and it does not seem reliable;
If you have had similar experience, please tell me how best to implement this.
Read about load balancing, sharding, and replication.
There is no single "correct" way; the implementation depends on the specific requirements. In some cases horizontal sharding is the right choice, in others vertical sharding, and in others no sharding at all.
Regarding balancing requests between servers, you can read this, for example.
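As an illustration of balancing requests between servers, here is a minimal nginx reverse-proxy sketch. The hostnames and ports are placeholders, not taken from the question:

```nginx
# Minimal load-balancing sketch: nginx spreads incoming requests
# across two application servers. app1/app2 are placeholder hosts.
upstream app_backend {
    least_conn;                 # pick the node with the fewest active connections
    server app1.internal:3000;
    server app2.internal:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Cloudflare would point at this balancer instead of a single application server; if one node goes down, nginx routes requests to the remaining one.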
Two DBMSs can be run in master-master mode with automatic synchronization. Then each of the two web application instances should access its own DBMS and store no data anywhere except in the DBMS. Something like that.
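A sketch of the "each instance talks to its own DBMS" part: every application node reads its local database address from the environment, so node A uses database A and node B uses database B, while master-master replication keeps the two databases in sync. The variable and default names here are assumptions for illustration:

```javascript
// Each application instance builds its PostgreSQL connection options
// from its own environment, pointing at the co-located database node.
// Names and defaults are placeholders, not from the original post.
function localDbConfig(env) {
  return {
    host: env.PGHOST || '127.0.0.1',      // this node's local PostgreSQL
    port: Number(env.PGPORT || 5432),
    database: env.PGDATABASE || 'app',
    user: env.PGUSER || 'app',
  };
}

// On server app1, PGHOST=127.0.0.1 makes the instance use its local DB:
console.log(localDbConfig({ PGHOST: '127.0.0.1', PGDATABASE: 'app' }));
```

The important design point is the second half of the advice: the application process itself keeps no state (sessions, caches of record) outside the DBMS, so any node can serve any request.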
The essence of the question is how to increase the fault tolerance of the application by hosting it on several servers.
In short:
Every server is a database client of every other server. When data changes, it writes to all the databases asynchronously at once, and as soon as it has received identical responses from all of them, it returns control to the script.
Static files and scripts live on the same server.
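The write path described above can be sketched as a fan-out: send the change to every database node concurrently and return only once all of them acknowledge with the same result. The replica functions here are stand-ins for real database clients:

```javascript
// Sketch of "write to all databases at once, return when all answers match".
// `replicas` is an array of async write functions (placeholders for real
// per-node database clients); `change` is the data being written.
async function writeToAllReplicas(replicas, change) {
  const results = await Promise.all(replicas.map((write) => write(change)));
  const first = JSON.stringify(results[0]);
  if (!results.every((r) => JSON.stringify(r) === first)) {
    throw new Error('replicas disagree, write must be reconciled');
  }
  return results[0]; // every node acknowledged identically
}
```

In practice you would rely on the DBMS's own replication or a quorum protocol rather than hand-rolling this fan-out; the sketch only illustrates the idea in the answer.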
In most cases, moving a single-node application to multiple nodes will only degrade its performance; or rather, you will end up effectively re-engineering the application while adapting it to multi-node operation.
The world changed long ago, and purely single-node applications are now rare. Either your application consumes the services of other applications (nodes), or sooner or later it will provide its services to other applications (nodes). Hence the requirement to design your application as multi-node from the start. A multi-node application can, as a special case, be operated as a single node.
To build a performant application, you need to economize deliberately, at the development stage, on everything along the entire call chain: client -> network -> the software layers of your application -> network -> database or another node -> network -> the software layers of your application -> network -> client (and this is not the most complicated chain). An unjustified loss of a few milliseconds in each link of this chain can add up, at the output, to a loss of several seconds or even tens of seconds.
To see where in the application the performance drop occurred, you need some kind of timers for the different elements of the call chain. I think it is a good idea to return at least the most critical measurements in the response to the request. This greatly simplifies diagnosing problems, especially in distributed (multi-node) applications: you just look at the result of the query.
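One way to return such measurements in the response is the standard `Server-Timing` header, which browser dev tools display per request. A minimal sketch, assuming hypothetical phase names like `db` and `render`:

```javascript
// Sketch: time individual links of the call chain and expose them to the
// client via a Server-Timing header. Phase names are examples only.
function createTimer() {
  const spans = [];
  return {
    // Run `fn`, record how long it took under `name`, pass its result through.
    async measure(name, fn) {
      const start = process.hrtime.bigint();
      const result = await fn();
      const ms = Number(process.hrtime.bigint() - start) / 1e6;
      spans.push({ name, ms });
      return result;
    },
    // Format understood by browsers: "db;dur=12.3, render;dur=1.5"
    header() {
      return spans.map((s) => `${s.name};dur=${s.ms.toFixed(1)}`).join(', ');
    },
  };
}
```

In an HTTP handler you would wrap each link of the chain, e.g. `const rows = await t.measure('db', () => queryDb())`, and then set `res.setHeader('Server-Timing', t.header())` before responding.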