Highload
AdaStreamer, 2017-03-11 23:18:48

High availability for a front server (proxy)?

For high availability and load balancing, the usual practice is to run processes as a cluster across several nodes, so that if any one node fails, the others keep serving. For example, with Docker Swarm you can create a service with even a single replica, and if its node goes down, the replica is restarted on another node.
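For illustration, a minimal sketch of such a single-replica service using the Docker SDK for Python (assuming the daemon is already a Swarm manager; the image and service name are just placeholders):

```python
import docker

# Connect to the local Docker daemon (must be a Swarm manager node).
client = docker.from_env()

# Create a service with a single replica; if the node running the task
# fails, Swarm reschedules the replica on another available node.
service = client.services.create(
    "nginx:latest",   # placeholder image
    name="web",       # placeholder service name
    mode=docker.types.ServiceMode("replicated", replicas=1),
)
print(service.id)
```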
Up to this point everything is clear. Next, a component such as the front server, or load balancer, comes into play: part of the traffic-management logic lives there, for example detecting a failed node, bringing a new one into rotation, and so on.
But it seems that a front server running on a single node by itself undermines the high availability of the whole system, because if that one component fails, it no longer matters how many replicas of anything are running behind it: the requests will never reach them.
In the classic setup, the client sends a request to a specific domain name. Am I right in thinking that for high availability the front server itself must also run in several replicas, and that there must additionally be a component that can reach the DNS server (via the necessary callbacks/API) and adjust the records when one of the front servers goes down?
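Roughly, something like the sketch below is what I have in mind (the health check is a plain TCP connect; update_dns_records is a hypothetical placeholder for whatever API a real DNS provider exposes, and the addresses are made up):

```python
import socket
import time

# Addresses of the front server replicas (made-up example values).
FRONT_SERVERS = ["203.0.113.10", "203.0.113.11"]
DOMAIN = "example.com"

def is_alive(ip, port=80, timeout=2.0):
    """Health check: can we open a TCP connection to the balancer?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def update_dns_records(domain, ips):
    """Hypothetical placeholder: push the list of healthy IPs to the
    DNS provider via its API (every provider has its own interface)."""
    print(f"would set A records of {domain} to {ips}")

while True:
    healthy = [ip for ip in FRONT_SERVERS if is_alive(ip)]
    if healthy:
        update_dns_records(DOMAIN, healthy)
    time.sleep(30)  # re-check periodically
```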
What topology of components (nodes) do you use in your projects?

1 answer
dinegnet, 2017-03-12

If we are talking about a classic web service accessed by browsers, there is no way around it; you have to:
Keep the balancer brutally simple, to minimize the chance of errors.
Debug the balancer much more thoroughly than the other components.
Additionally implement a layer at the network level that automatically switches traffic to a working balancer, for example based on BGP, or by publishing several IP addresses in DNS, and so on (see the sketch after this list).
Even so, problems cannot be avoided entirely: manual monitoring and manual failover to the standby are still needed.
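As a minimal sketch of what the "several IP addresses in DNS" option relies on: the client resolves every published address and simply moves on to the next one when a balancer does not answer (the domain is a placeholder; socket.create_connection already iterates over resolved addresses internally, it is only spelled out here for clarity):

```python
import socket

def connect_to_any(host, port=80, timeout=3.0):
    """Resolve every address published for the host and try them in order,
    falling through to the next one when a balancer is unreachable."""
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM
    ):
        try:
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError as exc:
            last_error = exc
    raise ConnectionError(f"no balancer for {host} is reachable") from last_error

# Placeholder domain; in DNS it would carry one A record per balancer.
conn = connect_to_any("example.com")
conn.close()
```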
If we are talking about an API accessed by purpose-built clients, the balancing is done by the clients themselves. This can be made far more flexible, and there is no single point of failure, since there are many clients.
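For the client-balanced API case, a minimal sketch of the idea (the endpoint URLs are placeholders; a real client would add health tracking, backoff, and so on):

```python
import itertools
import urllib.request
import urllib.error

# Placeholder list of API endpoints; each points at an independent server.
ENDPOINTS = [
    "https://api1.example.com",
    "https://api2.example.com",
    "https://api3.example.com",
]
_rotation = itertools.cycle(range(len(ENDPOINTS)))

def api_get(path, timeout=3.0):
    """Round-robin across endpoints and fail over to the next one
    when the chosen server does not answer."""
    start = next(_rotation)
    errors = []
    for offset in range(len(ENDPOINTS)):
        base = ENDPOINTS[(start + offset) % len(ENDPOINTS)]
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            errors.append((base, exc))
    raise ConnectionError(f"all endpoints failed: {errors}")

# Usage: data = api_get("/v1/status")
```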
