Nginx
HeyAway, 2018-07-05 16:44:59

Can a balancer server serve static content?

Hello!
I've run into a problem. There are three servers: one powerful one and two identical weaker ones.
I configured an upstream and it works fine. The "most powerful" server became the balancer:

upstream backend {
    least_conn;
    server ip1 weight=3; # root - "the most powerful"
    server ip2 weight=2; # 2
    server ip3 weight=1; # 3
}

The "most powerful" is available at data.domain.name , and the rest at data%number%.domain.name .
Balancing plows only between two weak ones. Is it possible to somehow include "the most powerful" in the return of content? Or at least make it a "backup server". Is this real or can the "balancer server" only deal with load distribution?
On top of all this, there is also CloudFlare.
DNS:
data.domain.name (data - CNAME record)
data%number%.domain.name (data%number% - A record)
I only have three servers. The load needs to be split between all of them, not just dumped on the other two.
There is one case where I can see the "big" server also trying to serve content alongside the "small" ones: if I put domains in the upstream instead of IPs (data, data%number%). The "small" ones then still work properly, but as soon as balancing sends a request to the "large" server, CF spits out an error: "Error 1002: DNS points to Prohibited IP". But how can I remove the IP of the "big" server, if it gets in the way so much? The second solution to this problem (as written on the site) is to disable the CDN, i.e. leave DNS only. That doesn't work either.
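
By a "backup server" I mean something like nginx's backup flag on an upstream server; a rough sketch with the same placeholder IPs as above:

upstream backend {
    least_conn;
    server ip2 weight=2;
    server ip3 weight=1;
    server ip1 backup; # "the most powerful" would only receive traffic when ip2/ip3 are down
}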


2 answers
HeyAway, 2018-07-06
@HeyAway

My mistake - I hadn't Googled well enough.
The solution is on the devacademy website (the link gets cut).
Title: Load balancing for NGINX
Thanks to the author.
The server responsible for load balancing I call the master, but it can also act as a request handler, just like the other two. In this example we allow the main server to process requests as well.
So, another scenario: we also want the load balancer itself to process requests. For this we apply a little trick. Below is the original configuration on the balancer, followed by the modified one:

upstream balancer {
  server 192.168.1.100:80;
  server 192.168.1.106:80;
}

server {
  listen 192.168.1.100:80;
  server_name yoursite.com;
  error_log /var/log/yoursite.com-error.log;
  location / {
      proxy_pass http://balancer;
  }

}

server {
    access_log off;
    error_log /var/log/yoursite.com-error.log;
    listen 127.0.0.1:80;
    server_name  yoursite.com www.yoursite.com;

    # static files are served straight from disk
    location ~* \.(gif|jpg|jpeg|png|ico|wmv|3gp|avi|mpg|mpeg|mp4|flv|mp3|mid|js|css|wml|swf)$ {
      root   /var/www/yoursite.com;
      expires max;
      add_header Pragma public;
      add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }

    location / {
      root   /var/www/yoursite.com;
      index  index.php index.html index.htm;
    }
}

upstream balancer {
  server 192.168.1.100:80;
  server 192.168.1.106:80;
  server 127.0.0.1:80; # the balancer itself, via the localhost virtual host above
}

server {
  listen 192.168.1.100:80;
  server_name yoursite.com;
  error_log /var/log/yoursite.com-error.log;
  location / {
      proxy_pass http://balancer;
  }
}

As you can see, we made two changes: we added a virtual host listening on 127.0.0.1:80 and configured it to serve requests, and then we added that same address to the upstream list, so the Nginx instance running on the balancer itself now also processes requests.
At this point the Nginx load balancer should work without problems. Of course, there are many more methods and options for tuning it that are worth exploring.
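
Mapped back to the setup from the question, the upstream could look roughly like this (just a sketch: ip2/ip3 and the weights are the ones from the original config, and 127.0.0.1 is the localhost virtual host on the balancer):

upstream backend {
    least_conn;
    server 127.0.0.1:80 weight=3; # the balancer's own localhost vhost ("the most powerful")
    server ip2 weight=2;
    server ip3 weight=1;
}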

pfg21, 2018-07-05
@pfg21

If it has enough resources, then why not?
Balancing is just one of a web server's functions, and it is in no way tied to the others.
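
In other words, nothing stops the balancer from answering part of the requests itself; a minimal sketch (hypothetical paths, placeholder IPs from the question) where static files are served locally and everything else is proxied:

upstream backend {
    server ip2;
    server ip3;
}

server {
    listen 80;
    server_name data.domain.name;

    # static content is answered directly by the balancer
    location ~* \.(gif|jpg|jpeg|png|css|js)$ {
        root /var/www/data.domain.name; # hypothetical path
    }

    # everything else is distributed across the backends
    location / {
        proxy_pass http://backend;
    }
}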
