Nginx
fdroid, 2018-03-10 23:24:15

How to implement external web access on one IP to different servers?

Hello.
Given: one white (public) IP, and three servers hiding behind the router that need to be reachable from a browser over HTTPS.
1) Server A. A web application runs on it, the domain domain.example is attached, and the certificate is commercial. It runs on a very specific, specially tailored build of nginx, so there is practically nothing to configure. It uses non-standard ports instead of 80 and 443, i.e. access like domain.example:1234 works, but you can't get certificates from LE, because that requires open ports 80 and 443 - hence the purchased certificate.
2) Server B. Another web application, with the subdomain second.domain.example attached (a subdomain of Server A's domain). Certificate from Let's Encrypt, because there is no wildcard and never will be. Runs on Apache, again a tailored build, so there is nothing to edit there either.
3) Server C. Also a web application, subdomain third.domain.example, LE certificate, standard Apache.
And now access to this whole zoo needs to be organized through a single IP. A reverse proxy is needed; preferably it would run in a virtual machine on one of the servers rather than on a separate piece of hardware (undesirable). At the same time, the certificates must remain on the servers themselves, not on the proxy, and there must be a way to map non-standard ports to standard ones (Server A).
So the question is: what software should be used for the proxying? I know of three options. Apache with mod_proxy - but it requires the certificates to reside on the proxy itself, with the corresponding virtual hosts configured. Nginx with proxy_pass? To be honest, I still don't understand whether it can do all of this. I know you can configure an nginx proxy to obtain certificates from LE and then "distribute" them to the necessary servers, but that is not what is needed here. I also found haproxy. I nearly wore myself out yesterday setting it up, and I still don't understand where the catch is: it seems to work as intended, but from time to time one server or another becomes unavailable, or individual pages do. And besides, as far as I understand, it is more of a load balancer than a proxy. Or not?
Did I miss something else?
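For reference, the requirement that certificates stay on the backends can be met with SNI-based TCP passthrough, for example nginx's stream module with ssl_preread: the proxy routes by the server name in the TLS ClientHello without terminating TLS, so each backend keeps serving its own certificate. A minimal sketch (the backend IP addresses are assumptions, only the ports follow the question):

```nginx
# nginx.conf, top level (outside the http {} block).
# Requires nginx built with ngx_stream_ssl_preread_module.
stream {
    # Pick a backend based on the SNI name; TLS is NOT terminated here,
    # so the certificates remain on the servers themselves.
    map $ssl_preread_server_name $backend {
        domain.example          192.168.0.10:1234;  # Server A, non-standard port
        second.domain.example   192.168.0.11:443;   # Server B
        third.domain.example    192.168.0.12:443;   # Server C
        default                 192.168.0.10:1234;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```

This also covers the non-standard-port case transparently: externally everything is reachable on 443, and the map rewrites the destination port per hostname.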


1 answer(s)
ky0, 2018-03-10

"on a very specific, specially tailored version of nginx"

What is this specificity?
Use webroot authorization - in that case it doesn't matter which ports you use internally.
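Webroot authorization on the front-end proxy might look roughly like this (the webroot path is an assumption):

```nginx
# On the proxy: answer ACME HTTP-01 challenges on port 80 ourselves,
# redirect everything else to HTTPS. The backend's internal ports
# never matter for certificate issuance.
server {
    listen 80;
    server_name domain.example;

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;   # certbot writes challenge files here
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
# Then, for example:
#   certbot certonly --webroot -w /var/www/letsencrypt -d domain.example
```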
What is this requirement based on?
In essence, yes, nginx can do all of this; proxy_pass is configured in just three lines, even in such an exotic variant as HTTPS -> HTTPS on a custom port. Another matter is that nobody does it that way, for obvious reasons. Quite the opposite: people set up a single HTTPS gateway with nginx (or a separate one per application) where HTTPS is actually terminated, install a service for obtaining and renewing certificates from LE, and forward unencrypted HTTP to the inside.
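The terminating-gateway layout described above might look roughly like this for one of the subdomains (certificate paths follow certbot's usual layout; the backend address is an assumption):

```nginx
# HTTPS is terminated on the gateway; the backend receives plain HTTP.
server {
    listen 443 ssl;
    server_name second.domain.example;

    ssl_certificate     /etc/letsencrypt/live/second.domain.example/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/second.domain.example/privkey.pem;

    location / {
        proxy_pass http://192.168.0.11:80;       # assumed backend address
        proxy_set_header Host $host;             # preserve the requested hostname
        proxy_set_header X-Real-IP $remote_addr; # pass the client IP along
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

One such server block per hostname, all sharing the single public IP; nginx picks the block by SNI/Host.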
