Nginx
nano_e_t_4, 2021-02-20 20:53:26

How to forward a WebSocket through ingress?

Hello. While moving a microservice to Kubernetes, I ran into the following problem: the nginx ingress (presumably) breaks the socket.io session with the Go microservice.

What happens: a request for the /socket.io/ location arrives at the k8s.my-domain.com domain. The backend handles the request, creates a socket.io connection, and sends valid data back through the ingress (the data shows up on the frontend). But then the socket pings stop going through, and the backend reports a timeout when it tries to send data to the nginx ingress. I can't figure out why this happens. If anyone has run into this or knows what's going on, please help.

I have already tried playing with Session Affinity on the backend side, with the ingress timeouts, with the socket settings, and with exposing the ingress as a NodePort, so far without success.

Ingress configuration:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ms-frontend
  namespace: awesomeNS
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/websocket-services: ms-backend
    nginx.org/websocket-services: ms-backend
    nginx.org/server-snippets: |
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;
      proxy_set_header HTTPS $https;
      proxy_http_version 1.1;
      proxy_read_timeout         600s;
      #proxy_set_header Upgrade $http_upgrade;
      #proxy_set_header Connection "upgrade";

spec:
  rules:
  - host: k8s.my-domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: ms-frontend
          servicePort: 80
      - path: /socket.io/
        backend:
          serviceName: ms-backend
          servicePort: 4400
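
A side note on the annotation mix above: nginx.org/websocket-services and nginx.org/server-snippets are read by NGINX Inc's controller, while the nginx.ingress.kubernetes.io/* annotations belong to the community kubernetes/ingress-nginx controller that kubernetes.io/ingress.class: "nginx" normally selects, so one of the two sets is most likely being ignored. The community controller performs the WebSocket Upgrade handshake on its own and usually needs only the raised proxy-read-timeout/proxy-send-timeout; if the custom headers are still wanted, a rough sketch for that controller (assuming it is the one actually running) would use its configuration-snippet annotation instead:

  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    # the community controller's per-location analogue of nginx.org/server-snippets
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;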


**Backend log**:
{"level":"debug","ts":1613842562.1601045,"caller":"sioconnector/sioconnector.go:47","msg":"client connected:","id":"b"}
{"level":"debug","ts":1613842622.160182,"caller":"sioconnector/sioconnector.go:52","msg":"client error:","error":"write tcp 10.back-ip-address:4400->10.ingress-ip-address:39470: i/o timeout"}
{"level":"debug","ts":1613842622.160477,"caller":"sioconnector/sioconnector.go:52","msg":"client error:","error":"write tcp 10.back-ip-address:4400->10.ingress-ip-address:39470: i/o timeout"}
{"level":"debug","ts":1613842622.160499,"caller":"sioconnector/sioconnector.go:52","msg":"client error:","error":"write tcp 10.back-ip-address:4400->10.ingress-ip-address:39470: i/o timeout"}
{"level":"debug","ts":1613842622.1605527,"caller":"sioconnector/sioconnector.go:56","msg":"client connected:","id":"b","reason":"client namespace disconnect"}

{"level":"debug","ts":1613842645.158182,"caller":"sioconnector/sioconnector.go:47","msg":"client connected:","id":"c"}
{"level":"debug","ts":1613842705.159041,"caller":"sioconnector/sioconnector.go:52","msg":"client error:","error":"write tcp 10.back-ip-address:4400->10.ingress-ip-address:39598: i/o timeout"}
{"level":"debug","ts":1613842705.1603222,"caller":"sioconnector/sioconnector.go:52","msg":"client error:","error":"write tcp 10.back-ip-address:4400->10.ingress-ip-address:39598: i/o timeout"}
{"level":"debug","ts":1613842705.160439,"caller":"sioconnector/sioconnector.go:52","msg":"client error:","error":"write tcp 10.back-ip-address:4400->10.ingress-ip-address:39598: i/o timeout"}
{"level":"debug","ts":1613842705.1612012,"caller":"sioconnector/sioconnector.go:52","msg":"client error:","error":"write tcp 10.back-ip-address:4400->10.ingress-ip-address:39598: i/o timeout"}
{"level":"debug","ts":1613842705.1615343,"caller":"sioconnector/sioconnector.go:56","msg":"client connected:","id":"c","reason":"client namespace disconnect"}


Backend Service configuration:

Name:                     ms-backend
Namespace:                awesomeNS
Labels:                   app.kubernetes.io/instance=ms-backend
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ms-backend
                          app.kubernetes.io/version=1.0.1
                          helm.sh/chart=ms-backend-0.1.0
Annotations:              meta.helm.sh/release-name: ms-backend
                          meta.helm.sh/release-namespace: default
Selector:                 app.kubernetes.io/instance=ms-backend,app.kubernetes.io/name=ms-backend
Type:                     NodePort
IP:                       10.0.0.10
Port:                     http  4400/TCP
TargetPort:               http/TCP
NodePort:                 http  30335/TCP
Endpoints:                back-ip-address:4400
Session Affinity:         ClientIP
External Traffic Policy:  Cluster
Events:                   <none>
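
For reference, the affinity-related part of this Service corresponds to roughly the following manifest (reconstructed from the describe output above; timeoutSeconds shows the Kubernetes default). Keep in mind that ClientIP affinity keys on the source IP as the Service sees it: for traffic coming through the ingress, that is the ingress-nginx pod's IP, not the browser's, so the cookie-based affinity annotations on the Ingress are what actually pin a browser to a pod.

apiVersion: v1
kind: Service
metadata:
  name: ms-backend
  namespace: awesomeNS
spec:
  type: NodePort
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # Kubernetes default sticky window
  selector:
    app.kubernetes.io/instance: ms-backend
    app.kubernetes.io/name: ms-backend
  ports:
  - name: http
    port: 4400
    targetPort: http
    protocol: TCP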


1 answer
Dmitry, 2021-02-20
@q2digger

Hello. I haven't run into this exact problem, but here is a description of a similar one:
https://github.com/kubernetes/ingress-nginx/issues/3746
and a line in the documentation that relates to it:
https://kubernetes.github.io/ingress-nginx/user-gu...
It says:

If the NGINX ingress controller is exposed with a service type=LoadBalancer make sure the protocol between the loadbalancer and NGINX is TCP.

Just check how that protocol is specified in your deployment.
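
To illustrate the quoted advice, here is a hypothetical sketch of the controller's Service with the protocol pinned to TCP. The aws-load-balancer-backend-protocol annotation is an AWS-specific example; other clouds have their own equivalents, and the names here are illustrative.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # have the cloud load balancer pass plain TCP instead of terminating
    # HTTP in front of NGINX, so WebSocket upgrades survive the extra hop
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  - name: https
    port: 443
    targetPort: https
    protocol: TCP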
