Docker
sailorpapay, 2020-05-01 00:09:41

How do I keep traffic from being routed to a pod until the containers in it are fully up?

apiVersion: v1
kind: Service
metadata:
  name: api
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-3:4941388
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: ojowo
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: php-fpm
spec:
  selector:
    app: ojowo
  ports:
    - protocol: TCP
      port: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-api
spec:
  selector:
    app: ojowo
  ports:
    - protocol: TCP
      port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ojowo-deployment
  labels:
    app: ojowo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ojowo
  template:
    metadata:
      labels:
        app: ojowo
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 128Mi
        volumeMounts:
        - name: shared-data
          mountPath: /var/www
        livenessProbe:
          httpGet:
            path: /
            port: 80
          failureThreshold: 2
          initialDelaySeconds: 20
          periodSeconds: 5
        readinessProbe:
          initialDelaySeconds: 15
          exec:
            command:
              - find
              - /var/www/public/live777.html

      - name: php-fpm
        image: php
        resources:
          limits:
            memory: 370Mi
          requests:
            memory: 256Mi
        volumeMounts:
        - name: shared-data
          mountPath: /var/www
        ports:
        - containerPort: 9000


This is the deployment we have.

It runs with replicas: 3.

The problem is that during an update (for example, when scaling the number of pods up or down), the load balancer starts routing traffic to pods that are not yet ready.

How can this be fixed?

1 answer
Saboteur, 2020-05-01
@saboteur_kiev

Well, it's simple.
Instead of this nonsense

readinessProbe:
  initialDelaySeconds: 15
  exec:
    command:
      - find
      - /var/www/public/live777.html

do an HTTP probe, or even just a TCP socket check against your service, as the readiness probe. Until the probe succeeds, the pod is not added to the Service's endpoints, so traffic will not be routed to this container.
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 30
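
For the HTTP variant, a minimal sketch (assuming nginx serves the asker's live777.html marker file at the site root, i.e. that /var/www/public is the docroot — the URL path may differ in your setup):

readinessProbe:
  httpGet:
    path: /live777.html   # assumed URL; adjust to how nginx exposes the file
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 30
  failureThreshold: 3

An httpGet probe is stricter than tcpSocket: the kubelet only counts a 2xx/3xx response as success, so it verifies that nginx actually serves the page, not just that the port accepts connections.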
