Docker
emashev, 2019-11-20 17:26:19

How do I set up 3 master nodes in Kubernetes?

Greetings. I have 3 machines on which I plan to bring up Kubernetes 1.16.
We currently run a Docker Swarm cluster, also on 3 nodes, each of which is a manager and a load balancer. A VIP address lives on one node at a time; when that machine dies, the VIP moves to another node and everything keeps working.
I am just starting to learn Kubernetes, mainly because many projects already ship ready-made pod.yml manifests for their applications, and I want to pick up the new technology.
So I decided to mirror the logic of the current Docker Swarm setup: a VIP address and 3 master nodes.

~ kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
tech-app01   Ready    master   21d   v1.16.2
tech-app02   Ready    master   21d   v1.16.2
tech-app03   Ready    master   21d   v1.16.2

~ kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-9xzcj                1/1     Running   1          21d
coredns-5644d7b6d9-wtkwm                1/1     Running   1          21d
etcd-tech-cl-app01                      1/1     Running   1          21d
etcd-tech-cl-app02                      1/1     Running   1          21d
etcd-tech-cl-app03                      1/1     Running   1          21d
kube-apiserver-tech-cl-app01            1/1     Running   1          21d
kube-apiserver-tech-cl-app02            1/1     Running   1          21d
kube-apiserver-tech-cl-app03            1/1     Running   1          21d
kube-controller-manager-tech-cl-app01   1/1     Running   2          21d
kube-controller-manager-tech-cl-app02   1/1     Running   1          21d
kube-controller-manager-tech-cl-app03   1/1     Running   1          21d
kube-proxy-knvfk                        1/1     Running   1          21d
kube-proxy-rz27m                        1/1     Running   1          21d
kube-proxy-wwtfq                        1/1     Running   1          21d
kube-scheduler-tech-cl-app01            1/1     Running   2          21d
kube-scheduler-tech-cl-app02            1/1     Running   1          21d
kube-scheduler-tech-cl-app03            1/1     Running   1          21d
weave-net-jkd5f                         2/2     Running   3          21d
weave-net-nm4qn                         2/2     Running   3          21d
weave-net-strqt                         2/2     Running   3          21d

But when I try to run an application across the 3 nodes, I get an error.
I am using this example from the official docs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-64599f457f-dhjc7   0/1     Pending   0          4m48s
nginx-deployment-64599f457f-qdbh7   0/1     Pending   0          4m48s
nginx-deployment-64599f457f-zqn7b   0/1     Pending   0          4m48s

All the pods are stuck in Pending status.
Looking at the events:
tech-app01:/opt/kuber-apps/test:~$ kubectl get events
LAST SEEN   TYPE      REASON              OBJECT                                   MESSAGE
<unknown>   Warning   FailedScheduling    pod/nginx-deployment-64599f457f-dhjc7    0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
<unknown>   Warning   FailedScheduling    pod/nginx-deployment-64599f457f-dhjc7    0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
<unknown>   Warning   FailedScheduling    pod/nginx-deployment-64599f457f-qdbh7    0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
<unknown>   Warning   FailedScheduling    pod/nginx-deployment-64599f457f-qdbh7    0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
<unknown>   Warning   FailedScheduling    pod/nginx-deployment-64599f457f-zqn7b    0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
<unknown>   Warning   FailedScheduling    pod/nginx-deployment-64599f457f-zqn7b    0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
7m52s       Normal    SuccessfulCreate    replicaset/nginx-deployment-64599f457f   Created pod: nginx-deployment-64599f457f-dhjc7
7m51s       Normal    SuccessfulCreate    replicaset/nginx-deployment-64599f457f   Created pod: nginx-deployment-64599f457f-qdbh7
7m51s       Normal    SuccessfulCreate    replicaset/nginx-deployment-64599f457f   Created pod: nginx-deployment-64599f457f-zqn7b
7m52s       Normal    ScalingReplicaSet   deployment/nginx-deployment              Scaled up replica set nginx-deployment-64599f457f to 3

Do I understand correctly that it cannot schedule the 3 containers because it does not see any worker nodes? Is that right?
Is there a way to run both the master and worker roles on the same node? After all, minikube somehow works on a single machine...
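The FailedScheduling events above point at node taints. A quick way to confirm (a sketch, assuming the default kubeadm setup; node names taken from the question's cluster):

```shell
# List the taints on each node; kubeadm puts a NoSchedule taint
# on every control-plane node by default, which keeps regular
# workloads off the masters.
kubectl describe node tech-app01 tech-app02 tech-app03 | grep -i taints
```

On a stock kubeadm 1.16 cluster this typically shows `node-role.kubernetes.io/master:NoSchedule` on each master, which matches the "3 node(s) had taints that the pod didn't tolerate" message.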

Answer the question


1 answer(s)
Dmitry, 2019-11-20
@emashev

Try allowing the master nodes to run regular workloads by removing the master taint, as described here: https://blog.alexellis.io/kubernetes-in-10-minutes/
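A minimal sketch of that fix for a default kubeadm setup (run once against the cluster):

```shell
# Remove the NoSchedule taint that kubeadm places on control-plane
# nodes; the trailing "-" after the taint key means "remove".
# After this, regular pods can be scheduled on all three masters.
kubectl taint nodes --all node-role.kubernetes.io/master-
```

Alternatively, instead of untainting the masters cluster-wide, an individual Deployment can declare a matching `tolerations` entry in its pod spec so that only that workload is allowed onto the masters.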
