Note
# kubeadm init --control-plane-endpoint 192.168.0.201:6443 --upload-certs --pod-network-cidr=10.244.0.0/16
where:
--control-plane-endpoint specifies the address and port of the load balancer;
--upload-certs uploads to the cluster the certificates that must be shared by all control-plane nodes;
--pod-network-cidr=10.244.0.0/16 sets the address range of the internal (Kubernetes-deployed) pod network; keeping this value is recommended so that Flannel works correctly.
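For repeatable setups, the values above can be kept in shell variables so the same endpoint and CIDR are reused in later join and debugging commands. A minimal sketch (the values are the ones from the example; adjust them for your environment — the command is only printed here, not executed):

```shell
# Sketch: keep the endpoint and pod CIDR in one place so init and
# later commands stay consistent. Values mirror the example above.
ENDPOINT="192.168.0.201:6443"   # address:port of the load balancer
POD_CIDR="10.244.0.0/16"        # Flannel's default pod network

# Print the resulting init command (drop the echo to actually run it).
echo "kubeadm init --control-plane-endpoint ${ENDPOINT} --upload-certs --pod-network-cidr=${POD_CIDR}"
```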
…
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.0.201:6443 --token cvvui8.lz82ufip6cz89ar9 \
        --discovery-token-ca-cert-hash sha256:3ee0c550746a4a8e0abb6b59311f0fc301cdfeec00af8b26ed4598116c4d8184 \
        --control-plane --certificate-key e0cbf1dc4e282bf517e23887dace30b411cd739b1aab037b056f0c23e5b0a222

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.201:6443 --token cvvui8.lz82ufip6cz89ar9 \
        --discovery-token-ca-cert-hash sha256:3ee0c550746a4a8e0abb6b59311f0fc301cdfeec00af8b26ed4598116c4d8184
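The --discovery-token-ca-cert-hash value in the join commands above can be recomputed at any time from the cluster's CA certificate, which kubeadm stores at /etc/kubernetes/pki/ca.crt on the control-plane nodes. A sketch of that computation (ca_cert_hash is a hypothetical helper name):

```shell
# Sketch: recompute the sha256 discovery hash from a CA certificate,
# the same value kubeadm prints as --discovery-token-ca-cert-hash.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On a control-plane node:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
# prints the hex digest; prefix it with "sha256:" in the join command.
```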
Create the ~/.kube directory (as the regular user):
$ mkdir ~/.kube
Then copy the cluster configuration into it and hand it over to the user (as root):
# cp /etc/kubernetes/admin.conf ~<user>/.kube/config
# chown <user>: ~<user>/.kube/config
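The copy step can also be wrapped in a small function so the same logic works for any user or a non-default path. A sketch (install_kubeconfig is a hypothetical helper; the defaults match the paths used above):

```shell
# Sketch: copy a kubeconfig into a user's ~/.kube with safe permissions.
# install_kubeconfig is a hypothetical helper; defaults match the text above.
install_kubeconfig() {
  local src="${1:-/etc/kubernetes/admin.conf}"
  local dest_dir="${2:-$HOME/.kube}"
  mkdir -p "$dest_dir"
  cp "$src" "$dest_dir/config"
  chmod 600 "$dest_dir/config"   # the file grants full cluster access
}
# On a control-plane node, as root, for a user named "user":
#   install_kubeconfig /etc/kubernetes/admin.conf ~user/.kube
#   chown user: ~user/.kube/config
```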
Deploy the Flannel pod network (as the regular user):
$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Check that all control-plane pods have started (as the regular user):
$ kubectl get pod -n kube-system -w
NAME READY STATUS RESTARTS AGE
coredns-78fcd69978-c5swn 0/1 ContainerCreating 0 11m
coredns-78fcd69978-zdbp8 0/1 ContainerCreating 0 11m
etcd-master01 1/1 Running 0 11m
kube-apiserver-master01 1/1 Running 0 11m
kube-controller-manager-master01 1/1 Running 0 11m
kube-flannel-ds-qfzbw 1/1 Running 0 116s
kube-proxy-r6kj9 1/1 Running 0 11m
kube-scheduler-master01 1/1 Running 0 11m
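In scripts, instead of watching interactively with -w, readiness can be polled with a retry loop. A sketch with a generic helper (wait_until is hypothetical; the commented kubectl line assumes a configured kubeconfig):

```shell
# Sketch: poll a command until it succeeds, up to N attempts, 1s apart.
wait_until() {
  local tries="$1"; shift
  local i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example use: wait until CoreDNS pods report Ready (assumes kubectl works):
#   wait_until 120 kubectl wait --for=condition=Ready pod \
#     -l k8s-app=kube-dns -n kube-system --timeout=1s
```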
Join the remaining control-plane nodes to the cluster using the command printed in the kubeadm init output on the first control-plane node (run as root):
# kubeadm join 192.168.0.201:6443 --token cvvui8.lz82ufip6cz89ar9 \
--discovery-token-ca-cert-hash sha256:3ee0c550746a4a8e0abb6b59311f0fc301cdfeec00af8b26ed4598116c4d8184 \
--control-plane --certificate-key e0cbf1dc4e282bf517e23887dace30b411cd739b1aab037b056f0c23e5b0a222
Join the worker nodes using the corresponding command from the same kubeadm init output (run as root). If the token has expired, a fresh join command can be generated on a control-plane node with kubeadm token create --print-join-command.
# kubeadm join 192.168.0.201:6443 --token cvvui8.lz82ufip6cz89ar9 \
--discovery-token-ca-cert-hash sha256:3ee0c550746a4a8e0abb6b59311f0fc301cdfeec00af8b26ed4598116c4d8184
Check that all nodes have joined the cluster (on a control-plane node):
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube01 Ready <none> 23m v1.22.5
kube02 Ready <none> 15m v1.22.5
kube03 Ready <none> 2m30s v1.22.5
master01 Ready control-plane,master 82m v1.22.5
master02 Ready control-plane,master 66m v1.22.5
master03 Ready control-plane,master 39m v1.22.5
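The worker nodes show ROLES as <none> because kubeadm only labels control-plane nodes; the ROLES column simply reflects node-role.kubernetes.io/* labels. If a visible "worker" role is wanted, the labels can be added by hand. A sketch using the node names from the output above (the commands are printed rather than executed; drop the echo to apply them):

```shell
# Sketch: print the label commands that would mark the workers with a
# "worker" role; node names are the ones from the example cluster.
for node in kube01 kube02 kube03; do
  echo kubectl label node "$node" node-role.kubernetes.io/worker=
done
```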