Note
Kubernetes requires swap to be disabled; otherwise the kubelet will refuse to start. Turn it off:
# swapoff -a
and remove the corresponding swap entry from /etc/fstab so that swap stays off after a reboot.
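Removing the swap entry from /etc/fstab can be scripted; the following is a minimal sketch (the function name is made up for illustration) that comments out every line whose fields include a swap filesystem type, keeping a backup:

```shell
# Comment out every swap entry in the given fstab file.
# A backup of the original is kept with a .bak suffix.
disable_swap_entries() {
  # Match uncommented lines that contain a whitespace-delimited "swap" field
  # and prefix them with '#'.
  sed -i.bak -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' "$1"
}
```

Run it as root against /etc/fstab, then confirm with `grep swap /etc/fstab` that the remaining swap lines are commented out.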
# kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification
where --pod-network-cidr sets the address range for the pod network (10.244.0.0/16 is the range expected by Flannel), and --ignore-preflight-errors=SystemVerification skips the system verification pre-flight check. On success the output ends with:
…
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.201:6443 --token cgmqh4.26l6pnhqagslwvae \
    --discovery-token-ca-cert-hash \
    sha256:9571e4fde1bed9ee43ed1cba98b5c2bca5184f99f54806f1a84657d161e9f0a1
To administer the cluster as a regular user, create the ~/.kube directory and copy the admin configuration into it:
$ mkdir ~/.kube
# cp /etc/kubernetes/admin.conf ~<user>/.kube/config
# chown <user>: ~<user>/.kube/config
Install the Flannel pod network add-on:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
The output lists the names of all created resources. Verify that everything is running:
$ kubectl get pods --namespace kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-5rgmk            1/1     Running   0          38m
coredns-74ff55c5b-wjq4r            1/1     Running   0          38m
etcd-master01                      1/1     Running   0          37m
kube-apiserver-master01            1/1     Running   0          37m
kube-controller-manager-master01   1/1     Running   0          37m
kube-flannel-ds-2gl6g              1/1     Running   0          92s
kube-proxy-tjmjt                   1/1     Running   0          38m
kube-scheduler-master01            1/1     Running   0          38m
The coredns pods must be in the Running state. The number of kube-flannel and kube-proxy pods depends on the total number of nodes.
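The check on the STATUS column can be scripted; a minimal sketch (the function name is made up), fed from the output of kubectl get pods --namespace kube-system --no-headers:

```shell
# Print pods whose STATUS (third column) is not "Running"
# and finish with a count; reads pod listing lines on stdin.
not_running() {
  awk '$3 != "Running" { n++; print } END { printf "%d pod(s) not Running\n", n }'
}
# Usage:
# kubectl get pods --namespace kube-system --no-headers | not_running
```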
On each worker node, run:
# kubeadm join <ip address>:<port> --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=SystemVerification
This command was printed when kubeadm init was run on the master node. In this case:
# kubeadm join 192.168.0.201:6443 --token cgmqh4.26l6pnhqagslwvae \
--discovery-token-ca-cert-hash \
sha256:9571e4fde1bed9ee43ed1cba98b5c2bca5184f99f54806f1a84657d161e9f0a1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Note
$ kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES
cgmqh4.26l6pnhqagslwvae   20h   2021-06-25T11:52:05+02:00   authentication,signing
By default, a token is valid for 24 hours. If a new node needs to be added to the cluster after that period expires, create a new token:
$ kubeadm token create
Alternatively, kubeadm token create --print-join-command prints a complete, ready-to-use join command, including a fresh token and the CA certificate hash.
If the value for --discovery-token-ca-cert-hash is unknown, it can be obtained by running the following command on the master node:
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
9571e4fde1bed9ee43ed1cba98b5c2bca5184f99f54806f1a84657d161e9f0a1
If the control plane endpoint <control-plane-host>:<control-plane-port> uses an IPv6 address, the address must be enclosed in square brackets:
[fd00::101]:2073
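The bracketing rule can be captured in a small helper (purely illustrative; the function name is made up): given a host and a port, it emits host:port, wrapping IPv6 literals in square brackets:

```shell
# Hypothetical helper: build the <control-plane-host>:<control-plane-port>
# endpoint for kubeadm join, bracketing IPv6 addresses as required.
endpoint() {
  case $1 in
    *:*) printf '[%s]:%s\n' "$1" "$2" ;;  # contains colons: IPv6 literal
    *)   printf '%s:%s\n' "$1" "$2" ;;    # plain IPv4 address or hostname
  esac
}
endpoint 192.168.0.201 6443   # → 192.168.0.201:6443
endpoint fd00::101 2073       # → [fd00::101]:2073
```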
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
docker03 Ready <none> 160m v1.20.2
master01 Ready control-plane,master 3h3m v1.20.2
or
$ kubectl get nodes -o wide
$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.0.201:6443
KubeDNS is running at https://192.168.0.201:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
To view detailed information about a node:
$ kubectl describe node docker03