Note
# swapoff -a
and remove the corresponding swap entry in /etc/fstab.
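Both steps can be scripted. A minimal sketch, assuming a standard fstab layout (the helper name disable_swap_entry is ours; it edits the file you pass in and keeps a .bak copy):

```shell
#!/bin/sh
# Comment out any swap entry in an fstab-style file so that swap
# stays disabled after a reboot. The original is saved as <file>.bak.
disable_swap_entry() {
    sed -i.bak '/[[:space:]]swap[[:space:]]/s/^/#/' "$1"
}

# Usage (as root): swapoff -a && disable_swap_entry /etc/fstab
```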
Initialize the control plane, passing the pod network CIDR. Flannel's default manifest assumes 10.244.0.0/16:
# kubeadm init --pod-network-cidr=10.244.0.0/16
A different CIDR may be used as long as it does not overlap with the node network, for example:
# kubeadm init --pod-network-cidr=10.168.0.0/16
…
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.102:6443 --token rcbuiz.o0brh8chmu0i7ljw \
    --discovery-token-ca-cert-hash \
    sha256:b80186633ae51261c29ed4f5c2da68907b1e344f48a52022de413b3bd24191ce
Create the ~/.kube directory (as a regular user):
$ mkdir ~/.kube
Then copy the admin config into it and fix its ownership (as root):
# cp /etc/kubernetes/admin.conf ~<user>/.kube/config
# chown <user>: ~<user>/.kube/config
$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Alternatively, to use Calico instead of Flannel:
$ kubectl apply -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
$ kubectl apply -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
$ kubectl get pods --namespace kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-5rgmk          1/1     Running   0          38m
coredns-74ff55c5b-wjq4r          1/1     Running   0          38m
etcd-kube01                      1/1     Running   0          37m
kube-apiserver-kube01            1/1     Running   0          37m
kube-controller-manager-kube01   1/1     Running   0          37m
kube-flannel-ds-2gl6g            1/1     Running   0          92s
kube-proxy-tjmjt                 1/1     Running   0          38m
kube-scheduler-kube01            1/1     Running   0          37m
The coredns pods must be in the Running state. The number of kube-flannel and kube-proxy pods depends on the total number of nodes.
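While waiting for the add-on pods to come up, it can be convenient to list only the pods that are not yet Running. A small filter over the kubectl output (the helper name not_running is ours):

```shell
# Print NAME and STATUS of every pod whose STATUS column is not
# "Running", skipping the header line of `kubectl get pods` output.
not_running() {
    awk 'NR > 1 && $3 != "Running" { print $1, $3 }'
}

# Usage: kubectl get pods --namespace kube-system | not_running
```

An empty result means every pod in the namespace has reached the Running state.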
On each worker node, run:
# kubeadm join <ip address>:<port> --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=SystemVerification
This command was printed when kubeadm init was executed on the master node.
# kubeadm join 192.168.0.102:6443 --token rcbuiz.o0brh8chmu0i7ljw \
    --discovery-token-ca-cert-hash \
    sha256:b80186633ae51261c29ed4f5c2da68907b1e344f48a52022de413b3bd24191ce
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Note
$ kubeadm token list
TOKEN                     TTL   EXPIRES                USAGES
rcbuiz.o0brh8chmu0i7ljw   22h   2021-12-18T11:49:53Z   authentication,signing
By default, a token is valid for 24 hours. If a new node needs to be added to the cluster after this period, create a new token:
$ kubeadm token create
(Running kubeadm token create --print-join-command prints a complete, ready-to-use join command.)
If the value for --discovery-token-ca-cert-hash is unknown, it can be obtained by running the following command (on the master node):
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'
b80186633ae51261c29ed4f5c2da68907b1e344f48a52022de413b3bd24191ce
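The pipeline above can be wrapped in a small function, so the hash can be recomputed for any CA certificate file (the name ca_cert_hash is ours):

```shell
# Compute the sha256 discovery hash of the public key inside an
# x509 certificate, in the form expected by kubeadm join.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" |
        openssl rsa -pubin -outform der 2>/dev/null |
        openssl dgst -sha256 -hex | sed 's/^.* //'
}

# Usage: ca_cert_hash /etc/kubernetes/pki/ca.crt
```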
If an IPv6 address is used for <control-plane-host>:<control-plane-port>, the address must be enclosed in square brackets:
[fd00::101]:2073
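The bracketing rule can be expressed as a tiny helper (the name format_endpoint is ours): any host containing a colon is treated as IPv6 and wrapped in brackets, while hostnames and IPv4 addresses pass through unchanged:

```shell
# Join a host and a port into the <host>:<port> form kubeadm
# expects, bracketing IPv6 addresses.
format_endpoint() {
    case $1 in
        *:*) printf '[%s]:%s\n' "$1" "$2" ;;
        *)   printf '%s:%s\n' "$1" "$2" ;;
    esac
}
```

For example, format_endpoint fd00::101 2073 prints [fd00::101]:2073, and format_endpoint 192.168.0.102 6443 prints 192.168.0.102:6443.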
$ kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
kube01   Ready    control-plane,master   42m     v1.22.5
kube02   Ready    <none>                 2m43s   v1.22.5
kube03   Ready    <none>                 24s     v1.22.5
or
$ kubectl get nodes -o wide
$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.0.102:6443
CoreDNS is running at https://192.168.0.102:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
To view detailed information about a node:
$ kubectl describe node kube03