# apt-get install kubernetes1.31-kubeadm kubernetes1.31-kubelet kubernetes1.31-crio cri-tools1.31
Start and enable the crio and kubelet services:
# systemctl enable --now crio
# systemctl enable --now kubelet
Note
All cluster nodes must be able to resolve each other's names (for example, via /etc/hosts);
Note
Swap must be disabled:
# swapoff -a
Then remove the corresponding line from /etc/fstab so that swap stays disabled after a reboot.
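The fstab edit can be scripted. A minimal sketch, operating on a sample copy rather than the real /etc/fstab so it is safe to run anywhere; on a node you would point the sed command at /etc/fstab itself (commenting the entry out rather than deleting it keeps it recoverable):

```shell
# Create a sample fstab (stand-in for /etc/fstab):
cat > /tmp/fstab.sample <<'EOF'
UUID=1111-2222 / ext4 defaults 0 1
/dev/sda2 none swap sw 0 0
EOF

# Comment out every non-comment line whose filesystem type is swap:
sed -i -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /tmp/fstab.sample

grep swap /tmp/fstab.sample
# -> #/dev/sda2 none swap sw 0 0
```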
Initialize the cluster. If Flannel will be used as the pod network:
# kubeadm init --pod-network-cidr=10.244.0.0/16
If Calico will be used:
# kubeadm init --pod-network-cidr=192.168.0.0/16
To pin the Kubernetes version and pull the control plane images from the ALT registry, with Flannel:
# kubeadm init --pod-network-cidr=10.244.0.0/16 \
--kubernetes-version=1.31.10 \
--image-repository=registry.altlinux.org/p11
With Calico:
# kubeadm init --pod-network-cidr=192.168.0.0/16 \
--kubernetes-version=1.31.10 \
--image-repository=registry.altlinux.org/p11
--pod-network-cidr — the IP address range for the pod subnet;
--image-repository — the registry to pull the control plane images from (default: registry.k8s.io);
--kubernetes-version — pins the version of the control plane components.
On success, kubeadm prints the message:
Your Kubernetes control-plane has initialized successfully!
as well as the commands for configuring access and adding nodes:
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.103:6443 --token 4o84lf.yerap79j66r4riwi \
    --discovery-token-ca-cert-hash sha256:35b569232b0e81ca19cf23948cb7b3580e17be7059bbf242c4fb0e481afd8ff6
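The join command at the end of this output is worth saving. Its parameters can also be pulled out of a captured log later; a sketch using the sample values printed above (the token is always six characters, a dot, then sixteen characters):

```shell
# Sample line as printed by `kubeadm init` above:
join_line='kubeadm join 192.168.0.103:6443 --token 4o84lf.yerap79j66r4riwi --discovery-token-ca-cert-hash sha256:35b569232b0e81ca19cf23948cb7b3580e17be7059bbf242c4fb0e481afd8ff6'

# Extract the bootstrap token and the CA certificate hash:
token=$(printf '%s' "$join_line" | grep -oE '[a-z0-9]{6}\.[a-z0-9]{16}')
ca_hash=$(printf '%s' "$join_line" | grep -oE 'sha256:[0-9a-f]{64}')

echo "token:   $token"
echo "ca hash: $ca_hash"
```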
Create the ~/.kube directory (as the regular user):
$ mkdir ~/.kube
# cp /etc/kubernetes/admin.conf /home/<user>/.kube/config
# chown <user>: /home/<user>/.kube/config
Deploy the Flannel pod network:
$ kubectl apply -f https://gitea.basealt.ru/alt/flannel-manifests/raw/branch/main/p11/latest/kube-flannel.yml
Go to the /etc/cni/net.d/ directory:
# cd /etc/cni/net.d/
Create 100-crio-bridge.conflist from the shipped sample:
# cp 100-crio-bridge.conflist.sample 100-crio-bridge.conflist
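The sample shipped with CRI-O defines a simple bridge network. Its contents typically resemble the following; the exact subnet and cniVersion vary between CRI-O releases, so treat this as an illustration of what the file provides, not its exact text:

```json
{
  "cniVersion": "1.0.0",
  "name": "crio",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "ranges": [
          [{ "subnet": "10.85.0.0/16" }]
        ]
      }
    }
  ]
}
```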
For Calico, apply the operator and custom resource manifests:
$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/refs/tags/v3.25.0/manifests/tigera-operator.yaml
$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/refs/tags/v3.25.0/manifests/custom-resources.yaml
Check the status of the pods:
$ kubectl get pods --namespace kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-7c65d6cfc9-f6hjb         1/1     Running   0          4m49s
coredns-7c65d6cfc9-g7hbf         1/1     Running   0          4m49s
etcd-kube03                      1/1     Running   0          4m51s
kube-apiserver-kube03            1/1     Running   0          4m54s
kube-controller-manager-kube03   1/1     Running   0          4m56s
kube-proxy-ncdxh                 1/1     Running   0          4m50s
kube-scheduler-kube03            1/1     Running   0          4m49s
The coredns pods must be in the Running state. The number of kube-flannel and kube-proxy pods depends on the total number of nodes in the cluster.
To add a worker node, run on it the kubeadm join command printed during initialization:
# kubeadm join <ip address>:<port> --token <token> \
--discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=SystemVerification
For example:
# kubeadm join 192.168.0.103:6443 --token 4o84lf.yerap79j66r4riwi \
--discovery-token-ca-cert-hash \
sha256:35b569232b0e81ca19cf23948cb7b3580e17be7059bbf242c4fb0e481afd8ff6
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.186592ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Note
To view the list of tokens, run on the control plane node:
$ kubeadm token list
TOKEN                     TTL   EXPIRES                USAGES
4o84lf.yerap79j66r4riwi   23h   2025-08-25T09:00:31Z   authentication,signing
By default, a token is valid for 24 hours. If a node needs to be added to the cluster after this period has expired, a new token can be created:
$ kubeadm token create
If the value for --discovery-token-ca-cert-hash is unknown, it can be obtained by running the following command on the control plane node:
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
35b569232b0e81ca19cf23948cb7b3580e17be7059bbf242c4fb0e481afd8ff6
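This pipeline takes the SHA-256 digest of the CA's DER-encoded RSA public key. The same steps can be tried against a throwaway self-signed certificate; a sketch (on a real cluster the input is /etc/kubernetes/pki/ca.crt, not a generated file):

```shell
tmp=$(mktemp -d)

# Generate a throwaway self-signed "CA" certificate for the demo:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -days 1 2>/dev/null

# Same pipeline as above: public key -> DER encoding -> SHA-256 hex digest:
ca_hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$ca_hash"
rm -rf "$tmp"
```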
If IPv6 is used for <control-plane-host>:<control-plane-port>, the control plane address must be enclosed in square brackets:
[fd00::101]:2073
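The bracketing rule can be sketched as a small helper (a hypothetical function for illustration, not part of kubeadm):

```shell
# Wrap the host in square brackets when it is an IPv6 literal:
format_endpoint() {
    host=$1 port=$2
    case $host in
        *:*) printf '[%s]:%s\n' "$host" "$port" ;;  # IPv6 literal
        *)   printf '%s:%s\n' "$host" "$port" ;;    # IPv4 or hostname
    esac
}

format_endpoint fd00::101 2073       # -> [fd00::101]:2073
format_endpoint 192.168.0.103 6443   # -> 192.168.0.103:6443
```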
Check the status of the cluster nodes:
$ kubectl get nodes
NAME     STATUS     ROLES           AGE    VERSION
kube01   Ready      <none>          2m2s   v1.31.11
kube02   NotReady   <none>          7s     v1.31.11
kube03   Ready      control-plane   8m4s   v1.31.11
or
$ kubectl get nodes -o wide
$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.0.103:6443
CoreDNS is running at https://192.168.0.103:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Detailed information about a node:
$ kubectl describe node kube03