kubeadm upgrade
Check whether an upgrade is available
# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.24.1
[upgrade/versions] kubeadm version: v1.24.1
I0904 22:38:57.778942 76888 version.go:255] remote version is much newer: v1.25.0; falling back to: stable-1.24
[upgrade/versions] Target version: v1.24.4
[upgrade/versions] Latest version in the v1.24 series: v1.24.4
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.24.1   v1.24.4
Upgrade to the latest version in the v1.24 series:
COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.24.1   v1.24.4
kube-controller-manager   v1.24.1   v1.24.4
kube-scheduler            v1.24.1   v1.24.4
kube-proxy                v1.24.1   v1.24.4
CoreDNS                   v1.8.6    v1.8.6
etcd                      3.5.3-0   3.5.3-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.24.4
Note: Before you can perform this upgrade, you have to update kubeadm to v1.24.4.
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
The plan shows that the cluster can be upgraded to v1.24.4.
Show the version differences
# kubeadm upgrade diff 1.24.4
[upgrade/diff] Reading configuration from the cluster...
[upgrade/diff] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
--- /etc/kubernetes/manifests/kube-apiserver.yaml
+++ new manifest
@@ -40,7 +40,7 @@
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.1
+ image: registry.aliyuncs.com/google_containers/kube-apiserver:1.24.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
--- /etc/kubernetes/manifests/kube-controller-manager.yaml
+++ new manifest
@@ -28,7 +28,7 @@
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --use-service-account-credentials=true
- image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.1
+ image: registry.aliyuncs.com/google_containers/kube-controller-manager:1.24.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
--- /etc/kubernetes/manifests/kube-scheduler.yaml
+++ new manifest
@@ -16,7 +16,7 @@
- --bind-address=127.0.0.1
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
- image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.1
+ image: registry.aliyuncs.com/google_containers/kube-scheduler:1.24.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
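Optionally, the new control-plane images can be pre-pulled before applying the upgrade so the apply step spends less time downloading; a minimal sketch, assuming the same Aliyun mirror that appears in the manifests above:
sudo kubeadm config images pull --kubernetes-version v1.24.4 --image-repository registry.aliyuncs.com/google_containers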
Check the available kubeadm versions
# apt-cache madison kubeadm
kubeadm | 1.25.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.24.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
The newest packages available are 1.25.0 and 1.24.4.
Upgrade
Upgrade kubeadm
sudo apt-get install kubeadm=1.24.4-00
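If the kubeadm package is pinned with an apt hold (a common practice after installation), it has to be unheld before upgrading and re-held afterwards; a minimal sketch, assuming such a hold exists:
sudo apt-mark unhold kubeadm
sudo apt-get update && sudo apt-get install -y kubeadm=1.24.4-00
sudo apt-mark hold kubeadm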
Upgrade the Kubernetes cluster to the target version
sudo kubeadm upgrade apply 1.24.4
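To preview what apply would change without modifying the cluster, the same command can first be run in dry-run mode; a minimal sketch:
sudo kubeadm upgrade apply v1.24.4 --dry-run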
Upgrade the master node
Cordon the node to disable scheduling
# kubectl cordon master1
node/master1 cordoned
# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready,SchedulingDisabled control-plane 145m v1.24.1
node1 Ready <none> 121m v1.24.1
node2 Ready <none> 108m v1.24.1
# Drain the node
# kubectl drain master1 --ignore-daemonsets
node/master1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-8lfc4, kube-system/kube-proxy-km8gx
evicting pod kube-system/coredns-74586cf9b6-fq7gg
evicting pod kube-system/coredns-74586cf9b6-2pk2p
pod/coredns-74586cf9b6-fq7gg evicted
pod/coredns-74586cf9b6-2pk2p evicted
node/master1 drained
Update the node configuration
sudo kubeadm upgrade node
Upgrade the kubelet and kubectl packages
sudo apt-get install -y kubelet=1.24.4-00 kubectl=1.24.4-00
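After the packages are upgraded, the kubelet normally has to be restarted for the new version to take effect:
sudo systemctl daemon-reload
sudo systemctl restart kubelet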
Uncordon the node to restore scheduling
kubectl uncordon master1
# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane 149m v1.24.4
node1 Ready <none> 125m v1.24.1
node2 Ready <none> 113m v1.24.1
Upgrade the worker nodes
Cordon the node to disable scheduling (run on the master node)
# kubectl cordon node1
node/node1 cordoned
# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane 151m v1.24.4
node1 Ready,SchedulingDisabled <none> 127m v1.24.1
node2 Ready <none> 115m v1.24.1
# Drain the node
# kubectl drain node1 --ignore-daemonsets
node/node1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-q8dpb, kube-system/kube-proxy-g7mbw
evicting pod kube-system/coredns-74586cf9b6-782sb
evicting pod default/myhello-rc-bxlbg
evicting pod default/myhello-rc-792dl
evicting pod default/myhello-rc-vxz47
pod/myhello-rc-vxz47 evicted
pod/myhello-rc-792dl evicted
pod/myhello-rc-bxlbg evicted
pod/coredns-74586cf9b6-782sb evicted
node/node1 drained
# All the application pods are now running on node2
# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myhello-rc-2bmwh 1/1 Running 0 22s 10.244.4.7 node2 <none> <none>
myhello-rc-bmln7 1/1 Running 0 22s 10.244.4.9 node2 <none> <none>
myhello-rc-kjftb 1/1 Running 0 70m 10.244.4.3 node2 <none> <none>
myhello-rc-pk6vc 1/1 Running 0 22s 10.244.4.8 node2 <none> <none>
myhello-rc-wdvt6 1/1 Running 0 70m 10.244.4.2 node2 <none> <none>
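Before running the steps below, kubeadm itself should also be upgraded on the worker node, otherwise kubeadm upgrade node will still execute the old version; a minimal sketch (run on node1):
sudo apt-get install -y kubeadm=1.24.4-00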
Upgrade the kubelet and kubectl packages (run on the worker node)
sudo apt-get install -y kubelet=1.24.4-00 kubectl=1.24.4-00
Update the node configuration (run on the worker node)
sudo kubeadm upgrade node
Uncordon the node to restore scheduling (run on the master node)
kubectl uncordon node1
# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane 149m v1.24.4
node1 Ready <none> 125m v1.24.1
node2 Ready <none> 113m v1.24.1
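node2 is upgraded with exactly the same sequence; a minimal sketch (cordon/drain and uncordon from the master node, the package and kubeadm steps on node2 itself):
# On the master node
kubectl drain node2 --ignore-daemonsets
# On node2
sudo apt-get install -y kubeadm=1.24.4-00
sudo kubeadm upgrade node
sudo apt-get install -y kubelet=1.24.4-00 kubectl=1.24.4-00
sudo systemctl daemon-reload && sudo systemctl restart kubelet
# On the master node
kubectl uncordon node2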
Upgrade complete
# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane 157m v1.24.4
node1 Ready <none> 133m v1.24.4
node2 Ready <none> 121m v1.24.4
# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myhello-rc-4thbx 1/1 Running 0 100s 10.244.3.9 node1 <none> <none>
myhello-rc-b4f49 1/1 Running 0 100s 10.244.3.11 node1 <none> <none>
myhello-rc-frws4 1/1 Running 0 100s 10.244.3.8 node1 <none> <none>
myhello-rc-wgddb 1/1 Running 0 100s 10.244.3.7 node1 <none> <none>
myhello-rc-xtpbr 1/1 Running 0 100s 10.244.3.10 node1 <none> <none>
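To double-check that the control-plane components themselves are on the new version, the image of a static pod can be inspected; a minimal sketch, assuming the default kube-apiserver-<nodename> pod naming:
kubectl -n kube-system get pod kube-apiserver-master1 -o jsonpath='{.spec.containers[0].image}'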