[Cloud Native | Kubernetes in Practice] 01: Building a K8s v1.25 Cluster and Deploying Dashboard, the Web-Based K8s UI

Table of Contents

I. K8s Overview

See the official docs: Overview | Kubernetes

Component interaction logic:

II. Installing a K8s v1.25 High-Availability Cluster with kubeadm

K8s environment plan:

1. Initialize the Environment

2. Install Docker and the containerd Container Runtime

3. Install kubelet, kubeadm, and kubectl

4. Create the Cluster with kubeadm

III. Deploy Dashboard

1. Deploy Dashboard

2. Create an Access Account

3. Token Access

4. Successful Login


I. K8s Overview

See the official documentation: Overview | Kubernetes

The official documentation covers this in detail, so it will not be repeated here.

Component interaction logic:

(architecture diagram omitted; see the official docs)

II. Installing a K8s v1.25 High-Availability Cluster with kubeadm

K8s is not deployed on a single machine; it is always installed and run as a cluster.

The cluster installation below follows the official kubeadm documentation.

K8s environment plan:

OS: CentOS 7.6
Minimum spec: 2 GiB RAM / 2 vCPU / 30 GB disk
Network: NAT mode

Role                    IP              Hostname      Installed components
Control-plane (master)  192.168.78.133  k8s-master01  kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, etcd, kube-proxy, container runtime, calico, keepalived, nginx
Worker                  192.168.78.131  k8s-node1     kube-proxy, calico, coredns, container runtime, kubelet
Worker                  192.168.78.132  k8s-node2     kube-proxy, calico, coredns, container runtime, kubelet

1. Initialize the Environment

Initialize all three VMs first; see my earlier article: Initializing a CentOS 7 System.
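
One prerequisite worth making explicit: each machine's hostname must match the plan above. If the hostnames were not already set during initialization, a minimal sketch (hostnamectl is standard on CentOS 7):

hostnamectl set-hostname k8s-master01    # on 192.168.78.133
hostnamectl set-hostname k8s-node1       # on 192.168.78.131
hostnamectl set-hostname k8s-node2       # on 192.168.78.132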

Additional steps:

#1. Configure the hosts file so the machines can reach each other by hostname
# Append the following to /etc/hosts on every machine:
echo "192.168.78.133  k8s-master01" >> /etc/hosts
echo "192.168.78.131  k8s-node1" >> /etc/hosts
echo "192.168.78.132  k8s-node2" >> /etc/hosts

#2. Turn off the swap partition (k8s requires this for performance)
swapoff -a    # disable now
sed -ri 's/.*swap.*/#&/' /etc/fstab    # disable permanently by commenting out the swap line in fstab

#3. Enable IPv4 forwarding and let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

#4. Set the required sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

#5. Apply the sysctl parameters without rebooting
sudo sysctl --system

#6. Configure time synchronization
# Install ntpdate
yum install ntpdate -y
# Sync once against a public NTP pool
ntpdate cn.pool.ntp.org
# Turn the sync into a cron job (runs hourly)
crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org

#7. Install base packages and dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet

        (1) Swap is the swap partition. When a machine runs low on memory it falls back to swap, but swap is far slower than RAM, so Kubernetes disallows it by default for performance. kubeadm checks during initialization whether swap is off and fails if it is not. If you do not want to disable swap, pass --ignore-preflight-errors=Swap when installing.

        (2) net.ipv4.ip_forward controls packet forwarding:
        For security, Linux disables packet forwarding by default. Forwarding means that when a host has more than one network interface and receives a packet on one of them, it sends the packet out another interface according to the packet's destination IP and the routing table; this is normally a router's job.
        To give Linux this routing capability, set the kernel parameter net.ipv4.ip_forward: a value of 0 disables IP forwarding, and 1 enables it.
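
A quick sanity check that the settings above took effect; these are standard commands, shown as a sketch:

# No entries below the header means swap is off
cat /proc/swaps
# Both kernel modules should be listed
lsmod | grep -e overlay -e br_netfilter
# All three values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward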

2. Install Docker and the containerd Container Runtime

Docker must be installed on all three machines; see: the detailed Docker installation tutorial.

#1. Generate the default containerd configuration file:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

# Edit the configuration file:
vim /etc/containerd/config.toml
Change SystemdCgroup = false to SystemdCgroup = true
Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
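# (Optional) The two edits can also be scripted with sed instead of vim; a sketch that
# assumes the stock v1.25-era config.toml wording shown above:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml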

# Enable containerd at boot and start it now
systemctl enable containerd --now

#2. Configure crictl for debugging Kubernetes nodes
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

systemctl restart containerd

#3. Configure a registry mirror for containerd
vim /etc/containerd/config.toml
Find config_path = "" and change it to:
config_path = "/etc/containerd/certs.d"
# Save and quit with :wq

mkdir /etc/containerd/certs.d/docker.io/ -p
vim /etc/containerd/certs.d/docker.io/hosts.toml
# Add the following content (one [host] section per mirror):
[host."https://vh3bm52y.mirror.aliyuncs.com"]
  capabilities = ["pull"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]
# Restart containerd:
systemctl restart containerd

#4. Configure Docker registry mirrors
mkdir -p /etc/docker
vim /etc/docker/daemon.json
# Add the following content:
{
"registry-mirrors":["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"]
}
# Restart Docker:
systemctl restart docker 
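
To confirm Docker picked up the mirrors, docker info lists them under "Registry Mirrors" (a standard check):

docker info | grep -A 5 "Registry Mirrors"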

3. Install kubelet, kubeadm, and kubectl

Install the following packages on every machine:

  • kubeadm: the command for bootstrapping the cluster.

  • kubelet: runs on every node in the cluster; it starts Pods and containers.

  • kubectl: the command-line tool for talking to the cluster.

#1. Configure the Aliyun yum repo for the k8s components
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

# Install the components
yum install -y kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0
systemctl enable --now kubelet

#2. Enable kubectl command auto-completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

4. Create the Cluster with kubeadm

  • (1) Point crictl at the containerd runtime
crictl config runtime-endpoint /run/containerd/containerd.sock
  • (2) Initialize the k8s cluster with kubeadm
# Adjust the generated config to our needs: change imageRepository, set the kube-proxy mode to ipvs, and, since containerd is the container runtime, set cgroupDriver to systemd

# Run this step on the master only
[root@k8s-master01 ~]# kubeadm config print init-defaults > kubeadm.yaml
[root@k8s-master01 ~]# vim kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # IP of the master node
  advertiseAddress: 192.168.78.133
  bindPort: 6443
nodeRegistration:
  # use the containerd container runtime
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  # hostname of the master node
  name: k8s-master01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# use the Aliyun image registry
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
# k8s version
kubernetesVersion: 1.25.0
networking:
  dnsDomain: cluster.local
  # pod network CIDR; this line must be added
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# Append the following at the end of the file (include the --- separators when copying)
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
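
Before initializing, you can confirm which images this configuration will pull; kubeadm provides a subcommand for this:

kubeadm config images list --config kubeadm.yaml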
  • (3) Initialize the k8s cluster from kubeadm.yaml

Link: https://pan.baidu.com/s/1MVj9ymOWs1RMec44xBu9cg
Extraction code: yyds

Download the offline image bundles k8s_1.25.0.tar.gz, calico.tar.gz, and busybox-1-28.tar.gz and upload them to all three machines. Two ways to get the images, offline import and online pull, are shown below; pick one:

#1. Offline installation:
# Import the images with ctr, specifying the k8s.io namespace
ctr -n=k8s.io images import k8s_1.25.0.tar.gz

# List the images
crictl images

#2. Online pull
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.25.0
kube-proxy:v1.25.0
kube-controller-manager:v1.25.0
kube-scheduler:v1.25.0
coredns:v1.9.3
etcd:3.5.4-0
pause:3.8
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
EOF
   
chmod +x ./images.sh && ./images.sh

Note: with containerd as the CRI runtime, images pulled through docker are not visible to the kubelet, so this script only warms Docker's local cache; since imageRepository in kubeadm.yaml points at the same Aliyun registry, kubeadm init will pull whatever it is missing on its own.

        The bundle k8s_1.25.0.tar.gz packages all of the component images needed to install k8s; it was produced with ctr images export. If you are installing a different version, skip the offline import and simply pull the images from the network.
        ctr is the CLI that ships with containerd, and it is namespace-aware; Kubernetes-related images live in the k8s.io namespace by default, which is why the import must specify the k8s.io namespace.
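
For reference, a bundle like this can be produced on a machine that already holds the images; a sketch (the image list is whatever ctr reports):

ctr -n k8s.io images export k8s_1.25.0.tar.gz $(ctr -n k8s.io images ls -q)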

  • (4) kubeadm init first runs a series of preflight checks to make sure the machine is ready to run Kubernetes; the checks print warnings and abort on errors. It then downloads and installs the cluster control-plane components, which can take a few minutes. Output like the following means the installation finished; continue with the printed instructions:
# Run this step on the master only
[root@k8s-master01 ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

        Set up the kubectl config file; this effectively authorizes kubectl so that it can use this credential to manage the k8s cluster:

# Run on the master
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   6m40s   v1.25.0
  • (5) Scale the cluster: join the worker nodes
#1. If the token has expired (default lifetime: 24 hours), regenerate it. On master1, print the join command:
[root@k8s-master01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.78.133:6443 --token aptfee.te8dtfehnlwwrj4l --discovery-token-ca-cert-hash sha256:b7a8c0b94ce7d1799fba166161b9fadea867b4b3c193f60b6bc22ad2390bcdd8

#2. Join node1 and node2 to the k8s cluster:
[root@k8s-node1 ~]# kubeadm join 192.168.78.133:6443 --token aptfee.te8dtfehnlwwrj4l --discovery-token-ca-cert-hash sha256:b7a8c0b94ce7d1799fba166161b9fadea867b4b3c193f60b6bc22ad2390bcdd8 --ignore-preflight-errors=SystemVerification

[root@k8s-node2 ~]# kubeadm join 192.168.78.133:6443 --token aptfee.te8dtfehnlwwrj4l --discovery-token-ca-cert-hash sha256:b7a8c0b94ce7d1799fba166161b9fadea867b4b3c193f60b6bc22ad2390bcdd8 --ignore-preflight-errors=SystemVerification

#3. On master1, check the cluster nodes:
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   16m     v1.25.0
k8s-node1      NotReady   <none>          3m12s   v1.25.0
k8s-node2      NotReady   <none>          64s     v1.25.0

#4. Optionally label node1 and node2 so their ROLES column shows "work"
[root@k8s-master01 ~]# kubectl label nodes k8s-node1 node-role.kubernetes.io/work=work

[root@k8s-master01 ~]# kubectl label nodes k8s-node2 node-role.kubernetes.io/work=work

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   19m     v1.25.0
k8s-node1      NotReady   work            5m32s   v1.25.0
k8s-node2      NotReady   work            3m24s   v1.25.0
  • (6) Install the Pod network add-on, Calico

Calico website: About Calico

Supported k8s versions: System requirements

#1. Import the offline Calico images
ctr -n=k8s.io images import calico.tar.gz

#2. Download the latest calico.yaml
[root@k8s-master01 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O

# Or download a specific version of calico.yaml
curl https://docs.projectcalico.org/v3.24/manifests/calico.yaml -O

#3. Install the Calico network plugin from the yaml
[root@k8s-master01 ~]# kubectl apply -f calico.yaml
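
# (Optional, a standard check) Watch the Calico pods come up; the manifest installs them into kube-system
[root@k8s-master01 ~]# kubectl get pods -n kube-system -l k8s-app=calico-node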

#4. Check node status
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   79m   v1.25.0
k8s-node1      Ready    work            65m   v1.25.0
k8s-node2      Ready    work            63m   v1.25.0

#5. List everything deployed in the cluster, similar to docker ps
[root@k8s-master01 ~]# kubectl get pod -A
# A running workload is called a container in Docker and a Pod in k8s

  • (7) Test that a Pod created in k8s can reach the network
#1. Upload busybox-1-28.tar.gz to node1 and node2 and import it manually
[root@k8s-node1 ~]# ctr -n k8s.io images import busybox-1-28.tar.gz
[root@k8s-node2 ~]# ctr -n k8s.io images import busybox-1-28.tar.gz

#2. Launch an interactive pod
[root@k8s-master01 ~]# kubectl run busybox --image docker.io/library/busybox:1.28 --image-pull-policy=IfNotPresent --restart=Never --rm -it -- sh

# Successful pings mean the Calico network plugin is installed and working
/ # ping www.baidu.com
PING www.baidu.com (180.101.49.14): 56 data bytes
64 bytes from 180.101.49.14: seq=0 ttl=127 time=15.429 ms
64 bytes from 180.101.49.14: seq=1 ttl=127 time=17.564 ms
64 bytes from 180.101.49.14: seq=2 ttl=127 time=18.939 ms
^C
--- www.baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 15.429/17.310/18.939 ms

/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # exit  # leave the pod
pod "busybox" deleted

# 10.96.0.10 is the clusterIP of CoreDNS, which confirms CoreDNS is configured correctly. Internal Service names are resolved through CoreDNS.
Note: use busybox 1.28 specifically, not the latest version; with the latest busybox, nslookup fails to resolve the DNS name and IP.
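
To double-check the CoreDNS service clusterIP mentioned above (a standard query):
[root@k8s-master01 ~]# kubectl get svc -n kube-system kube-dns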

III. Deploy Dashboard

        Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to a Kubernetes cluster, troubleshoot those applications, and manage cluster resources. Dashboard gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, trigger a rolling update, restart a Pod, or use a wizard to deploy a new application. Dashboard also shows the state of the cluster's resources and any errors.

Official GitHub releases: https://github.com/kubernetes/dashboard/releases

1. Deploy Dashboard

#1. Deploy the Dashboard UI
[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

#2. Expose the service: find type: ClusterIP and change it to type: NodePort
[root@k8s-master01 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

#3. Check the assigned NodePort
[root@k8s-master01 ~]# kubectl get svc -A |grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.111.244.84   <none>        8000/TCP                 3m52s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.106.76.164   <none>        443:32749/TCP            3m53s

#4. Open https://<any-node-IP>:<NodePort> to reach the login page
https://192.168.78.133:32749
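
A quick reachability test from a shell; the certificate is self-signed, hence -k (a sketch):
curl -ks https://192.168.78.133:32749 | head -n 5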

This brings up the login page (screenshots omitted).

If your network connection is poor, you can instead paste the contents of recommended.yaml directly into a local file (edit with vi):

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

2. Create an Access Account

Official guide for creating a sample user: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

# Create the access account: prepare a yaml file
[root@k8s-master01 ~]# vi dashuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

[root@k8s-master01 ~]# kubectl apply -f dashuser.yaml
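
To verify that the account and binding were created (standard checks):
[root@k8s-master01 ~]# kubectl get serviceaccount admin-user -n kubernetes-dashboard
[root@k8s-master01 ~]# kubectl get clusterrolebinding admin-user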

3. Token Access

# Generate an access token
[root@k8s-master01 ~]# kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IkMwZmZUeVU5VE5CeVR0VUgxQlF0RmktNG1PU1pCcmlkNjdGb3dCOV90dEEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjY2NDI3ODc4LCJpYXQiOjE2NjY0MjQyNzgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZmM1MmYyOWUtMzgyMS00YjQxLWEyNDMtNTE5MzZmYWQzNTYzIn19LCJuYmYiOjE2NjY0MjQyNzgsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.t7MWL1qKpxFwujJtZEOFRlQshp-XVvD9dJsu41_v97PCw5AaH3pHSP-fqdnsqobQ__HlxLjECcGSHhnDtyC8Z1uVX74iWOBU_qVDwKN0hezcmlSyB9SglMYDJ0_UokDMiOY7KdfpwnX_SoOYQrjKyCjXBMI9iSFWK6sIT6CQYpntd57wDDG6jPOHI2VsMjAMYdmzC7qhxGXfaMlXkERvti3gkuzAELQOVBtQJszoyXTykrd4eQAD5720ERQ-ky0gof2lDexkmjffB_9Ksa7Ubuq7i5sMzrHVql9bhUBK1Hjwlmo6hZUn4ldySoJrPnZ3yS5J8WPc1NF9e8GDhaYYYg

# Copy the token and paste it into the "Enter token" field on the login screen.
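
The token above is short-lived by default. kubectl create token accepts a --duration flag if a longer-lived token is needed; a sketch, assuming your kubectl supports the flag:
[root@k8s-master01 ~]# kubectl -n kubernetes-dashboard create token admin-user --duration=24h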

4. Successful Login

Login succeeds and the Dashboard overview page is displayed (screenshot omitted).

Previous article: [Kubernetes Enterprise Projects in Practice] 01: Installing a K8s v1.23 High-Availability Cluster with kubeadm (Stars.Sky's blog, CSDN)

Next article: [Cloud Native | Kubernetes in Practice] 02: Pod, the Core K8s Resource (Stars.Sky's blog, CSDN)
