1. Install NFS
- Install nfs-utils on every node of the k8s cluster
yum install nfs-utils -y
- Start the service and enable it at boot
systemctl restart nfs-server && systemctl enable nfs-server
- Pick one machine to host the shared root directory (for convenience, the master node [172.16.16.11] is used here)
mkdir -p /data/nfs
- Edit the export configuration
Note: * lets every machine on the internal network mount the directory; it is better to add a whitelist here and open the export only to the K8S cluster nodes
# Open the exports file
vim /etc/exports
# Without client restriction
/data/nfs *(rw,no_root_squash)
# With a mount IP restriction (the cluster nodes sit in 172.16.16.x, so /24)
/data/nfs 172.16.16.0/24(rw,async,no_root_squash)
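For reference, the export options used above mean the following (a config fragment; substitute your own subnet for the placeholder):

```
# /etc/exports option reference:
#   rw             - clients may read and write
#   async          - the server replies before writes reach disk (faster, but
#                    data written just before a server crash can be lost)
#   no_root_squash - root on the client is NOT mapped to nobody on the server
#                    (convenient for k8s workloads, but a security trade-off)
/data/nfs <cluster-subnet>(rw,async,no_root_squash)
```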
- Apply the export configuration and verify it
# Re-export and list the shares (no service restart needed)
exportfs -rv
# Check the exports locally
showmount -e
# Check the exports from another node
showmount -e 172.16.16.11
- Create a subdirectory for each service that needs persistence (it must be created manually in advance)
The rest of this article demonstrates dynamic provisioning, so the subdirectory is named "dynamic"; create other subdirectories as needed
mkdir -p /data/nfs/dynamic
- Mount test
# Log in to another node
mkdir test
mount -t nfs 172.16.16.11:/data/nfs test
# Check the mount
mount | grep 172.16.16.11
# Unmount the test directory
umount test
2. Deploy the nfs-client-provisioner plugin
2.1 Configure authorization (RBAC)
vim rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f rbac.yaml
2.2 Deployment
The provisioner must be configured with the mount target 172.16.16.11:/data/nfs/dynamic
vim nfs-client-provisioner.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs.provisioner
            - name: NFS_SERVER
              value: 172.16.16.11
            - name: NFS_PATH
              value: /data/nfs/dynamic
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.16.11
            path: /data/nfs/dynamic
kubectl apply -f nfs-client-provisioner.yaml
2.3 Create the StorageClass
vim storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
provisioner: nfs.provisioner
parameters:
  archiveOnDelete: "true"
allowVolumeExpansion: true
kubectl apply -f storageclass.yaml
- archiveOnDelete: "true" means that when a PVC is deleted, the backing directory on the NFS server is not removed but kept and renamed with an "archived-" prefix
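To make the archiveOnDelete behavior concrete, here is a local simulation of what happens to the PV's directory when its PVC is deleted (the directory name below is a hypothetical example; the real one is created by the provisioner):

```shell
# Simulate, in a temp directory, the rename performed on PVC deletion
# when archiveOnDelete is "true" (names are hypothetical examples).
root=$(mktemp -d)
pvdir="$root/default-nginx-deploy-rwo-pvc-1234"
mkdir -p "$pvdir"
echo "some data" > "$pvdir/index.html"

# Instead of removing the directory, the provisioner effectively does:
mv "$pvdir" "$root/archived-$(basename "$pvdir")"

ls "$root"   # archived-default-nginx-deploy-rwo-pvc-1234
```

The data therefore survives PVC deletion and can be restored or cleaned up manually later.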
3. Usage examples
NFS dynamic provisioning can be divided by access mode into ReadWriteOnce and ReadWriteMany, analogous to using block storage versus file storage:
- ReadWriteOnce (RWO): the volume can be mounted read-write by a single node
- ReadWriteMany (RWX): the volume can be mounted read-write by many nodes simultaneously
3.1 Deployment + RWO
One-to-one mount
vim 1-deployment-rwo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-deploy-rwo
spec:
  storageClassName: "nfs-sc"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy-rwo
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:stable-alpine
          name: nginx
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nginx-deploy-rwo
kubectl apply -f 1-deployment-rwo.yaml
- File mount test
# Enter the pod (the pod name suffix will differ in your cluster)
kubectl exec -it nginx-deploy-rwo-6c7cf9ccdf-mkbcx -- sh
# Create a file in the persisted directory
echo "hello,1-deployment-rwo" > /usr/share/nginx/html/1-deployment-rwo.html
- Check the file under the export directory on the NFS server
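On the NFS server, the provisioner creates one subdirectory per PV under the export root, named ${namespace}-${pvcName}-${pvName}. A sketch of computing the path where the test file lands (the pvName below is a hypothetical example; read the real one from `kubectl get pv`):

```shell
# Build the per-PV directory path used by nfs-client-provisioner.
namespace=default
pvcName=nginx-deploy-rwo
pvName=pvc-7f1c2a3b   # hypothetical; check `kubectl get pv` for the real name
dir="/data/nfs/dynamic/${namespace}-${pvcName}-${pvName}"
echo "$dir"           # /data/nfs/dynamic/default-nginx-deploy-rwo-pvc-7f1c2a3b
```

Listing that directory on 172.16.16.11 should show 1-deployment-rwo.html.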
3.2 StatefulSet + RWO
For one-to-one mounts across multiple StatefulSet replicas, use volumeClaimTemplates with a storageClassName; the PVCs and PVs are then created automatically
vim 2-nginx-sfs-rwo.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-sfs-rwo
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable-alpine
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "nfs-sc"
        resources:
          requests:
            storage: 1Gi
kubectl apply -f 2-nginx-sfs-rwo.yaml
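Note that the StatefulSet above references serviceName: "nginx", but no such Service is defined in this article. If a headless Service with that name does not already exist in the cluster, a minimal sketch (port 80 assumed) would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None   # headless: gives each StatefulSet pod a stable DNS name
  selector:
    app: nginx
  ports:
    - name: web
      port: 80
```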
- File mount test
# Enter nginx-sfs-rwo-0
kubectl exec -it nginx-sfs-rwo-0 -- sh
echo "hello,this is nginx-sfs-rwo-0" > /usr/share/nginx/html/index.html
# Enter nginx-sfs-rwo-1
kubectl exec -it nginx-sfs-rwo-1 -- sh
echo "hello,this is nginx-sfs-rwo-1" > /usr/share/nginx/html/index.html
- Then access nginx-sfs-rwo-0 and nginx-sfs-rwo-1 separately; each serves its own content
3.3 Deployment/StatefulSet + RWX
Multiple pods mount the same volume; the example works with either a Deployment or a StatefulSet
vim 3-nginx-rwx.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-rwx
spec:
  storageClassName: "nfs-sc"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-rwx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:stable-alpine
          name: nginx
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nginx-rwx
kubectl apply -f 3-nginx-rwx.yaml
- File mount test
# Enter the first pod (the pod name suffix will differ in your cluster)
kubectl exec -it nginx-rwx-64598fd68d-49hxv -- sh
echo "hello,this is nginx-rwx-0" > /usr/share/nginx/html/index.html
- Access the second pod; since all replicas share the same volume, it serves the same content