emptyDir Volumes
Overview:
An emptyDir volume is created when a Pod is assigned to a node, and it exists for as long as that Pod runs on that node. As the name suggests, the volume is initially empty. The containers in the Pod can all read and write the same files in the emptyDir volume, even though the volume may be mounted at the same or different paths in each container. When the Pod is removed from the node for any reason, the data in the emptyDir is deleted permanently.
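Besides the default node-disk backing, an emptyDir can also be backed by tmpfs (RAM). A minimal sketch of that variant: medium and sizeLimit are standard emptyDir fields, while the volume name and the 100Mi limit are illustrative values chosen here:
-   volumes:
-   - name: cache-volume
-     emptyDir:
-       medium: Memory       #back the volume with tmpfs (RAM) instead of node disk
-       sizeLimit: 100Mi     #illustrative cap; the Pod is evicted if usage exceeds it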
Example:
- mkdir /opt/volumes
- cd /opt/volumes
-
- vim pod-emptydir.yaml
- apiVersion: v1
- kind: Pod
- metadata:
-   name: pod-emptydir
-   namespace: default
-   labels:
-     app: myapp
-     tier: frontend
- spec:
-   containers:
-   - name: myapp
-     image: ikubernetes/myapp:v1
-     imagePullPolicy: IfNotPresent
-     ports:
-     - name: http
-       containerPort: 80
-     #Define the mounts for this container
-     volumeMounts:
-     #Name of the volume to use; matching the name under the volumes field below means that volume is used
-     - name: html
-       #Directory inside the container to mount at
-       mountPath: /usr/share/nginx/html/
-   - name: busybox
-     image: busybox:latest
-     imagePullPolicy: IfNotPresent
-     volumeMounts:
-     - name: html
-       #Mount name and mount path inside this container
-       mountPath: /data/
-     command: ['/bin/sh','-c','while true;do echo $(date) >> /data/index.html;sleep 2;done']
-   #Define the volumes
-   volumes:
-   #Volume name
-   - name: html
-     #Volume type
-     emptyDir: {}
-
-
- kubectl apply -f pod-emptydir.yaml
-
- kubectl get pods -o wide
//The manifest above defines two containers; one of them appends the date to index.html. Verify that the date can be retrieved by requesting nginx's html page, which proves that the two containers share the mounted emptyDir.
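To verify the sharing, read the file from either container or request it from nginx (a sketch; replace <pod-ip> with the address shown by kubectl get pods -o wide):
- #Read through the writer (busybox) container
- kubectl exec pod-emptydir -c busybox -- cat /data/index.html
- #Read the same file through the myapp container's mount path
- kubectl exec pod-emptydir -c myapp -- cat /usr/share/nginx/html/index.html
- #Or fetch it over HTTP from nginx
- curl <pod-ip>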
hostPath Volumes
Overview
A hostPath volume mounts a file or directory from the node's filesystem into the Pod.
hostPath can provide persistent storage, but if the node fails, the data is lost as well.
Create the mount directory on node01
- mkdir -p /data/pod/volume1
- echo 'node01.kgc.com' > /data/pod/volume1/index.html
Create the mount directory on node02
- mkdir -p /data/pod/volume1
- echo 'node02.kgc.com' > /data/pod/volume1/index.html
//Create the Pod resource
- vim pod-hostpath.yaml
- apiVersion: v1
- kind: Pod
- metadata:
-   name: pod-hostpath
-   namespace: default
- spec:
-   containers:
-   - name: myapp
-     image: ikubernetes/myapp:v1
-     #Define the mounts for this container
-     volumeMounts:
-     #Name of the volume to use; matching the name under the volumes field below means that volume is used
-     - name: html
-       #Directory inside the container to mount at
-       mountPath: /usr/share/nginx/html
-       #Mount mode; the default is false, i.e. read-write
-       readOnly: false
-   #The volumes field defines the host or distributed-filesystem volumes associated with the pause container
-   volumes:
-   #Volume name
-   - name: html
-     #hostPath means the path is a storage path on the host
-     hostPath:
-       #Directory path on the host
-       path: /data/pod/volume1
-       #Type; DirectoryOrCreate means the directory is created on the host if it does not exist
-       type: DirectoryOrCreate
kubectl apply -f pod-hostpath.yaml
Access test
kubectl get pods -o wide
curl 10.244.2.35
Delete the Pod, then recreate it and verify that the original content is still accessible
- kubectl delete -f pod-hostpath.yaml
- kubectl apply -f pod-hostpath.yaml
kubectl get pods -o wide
Access test
curl 10.244.2.37
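To confirm the mount really is the host directory, write through the container and read the file back on the node where the Pod runs (a sketch using the names from the manifest above):
- #In the Pod: append through the container's mount path
- kubectl exec pod-hostpath -- sh -c 'echo written-from-pod >> /usr/share/nginx/html/index.html'
- #On the node reported by kubectl get pods -o wide:
- cat /data/pod/volume1/index.html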
NFS Shared Volumes
//On the NFS server (stor01)
- mkdir /data/volumes -p
- chmod 777 /data/volumes
-
- vim /etc/exports
- /data/volumes 192.168.10.0/24(rw,no_root_squash)
-
- systemctl start rpcbind
- systemctl start nfs
-
- showmount -e
- Export list for stor01:
- /data/volumes 192.168.10.0/24
-
-
- //On the master node
- vim pod-nfs-vol.yaml
- apiVersion: v1
- kind: Pod
- metadata:
-   name: pod-vol-nfs
-   namespace: default
-   labels:
-     app: myapp
- spec:
-   containers:
-   - name: myapp
-     image: nginx
-     volumeMounts:
-     - name: html
-       mountPath: /usr/share/nginx/html
-       readOnly: false
-   nodeSelector:
-     kubernetes.io/hostname: node01
-   volumes:
-   - name: html
-     nfs:
-       #Must match the directory exported by the NFS server
-       path: /data/volumes
-       #NFS server hostname (make sure the nodes can resolve it) or IP
-       server: stor01




Create index.html on the NFS server (inside the exported directory /data/volumes)

//On the master node
- kubectl apply -f pod-nfs-vol.yaml

Delete the NFS-backed Pod, then recreate it to verify that the data persists
- kubectl delete -f pod-nfs-vol.yaml
kubectl apply -f pod-nfs-vol.yaml
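A sketch of the persistence check (the test file name and <pod-ip> are placeholders):
- #Write a file through the Pod's NFS mount
- kubectl exec pod-vol-nfs -- sh -c 'echo nfs-test > /usr/share/nginx/html/test.html'
- #On the NFS server, the file shows up in the exported directory
- cat /data/volumes/test.html
- #After the delete/apply cycle above, the content is still served
- curl <pod-ip>/test.html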
Using PV and PVC with NFS
1. Configure NFS storage
- mkdir -p /data/volumes/v{1,2,3,4,5}

- vim /etc/exports
- /data/volumes/v1 192.168.41.0/24(rw,no_root_squash)
- /data/volumes/v2 192.168.41.0/24(rw,no_root_squash)
- /data/volumes/v3 192.168.41.0/24(rw,no_root_squash)
- /data/volumes/v4 192.168.41.0/24(rw,no_root_squash)
- /data/volumes/v5 192.168.41.0/24(rw,no_root_squash)
-

exportfs -arv
showmount -e
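The export list should now resemble the following (a sketch, assuming the NFS server's hostname is stor01):
- Export list for stor01:
- /data/volumes/v5 192.168.41.0/24
- /data/volumes/v4 192.168.41.0/24
- /data/volumes/v3 192.168.41.0/24
- /data/volumes/v2 192.168.41.0/24
- /data/volumes/v1 192.168.41.0/24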

2. Define the PVs
Five PVs are defined here, each with its NFS mount path, access modes, and capacity.
- vim pv-demo.yaml
- apiVersion: v1
- kind: PersistentVolume
- metadata:
-   name: pv001
-   labels:
-     name: pv001
- spec:
-   nfs:
-     path: /data/volumes/v1
-     server: stor01
-   accessModes: ["ReadWriteMany","ReadWriteOnce"]
-   capacity:
-     storage: 1Gi
- ---
- apiVersion: v1
- kind: PersistentVolume
- metadata:
-   name: pv002
-   labels:
-     name: pv002
- spec:
-   nfs:
-     path: /data/volumes/v2
-     server: stor01
-   accessModes: ["ReadWriteOnce"]
-   capacity:
-     storage: 2Gi
- ---
- apiVersion: v1
- kind: PersistentVolume
- metadata:
-   name: pv003
-   labels:
-     name: pv003
- spec:
-   nfs:
-     path: /data/volumes/v3
-     server: stor01
-   accessModes: ["ReadWriteMany","ReadWriteOnce"]
-   capacity:
-     storage: 2Gi
- ---
- apiVersion: v1
- kind: PersistentVolume
- metadata:
-   name: pv004
-   labels:
-     name: pv004
- spec:
-   nfs:
-     path: /data/volumes/v4
-     server: stor01
-   accessModes: ["ReadWriteMany","ReadWriteOnce"]
-   capacity:
-     storage: 4Gi
- ---
- apiVersion: v1
- kind: PersistentVolume
- metadata:
-   name: pv005
-   labels:
-     name: pv005
- spec:
-   nfs:
-     path: /data/volumes/v5
-     server: stor01
-   accessModes: ["ReadWriteMany","ReadWriteOnce"]
-   capacity:
-     storage: 5Gi
kubectl apply -f pv-demo.yaml

kubectl get pv
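All five PVs should start out unbound; expect output along these lines (a sketch, ages will differ):
- NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   AGE
- pv001   1Gi        RWO,RWX        Retain           Available                          10s
- pv002   2Gi        RWO            Retain           Available                          10s
- pv003   2Gi        RWO,RWX        Retain           Available                          10s
- pv004   4Gi        RWO,RWX        Retain           Available                          10s
- pv005   5Gi        RWO,RWX        Retain           Available                          10s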

3. Define the PVC
//The PVC requests the ReadWriteMany access mode, which must be among the access modes defined on some PV. It requests 2Gi, so the PVC automatically matches a PV that is ReadWriteMany and 2Gi in size; once matched, the PVC's status becomes Bound.
- vim pod-vol-pvc.yaml
- apiVersion: v1
- kind: PersistentVolumeClaim
- metadata:
-   name: mypvc
-   namespace: default
- spec:
-   accessModes: ["ReadWriteMany"]
-   resources:
-     requests:
-       storage: 2Gi
- ---
- apiVersion: v1
- kind: Pod
- metadata:
-   name: pod-vol-pvc
-   namespace: default
- spec:
-   containers:
-   - name: myapp
-     image: ikubernetes/myapp:v1
-     volumeMounts:
-     - name: html
-       mountPath: /usr/share/nginx/html
-   volumes:
-   - name: html
-     persistentVolumeClaim:
-       claimName: mypvc

kubectl apply -f pod-vol-pvc.yaml

- kubectl get pv
- kubectl get pvc
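The claim should bind to pv003: among the PVs that offer ReadWriteMany, it is the smallest whose capacity meets the 2Gi request (pv001 is too small; pv004 and pv005 are larger than needed). Expected shape of the output (a sketch):
- NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   AGE
- mypvc   Bound    pv003    2Gi        RWO,RWX        15s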

Set up StorageClass + NFS to implement dynamic NFS PV provisioning
Kubernetes' built-in dynamic PV provisioning does not include NFS, so an external volume plugin is needed to provision PVs. See: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
The volume plugin is called a Provisioner. For NFS it is nfs-client, an external plugin that uses an already-configured NFS server to create PVs automatically.
Provisioner: specifies the type of volume plugin, either built-in (e.g. kubernetes.io/aws-ebs) or external (e.g. ceph.com/cephfs from external-storage).
1. Install NFS on the stor01 node and configure the NFS service
- mkdir /opt/k8s
- chmod 777 /opt/k8s/
-
- vim /etc/exports
- /opt/k8s 192.168.41.0/24(rw,no_root_squash,sync)
-
- systemctl restart nfs
-
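Verify the export before wiring up the provisioner (output is a sketch):
- exportfs -arv
- showmount -e
- Export list for stor01:
- /opt/k8s 192.168.41.0/24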



2. Create the Service Account used to govern the NFS Provisioner's permissions in the cluster, and define nfs-client's rules for PVs, PVCs, StorageClasses, etc.
- vim nfs-client-rbac.yaml
- #Create the Service Account that governs the NFS Provisioner's permissions in the cluster
- apiVersion: v1
- kind: ServiceAccount
- metadata:
-   name: nfs-client-provisioner
- ---
- #Create the cluster role
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRole
- metadata:
-   name: nfs-client-provisioner-clusterrole
- rules:
- - apiGroups: [""]
-   resources: ["persistentvolumes"]
-   verbs: ["get", "list", "watch", "create", "delete"]
- - apiGroups: [""]
-   resources: ["persistentvolumeclaims"]
-   verbs: ["get", "list", "watch", "update"]
- - apiGroups: ["storage.k8s.io"]
-   resources: ["storageclasses"]
-   verbs: ["get", "list", "watch"]
- - apiGroups: [""]
-   resources: ["events"]
-   verbs: ["list", "watch", "create", "update", "patch"]
- - apiGroups: [""]
-   resources: ["endpoints"]
-   verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
- ---
- #Bind the cluster role
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRoleBinding
- metadata:
-   name: nfs-client-provisioner-clusterrolebinding
- subjects:
- - kind: ServiceAccount
-   name: nfs-client-provisioner
-   namespace: default
- roleRef:
-   kind: ClusterRole
-   name: nfs-client-provisioner-clusterrole
-   apiGroup: rbac.authorization.k8s.io
-
-
- kubectl apply -f nfs-client-rbac.yaml
-
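A quick sanity check that the three objects exist (names as defined above):
- kubectl get sa nfs-client-provisioner
- kubectl get clusterrole nfs-client-provisioner-clusterrole
- kubectl get clusterrolebinding nfs-client-provisioner-clusterrolebinding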

3. Create the NFS Provisioner with a Deployment
The NFS Provisioner (i.e. nfs-client) does two things: it creates mount points (volumes) under the NFS shared directory, and it associates PVs with those NFS mount points.
#Kubernetes 1.20 removed selfLink by default, so on k8s 1.20+ dynamic PV provisioning through the nfs provisioner fails. Workaround:
- vim /etc/kubernetes/manifests/kube-apiserver.yaml
- spec:
-   containers:
-   - command:
-     - kube-apiserver
-     - --feature-gates=RemoveSelfLink=false       #add this line
-     - --advertise-address=192.168.80.20
- ......
-
- #kubelet restarts the static pod automatically when its manifest changes; to force a restart:
- kubectl delete pods -n kube-system kube-apiserver-<node-name>       #the static pod's name carries the node name
- kubectl get pods -n kube-system | grep apiserver
-
-
- #Create the NFS Provisioner
- vim nfs-client-provisioner.yaml
- kind: Deployment
- apiVersion: apps/v1
- metadata:
-   name: nfs-client-provisioner
- spec:
-   replicas: 1
-   selector:
-     matchLabels:
-       app: nfs-client-provisioner
-   strategy:
-     type: Recreate
-   template:
-     metadata:
-       labels:
-         app: nfs-client-provisioner
-     spec:
-       serviceAccountName: nfs-client-provisioner       #specify the Service Account
-       containers:
-       - name: nfs-client-provisioner
-         image: quay.io/external_storage/nfs-client-provisioner:latest
-         imagePullPolicy: IfNotPresent
-         volumeMounts:
-         - name: nfs-client-root
-           mountPath: /persistentvolumes
-         env:
-         - name: PROVISIONER_NAME
-           value: nfs-storage       #provisioner name; must match the provisioner field in the StorageClass resource
-         - name: NFS_SERVER
-           value: stor01            #NFS server to bind to
-         - name: NFS_PATH
-           value: /opt/k8s          #exported directory on the NFS server
-       volumes:                     #declare the NFS volume
-       - name: nfs-client-root
-         nfs:
-           server: stor01
-           path: /opt/k8s
-
-
- kubectl apply -f nfs-client-provisioner.yaml
-
- kubectl get pod
- NAME                                   READY   STATUS    RESTARTS   AGE
- nfs-client-provisioner-cd6ff67-sp8qd   1/1     Running   0          14s
-




4. Create the StorageClass, which responds to PVCs by invoking the NFS provisioner to carry out the provisioning work, so PVs are created and bound to PVCs
- vim nfs-client-storageclass.yaml
- apiVersion: storage.k8s.io/v1
- kind: StorageClass
- metadata:
-   name: nfs-client-storageclass
- provisioner: nfs-storage       #must match the PROVISIONER_NAME env var set on the provisioner
- parameters:
-   archiveOnDelete: "false"     #"false": data is deleted rather than archived when the PVC is deleted
-
- kubectl apply -f nfs-client-storageclass.yaml
-
- kubectl get storageclass
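The output should show the class with its provisioner (a sketch; columns vary with the kubectl version):
- NAME                      PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   AGE
- nfs-client-storageclass   nfs-storage   Delete          Immediate           6s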

5. Create a PVC and a Pod to test
- vim test-pvc-pod.yaml
- apiVersion: v1
- kind: PersistentVolumeClaim
- metadata:
-   name: test-nfs-pvc
- spec:
-   accessModes:
-   - ReadWriteMany
-   storageClassName: nfs-client-storageclass       #reference the StorageClass object
-   resources:
-     requests:
-       storage: 1Gi
- ---
- apiVersion: v1
- kind: Pod
- metadata:
-   name: test-storageclass-pod
- spec:
-   containers:
-   - name: busybox
-     image: busybox:latest
-     imagePullPolicy: IfNotPresent
-     command:
-     - "/bin/sh"
-     - "-c"
-     args:
-     - "sleep 3600"
-     volumeMounts:
-     - name: nfs-pvc
-       mountPath: /mnt
-   restartPolicy: Never
-   volumes:
-   - name: nfs-pvc
-     persistentVolumeClaim:
-       claimName: test-nfs-pvc       #must match the PVC name
-
-
- kubectl apply -f test-pvc-pod.yaml

The PVC automatically obtains storage space through the StorageClass.
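End-to-end check (a sketch; the nfs-client provisioner names the backing directory ${namespace}-${pvcName}-${pvName}, and the PV name is generated, so the exact directory name will differ):
- kubectl get pvc test-nfs-pvc        #STATUS should be Bound to a generated pvc-... volume
- kubectl exec test-storageclass-pod -- sh -c 'echo sc-test > /mnt/test.txt'
- #On the NFS server:
- ls /opt/k8s
- #default-test-nfs-pvc-pvc-<uid>
- cat /opt/k8s/default-test-nfs-pvc-*/test.txt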