CentOS Linux release 7.7.1908 (Core) 3.10.0-1062.el7.x86_64
kubeadm-1.22.3-0.x86_64
kubelet-1.22.3-0.x86_64
kubectl-1.22.3-0.x86_64
kubernetes-cni-0.8.7-0.x86_64
Hostname | IP | VIP |
k8s-master01 | 192.168.30.106 | 192.168.30.115 |
k8s-master02 | 192.168.30.107 | |
k8s-master03 | 192.168.30.108 | |
k8s-node01 | 192.168.30.109 | |
k8s-node02 | 192.168.30.110 | |
k8s-nfs | 192.168.30.114 | |
StatefulSets exist to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Their use cases include:
Stable persistent storage: after a Pod is rescheduled it can still access the same persisted data, implemented with PVCs.
Stable network identity: after a Pod is rescheduled its PodName and HostName stay the same, implemented with a Headless Service (a Service without a Cluster IP).
Ordered deployment and scaling: Pods are numbered and created strictly in the defined order (0 to N-1; before the next Pod runs, all preceding Pods must be Running and Ready), enforced by the StatefulSet controller's ordered Pod management (init containers can add further startup ordering within a single Pod).
Ordered shrinking and deletion (from N-1 down to 0).
From these use cases it follows that a StatefulSet is made up of the following parts:
a Headless Service that defines the network identity (DNS domain);
volumeClaimTemplates that create the PersistentVolumeClaims (from which PersistentVolumes are provisioned);
the StatefulSet itself, which defines the actual application.
Each Pod in a StatefulSet gets a DNS name of the form statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local, where:
serviceName is the name of the Headless Service;
0..N-1 is the Pod's ordinal, running from 0 to N-1;
statefulSetName is the name of the StatefulSet;
namespace is the namespace the Service lives in; the Headless Service and the StatefulSet must be in the same namespace;
.cluster.local is the cluster domain.
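Applied to the example deployed later in this article (StatefulSet web with 3 replicas behind the headless Service nginx-headless in namespace default), that pattern yields:

web-0.nginx-headless.default.svc.cluster.local
web-1.nginx-headless.default.svc.cluster.local
web-2.nginx-headless.default.svc.cluster.local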
Using StatefulSets
StatefulSets suit applications that have one or more of the following needs:
stable, unique network identifiers;
stable, persistent storage;
ordered, graceful deployment and scaling;
ordered, graceful deletion and termination;
ordered, automated rolling updates.
Above, "stable" is a synonym for persistence across Pod (re)scheduling. If an application does not need any stable identifiers or ordered deployment, deletion, and scaling, deploy it with a controller that provides a set of stateless replicas instead; a Deployment or ReplicaSet is usually the better fit for stateless workloads.
In a large Kubernetes cluster there may be thousands of PVCs, which means operators would have to pre-create just as many PVs. And as projects grow, new PVCs keep being submitted, so operators would have to keep adding new PVs that satisfy them, or new Pods would fail to be created because their PVCs cannot bind to any PV. Moreover, a PVC that merely requests some amount of storage space often cannot express everything an application needs from a storage device, and different applications have different storage performance requirements, such as read/write speed or concurrency. To solve this, Kubernetes introduces another resource object: StorageClass. Through StorageClass definitions, an administrator can describe storage resources as classes of a certain kind, for example fast storage or slow storage; users can tell from the StorageClass description exactly what each class of storage offers, and can then request storage that matches their application's characteristics.
To use a StorageClass we must install the matching automatic provisioner. Since the storage backend here is NFS, we need the nfs-client provisioner. This program uses the NFS server we have already configured to create persistent volumes automatically, i.e. it creates the PVs for us.
Setting up StorageClass + NFS takes roughly these steps:
1. Create a working NFS Server.
2. Create a ServiceAccount, which scopes the permissions the NFS provisioner runs with inside the cluster.
3. Create the StorageClass, which PVCs reference; it invokes the NFS provisioner to do the provisioning work and lets the resulting PV bind to the PVC.
4. Create the NFS provisioner itself. It does two things: it creates mount points (volumes) under the NFS export, and it creates PVs and associates them with those NFS mount points.
The NFS server setup is not explained in detail here (see the article referenced at the beginning); the key facts are below, followed by a minimal sketch.
IP: 192.168.30.114

# exportfs
/data/volumes   192.168.30.0/24
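For completeness, a minimal sketch of that NFS server setup on CentOS 7; the export options (rw,sync,no_root_squash) are assumptions here, not taken from the original setup, so adjust them to your environment:

# on 192.168.30.114
yum install -y nfs-utils rpcbind
mkdir -p /data/volumes
echo '/data/volumes 192.168.30.0/24(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable rpcbind nfs-server
systemctl start rpcbind nfs-server
exportfs -arv   # re-export and verify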
vim nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default # set to match your environment; same for the namespaces below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f nfs-rbac.yaml
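A quick sanity check that the RBAC objects were created (object names as defined in the manifest above):

kubectl get sa nfs-client-provisioner
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get clusterrolebinding run-nfs-client-provisioner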
vim nfs-storageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: test-nfs-storage # must match the PROVISIONER_NAME env var in the provisioner Deployment
parameters:
  # archiveOnDelete: "false"
  archiveOnDelete: "true"
reclaimPolicy: Retain
kubectl apply -f nfs-storageClass.yaml
vim nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default # must match the namespace in the RBAC file
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: quay.io/external_storage/nfs-client-provisioner:latest
          # Note: on Kubernetes 1.20+ the image above no longer works. It took me
          # a long time to resolve this; a suggestion in an issue on the official
          # GitHub repo pointed to the image below, which fixed it. I downloaded
          # easzlab/nfs-subdir-external-provisioner:v4.0.1 and pushed it to my
          # own registry.
          image: registry-op.test.cn/nfs-subdir-external-provisioner:v4.0.1
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: test-nfs-storage # provisioner name; must match the provisioner field in nfs-storageClass.yaml
            - name: NFS_SERVER
              value: 192.168.30.114 # NFS server IP
            - name: NFS_PATH
              value: "/data/volumes" # NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.30.114 # NFS server IP
            path: "/data/volumes" # NFS export path
      imagePullSecrets:
        - name: registry-op.test.cn
kubectl apply -f nfs-provisioner.yaml
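Before creating any PVCs, confirm that the provisioner Pod is actually Running; its log is the first place to look whenever provisioning misbehaves later:

kubectl get pods -l app=nfs-client-provisioner
kubectl logs deploy/nfs-client-provisioner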
# kubectl get sc
NAME                  PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   test-nfs-storage   Retain          Immediate           false                  24h
vim test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    # must match metadata.name in nfs-storageClass.yaml
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
    - ReadWriteMany
    #- ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
kubectl apply -f test-claim.yaml
Make sure the STATUS is Bound; if it is Pending, something is definitely wrong and needs further investigation.
# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-6324e17a-0a33-4a64-b0bb-e187f51a8f30   10Gi       RWX            managed-nfs-storage   3d19h
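If the claim does sit in Pending, these two commands (a sketch) usually reveal the cause, either as events on the PVC or as errors in the provisioner log:

kubectl describe pvc test-claim
kubectl logs deploy/nfs-client-provisioner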
vim test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1" # create a SUCCESS file, then exit
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim # must match the PVC name
kubectl apply -f test-pod.yaml
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-6324e17a-0a33-4a64-b0bb-e187f51a8f30   10Gi       RWX            Delete           Bound    default/test-claim   managed-nfs-storage            3d19h
Log in to 192.168.30.114 and check whether the file just created is present under the NFS directory.
# ll /data/volumes/default-test-claim-pvc-6324e17a-0a33-4a64-b0bb-e187f51a8f30/   # directories are named ${namespace}-${pvcName}-${pvName}
total 0
-rw-r--r-- 1 root root 0 Dec  3 16:16 SUCCESS   # this file proves the setup works
vim nginx-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None # None makes this a headless Service
  selector:
    app: nginx
  ports:
    - name: web
      port: 80
      protocol: TCP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  podManagementPolicy: OrderedReady # Pods are created 0..N-1 and deleted N-1..0
  replicas: 3 # three replicas
  revisionHistoryLimit: 10
  serviceName: nginx-headless
  selector:
    matchLabels:
      app: nginx
  template:
    metadata: # no name needed; it is generated automatically
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: registry-op.test.cn/nginx:1.14.9
          ports:
            - containerPort: 80
          volumeMounts:
            - name: web # must match the volumeClaimTemplate name
              mountPath: /var/www/html
      imagePullSecrets:
        - name: registry-op.test.cn
  volumeClaimTemplates:
    - metadata:
        name: web
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-nfs-storage # the StorageClass we created earlier
        volumeMode: Filesystem
        resources:
          requests:
            storage: 512M
kubectl apply -f nginx-statefulset.yaml
# kubectl get pods -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          22h
web-1   1/1     Running   0          22h
web-2   1/1     Running   0          22h
# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
web-web-0   Bound    pvc-1fa25092-9516-41aa-ac9d-0eabdabda849   512M       RWO            managed-nfs-storage   23h
web-web-1   Bound    pvc-90cf9923-e5d8-4195-bffb-b9e5f14c11ae   512M       RWO            managed-nfs-storage   23h
web-web-2   Bound    pvc-bd4eccfd-8b65-4135-83fe-d57f8a9d9a74   512M       RWO            managed-nfs-storage   23h
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS          REASON   AGE
pvc-1fa25092-9516-41aa-ac9d-0eabdabda849   512M       RWO            Retain           Bound    default/web-web-0   managed-nfs-storage            23h
pvc-90cf9923-e5d8-4195-bffb-b9e5f14c11ae   512M       RWO            Retain           Bound    default/web-web-1   managed-nfs-storage            23h
pvc-bd4eccfd-8b65-4135-83fe-d57f8a9d9a74   512M       RWO            Retain           Bound    default/web-web-2   managed-nfs-storage            23h
# On the NFS server:
# ll /data/volumes/
total 20
drwxrwxrwx 2 root root 4096 Dec  6 15:01 default-web-web-0-pvc-1fa25092-9516-41aa-ac9d-0eabdabda849
drwxrwxrwx 2 root root 4096 Dec  6 15:01 default-web-web-1-pvc-90cf9923-e5d8-4195-bffb-b9e5f14c11ae
drwxrwxrwx 2 root root 4096 Dec  6 15:02 default-web-web-2-pvc-bd4eccfd-8b65-4135-83fe-d57f8a9d9a74
# Write a distinct index.html into each of the three directories, as sketched below
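A sketch of that step, run on the NFS server (directory names come from the listing above; each file contains the owning Pod's name so the backends can be told apart):

cd /data/volumes
echo web-0 > default-web-web-0-pvc-1fa25092-9516-41aa-ac9d-0eabdabda849/index.html
echo web-1 > default-web-web-1-pvc-90cf9923-e5d8-4195-bffb-b9e5f14c11ae/index.html
echo web-2 > default-web-web-2-pvc-bd4eccfd-8b65-4135-83fe-d57f8a9d9a74/index.html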
# Deploy a curl pod to test the StatefulSet service we just created
# create the curl pod
kubectl run curl --image=radial/busyboxplus:curl -n default -i --tty
# exec into the curl container
kubectl exec -it curl -n default -- /bin/sh
---
[ root@curl:/ ]$ nslookup nginx-headless
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-headless
Address 1: 10.244.3.33 web-0.nginx-headless.default.svc.cluster.local
Address 2: 10.244.3.35 web-2.nginx-headless.default.svc.cluster.local
Address 3: 10.244.3.34 web-1.nginx-headless.default.svc.cluster.local

[ root@curl:/ ]$ curl -v http://10.244.3.33/index.html
> GET /index.html HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 10.244.3.33
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 07 Dec 2021 06:34:08 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 6
< Last-Modified: Mon, 06 Dec 2021 07:01:48 GMT
< Connection: keep-alive
< ETag: "61adb55c-6"
< Accept-Ranges: bytes
<
web-0   # the content written into web-0's index.html earlier

[ root@curl:/ ]$ curl -v http://10.244.3.34/index.html
> GET /index.html HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 10.244.3.34
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 07 Dec 2021 06:35:13 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 6
< Last-Modified: Mon, 06 Dec 2021 07:01:59 GMT
< Connection: keep-alive
< ETag: "61adb567-6"
< Accept-Ranges: bytes
<
web-1

[ root@curl:/ ]$ curl -v http://10.244.3.35/index.html
> GET /index.html HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 10.244.3.35
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 07 Dec 2021 06:35:29 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 6
< Last-Modified: Mon, 06 Dec 2021 07:02:07 GMT
< Connection: keep-alive
< ETag: "61adb56f-6"
< Accept-Ranges: bytes
<
web-2
# Follow-up exercises: delete a Pod and let it be recreated, then scale the StatefulSet up and down while watching the PVCs and PVs. You will see that a Pod's IP changes but its name does not, that it remains reachable, and that it still serves its original content.
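A sketch of that exercise (the expected effects are noted as comments):

kubectl delete pod web-1
kubectl get pods -w -l app=nginx             # web-1 is recreated with the same name
kubectl scale statefulset web --replicas=5   # scale out: web-3, web-4 are created in order
kubectl scale statefulset web --replicas=3   # scale in: web-4, then web-3 are removed
kubectl get pvc                              # the PVCs of web-3/web-4 are retained for reuse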
1. First configuration
archiveOnDelete: "false"
reclaimPolicy: Delete   # not set by default; the default value is Delete
# Test results
1. After a Pod is deleted and recreated, the data is still there; the old Pod's name and data are reused by the new Pod.
2. After the StorageClass is deleted and recreated, the data is still there; the old Pod's name and data are reused by the new Pod.
3. After the PVC is deleted, the PV is deleted and the corresponding data on the NFS server is deleted as well.
2. Second configuration
archiveOnDelete: "false"
reclaimPolicy: Retain
# Test results
1. After a Pod is deleted and recreated, the data is still there; the old Pod's name and data are reused by the new Pod.
2. After the StorageClass is deleted and recreated, the data is still there; the old Pod's name and data are reused by the new Pod.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is retained.
4. After the StorageClass is recreated, a new PVC binds a new PV; the old data can be recovered by copying it into the new PV.
3. Third configuration
archiveOnDelete: "true"
reclaimPolicy: Retain
# Test results
1. After a Pod is deleted and recreated, the data is still there; the old Pod's name and data are reused by the new Pod.
2. After the StorageClass is deleted and recreated, the data is still there; the old Pod's name and data are reused by the new Pod.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is retained.
4. After the StorageClass is recreated, a new PVC binds a new PV; the old data can be recovered by copying it into the new PV.
4. Fourth configuration
archiveOnDelete: "true"
reclaimPolicy: Delete
# Test results
1. After a Pod is deleted and recreated, the data is still there; the old Pod's name and data are reused by the new Pod.
2. After the StorageClass is deleted and recreated, the data is still there; the old Pod's name and data are reused by the new Pod.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is retained.
4. After the StorageClass is recreated, a new PVC binds a new PV; the old data can be recovered by copying it into the new PV.
Summary: with every configuration except the first, the data is retained after the PV/PVC is deleted.
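A note on why, based on the provisioner's documented behavior (the directory name below is illustrative): with archiveOnDelete: "true" the provisioner does not remove the backing directory when the volume goes away; it renames it with an archived- prefix, so the data stays recoverable on the NFS server:

# on the NFS server, after deleting test-claim:
# ll /data/volumes/
# drwxrwxrwx 2 root root 4096 ... archived-default-test-claim-pvc-6324e17a-0a33-4a64-b0bb-e187f51a8f30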
There are two ways to set a default StorageClass: with kubectl patch, or by declaring it directly in the YAML file.
#kubectl patch
# to set as default, patch the annotation to "true"
# kubectl patch storageclass managed-nfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# kubectl get sc   # "(default)" now appears after the name
NAME                            PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage (default)   test-nfs-storage   Retain          Immediate           false                  28h

# to unset the default, patch the annotation to "false"
# kubectl patch storageclass managed-nfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# kubectl get sc
NAME                  PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   test-nfs-storage   Retain          Immediate           false                  28h
# YAML file
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    "storageclass.kubernetes.io/is-default-class": "true" # this annotation makes it the default StorageClass
provisioner: test-nfs-storage # must match the PROVISIONER_NAME env var in the provisioner Deployment
parameters:
  # archiveOnDelete: "false"
  archiveOnDelete: "true"
reclaimPolicy: Retain
With a default StorageClass in place, a PVC no longer needs to name the class:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-www
  # annotations:
  #   volume.beta.kubernetes.io/storage-class: "managed-nfs-storage" # no longer needed with a default StorageClass
spec:
  # storageClassName: "managed-nfs-storage" # no longer needed with a default StorageClass
  accessModes:
    - ReadWriteMany
    #- ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
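Apply it and confirm the claim binds through the default class even though none is named (the file name test-www.yaml is an assumption here):

kubectl apply -f test-www.yaml
kubectl get pvc test-www   # the STORAGECLASS column should show managed-nfs-storage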