Using Open-Source Ceph as a Backend StorageClass for K8S



    1 Introduction

    K8S has supported using Ceph as a StorageClass since version 1.13. Both the cloud-native storage project Rook and standalone open-source Ceph are widely used; this article focuses on connecting K8S to an existing open-source Ceph cluster using RBD volumes.

    K8S interacts with Ceph mainly through the Container Storage Interface (CSI). The overall technology stack is illustrated in the Ceph documentation:

    https://docs.ceph.com/en/reef/rbd/rbd-kubernetes/

    The ceph-csi source is hosted at:

    https://github.com/ceph/ceph-csi/tree/release-v3.9

    Before deploying, confirm which ceph-csi release to use: the project page documents the compatibility between ceph-csi releases and K8S versions.

    The compatibility between ceph-csi and Ceph is covered in the support matrix:

    https://github.com/ceph/ceph-csi#support-matrix

    The environment used here runs K8S 1.24 with Ceph 14 (Nautilus), so ceph-csi v3.5.1 is used. The deployment steps follow.

    2 Creating Resources on the Ceph Side

    [root@ceph-1 ~]# ceph osd pool create k8s 64 64    ## create the k8s storage pool
    pool 'k8s' created
    [root@ceph-1 ~]# ceph auth get-or-create client.k8s mon 'profile rbd' osd 'profile rbd pool=k8s' mgr 'profile rbd pool=k8s'    ## create a Ceph user; the user name and key are needed later
    [client.k8s]
            key = AQBClIVj8usBLxAAxTl0DwZCz9prNRRRI9Bl5A==
    [root@ceph-1 ~]# ceph -s | grep id    ## look up the cluster fsid
        id:     395b7a30-eb33-460d-8e38-524fc48c58cb
    [root@ceph-1 ~]# ceph mon stat    ## list the mon addresses; the v1 IPs and ports (6789) are used below
    e3: 3 mons at {
      ceph-1=[v2:10.0.245.192:3300/0,v1:10.0.245.192:6789/0],
      ceph-2=[v2:10.0.138.175:3300/0,v1:10.0.138.175:6789/0],
      ceph-3=[v2:10.0.28.226:3300/0,v1:10.0.28.226:6789/0]},
    election epoch 1112, leader 0 ceph-1, quorum 0,1,2 ceph-1,ceph-2,ceph-3
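
    The Ceph documentation also recommends initializing a newly created pool for RBD before first use, and it is worth sanity-checking the new credentials before moving on. A minimal sketch, assuming the client.k8s keyring is exported to the default location (the smoke-test image name is arbitrary):

    [root@ceph-1 ~]# rbd pool init k8s    ## mark the new pool as an RBD pool
    [root@ceph-1 ~]# ceph auth get client.k8s -o /etc/ceph/ceph.client.k8s.keyring    ## export the keyring for the new user
    [root@ceph-1 ~]# rbd --id k8s create k8s/smoke-test --size 128    ## create a 128 MiB test image as client.k8s
    [root@ceph-1 ~]# rbd --id k8s rm k8s/smoke-test    ## clean up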


    3 Preparing the ceph-csi Deployment Environment

    The main steps are to download the official CSI deployment files and to create the ConfigMaps, ServiceAccount, and Secret the CSI driver needs. All files used below live in the ceph-csi/deploy/rbd/kubernetes/ directory; in addition, the following three files are created to hold the Ceph-related configuration:

    • csi-kms-config-map.yaml

    • ceph-config-map.yaml

    • csi-rbd-secret.yaml


    [root@k8s-master02 ~]# wget https://github.com/ceph/ceph-csi/archive/refs/tags/v3.5.1.tar.gz    ## Ceph 14 in this lab, so v3.5.1 offers good compatibility
    [root@k8s-master02 ~]# tar xvf v3.5.1.tar.gz
    [root@k8s-master02 ~]# mv ceph-csi-3.5.1 ceph-csi
    [root@k8s-master02 ~]# cd /root/ceph-csi/deploy/rbd/kubernetes/
    [root@k8s-master02 kubernetes]# cat csi-config-map.yaml    ## edit the config map
    #
    # /!\ DO NOT MODIFY THIS FILE
    #
    # This file has been automatically generated by Ceph-CSI yamlgen.
    # The source for the contents can be found in the api/deploy directory, make
    # your modifications there.
    #
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: "ceph-csi-config"
    data:
      config.json: |-
        [
          {
            "clusterID": "395b7a30-eb33-460d-8e38-524fc48c58cb",    ## the id reported by ceph -s
            "monitors": [
              "10.0.245.192:6789",    ## the three mon service addresses
              "10.0.138.175:6789",
              "10.0.28.226:6789"
            ]
          }
        ]
    [root@k8s-master02 kubernetes]# kubectl create ns ceph-csi    ## create the namespace
    namespace/ceph-csi created
    [root@k8s-master02 kubernetes]# kubectl -n ceph-csi create -f csi-config-map.yaml
    configmap/ceph-csi-config created
    [root@k8s-master02 kubernetes]# cat csi-kms-config-map.yaml    ## create the kms config file
    ---
    apiVersion: v1
    kind: ConfigMap
    data:
      config.json: |-
        {}
    metadata:
      name: ceph-csi-encryption-kms-config
    [root@k8s-master02 kubernetes]# kubectl create -n ceph-csi -f csi-kms-config-map.yaml
    configmap/ceph-csi-encryption-kms-config created
    [root@k8s-master02 kubernetes]# cat ceph-config-map.yaml    ## create a ceph-config file; the ceph.conf content must match /etc/ceph/ceph.conf on the Ceph cluster
    ---
    apiVersion: v1
    kind: ConfigMap
    data:
      ceph.conf: |
        [global]
        fsid = 395b7a30-eb33-460d-8e38-524fc48c58cb
        public_network = 10.0.0.0/16
        cluster_network = 10.0.0.0/16
        mon_initial_members = ceph-1
        mon_host = 10.0.245.192
        auth_cluster_required = cephx
        auth_service_required = cephx
        auth_client_required = cephx
        mon_allow_pool_delete = true
        auth_allow_insecure_global_id_reclaim = false
        rbd_default_format = 2
      # keyring is a required key and its value should be empty
      keyring: |
    metadata:
      name: ceph-config
    [root@k8s-master02 kubernetes]# kubectl -n ceph-csi create -f ceph-config-map.yaml
    configmap/ceph-config created
    [root@k8s-master02 kubernetes]# cat csi-rbd-secret.yaml    ## create the secret
    apiVersion: v1
    kind: Secret
    metadata:
      name: csi-rbd-secret
      namespace: ceph-csi
    stringData:
      userID: k8s    ## the user created on the Ceph cluster
      userKey: AQBClIVj8usBLxAAxTl0DwZCz9prNRRRI9Bl5A==    ## that user's key
    [root@k8s-master02 kubernetes]# kubectl create -f csi-rbd-secret.yaml
    secret/csi-rbd-secret created
    [root@k8s-master02 kubernetes]# sed -i "s/namespace: default/namespace: ceph-csi/g" $(grep -rl "namespace: default" ./)    ## switch every yaml file from the default namespace to ceph-csi
    [root@k8s-master02 kubernetes]# cat csi-provisioner-rbac.yaml    ## verify the namespace change took effect
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: rbd-csi-provisioner
      # replace with non-default namespace name
      namespace: ceph-csi
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: rbd-external-provisioner-runner
    ## create the RBAC objects
    [root@k8s-master02 kubernetes]# kubectl create -f csi-provisioner-rbac.yaml
    serviceaccount/rbd-csi-provisioner created
    clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
    clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
    role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
    rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
    [root@k8s-master02 kubernetes]# kubectl create -f csi-nodeplugin-rbac.yaml
    serviceaccount/rbd-csi-nodeplugin created
    clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
    clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
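
    Before deploying the driver pods, it is worth confirming that all of the objects above landed in the ceph-csi namespace. A quick check (the object names follow from the files just created):

    [root@k8s-master02 kubernetes]# kubectl -n ceph-csi get configmap ceph-csi-config ceph-csi-encryption-kms-config ceph-config
    [root@k8s-master02 kubernetes]# kubectl -n ceph-csi get secret csi-rbd-secret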

    4 Deploying the ceph-csi Containers

    The image registry referenced in the yaml files needs to be replaced first, otherwise the images (hosted on k8s.gcr.io) may fail to pull. If you deploy a different CSI version, you can mirror the corresponding k8s.gcr.io images yourself through Alibaba Cloud Container Registry.
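
    To see exactly which images and tags the two manifests reference (and therefore what needs to be mirrored), a quick grep works:

    [root@k8s-master02 kubernetes]# grep 'image:' csi-rbdplugin-provisioner.yaml csi-rbdplugin.yaml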

    [root@k8s-master02 kubernetes]# sed -i 's#k8s.gcr.io/sig-storage/#registry.cn-shanghai.aliyuncs.com/singless/#' csi-rbdplugin*    ## replace the image registry in the yaml files
    [root@k8s-master02 kubernetes]# kubectl -n ceph-csi create -f csi-rbdplugin-provisioner.yaml    ## deploy the provisioner and its sidecar containers
    service/csi-rbdplugin-provisioner created
    deployment.apps/csi-rbdplugin-provisioner created
    [root@k8s-master02 kubernetes]# kubectl -n ceph-csi create -f csi-rbdplugin.yaml    ## deploy the RBD CSI driver daemonset
    daemonset.apps/csi-rbdplugin created
    service/csi-metrics-rbdplugin created
    [root@k8s-master02 kubernetes]# kubectl get pod -n ceph-csi    ## check that all pods have started
    NAME                                         READY   STATUS    RESTARTS   AGE
    csi-rbdplugin-8s6cf                          3/3     Running   0          60m
    csi-rbdplugin-g74qd                          3/3     Running   0          60m
    csi-rbdplugin-provisioner-56d6d755c7-jhcwl   7/7     Running   0          14m
    csi-rbdplugin-provisioner-56d6d755c7-lz2zf   7/7     Running   0          14m
    csi-rbdplugin-provisioner-56d6d755c7-pxw7q   7/7     Running   0          14m
    csi-rbdplugin-twjdh                          3/3     Running   0          60m
    csi-rbdplugin-v529x                          3/3     Running   0          60m
    csi-rbdplugin-wgh5c                          3/3     Running   0          60m
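
    If any pod stays Pending or in CrashLoopBackOff, the usual suspects are an unreachable image registry or wrong mon addresses in ceph-csi-config. A sketch of where to look, assuming the daemonset and container names from the upstream v3.5.1 manifests:

    [root@k8s-master02 kubernetes]# kubectl -n ceph-csi describe pod <pod-name>    ## events show image-pull or scheduling problems
    [root@k8s-master02 kubernetes]# kubectl -n ceph-csi logs ds/csi-rbdplugin -c csi-rbdplugin --tail=20    ## driver logs reveal Ceph connectivity errors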

    5 Creating the StorageClass

    [root@k8s-master02 kubernetes]# cat storageclass.yaml
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-rbd-sc
    provisioner: rbd.csi.ceph.com
    parameters:
      clusterID: 395b7a30-eb33-460d-8e38-524fc48c58cb    ## the Ceph cluster ID
      pool: k8s    ## the Ceph pool name
      imageFeatures: layering    ## features enabled on newly created rbd images
      csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
      csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
      csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
      csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi
      csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
      csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
      csi.storage.k8s.io/fstype: ext4
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    mountOptions:
      - discard
    [root@k8s-master02 kubernetes]# kubectl create -f storageclass.yaml
    storageclass.storage.k8s.io/csi-rbd-sc created
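
    A quick check that the class registered with the expected provisioner:

    [root@k8s-master02 kubernetes]# kubectl get sc csi-rbd-sc

    On the design side, imageFeatures: layering is the conservative choice: it is supported by effectively every kernel RBD client, whereas richer feature sets can cause rbd map failures on nodes with older kernels.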

    6 Creating a PVC

    [root@k8s-master02 kubernetes]# cd /root/ceph-csi/examples/rbd/
    [root@k8s-master02 rbd]# kubectl create -f pvc.yaml
    persistentvolumeclaim/rbd-pvc created
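
    For reference, the example pvc.yaml shipped in the v3.5.1 examples requests a small ReadWriteOnce volume from the csi-rbd-sc StorageClass; it looks roughly like this:

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rbd-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-rbd-sc

    Once the claim binds, a PV is provisioned dynamically and a matching image appears in the Ceph pool:

    [root@k8s-master02 rbd]# kubectl get pvc rbd-pvc    ## STATUS should be Bound
    [root@ceph-1 ~]# rbd ls k8s    ## lists the dynamically provisioned csi-vol-... image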

    This article covered connecting K8S to Ceph with RBD block volumes; for object storage or file storage, refer to the official documentation.
