• Quick deployment of Ceph storage


    Preface: a brief introduction to Ceph

    Ceph is a unified distributed storage system designed to deliver good performance, reliability, and scalability.

    Key features of Ceph:

    1. High performance:

    a. It abandons the traditional centralized metadata-addressing scheme and uses the CRUSH algorithm instead, which gives a balanced data distribution and a high degree of parallelism.

    b. Failure-domain isolation is taken into account, so replica placement rules can be defined for all kinds of workloads, for example cross-datacenter or rack-aware placement.

    c. It scales to thousands of storage nodes and supports data volumes from terabytes to petabytes.

    2. High availability:

    a. The number of replicas can be controlled flexibly.

    b. Failure domains can be separated, and data is kept strongly consistent.

    c. Many kinds of failures are repaired and healed automatically.

    d. There is no single point of failure, and the cluster manages itself.

    3. High scalability:

    a. Decentralized architecture.

    b. Flexible expansion.

    c. Capacity and performance grow roughly linearly as nodes are added.

    4. Rich feature set:

    a. Three storage interfaces are supported: block storage, file storage, and object storage.

    b. Custom interfaces are supported, along with client drivers for multiple languages.

    The core of Ceph is RADOS, which provides a highly reliable, high-performance, fully distributed object storage service. Object placement can be driven by the real-time state of every node in the cluster, and custom failure domains can be defined to adjust how data is distributed. Block devices and files are both wrapped as objects, and an object is an abstract data type with security and strong-consistency semantics; this is what lets RADOS perform dynamic data and load balancing across large heterogeneous storage clusters. An Object Storage Device (OSD) is the basic storage unit of a RADOS cluster. Its main jobs are storing, replicating, and recovering data, along with load balancing and heartbeat checks with the other OSDs. A disk usually maps to one OSD, which manages that disk's storage, although a single partition can also serve as an OSD; every OSD provides a complete local object store with strong-consistency semantics. The MDS is the metadata server: it handles the metadata requests issued by CephFS and translates clients' file requests into object requests. A RADOS cluster can run multiple MDS daemons to share the metadata-lookup workload.
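
    Once the cluster built later in this guide is up, the daemons described above can be inspected with a few standard commands (a minimal sketch, not part of the original walkthrough):

    ceph osd stat      # summary: how many OSDs exist, are up, and are in
    ceph osd tree      # the OSD hierarchy as CRUSH sees it (hosts, disks, up/down state)
    ceph mds stat      # MDS state for CephFS (active / standby daemons)
    ceph -s            # overall cluster health, including mon, mgr, osd and mds sections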

    (Figure: topology diagram of the working environment)

    Data placement algorithm:

    The key to RADOS's scalability is that it completely discards the central metadata node found in traditional storage systems and instead uses CRUSH, a controlled replica-distribution algorithm based on scalable hashing. With CRUSH, a client can compute by itself which OSDs hold the object it wants to access. Compared with earlier approaches, CRUSH's data-management scheme is better because it spreads the work across all clients and OSDs in the cluster, which gives it enormous scalability. CRUSH uses intelligent data replication to ensure resilience, making it well suited to very large-scale storage. As the figure shows, the mappings from file to object and from object to PG (Placement Group) are purely logical, while the mapping from PG to OSD is computed with CRUSH, so the corresponding data can still be located when cluster nodes are added or removed.
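
    You can watch this mapping happen on a running cluster; the pool and object names below are placeholders (a sketch, not from the original article):

    # Ask the cluster which PG and which OSDs CRUSH maps an object name to
    ceph osd map cephfs-data myfile.bin
    # Example output (the IDs depend on your cluster):
    # osdmap e123 pool 'cephfs-data' (3) object 'myfile.bin' -> pg 3.a1b2c3d4 (3.34) -> up ([2,0,1], p2) acting ([2,0,1], p2)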

    Three types of access interface:

    Object: a native API, also compatible with the Swift and S3 APIs.

    Block: supports thin provisioning, snapshots, and clones.

    File: a POSIX interface, with snapshot support.
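
    As a rough illustration of the three interfaces from the command line (the pool and image names are hypothetical, and the commands assume a running cluster like the one built below):

    # Object: store an object in a pool through the native rados CLI
    rados -p mypool put hello.txt ./hello.txt
    # Block: create a thin-provisioned RBD image and snapshot it
    rbd create mypool/disk01 --size 1024
    rbd snap create mypool/disk01@snap1
    # File: mount CephFS through the kernel client for POSIX access
    mount -t ceph ceph1:6789:/ /mnt/cephfs -o name=admin,secret=<key>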

    Now let's prepare the deployment environment. Normally five servers are used, a master plus four nodes; to keep costs down we use three here.

    master  192.168.80.10  (ceph1)
    node1   192.168.80.7   (ceph2)
    node2   192.168.80.11  (ceph3)

    Each host needs two extra disks attached; choose the disk size yourself.
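
    Before going further it is worth confirming that the extra disks are visible on every host (a quick check, not in the original article):

    lsblk -d -o NAME,SIZE,TYPE    # the extra disks (e.g. sdb, sdc) should appear, with no partitions on them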

     The offline packages and images used below are shared via Baidu Netdisk; the extraction code is iiii.

    Preparation (these steps are run on all three hosts).

    First download the installation packages mentioned above, then install some common tools:

    [root@ceph1 ~]# yum install net-tools
    yum -y install lrzsz unzip zip
    // Install the commonly used tools

    [root@ceph1 ~]# cat /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.80.10 ceph1
    192.168.80.7 ceph2
    192.168.80.11 ceph3
    // Add these hostname entries on all three hosts

    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
    // Disable the firewall, SELinux, and the swap partition

    [root@ceph1 ~]# ulimit -SHn 65535
    [root@ceph1 ~]# cat /etc/security/limits.conf
    # End of file
    * soft nofile 65535
    * hard nofile 65535
    * soft nproc 65535
    * hard nproc 65535
    // Append these limits to the end of the file

    [root@ceph1 ~]# cat /etc/sysctl.conf
    # sysctl settings are defined through files in
    # /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
    #
    # Vendors settings live in /usr/lib/sysctl.d/.
    # To override a whole file, create a new file with the same name in
    # /etc/sysctl.d/ and put new settings there. To override
    # only specific settings, add a file with a lexically later
    # name in /etc/sysctl.d/ and put new settings there.
    #
    # For more information, see sysctl.conf(5) and sysctl.d(5).
    kernel.pid_max = 4194303
    net.ipv4.tcp_tw_recycle = 0
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.ip_local_port_range = 1024 65000
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_tw_buckets = 20480
    net.ipv4.tcp_max_syn_backlog = 20480
    net.core.netdev_max_backlog = 262144
    net.ipv4.tcp_fin_timeout = 20
    // Append these parameters to the end of the file

    [root@ceph1 ~]# sysctl -p
    kernel.pid_max = 4194303
    net.ipv4.tcp_tw_recycle = 0
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.ip_local_port_range = 1024 65000
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_tw_buckets = 20480
    net.ipv4.tcp_max_syn_backlog = 20480
    net.core.netdev_max_backlog = 262144
    net.ipv4.tcp_fin_timeout = 20
    // Verify that the settings are loaded
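
    A quick way to confirm that these settings are in effect on each host (a minimal check, not part of the original article):

    getenforce                            # should print Permissive or Disabled
    systemctl is-active firewalld         # should not report active
    free -m                               # the Swap line should show 0 total
    ulimit -n                             # should print 65535 in a new login shell
    sysctl net.ipv4.tcp_max_syn_backlog   # should print 20480
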
    Add the yum repository on the master node, ceph1.

    [root@ceph1 ~]# cd /opt/
    [root@ceph1 opt]# ls
    ceph_images ceph-pkg containerd data
    // Copy the Ceph package archive into /opt and unpack it in advance; the images are unpacked later
    mkdir /etc/yum.repos.d.bak/ -p
    mv /etc/yum.repos.d/* /etc/yum.repos.d.bak/
    [root@ceph1 ~]# cd /etc/yum.repos.d
    [root@ceph1 yum.repos.d]# cat ceph.repo
    [ceph]
    name=ceph
    baseurl=file:///opt/ceph-pkg/
    gpgcheck=0
    enabled=1
    [root@ceph1 opt]# yum makecache
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    ceph | 2.9 kB 00:00:00
    Metadata Cache Created
    // Build the metadata cache
    yum install -y vsftpd
    echo "anon_root=/opt/" >> /etc/vsftpd/vsftpd.conf
    systemctl enable --now vsftpd
    // Install vsftpd so the other nodes can reach this repository over FTP
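
    To confirm the FTP-exported repository is reachable before touching the other nodes, something like the following can be used (a sketch, not in the original article):

    # From ceph2 or ceph3: list the exported directory over FTP
    curl -s ftp://192.168.80.10/ceph-pkg/ | head
    # On ceph1: the local repository should already resolve
    yum repolist enabled
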
    Switch the two other nodes over to ceph1's repository.

    mkdir /etc/yum.repos.d.bak/ -p
    mv /etc/yum.repos.d/* /etc/yum.repos.d.bak/
    [root@ceph2 ~]# cd /etc/yum.repos.d
    [root@ceph2 yum.repos.d]# cat ceph.repo
    [ceph]
    name=ceph
    baseurl=ftp://192.168.80.10/ceph-pkg/
    gpgcheck=0
    enabled=1
    [root@ceph3 ~]# cat /etc/yum.repos.d/ceph.repo
    [ceph]
    name=ceph
    baseurl=ftp://192.168.80.10/ceph-pkg/
    gpgcheck=0
    enabled=1
    yum clean all
    yum makecache
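
    On ceph2 and ceph3 the new repository should now show up (a quick check, not from the original article):

    yum repolist        # the 'ceph' repo should be listed with a non-zero package count
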
    Install the time source; ceph1, ceph2 and ceph3 are configured at the same time.

    yum install -y chrony
    // Install chrony
    [root@ceph1 ~]# cat /etc/chrony.conf
    # Use public servers from the pool.ntp.org project.
    # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    server 0.centos.pool.ntp.org iburst
    server 1.centos.pool.ntp.org iburst
    server 2.centos.pool.ntp.org iburst
    server 3.centos.pool.ntp.org iburst
    server 192.168.80.10 iburst
    allow all
    local stratum 10
    [root@ceph2 ~]# cat /etc/chrony.conf
    # Use public servers from the pool.ntp.org project.
    # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    server 0.centos.pool.ntp.org iburst
    server 1.centos.pool.ntp.org iburst
    server 2.centos.pool.ntp.org iburst
    server 3.centos.pool.ntp.org iburst
    server 192.168.80.10 iburst
    allow all
    local stratum 10
    [root@ceph3 ~]# cat /etc/chrony.conf
    # Use public servers from the pool.ntp.org project.
    # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    server 0.centos.pool.ntp.org iburst
    server 1.centos.pool.ntp.org iburst
    server 2.centos.pool.ntp.org iburst
    server 3.centos.pool.ntp.org iburst
    server 192.168.80.10 iburst
    systemctl restart chronyd
    clock -w
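
    Time synchronization can be verified on each node before moving on (a small check, not in the original article):

    chronyc sources -v       # 192.168.80.10 should appear as a source; '^*' marks the one currently selected
    timedatectl              # "NTP synchronized: yes" once the clocks have converged
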
    // Install Docker on ceph1, ceph2 and ceph3
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum -y install docker-ce python3
    systemctl enable --now docker
    // Start Docker and enable it at boot
    [root@ceph1 ~]# systemctl status docker
    ● docker.service - Docker Application Container Engine
    Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
    Active: active (running) since Sat 2022-10-01 12:06:58 CST; 12min ago
    Docs: https://docs.docker.com
    Main PID: 1411 (dockerd)
    Tasks: 24
    Memory: 138.6M
    CGroup: /system.slice/docker.service
    └─1411 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    Oct 01 12:07:24 ceph1 dockerd[1411]: time="2022-10-01T12:07:24.692606616+08:00" level=info msg="ignoring event" cont...elete"
    Oct 01 12:07:25 ceph1 dockerd[1411]: time="2022-10-01T12:07:25.280448528+08:00" level=info msg="ignoring event" cont...elete"
    Oct 01 12:07:27 ceph1 dockerd[1411]: time="2022-10-01T12:07:27.053867172+08:00" level=info msg="ignoring event" cont...elete"
    Oct 01 12:07:27 ceph1 dockerd[1411]: time="2022-10-01T12:07:27.697494308+08:00" level=info msg="ignoring event" cont...elete"
    Oct 01 12:07:28 ceph1 dockerd[1411]: time="2022-10-01T12:07:28.274449001+08:00" level=info msg="ignoring event" cont...elete"
    Oct 01 12:07:28 ceph1 dockerd[1411]: time="2022-10-01T12:07:28.811756697+08:00" level=info msg="ignoring event" cont...elete"
    Oct 01 12:17:32 ceph1 dockerd[1411]: time="2022-10-01T12:17:32.608800965+08:00" level=info msg="ignoring event" cont...elete"
    Oct 01 12:17:33 ceph1 dockerd[1411]: time="2022-10-01T12:17:33.219899151+08:00" level=info msg="ignoring event" cont...elete"
    Oct 01 12:17:33 ceph1 dockerd[1411]: time="2022-10-01T12:17:33.758334094+08:00" level=info msg="ignoring event" cont...elete"
    Oct 01 12:17:34 ceph1 dockerd[1411]: time="2022-10-01T12:17:34.296629723+08:00" level=info msg="ignoring event" cont...elete"
    Hint: Some lines were ellipsized, use -l to show in full.
    // Install cephadm
    yum install -y cephadm
    // Load the images into Docker
    [root@ceph1 ~]# cd /opt/ceph_images/
    [root@ceph1 ceph_images]# for i in `ls`;do docker load -i $i;done
    Loaded image: quay.io/ceph/ceph-grafana:6.7.4
    Loaded image: quay.io/ceph/ceph:v15
    Loaded image: quay.io/prometheus/alertmanager:v0.20.0
    Loaded image: quay.io/prometheus/node-exporter:v0.18.1
    Loaded image: quay.io/prometheus/prometheus:v2.18.1
    // This has to be done on all three hosts
    // Bootstrap the initial mon node
    // (create /etc/ceph on ceph1, ceph2 and ceph3; the bootstrap itself is run on ceph1)
    mkdir -p /etc/ceph
    cephadm bootstrap --mon-ip 192.168.80.10 --skip-pull
    // The bootstrap must be run after the directory has been created
    https://docs.ceph.com/docs/master/mgr/telemetry/
    // When the output ends with this line, the bootstrap has succeeded
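
    After the bootstrap it is worth confirming that the first mon and mgr containers are actually up (a sanity check, not part of the original article):

    [root@ceph1 ~]# docker ps --format '{{.Names}}'    # expect mon, mgr, prometheus, grafana, etc. containers for this cluster
    [root@ceph1 ~]# cephadm shell -- ceph -s           # run ceph -s inside a temporary ceph container
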
    // Install the ceph-common tools (on ceph1)
    yum install -y ceph-common
    // Set up passwordless SSH to the other nodes
    [root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
    [root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3
    // Add the hosts to the cluster
    [root@ceph1 ~]# ceph orch host add ceph1
    Added host 'ceph1'
    [root@ceph1 ~]# ceph orch host add ceph2
    Added host 'ceph2'
    [root@ceph1 ~]# ceph orch host add ceph3
    Added host 'ceph3'
    // The master and the two other nodes now form a cluster
    ceph config set mon public_network 192.168.80.0/24
    // Set the public network for the monitors
    [root@ceph1 ~]# ceph orch apply mon ceph1,ceph2,ceph3
    Scheduled mon update...
    [root@ceph1 ~]# ceph orch host label add ceph1 mon
    [root@ceph1 ~]# ceph orch host label add ceph2 mon
    [root@ceph1 ~]# ceph orch host label add ceph3 mon
    // Label the mon hosts
    # Check the processes on node2 and node3
    ps -ef | grep docker
    // Verify
    [root@ceph1 ~]# ceph orch device ls
    Hostname Path Type Serial Size Health Ident Fault Available
    ceph1 /dev/sdb hdd 21.4G Unknown N/A N/A Yes
    ceph2 /dev/sdb hdd 21.4G Unknown N/A N/A Yes
    ceph3 /dev/sdb hdd 21.4G Unknown N/A N/A Yes
    systemctl restart ceph.target
    // Restart the Ceph services
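
    The orchestrator's view of the cluster can be double-checked at this point (a quick check, not from the original article):

    [root@ceph1 ~]# ceph orch host ls      # all three hosts should be listed, with the mon label
    [root@ceph1 ~]# ceph orch ps           # the daemons cephadm has deployed on each host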

    11. Deploy the OSDs
    The OSDs store the data.

     

    # List the available disk devices
    [root@ceph1 ~]# ceph orch device ls
    Hostname Path Type Serial Size Health Ident Fault Available
    ceph1 /dev/sdb hdd 21.4G Unknown N/A N/A Yes
    ceph2 /dev/sdb hdd 21.4G Unknown N/A N/A Yes
    ceph3 /dev/sdb hdd 21.4G Unknown N/A N/A Yes
    # Add them to the Ceph cluster; OSDs are created automatically on every unused device
    [root@ceph1 ~]# ceph orch apply osd --all-available-devices
    Scheduled osd.all-available-devices update...
    # Check the OSD disks
    ceph -s
    ceph df
    [root@ceph1 ~]# ceph -s
      cluster:
        id:     2b0a47ba-3e32-11ed-87cb-000c29bdbaa4
        health: HEALTH_WARN
                2 failed cephadm daemon(s)
                1 filesystem is degraded
                1 MDSs report slow metadata IOs
                1 osds down
                1 host (1 osds) down
                Reduced data availability: 97 pgs inactive
                Degraded data redundancy: 44/66 objects degraded (66.667%), 13 pgs degraded, 97 pgs undersized
                2 slow ops, oldest one blocked for 1636 sec, mon.ceph1 has slow ops
      services:
        mon: 1 daemons, quorum ceph1 (age 27m)
        mgr: ceph3.ikzpva(active, since 27m), standbys: ceph1.xixxhu
        mds: cephfs:1/1 {0=cephfs.ceph1.ffzbyq=up:replay} 2 up:standby
        osd: 3 osds: 1 up (since 27m), 2 in (since 3d)
      data:
        pools:   3 pools, 97 pgs
        objects: 22 objects, 6.9 KiB
        usage:   2.0 GiB used, 38 GiB / 40 GiB avail
        pgs:     100.000% pgs not active
                 44/66 objects degraded (66.667%)
                 84 undersized+peered
                 13 undersized+degraded+peered
    [root@ceph1 ~]# ceph df
    --- RAW STORAGE ---
    CLASS  SIZE    AVAIL   USED    RAW USED  %RAW USED
    hdd    40 GiB  38 GiB  39 MiB  2.0 GiB   5.10
    TOTAL  40 GiB  38 GiB  39 MiB  2.0 GiB   5.10
    --- POOLS ---
    POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
    cephfs-metadata        1   32   113 KiB  22       1.6 MiB  0      54 GiB
    device_health_metrics  2   1    0 B      0        0 B      0      18 GiB
    cephfs-data            3   64   0 B      0        0 B      0      18 GiB
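
    Individual OSDs and their usage can also be inspected (a quick check, not in the original article):

    ceph osd tree     # every host should eventually show its osd.N as 'up'
    ceph osd df       # per-OSD capacity, usage and PG count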

    12. Deploy the MDS
    The MDS stores the metadata.
    CephFS needs two pools, cephfs-data and cephfs-metadata, which store the file data and the file metadata respectively.

    ceph osd pool create cephfs-metadata 32 32
    ceph osd pool create cephfs-data 64 64
    ceph fs new cephfs cephfs-metadata cephfs-data
    # Check the cephfs filesystem
    [root@ceph1 ~]# ceph fs ls
    name: cephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
    [root@ceph1 ~]# ceph orch apply mds cephfs --placement="3 ceph1 ceph2 ceph3"
    Scheduled mds.cephfs update...
    # Check that there are three MDS daemons, two of them in standby
    ceph -s
    [root@ceph1 ~]# ceph orch apply rgw rgw01 zone01 --placement="3 ceph1 ceph2 ceph3"
    Scheduled rgw.rgw01.zone01 update...
    [root@ceph1 ~]# ceph orch ls
    NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID
    alertmanager 1/1 31s ago 3d count:1 quay.io/prometheus/alertmanager:v0.20.0 0881eb8f169f
    crash 3/3 56s ago 3d * quay.io/ceph/ceph:v15 93146564743f
    grafana 1/1 31s ago 3d count:1 quay.io/ceph/ceph-grafana:6.7.4 557c83e11646
    mds.cephfs 3/3 56s ago 58s ceph1;ceph2;ceph3;count:3 quay.io/ceph/ceph:v15 93146564743f
    mgr 2/2 31s ago 3d count:2 quay.io/ceph/ceph:v15 93146564743f
    mon 1/3 56s ago 10m ceph1;ceph2;ceph3 quay.io/ceph/ceph:v15 mix
    node-exporter 1/3 31s ago 3d * quay.io/prometheus/node-exporter:v0.18.1 e5a616e4b9cf
    osd.all-available-devices 3/3 56s ago 3m * quay.io/ceph/ceph:v15 93146564743f
    prometheus 1/1 31s ago 3d count:1 quay.io/prometheus/prometheus:v2.18.1 de242295e225
    rgw.rgw01.zone01 2/3 56s ago 33s ceph1;ceph2;ceph3;count:3 quay.io/ceph/ceph:v15 93146564743f
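
    The state of the new filesystem and its MDS daemons can be confirmed directly (a quick check, not part of the original article):

    ceph mds stat              # expect one active MDS for cephfs and two standbys
    ceph fs status cephfs      # per-rank MDS state plus the data/metadata pool usage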

    # Create an authorized account for the clients to use
     

    [root@ceph1 ~]# ceph auth get-or-create client.fsclient mon 'allow r' mds 'allow rw' osd 'allow rwx pool=cephfs-data' -o ceph.client.fsclient.keyring
    // -o writes the client keyring to a file. This is one approach: create the keyring file first,
    // then export the bare key from it and hand that to the clients
    # Extract the key
    [root@ceph1 ~]# ceph auth print-key client.fsclient > fsclient.key
    # Copy it to the clients (a client may not have an /etc/ceph directory yet; create it with mkdir /etc/ceph/)
    [root@ceph1 ~]# scp fsclient.key root@ceph2:/etc/ceph/
    [root@ceph1 ~]# scp fsclient.key root@ceph3:/etc/ceph/
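
    To double-check what the new client account is allowed to do (a quick check, not from the original article):

    ceph auth get client.fsclient     # prints the key plus the mon/mds/osd caps granted above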

    Client usage

    # Mount and use CephFS (on the ceph2 or ceph3 node)
    yum -y install ceph-common
    # Check the ceph kernel module
    [root@ceph1 ~]# modinfo ceph
    filename: /lib/modules/3.10.0-1160.el7.x86_64/kernel/fs/ceph/ceph.ko.xz
    license: GPL
    description: Ceph filesystem for Linux
    author: Patience Warnick
    author: Yehuda Sadeh
    author: Sage Weil
    alias: fs-ceph
    retpoline: Y
    rhelversion: 7.9
    srcversion: EB765DDC1F7F8219F09D34C
    depends: libceph
    intree: Y
    vermagic: 3.10.0-1160.el7.x86_64 SMP mod_unload modversions
    signer: CentOS Linux kernel signing key
    sig_key: E1:FD:B0:E2:A7:E8:61:A1:D1:CA:80:A2:3D:CF:0D:BA:3A:A4:AD:F5
    sig_hashalgo: sha256
    # Check that the key authorized above is present; without it the client cannot authenticate
    ls /etc/ceph/
    [root@ceph1 ~]# ls /etc/ceph/
    ceph.client.admin.keyring ceph.client.fsclient.keyring ceph.conf ceph.pub fsclient.key rbdmap
    # Create the mount point
    mkdir /cephfs
    # Mount
    [root@ceph1 ~]# mount -t ceph ceph1:6789,ceph2:6789,ceph3:6789:/ /cephfs -o name=fsclient,secretfile=/etc/ceph/fsclient.key
    # Check whether the mount succeeded
    [root@ceph1 ~]# df -TH
    Filesystem Type Size Used Avail Use% Mounted on
    devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
    tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
    tmpfs tmpfs 2.0G 38M 2.0G 2% /run
    tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
    /dev/mapper/centos-root xfs 54G 11G 43G 21% /
    /dev/sda1 xfs 1.1G 158M 906M 15% /boot
    /dev/mapper/centos-home xfs 45G 34M 45G 1% /home
    overlay overlay 54G 11G 43G 21% /var/lib/docker/overlay2/45a533a79429e5b246e6def07f055ab74f5dc61285bd3c11545dc9c916196651/merged
    overlay overlay 54G 11G 43G 21% /var/lib/docker/overlay2/9aee54ff92cad9256ab432d3b3faba73889d1deb6d27bd970cd5cf17c3223ff7/merged
    overlay overlay 54G 11G 43G 21% /var/lib/docker/overlay2/2554e96032aeca13ed847b587dfc8633398cb808a0b27cb9962f669acf8775b1/merged
    overlay overlay 54G 11G 43G 21% /var/lib/docker/overlay2/958178579d45c6cbadad049097d057ec6a4547d6afbe5d1b9abec8b8aed8a64f/merged
    overlay overlay 54G 11G 43G 21% /var/lib/docker/overlay2/6477aa1cb3b34f694d1092a2b6f0dd198db45bf74f3203de49916e18e567c4ea/merged
    tmpfs tmpfs 396M 0 396M 0% /run/user/0
    overlay overlay 54G 11G 43G 21% /var/lib/docker/overlay2/aa0f011762164ffe210eb54d25bce0da97d58e284a4b00ff9e633be43c5babef/merged
    overlay overlay 54G 11G 43G 21% /var/lib/docker/overlay2/2bcca822e8dd204d897abdae6e954b9ca35681c890782661877a6724c4c152dd/merged
    overlay overlay 54G 11G 43G 21% /var/lib/docker/overlay2/2fcc501129c6abb0c93a0fe078eb588239ec323353bfb32639bf0a0855d94e38/merged
    overlay overlay 54G 11G 43G 21% /var/lib/docker/overlay2/e78a4e1045a64349c08be6575f0df1b7152960e697933d838fd7103edc49cd26/merged
    # Configure a persistent mount
    vim /etc/fstab
    ceph1:6789,ceph2:6789,ceph3:6789:/ /cephfs ceph name=fsclient,secretfile=/etc/ceph/fsclient.key,_netdev,noatime 0 0
    # _netdev marks this as a network mount, so it is skipped when the network is not available
    # noatime improves file performance by not updating the access timestamp on every read
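
    A simple functional test of the mount (a sketch, not part of the original article):

    mount -a                              # re-read /etc/fstab; should mount /cephfs without errors
    echo "hello cephfs" > /cephfs/test.txt
    df -h /cephfs                         # the ceph filesystem and its capacity should be shown
    ls -l /cephfs                         # test.txt is visible from every client that mounts cephfs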

  • Original article: https://blog.csdn.net/Mingzi540/article/details/127129464