Rocky 8.5 (download link); the 8.6 image currently ships without the sudo package, so 8.5 is used here. The Ceph release is Quincy.
sudo yum install -y git python3.8 yum-utils
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo groupadd docker
sudo gpasswd -a $USER docker
newgrp docker
sudo systemctl enable docker --now
docker version
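To confirm that the group change took effect and containers start without sudo, a quick smoke test can be run (hello-world is just a throwaway test image, not needed by Ceph; it requires network access to pull):
docker run --rm hello-world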
The cephadm documentation offers two simple installation methods (official link), but the first is not supported on Rocky, and the repo used by the second fails to load. There is also a manual installation method, described below (official link).
In the repo file, replace {ceph-release} with the Ceph release name (quincy here) and {distro} with the identifier for your operating system; the output of uname -r can help identify it (it ends in el8 on Rocky 8). A substitution example follows the repo file below.
cat << 'EOF' | sudo tee /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/SRPMS
enabled=0
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
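The file written above still contains the literal {ceph-release} and {distro} placeholders. Assuming the quincy release on an el8-based distribution such as Rocky 8 (adjust both values to match your setup), they can be filled in with a quick substitution:
sudo sed -i 's/{ceph-release}/quincy/g; s/{distro}/el8/g' /etc/yum.repos.d/ceph.repo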
# install cephadm
sudo yum install -y cephadm
# verify that cephadm is on PATH
which cephadm
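Optionally, cephadm can also report the Ceph version it targets; depending on the cephadm build, this may need to pull the default Ceph container image first:
sudo cephadm version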
sudo mkdir -p /etc/ceph
# write a single-node ceph configuration: allow pools with only one replica
cat <<EOF > initial-ceph.conf
[global]
osd pool default size = 1
osd pool default min size = 1
EOF
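With a single replica, ceph -s will report HEALTH_WARN ('1 pool(s) have no replicas configured'), as the status output further down shows. For a throwaway test cluster that warning can optionally be silenced in the same file; mon_warn_on_pool_no_redundancy is a standard monitor option, but verify it exists in your Ceph release before relying on it:
cat <<EOF >> initial-ceph.conf
[mon]
mon_warn_on_pool_no_redundancy = false
EOF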
# bootstrap the cluster: create the mon and mgr daemons on this host
sudo cephadm bootstrap --config initial-ceph.conf --mon-ip 192.168.100.229
Bootstrap generates a new SSH key for the Ceph cluster and adds it to root's /root/.ssh/authorized_keys, writes a minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf, writes a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring, and writes a copy of the public key to /etc/ceph/ceph.pub. The command below is printed at the end of the bootstrap output.
sudo /sbin/cephadm shell --fsid ecf0f13c-f77c-11ec-ba77-fa163ecf9502 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
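On a host with a single cluster, cephadm can usually infer the fsid and config on its own (the 'Inferring fsid' lines in the output below show exactly that), so the explicit --fsid/-c/-k flags are optional here; the shorter forms open an interactive shell or run a one-off command:
sudo cephadm shell
sudo cephadm shell -- ceph -s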
osd
Here we use cephadm to run the ceph commands instead of a locally installed ceph client.
Check the Ceph status:
# show cluster status and health
sudo cephadm shell -- ceph -s
sudo cephadm shell -- ceph health
# list devices available for OSDs
sudo cephadm shell -- ceph orch device ls
sudo cephadm shell -- ceph orch apply osd --all-available-devices
Add an OSD explicitly (note that the apply command above may already have consumed the device, which is why the output below reports that no new OSD was created):
[rocky@ceph ~]$ sudo cephadm shell -- ceph orch daemon add osd ceph:/dev/vdb
Inferring fsid ecf0f13c-f77c-11ec-ba77-fa163ecf9502
Inferring config /var/lib/ceph/ecf0f13c-f77c-11ec-ba77-fa163ecf9502/mon.ceph/config
Using ceph image with id 'e5af760fa1c1' and tag 'v17' created on 2022-06-23 19:49:45 +0000 UTC
quay.io/ceph/ceph@sha256:d3f3e1b59a304a280a3a81641ca730982da141dad41e942631e4c5d88711a66b
Created no osd(s) on host ceph; already created?
Check the cluster status:
[rocky@ceph ~]$ sudo cephadm shell -- ceph -s
Inferring fsid ecf0f13c-f77c-11ec-ba77-fa163ecf9502
Inferring config /var/lib/ceph/ecf0f13c-f77c-11ec-ba77-fa163ecf9502/mon.ceph/config
Using ceph image with id 'e5af760fa1c1' and tag 'v17' created on 2022-06-23 19:49:45 +0000 UTC
quay.io/ceph/ceph@sha256:d3f3e1b59a304a280a3a81641ca730982da141dad41e942631e4c5d88711a66b
  cluster:
    id:     ecf0f13c-f77c-11ec-ba77-fa163ecf9502
    health: HEALTH_WARN
            1 pool(s) have no replicas configured

  services:
    mon: 1 daemons, quorum ceph (age 26m)
    mgr: ceph.watsbs(active, since 22m)
    osd: 2 osds: 2 up (since 10m), 2 in (since 11m)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   440 MiB used, 9.6 GiB / 10 GiB avail
    pgs:     1 active+clean
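As a final, optional sanity check, the OSD tree and overall capacity can be inspected the same way (both are standard ceph commands run through cephadm):
sudo cephadm shell -- ceph osd tree
sudo cephadm shell -- ceph df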