Note 337
Scenario: On CentOS 7.9, use the ceph command to inspect a Ceph cluster: overall cluster information plus the mon, mgr, mds, osd, and rgw components.
Versions:
Operating system: CentOS 7.9
Ceph: ceph-13.2.10
Glossary:
ceph: the command-line administration tool for a Ceph cluster.
1. Base environment
The ceph-deploy, ceph, and ceph-radosgw packages must be installed on the planned cluster hosts.
(1) Install packages on the primary node
Install command: yum install -y ceph-deploy ceph-13.2.10
Install command: yum install -y ceph-radosgw-13.2.10
Explanation: the cluster's primary node installs ceph-deploy, ceph, and ceph-radosgw.
(2) Install packages on the secondary nodes
Install command: yum install -y ceph-13.2.10
Install command: yum install -y ceph-radosgw-13.2.10
Explanation: the cluster's secondary nodes install ceph and ceph-radosgw.
2. Command usage
The ceph command is run on the cluster's primary node, from the /etc/ceph directory.
(1) Check the ceph version
Command: ceph --version
Explanation: shows the ceph version installed on the current host.
(2) Check cluster status
Command: ceph -s
Explanation: shows the cluster status; this is one of the most frequently used commands.
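Like most ceph subcommands, the status command also supports machine-readable output via the `-f json` option shown in `ceph --help`, which is handy for monitoring scripts. A minimal Python sketch of pulling the overall health out of that JSON; the sample document below is illustrative, not captured from a real cluster:

```python
import json

# Illustrative excerpt of `ceph -s -f json` output (values are made up).
sample = '''
{
  "fsid": "9f2d8a6e-0000-0000-0000-000000000000",
  "health": {"status": "HEALTH_OK"},
  "osdmap": {"osdmap": {"num_osds": 3, "num_up_osds": 3, "num_in_osds": 3}}
}
'''

status = json.loads(sample)
print(status["health"]["status"])              # overall cluster health
print(status["osdmap"]["osdmap"]["num_osds"])  # total OSD count
```

In practice the JSON would come from `subprocess.run(["ceph", "-s", "-f", "json"], ...)` rather than a literal string.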

(3) Watch cluster status in real time
Command: ceph -w
Explanation: like ceph -s, but keeps the console open and streams cluster changes as they happen.
(4) List mgr services
Command: ceph mgr services
Explanation: prints the service endpoints, e.g. "dashboard": "https://app162:18443/"; the dashboard can then be opened in a browser at that URL.
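The output of `ceph mgr services` is a small JSON object, so a script can extract the dashboard endpoint directly. A sketch; the sample string simply mirrors the endpoint quoted above:

```python
import json
from urllib.parse import urlparse

# Sample `ceph mgr services` JSON output, mirroring the note above.
sample = '{"dashboard": "https://app162:18443/"}'

services = json.loads(sample)
url = urlparse(services["dashboard"])
print(url.hostname, url.port)  # host and port to open in a browser
```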
(5) Summarize mon status
Command: ceph mon stat
Explanation: prints a one-line summary of monitor status.
(6) Summarize mds status
Command: ceph mds stat
Explanation: prints a one-line summary of MDS status.
(7) Summarize osd status
Command: ceph osd stat
Explanation: prints a one-line summary of OSD status.
(8) Create a pool
Command: ceph osd pool create hz_data 16
Explanation: creates a storage pool named hz_data with 16 placement groups (PGs).
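The PG count passed to `ceph osd pool create` is usually sized from the cluster rather than picked arbitrarily. A widely used rule of thumb (a heuristic, not something prescribed by this note) is roughly (number of OSDs x 100) / replica count, rounded up to a power of two:

```python
def suggest_pg_num(num_osds: int, replicas: int = 3, target_pgs_per_osd: int = 100) -> int:
    """Round (num_osds * target_pgs_per_osd) / replicas up to a power of two."""
    raw = num_osds * target_pgs_per_osd // replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# e.g. a small 3-OSD cluster with 3-way replication
print(suggest_pg_num(3))  # 128
```

Small test pools like hz_data above can use a much lower value such as 16; the heuristic matters for pools that will hold real data.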
(9) List pools
Command: ceph osd pool ls
Explanation: lists the existing storage pools.
(10) Get a pool's PG count
Command: ceph osd pool get hz_data pg_num
Explanation: shows the pool's pg_num value.
(11) Set a pool's PG count
Command: ceph osd pool set hz_data pg_num 18
Explanation: sets the pool's pg_num; in this release pg_num can only be increased, never decreased.
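Two constraints are worth checking before changing pg_num: in Ceph Mimic the monitor rejects a decrease, and non-power-of-two values (such as the 18 above) work but are generally discouraged. A small validation sketch:

```python
def validate_new_pg_num(current: int, new: int) -> None:
    """Sanity checks before `ceph osd pool set <pool> pg_num <new>`.

    In Ceph Mimic pg_num can only be increased; a decrease is rejected
    by the monitors. Non-power-of-two values are accepted but discouraged.
    """
    if new <= current:
        raise ValueError(f"pg_num can only be increased ({current} -> {new})")
    if new & (new - 1):
        print(f"warning: {new} is not a power of two")

validate_new_pg_num(16, 18)  # accepted, but warns that 18 is not a power of two
```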
(12) Delete a pool
Command: ceph osd pool delete hz_data hz_data --yes-i-really-really-mean-it
Explanation: when deleting a pool, the pool name must be given twice; the monitors must also have mon_allow_pool_delete set to true, or the request is refused.
(13) Create a Ceph filesystem
Command: ceph fs new hangzhoufs xihu_metadata xihu_data
Explanation: ceph fs new creates a Ceph filesystem named hangzhoufs backed by the metadata pool xihu_metadata and the data pool xihu_data; both pools must exist before the command is run.
(14) List Ceph filesystems
Command: ceph fs ls
Explanation: lists the Ceph filesystems, printing each filesystem's name and its pools.
(15) Show Ceph filesystem status
Command: ceph fs status
Explanation: shows filesystem status, including each pool's name, type, and usage.
(16) Delete a Ceph filesystem
Command: ceph fs rm hangzhoufs --yes-i-really-mean-it
Explanation: hangzhoufs is the name of a previously created filesystem; its MDS daemons must be stopped (or marked failed) before the filesystem can be removed.
(17) Show service status
Command: ceph service status
Explanation: dumps service state, including the time each service last reported in.
(18) Show quorum status
Command: ceph quorum_status
Explanation: reports the status of the monitor quorum.
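quorum_status prints JSON, so a script can check which monitors are in quorum and who the leader is. A sketch over an illustrative sample (the monitor names below are hypothetical):

```python
import json

# Illustrative excerpt of `ceph quorum_status` output (names are made up).
sample = '''
{
  "election_epoch": 7,
  "quorum": [0, 1, 2],
  "quorum_names": ["app161", "app162", "app163"],
  "quorum_leader_name": "app161"
}
'''

q = json.loads(sample)
print(f'leader={q["quorum_leader_name"]}, in quorum: {", ".join(q["quorum_names"])}')
```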
(19) Show PG status
Command: ceph pg stat
Explanation: shows placement group (PG) status.
(20) List PGs
Command: ceph pg ls
Explanation: lists all PGs.
(21) Show OSD disk usage
Command: ceph osd df
Explanation: prints per-OSD disk information, including capacity, available space, and used space.
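With `-f json`, osd df reports per-OSD kb, kb_used, and kb_avail fields, which makes utilization easy to compute in a script. A sketch over illustrative sample data:

```python
import json

# Illustrative excerpt of `ceph osd df -f json` output (values are made up).
sample = '''
{"nodes": [
  {"id": 0, "name": "osd.0", "kb": 104857600, "kb_used": 20971520, "kb_avail": 83886080},
  {"id": 1, "name": "osd.1", "kb": 104857600, "kb_used": 52428800, "kb_avail": 52428800}
]}
'''

for osd in json.loads(sample)["nodes"]:
    pct = 100.0 * osd["kb_used"] / osd["kb"]
    print(f'{osd["name"]}: {pct:.1f}% used')
```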
3. Built-in help
(1) ceph help command
Command: ceph --help
Explanation: lists every command and option ceph supports; in day-to-day work this built-in reference is indispensable.
General usage:
==============
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
            [--setuser SETUSER] [--setgroup SETGROUP] [--id CLIENT_ID]
            [--name CLIENT_NAME] [--cluster CLUSTER]
            [--admin-daemon ADMIN_SOCKET] [-s] [-w] [--watch-debug]
            [--watch-info] [--watch-sec] [--watch-warn] [--watch-error]
            [--watch-channel {cluster,audit,*}] [--version] [--verbose]
            [--concise] [-f {json,json-pretty,xml,xml-pretty,plain}]
            [--connect-timeout CLUSTER_TIMEOUT] [--block] [--period PERIOD]

Ceph administration tool

optional arguments:
  -h, --help            request mon help
  -c CEPHCONF, --conf CEPHCONF
                        ceph configuration file
  -i INPUT_FILE, --in-file INPUT_FILE
                        input file, or "-" for stdin
  -o OUTPUT_FILE, --out-file OUTPUT_FILE
                        output file, or "-" for stdout
  --setuser SETUSER     set user file permission
  --setgroup SETGROUP   set group file permission
  --id CLIENT_ID, --user CLIENT_ID
                        client id for authentication
  --name CLIENT_NAME, -n CLIENT_NAME
                        client name for authentication
  --cluster CLUSTER     cluster name
  --admin-daemon ADMIN_SOCKET
                        submit admin-socket commands ("help" for help)
  -s, --status          show cluster status
  -w, --watch           watch live cluster changes
  --watch-debug         watch debug events
  --watch-info          watch info events
  --watch-sec           watch security events
  --watch-warn          watch warn events
  --watch-error         watch error events
  --watch-channel {cluster,audit,*}
                        which log channel to follow when using -w/--watch. One
                        of ['cluster', 'audit', '*']
  --version, -v         display version
  --verbose             make verbose
  --concise             make less verbose
  -f {json,json-pretty,xml,xml-pretty,plain}, --format {json,json-pretty,xml,xml-pretty,plain}
  --connect-timeout CLUSTER_TIMEOUT
                        set a timeout for connecting to the cluster
  --block               block until completion (scrub and deep-scrub only)
  --period PERIOD, -p PERIOD
                        polling period, default 1.0 second (for polling
                        commands only)

Local commands:
===============

ping <mon.id>           Send simple presence/life test to a mon
                        may be 'mon.*' for all mons
daemon {type.id|path} <cmd>
                        Same as --admin-daemon, but auto-find admin socket
daemonperf {type.id | path} [stat-pats] [priority] [<interval>] [<count>]
daemonperf {type.id | path} list|ls [stat-pats] [priority]
                        Get selected perf stats from daemon/admin socket
                        Optional shell-glob comma-delim match string stat-pats
                        Optional selection priority (can abbreviate name):
                         critical, interesting, useful, noninteresting, debug
                        List shows a table of all available stats
                        Run <count> times (default forever),
                         once per <interval> seconds (default 1)

Monitor commands:
=================

auth add <entity> {<caps> [<caps>...]}  add auth info for <entity> from input file, or random key if no input is given, and/or any caps specified in the command
auth caps <entity> <caps> [<caps>...]  update caps for <name> from caps specified in the command
auth export {<entity>}  write keyring for requested entity, or master keyring if none given
auth get <entity>  write keyring file with requested key
auth get-key <entity>  display requested key
auth get-or-create <entity> {<caps> [<caps>...]}  add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
auth get-or-create-key <entity> {<caps> [<caps>...]}  get, or add, key for <entity> from system/caps pairs specified in the command; if key already exists, any given caps must match the existing caps for that key
auth import  read keyring file from -i <file>
auth ls  list authentication state
auth print-key <entity>  display requested key
auth print_key <entity>  display requested key
auth rm <entity>  remove all caps for <name>
balancer dump <plan>  show an optimization plan
balancer eval {<option>}  evaluate data distribution for the current cluster or specific pool or specific plan
balancer eval-verbose {<option>}  evaluate data distribution for the current cluster or specific pool or specific plan (verbosely)
balancer execute <plan>  execute an optimization plan
balancer ls  list all plans
balancer mode none|crush-compat|upmap  set balancer mode
balancer off  disable automatic balancing
balancer on  enable automatic balancing
balancer optimize <plan> {<pools> [<pools>...]}  run optimizer to create a new plan
balancer reset  discard all optimization plans
balancer rm <plan>  discard an optimization plan
balancer show <plan>  show details of an optimization plan
balancer sleep <interval>  set balancer sleep interval
balancer status  show balancer status
config assimilate-conf  assimilate options from a conf, and return a new, minimal conf file
config dump  show all configuration option(s)
config get <who> {<key>}  show configuration option(s) for an entity
config help <key>  describe a configuration option
config log {<num>}  show recent history of config changes
config reset <num>  revert configuration to previous state
config rm <who> <name>  clear a configuration option for one or more entities
config set <who> <name> <value>  set a configuration option for one or more entities
config show <who> {<key>}  show running configuration
config show-with-defaults <who>  show running configuration (including compiled-in defaults)
config-key dump {<key>}  dump keys and values (with optional prefix)
config-key exists <key>  check for <key>'s existence
config-key get <key>  get <key>
config-key ls  list keys
config-key rm <key>  rm <key>
config-key set <key> {<val>}  set <key> to value <val>
crash info <id>  show crash dump metadata
crash json_report <hours>  crashes in the last <hours> hours
crash ls  show saved crash dumps
crash post  add a crash dump (use -i <file>)
crash prune <keep>  remove crashes older than <keep> days
crash rm <id>  remove a saved crash
crash self-test  run a self test of the crash module
crash stat  summarize recorded crashes
dashboard create-self-signed-cert  create self signed certificate
dashboard get-enable-browsable-api  get the ENABLE_BROWSABLE_API option value
dashboard get-rest-requests-timeout  get the REST_REQUESTS_TIMEOUT option value
dashboard get-rgw-api-access-key  get the RGW_API_ACCESS_KEY option value
dashboard get-rgw-api-admin-resource  get the RGW_API_ADMIN_RESOURCE option value
dashboard get-rgw-api-host  get the RGW_API_HOST option value
dashboard get-rgw-api-port  get the RGW_API_PORT option value
dashboard get-rgw-api-scheme  get the RGW_API_SCHEME option value
dashboard get-rgw-api-secret-key  get the RGW_API_SECRET_KEY option value
dashboard get-rgw-api-ssl-verify  get the RGW_API_SSL_VERIFY option value
dashboard get-rgw-api-user-id  get the RGW_API_USER_ID option value
dashboard set-enable-browsable-api <value>  set the ENABLE_BROWSABLE_API option value
dashboard set-login-credentials <username> <password>  set the login credentials
dashboard set-rest-requests-timeout <seconds>  set the REST_REQUESTS_TIMEOUT option value
dashboard set-rgw-api-access-key <value>  set the RGW_API_ACCESS_KEY option value
dashboard set-rgw-api-admin-resource <value>  set the RGW_API_ADMIN_RESOURCE option value
dashboard set-rgw-api-host <value>  set the RGW_API_HOST option value
dashboard set-rgw-api-port <int>  set the RGW_API_PORT option value
dashboard set-rgw-api-scheme <value>  set the RGW_API_SCHEME option value
dashboard set-rgw-api-secret-key <value>  set the RGW_API_SECRET_KEY option value
dashboard set-rgw-api-ssl-verify <value>  set the RGW_API_SSL_VERIFY option value
dashboard set-rgw-api-user-id <value>  set the RGW_API_USER_ID option value
dashboard set-session-expire <seconds>  set the session expire timeout
df {detail}  show cluster free space stats
features  report of connected features
fs add_data_pool <fs_name> <pool>  add data pool <pool>
fs authorize <filesystem> <entity> <caps> [<caps>...]  add auth for <entity> to access file system <filesystem> based on following directory and permissions pairs
fs dump {<epoch>}  dump all CephFS status, optionally from epoch
fs flag set enable_multiple <val> {--yes-i-really-mean-it}  set a global CephFS flag
fs get <fs_name>  get info about one filesystem
fs ls  list filesystems
fs new <fs_name> <metadata> <data> {--force} {--allow-dangerous-metadata-overlay}  make new filesystem using named pools <metadata> and <data>
fs reset <fs_name> {--yes-i-really-mean-it}  disaster recovery only: reset to a single-MDS map
fs rm <fs_name> {--yes-i-really-mean-it}  disable the named filesystem
fs rm_data_pool <fs_name> <pool>  remove data pool <pool>
fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client <val>  set fs parameter to <val>
fs set-default <fs_name>  set the default to the named filesystem
fs status {<fs>}  show the status of a CephFS filesystem
fsid  show cluster FSID/UUID
health {detail}  show cluster health
heap dump|start_profiler|stop_profiler|release|stats  show heap usage info (available only if compiled with tcmalloc)
hello {<person_name>}  prints hello world to mgr.x.log
influx config-set <key> <value>  set a configuration value
influx config-show  show current configuration
influx self-test  debug the module
influx send  force sending data to Influx
injectargs <injected_args> [<injected_args>...]  inject config arguments into monitor
iostat  get IO rates
iostat self-test  run a self test of the iostat module
log <logtext> [<logtext>...]  log supplied text to the monitor log
log last {<num>} {debug|info|sec|warn|error} {*|cluster|audit}  print last few lines of the cluster log
mds compat rm_compat <feature>  remove compatible feature
mds compat rm_incompat <feature>  remove incompatible feature
mds compat show  show mds compatibility settings
mds count-metadata <property>  count MDSs by metadata field property
mds fail <role_or_gid>  mark MDS failed: trigger a failover if a standby is available
mds metadata {<who>}  fetch metadata for mds <who>
mds repaired <role>  mark a damaged MDS rank as no longer damaged
mds rm <gid>  remove nonactive mds
mds rmfailed <role> {<confirm>}  remove failed mds
mds set_state <gid> <state>  set mds state of <gid> to <state>
mds stat  show MDS status
mds versions  check running versions of MDSs
mgr count-metadata <property>  count ceph-mgr daemons by metadata field property
mgr dump {<epoch>}  dump the latest MgrMap
mgr fail <who>  treat the named manager daemon as failed
mgr metadata {<who>}  dump metadata for all daemons or a specific daemon
mgr module disable <module>  disable mgr module
mgr module enable <module> {--force}  enable mgr module
mgr module ls  list active mgr modules
mgr self-test background start <workload>  activate a background workload (one of command_spam, throw_exception)
mgr self-test background stop  stop background workload if any is running
mgr self-test config get <key>  peek at a configuration value
mgr self-test config get_localized <key>  peek at a configuration value (localized variant)
mgr self-test remote  test inter-module calls
mgr self-test run  run mgr python interface tests
mgr services  list service endpoints provided by mgr modules
mgr versions  check running versions of ceph-mgr daemons
mon add <name> <addr>  add new monitor named <name> at <addr>
mon compact  cause compaction of monitor's leveldb/rocksdb storage
mon count-metadata <property>  count mons by metadata field property
mon dump {<epoch>}  dump formatted monmap (optionally from epoch)
mon feature ls {--with-value}  list available mon map features to be set/unset
mon feature set <feature_name> {--yes-i-really-mean-it}  set provided feature on mon map
mon getmap {<epoch>}  get monmap
mon metadata {<id>}  fetch metadata for mon <id>
mon rm <name>  remove monitor named <name>
mon scrub  scrub the monitor stores
mon stat  summarize monitor status
mon sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}  force sync of and clear monitor store
mon versions  check running versions of monitors
mon_status  report status of monitors
node ls {all|osd|mon|mds|mgr}  list all nodes in cluster [type]
osd add-nodown <ids> [<ids>...]  mark osd(s) <id> [<id>...] as nodown, or use <all|any> to mark all osds as nodown
osd add-noin <ids> [<ids>...]  mark osd(s) <id> [<id>...] as noin, or use <all|any> to mark all osds as noin
osd add-noout <ids> [<ids>...]  mark osd(s) <id> [<id>...] as noout, or use <all|any> to mark all osds as noout
osd add-noup <ids> [<ids>...]  mark osd(s) <id> [<id>...] as noup, or use <all|any> to mark all osds as noup
osd blacklist add|rm <addr> {<expire>}  add (optionally until <expire> seconds from now) or remove <addr> from blacklist
osd blacklist clear  clear all blacklisted clients
osd blacklist ls  show blacklisted clients
osd blocked-by  print histogram of which OSDs are blocking their peers
osd count-metadata <property>  count OSDs by metadata field property
osd crush add <osdname (id|osd.id)> <weight> <args> [<args>...]  add or update crushmap position and weight for <name> with <weight> and location <args>
osd crush add-bucket <name> <type> {<args> [<args>...]}  add no-parent (probably root) crush bucket <name> of type <type> to location <args>
osd crush class ls  list all crush device classes
osd crush class ls-osd <class>  list all osds belonging to the specific <class>
osd crush class rename <srcname> <dstname>  rename crush device class <srcname> to <dstname>
osd crush create-or-move <osdname (id|osd.id)> <weight> <args> [<args>...]  create entry or move existing entry for <name> <weight> at/to location <args>
osd crush dump  dump crush map
osd crush get-tunable straw_calc_version  get crush tunable
osd crush link <name> <args> [<args>...]  link existing entry for <name> under location <args>
osd crush ls <node>  list items beneath a node in the CRUSH tree
osd crush move <name> <args> [<args>...]  move existing entry for <name> to location <args>
osd crush rename-bucket <srcname> <dstname>  rename bucket <srcname> to <dstname>
osd crush reweight <name> <weight>  change <name>'s weight to <weight> in crush map
osd crush reweight-all  recalculate the weights for the tree to ensure they sum correctly
osd crush reweight-subtree <name> <weight>  change all leaf items beneath <name> to <weight> in crush map
osd crush rm <name> {<ancestor>}  remove <name> from crush map (everywhere, or just at <ancestor>)
osd crush rm-device-class <ids> [<ids>...]  remove class of the osd(s) <id> [<id>...], or use <all|any> to remove all
osd crush rule create-erasure <name> {<profile>}  create crush rule <name> for erasure coded pool created with <profile> (default default)
osd crush rule create-replicated <name> <root> <type> {<class>}  create crush rule <name> for replicated pool to start from <root>, replicate across buckets of type <type>
osd crush rule create-simple <name> <root> <type> {firstn|indep}  create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of firstn|indep (default firstn; indep best for erasure pools)
osd crush rule dump {<name>}  dump crush rule <name> (default all)
osd crush rule ls  list crush rules
osd crush rule ls-by-class <class>  list all crush rules that reference the same <class>
osd crush rule rename <srcname> <dstname>  rename crush rule <srcname> to <dstname>
osd crush rule rm <name>  remove crush rule <name>
osd crush set <osdname (id|osd.id)> <weight> <args> [<args>...]  update crushmap position and weight for <name> to <weight> with location <args>
osd crush set {<prior_version>}  set crush map from input file
osd crush set-all-straw-buckets-to-straw2  convert all CRUSH current straw buckets to use the straw2 algorithm
osd crush set-device-class <class> <ids> [<ids>...]  set the <class> of the osd(s) <id> [<id>...], or use <all|any> to set all
osd crush set-tunable straw_calc_version <value>  set crush tunable to <value>
osd crush show-tunables  show current crush tunables
osd crush swap-bucket <source> <dest> {--yes-i-really-mean-it}  swap existing bucket contents from (orphan) bucket <source> and <dest>
osd crush tree {--show-shadow}  dump crush buckets and items in a tree view
osd crush tunables legacy|argonaut|bobtail|firefly|hammer|jewel|optimal|default  set crush tunables values to the named profile
osd crush unlink <name> {<ancestor>}  unlink <name> from crush map (everywhere, or just at <ancestor>)
osd crush weight-set create <pool> flat|positional  create a weight-set for a given pool
osd crush weight-set create-compat  create a default backward-compatible weight-set
osd crush weight-set dump  dump crush weight sets
osd crush weight-set ls  list crush weight sets
osd crush weight-set reweight <pool> <item> <weight> [<weight>...]  set weight for an item (bucket or osd) in a pool's weight-set
osd crush weight-set reweight-compat <item> <weight> [<weight>...]  set weight for an item (bucket or osd) in the backward-compatible weight-set
osd crush weight-set rm <pool>  remove the weight-set for a given pool
osd crush weight-set rm-compat  remove the backward-compatible weight-set
osd deep-scrub <who>  initiate deep scrub on osd <who>, or use <all|any> to deep scrub all
osd destroy <osdname (id|osd.id)> {--yes-i-really-mean-it}  mark osd as being destroyed; keeps the ID intact (allowing reuse), but removes cephx keys, config-key data and lockbox keys, rendering data permanently unreadable
osd df {plain|tree}  show OSD utilization
osd down <ids> [<ids>...]  set osd(s) <id> [<id>...] down, or use <all|any> to set all osds down
osd dump {<epoch>}  print summary of OSD map
osd erasure-code-profile get <name>  get erasure code profile <name>
osd erasure-code-profile ls  list all erasure code profiles
osd erasure-code-profile rm <name>  remove erasure code profile <name>
osd erasure-code-profile set <name> {<profile> [<profile>...]}  create erasure code profile <name> with [<key[=value]> ...] pairs; add a --force at the end to override an existing profile (VERY DANGEROUS)
osd find <osdname (id|osd.id)>  find osd <id> in the CRUSH map and show its location
osd force-create-pg <pgid> {--yes-i-really-mean-it}  force creation of pg <pgid>
osd get-require-min-compat-client  get the minimum client version we will maintain compatibility with
osd getcrushmap {<epoch>}  get CRUSH map
osd getmap {<epoch>}  get OSD map
osd getmaxosd  show largest OSD id
osd in <ids> [<ids>...]  set osd(s) <id> [<id>...] in, can use <all|any> to automatically set all previously out osds in
osd last-stat-seq <osdname (id|osd.id)>  get the last pg stats sequence number reported for this osd
osd lost <osdname (id|osd.id)> {--yes-i-really-mean-it}  mark osd as permanently lost; THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL
osd ls {<epoch>}  show all OSD ids
osd ls-tree {<epoch>} <name>  show OSD ids under bucket <name> in the CRUSH map
osd lspools {<auid>}  list pools
osd map <pool> <object> {<nspace>}  find pg for <object> in <pool>
osd metadata {<osdname (id|osd.id)>}  fetch metadata for osd {id} (default all)
osd new <uuid> {<osdname (id|osd.id)>}  create a new OSD; if supplied, the `id` to be replaced needs to exist and have been previously destroyed; reads secrets from JSON file via `-i <file>` (see man page)
osd ok-to-stop <ids> [<ids>...]  check whether osd(s) can be safely stopped without reducing immediate data availability
osd out <ids> [<ids>...]  set osd(s) <id> [<id>...] out, or use <all|any> to set all osds out
osd pause  pause osd
osd perf  print dump of OSD perf summary stats
osd pg-temp <pgid> {<osdname> [<osdname>...]}  set pg_temp mapping pgid:[<id> [<id>...]] (developers only)
osd pg-upmap <pgid> <osdname> [<osdname>...]  set pg_upmap mapping <pgid>:[<id> [<id>...]] (developers only)
osd pg-upmap-items <pgid> <osdname> [<osdname>...]  set pg_upmap_items mapping <pgid>:{<id> to <id>, [...]} (developers only)
osd pool application disable <pool> <app> {--yes-i-really-mean-it}  disables use of an application <app> on pool <pool>
osd pool application enable <pool> <app> {--yes-i-really-mean-it}  enable use of an application <app> [cephfs,rbd,rgw] on pool <pool>
osd pool application get {<pool>} {<app>} {<key>}  get value of key <key> of application <app> on pool <pool>
osd pool application rm <pool> <app> <key>  removes application <app> metadata key <key> on pool <pool>
osd pool application set <pool> <app> <key> <value>  sets application <app> metadata key <key> to <value> on pool <pool>
osd pool create <pool> <pg_num> {<pgp_num>} {replicated|erasure} {<erasure_code_profile>} {<rule>} {<expected_num_objects>}  create pool
osd pool get <pool> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote|all|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites  get pool parameter
osd pool get-quota <pool>  obtain object or byte limits for pool
osd pool ls {detail}  list pools
osd pool mksnap <pool> <snap>  make snapshot <snap> in <pool>
osd pool rename <srcpool> <destpool>  rename <srcpool> to <destpool>
osd pool rm <pool> {<pool2>} {--yes-i-really-really-mean-it}  remove pool
osd pool rmsnap <pool> <snap>  remove snapshot <snap> from <pool>
osd pool set <pool> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites <val> {--yes-i-really-mean-it}  set pool parameter to <val>
osd pool set-quota <pool> max_objects|max_bytes <val>  set object or byte limit on pool
osd pool stats {<poolname>}  obtain stats from all pools, or from specified pool
osd primary-affinity <osdname (id|osd.id)> <weight>  adjust osd primary-affinity from 0.0 <= <weight> <= 1.0
osd primary-temp <pgid> <osdname (id|osd.id)>  set primary_temp mapping pgid:<id>|-1 (developers only)
osd purge <osdname (id|osd.id)> {--yes-i-really-mean-it}  purge all osd data from the monitors; combines `osd destroy`, `osd rm`, and `osd crush rm`
osd purge-new <osdname (id|osd.id)> {--yes-i-really-mean-it}  purge all traces of an OSD that was partially created but never started
osd repair <who>  initiate repair on osd <who>, or use <all|any> to repair all
osd require-osd-release luminous|mimic {--yes-i-really-mean-it}  set the minimum allowed OSD release to participate in the cluster
osd reweight <osdname (id|osd.id)> <weight>  reweight osd to 0.0 < <weight> < 1.0
osd reweight-by-pg {<oload>} {<max_change>} {<max_osds>} {<poolname> [<poolname>...]}  reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
osd reweight-by-utilization {<oload>} {<max_change>} {<max_osds>} {--no-increasing}  reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
osd reweightn <weights>  reweight osds with {<id>: <weight>,...}
osd rm <ids> [<ids>...]  remove osd(s) <id> [<id>...], or use <all|any> to remove all osds
osd rm-nodown <ids> [<ids>...]  allow osd(s) <id> [<id>...] to be marked down (if they are currently marked as nodown), can use <all|any> to automatically filter out all nodown osds
osd rm-noin <ids> [<ids>...]  allow osd(s) <id> [<id>...] to be marked in (if they are currently marked as noin), can use <all|any> to automatically filter out all noin osds
osd rm-noout <ids> [<ids>...]  allow osd(s) <id> [<id>...] to be marked out (if they are currently marked as noout), can use <all|any> to automatically filter out all noout osds
osd rm-noup <ids> [<ids>...]  allow osd(s) <id> [<id>...] to be marked up (if they are currently marked as noup), can use <all|any> to automatically filter out all noup osds
osd rm-pg-upmap <pgid>  clear pg_upmap mapping for <pgid> (developers only)
osd rm-pg-upmap-items <pgid>  clear pg_upmap_items mapping for <pgid> (developers only)
osd safe-to-destroy <ids> [<ids>...]  check whether osd(s) can be safely destroyed without reducing data durability
osd scrub <who>  initiate scrub on osd <who>, or use <all|any> to scrub all
osd set full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit {--yes-i-really-mean-it}  set the named flag
osd set-backfillfull-ratio <ratio>  set usage ratio at which OSDs are marked too full to backfill
osd set-full-ratio <ratio>  set usage ratio at which OSDs are marked full
osd set-nearfull-ratio <ratio>  set usage ratio at which OSDs are marked near-full
osd set-require-min-compat-client <version> {--yes-i-really-mean-it}  set the minimum client version we will maintain compatibility with
osd setcrushmap {<prior_version>}  set crush map from input file
osd setmaxosd <int>  set new maximum osd value
osd smart get <osd_id>  get smart data for osd.id
osd stat  print summary of OSD map
osd status {<bucket>}  show the status of OSDs within a bucket, or all
osd test-reweight-by-pg {<oload>} {<max_change>} {<max_osds>} {<poolname> [<poolname>...]}  dry run of reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
osd test-reweight-by-utilization {<oload>} {<max_change>} {<max_osds>} {--no-increasing}  dry run of reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
osd tier add <pool> <tierpool> {--force-nonempty}  add the tier <tierpool> (the second one) to base pool <pool> (the first one)
osd tier add-cache <pool> <tierpool> <size>  add a cache <tierpool> (the second one) of size <size> to existing pool <pool> (the first one)
osd tier cache-mode <pool> none|writeback|forward|readonly|readforward|proxy|readproxy {--yes-i-really-mean-it}  specify the caching mode for cache tier <pool>
osd tier rm <pool> <tierpool>  remove the tier <tierpool> (the second one) from base pool <pool> (the first one)
osd tier rm-overlay <pool>  remove the overlay pool for base pool <pool>
osd tier set-overlay <pool> <overlaypool>  set the overlay pool for base pool <pool> to be <overlaypool>
osd tree {<epoch>} {up|down|in|out|destroyed [up|down|in|out|destroyed...]}  print OSD tree
osd tree-from {<epoch>} <bucket> {up|down|in|out|destroyed [up|down|in|out|destroyed...]}  print OSD tree in bucket
osd unpause  unpause osd
osd unset full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim  unset the named flag
osd utilization  get basic pg distribution stats
osd versions  check running versions of OSDs
pg cancel-force-backfill <pgid> [<pgid>...]  restore normal backfill priority of <pgid>
pg cancel-force-recovery <pgid> [<pgid>...]  restore normal recovery priority of <pgid>
pg debug unfound_objects_exist|degraded_pgs_exist  show debug info about pgs
pg deep-scrub <pgid>  start deep-scrub on <pgid>
pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}  show human-readable versions of pg map (only 'all' valid with plain)
pg dump_json {all|summary|sum|pools|osds|pgs [all|summary|sum|pools|osds|pgs...]}  show human-readable version of pg map in json only
pg dump_pools_json  show pg pools info in json only
pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<threshold>}  show information about stuck pgs
pg force-backfill <pgid> [<pgid>...]  force backfill of <pgid> first
pg force-recovery <pgid> [<pgid>...]  force recovery of <pgid> first
pg getmap  get binary pg map to -o/stdout
pg ls {<pool>} {<states> [<states>...]}  list pg with specific pool, osd, state
pg ls-by-osd <osdname (id|osd.id)> {<pool>} {<states> [<states>...]}  list pg on osd [osd]
pg ls-by-pool <poolstr> {<states> [<states>...]}  list pg with pool = [poolname]
pg ls-by-primary <osdname (id|osd.id)> {<pool>} {<states> [<states>...]}  list pg with primary = [osd]
pg map <pgid>  show mapping of pg to osds
pg repair <pgid>  start repair on <pgid>
pg scrub <pgid>  start scrub on <pgid>
pg stat  show placement group status
prometheus file_sd_config  return file_sd compatible prometheus config for mgr cluster
prometheus self-test  run a self test on the prometheus module
quorum enter|exit  enter or exit quorum
quorum_status  report status of monitor quorum
report {<tags> [<tags>...]}  report full status of cluster, optional title tag strings
restful create-key <key_name>  create an API key with this name
restful create-self-signed-cert  create localized self signed certificate
restful delete-key <key_name>  delete an API key with this name
restful list-keys  list all API keys
restful restart  restart API server
service dump  dump service map
service status  dump service state
status  show cluster status
telegraf config-set <key> <value>  set a configuration value
telegraf config-show  show current configuration
telegraf self-test  debug the module
telegraf send  force sending data to Telegraf
telemetry config-set <key> <value>  set a configuration value
telemetry config-show  show current configuration
telemetry self-test  perform a self-test
telemetry send  force sending data to Ceph telemetry
telemetry show  show last report or report to be sent
tell <name (type.id)> <args> [<args>...]  send a command to a specific daemon
time-sync-status  show time sync status
version  show mon daemon version
versions  check running versions of ceph daemons
zabbix config-set <key> <value>  set a configuration value
zabbix config-show  show current configuration
zabbix self-test  run a self-test on the Zabbix module
zabbix send  force sending data to Zabbix
That is all; thanks.
November 26, 2022