• Using the ceph command


    Record: 337

    Scenario: on CentOS 7.9, in a Ceph cluster, use the ceph command to inspect overall cluster information as well as the mon, mgr, mds, osd, and rgw components.

    Versions:

    Operating system: CentOS 7.9

    Ceph: ceph-13.2.10

    Glossary:

    ceph: the command-line administration tool for a Ceph cluster.

    1. Basic environment

    The hosts planned for the cluster need the ceph-deploy, ceph, and ceph-radosgw packages installed.

    (1) Install packages on the cluster's master node

    Command: yum install -y ceph-deploy ceph-13.2.10

    Command: yum install -y ceph-radosgw-13.2.10

    Explanation: the master node gets ceph-deploy, ceph, and ceph-radosgw.

    (2) Install packages on the cluster's worker nodes

    Command: yum install -y ceph-13.2.10

    Command: yum install -y ceph-radosgw-13.2.10

    Explanation: worker nodes get ceph and ceph-radosgw.
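The rollout above can be sketched as a small dry-run script. The host names (app161 as master, app162/app163 as workers) and the ssh fan-out are assumptions for illustration only; `RUN=echo` prints the commands instead of executing them.

```shell
# Dry run: print the per-node install commands instead of running them.
RUN=echo

master_cmd="yum install -y ceph-deploy ceph-13.2.10 ceph-radosgw-13.2.10"
worker_cmd="yum install -y ceph-13.2.10 ceph-radosgw-13.2.10"

# Hypothetical host names; adjust to your cluster.
$RUN ssh app161 "$master_cmd"
for h in app162 app163; do
  $RUN ssh "$h" "$worker_cmd"
done
```

Clearing `RUN` (set `RUN=`) would execute the commands for real.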

    2. Command usage

    The ceph command is run on the cluster's master node, from the /etc/ceph directory (where the cluster configuration and admin keyring live).

    (1) Check the Ceph version

    Command: ceph --version

    Explanation: shows the Ceph version installed on the current host.

    (2) Check cluster status

    Command: ceph -s

    Explanation: shows the overall cluster status; this is one of the most frequently used commands.
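For scripting, `ceph -s` also has machine-readable output via `--format json`. A minimal parsing sketch, using a hand-made abridged sample of that JSON (an assumption, not real cluster output):

```shell
# Abridged, hand-made sample of `ceph -s --format json` output (assumption);
# on a real cluster, replace the printf with: ceph -s --format json
sample='{"health":{"status":"HEALTH_OK"},"osdmap":{"osdmap":{"num_osds":3,"num_up_osds":3}}}'

# Extract the overall health field.
health=$(printf '%s' "$sample" \
  | python3 -c 'import sys,json; print(json.load(sys.stdin)["health"]["status"])')
echo "cluster health: $health"
```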

    (3) Watch cluster status in real time

    Command: ceph -w

    Explanation: stays attached to the console and prints cluster changes as they happen.

    (4) List mgr services

    Command: ceph mgr services

    Explanation: prints the service endpoints exposed by mgr modules, e.g. "dashboard": "https://app162:18443/"; open that URL in a browser to log in to the dashboard.
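The endpoint can be pulled out programmatically as well. The sample JSON below is modeled on the output quoted above (an assumption); on a real cluster you would pipe `ceph mgr services --format json` instead:

```shell
# Sample `ceph mgr services --format json` output, modeled on the text above.
services='{"dashboard": "https://app162:18443/"}'

# Read the dashboard endpoint from the module-name -> URL map.
dash=$(printf '%s' "$services" \
  | python3 -c 'import sys,json; print(json.load(sys.stdin).get("dashboard",""))')
echo "dashboard endpoint: $dash"
```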

    (5) Summarize mon status

    Command: ceph mon stat

    Explanation: prints a one-line summary of monitor status.

    (6) Summarize mds status

    Command: ceph mds stat

    Explanation: prints a one-line summary of MDS status.

    (7) Summarize osd status

    Command: ceph osd stat

    Explanation: prints a one-line summary of OSD status.

    (8) Create a pool

    Command: ceph osd pool create hz_data 16

    Explanation: creates a pool named hz_data with 16 placement groups (PGs).
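Where does a number like 16 come from? A common rule of thumb is to target roughly 100 PGs per OSD across the cluster, divide by the replica count, and round up to a power of two. A sketch with example counts (the OSD and replica numbers are assumptions):

```shell
# Rule-of-thumb pg_num: (OSDs * 100) / replica size, rounded up to a power of two.
osds=3        # example value
replicas=3    # example value
target=$(( osds * 100 / replicas ))

pg_num=1
while [ "$pg_num" -lt "$target" ]; do
  pg_num=$(( pg_num * 2 ))
done
echo "suggested pg_num: $pg_num"
# then: ceph osd pool create hz_data "$pg_num"
```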

    (9) List pools

    Command: ceph osd pool ls

    Explanation: lists all pools in the cluster.

    (10) Get a pool's PG count

    Command: ceph osd pool get hz_data pg_num

    Explanation: shows the pool's pg_num value.

    (11) Set a pool's PG count

    Command: ceph osd pool set hz_data pg_num 18

    Explanation: sets the pool's pg_num; note that in this Ceph release pg_num can only be increased, and a power of two (e.g. 32) is the conventional choice.

    (12) Delete a pool

    Command: ceph osd pool delete hz_data hz_data --yes-i-really-really-mean-it

    Explanation: as a safety check, the pool name must be passed twice; the monitors must also have mon_allow_pool_delete enabled, or the command is refused.
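A common full sequence temporarily enables mon_allow_pool_delete on the monitors, deletes the pool (name passed twice), and disables the flag again. Sketched here as a dry run (`RUN=echo` prints the commands; clear it to execute against a real cluster):

```shell
# Dry-run sketch of a complete pool deletion.
RUN=echo
pool="hz_data"

$RUN ceph tell 'mon.*' injectargs '--mon_allow_pool_delete=true'
delete_cmd="ceph osd pool delete $pool $pool --yes-i-really-really-mean-it"
$RUN $delete_cmd
$RUN ceph tell 'mon.*' injectargs '--mon_allow_pool_delete=false'
```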

    (13) Create a Ceph filesystem

    Command: ceph fs new hangzhoufs xihu_metadata xihu_data

    Explanation: ceph fs new creates a CephFS filesystem named hangzhoufs; the metadata pool (xihu_metadata) is named first, the data pool (xihu_data) second, and both pools must already exist.
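Putting the pieces together, the full creation sequence first creates both pools and then calls `ceph fs new` with the metadata pool first. Dry-run sketch (`RUN=echo` prints instead of executing; the pg count of 16 matches the earlier example):

```shell
# Dry-run sketch of the full CephFS creation sequence.
RUN=echo
fs_new_cmd="ceph fs new hangzhoufs xihu_metadata xihu_data"

$RUN ceph osd pool create xihu_metadata 16
$RUN ceph osd pool create xihu_data 16
# Metadata pool comes first, data pool second.
$RUN $fs_new_cmd
```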

    (14) List Ceph filesystems

    Command: ceph fs ls

    Explanation: lists each CephFS filesystem with its name and pools.

    (15) Show filesystem status

    Command: ceph fs status

    Explanation: shows the status of a CephFS filesystem, including its pools, their types, and related details.

    (16) Remove a Ceph filesystem

    Command: ceph fs rm hangzhoufs --yes-i-really-mean-it

    Explanation: hangzhoufs is the name of an existing CephFS filesystem; its MDS daemons must be inactive before the removal succeeds.

    (17) Check service status

    Command: ceph service status

    Explanation: dumps the service map state, including when each service last reported in.

    (18) Check monitor quorum status

    Command: ceph quorum_status

    Explanation: reports the status of the monitor quorum.

    (19) Summarize pg status

    Command: ceph pg stat

    Explanation: summarizes placement group (PG) status.

    (20) List pgs

    Command: ceph pg ls

    Explanation: lists all PGs.

    (21) Check OSD disk usage

    Command: ceph osd df

    Explanation: prints per-OSD disk usage, including capacity, available space, used space, and utilization.
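`ceph osd df` also supports `--format json`, which is handy for computing utilization in scripts. The sample below is a hand-made, abridged single-OSD example (an assumption); on a real cluster pipe the live command instead. Sizes are in KiB.

```shell
# Abridged, hand-made sample of `ceph osd df --format json` (assumption);
# replace the printf with: ceph osd df --format json
sample='{"nodes":[{"name":"osd.0","kb":52428800,"kb_used":1048576,"kb_avail":51380224}]}'

# Compute per-OSD utilization from kb_used / kb.
usage=$(printf '%s' "$sample" | python3 -c '
import sys, json
for n in json.load(sys.stdin)["nodes"]:
    print("%s: %.1f%% used" % (n["name"], 100.0 * n["kb_used"] / n["kb"]))
')
echo "$usage"
```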

    3. Command reference

    (1) ceph help

    Command: ceph --help

    Explanation: lists every command and option that ceph supports; in day-to-day work this built-in reference is indispensable.

    General usage:
    ==============

    usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
                [--setuser SETUSER] [--setgroup SETGROUP] [--id CLIENT_ID]
                [--name CLIENT_NAME] [--cluster CLUSTER]
                [--admin-daemon ADMIN_SOCKET] [-s] [-w] [--watch-debug]
                [--watch-info] [--watch-sec] [--watch-warn] [--watch-error]
                [--watch-channel {cluster,audit,*}] [--version] [--verbose]
                [--concise] [-f {json,json-pretty,xml,xml-pretty,plain}]
                [--connect-timeout CLUSTER_TIMEOUT] [--block] [--period PERIOD]

    Ceph administration tool

    optional arguments:
      -h, --help            request mon help
      -c CEPHCONF, --conf CEPHCONF
                            ceph configuration file
      -i INPUT_FILE, --in-file INPUT_FILE
                            input file, or "-" for stdin
      -o OUTPUT_FILE, --out-file OUTPUT_FILE
                            output file, or "-" for stdout
      --setuser SETUSER     set user file permission
      --setgroup SETGROUP   set group file permission
      --id CLIENT_ID, --user CLIENT_ID
                            client id for authentication
      --name CLIENT_NAME, -n CLIENT_NAME
                            client name for authentication
      --cluster CLUSTER     cluster name
      --admin-daemon ADMIN_SOCKET
                            submit admin-socket commands ("help" for help)
      -s, --status          show cluster status
      -w, --watch           watch live cluster changes
      --watch-debug         watch debug events
      --watch-info          watch info events
      --watch-sec           watch security events
      --watch-warn          watch warn events
      --watch-error         watch error events
      --watch-channel {cluster,audit,*}
                            which log channel to follow when using -w/--watch. One
                            of ['cluster', 'audit', '*']
      --version, -v         display version
      --verbose             make verbose
      --concise             make less verbose
      -f {json,json-pretty,xml,xml-pretty,plain}, --format {json,json-pretty,xml,xml-pretty,plain}
      --connect-timeout CLUSTER_TIMEOUT
                            set a timeout for connecting to the cluster
      --block               block until completion (scrub and deep-scrub only)
      --period PERIOD, -p PERIOD
                            polling period, default 1.0 second (for polling
                            commands only)

    Local commands:
    ===============

    ping <mon.id>           Send simple presence/life test to a mon
                            <mon.id> may be 'mon.*' for all mons
    daemon {type.id|path} <cmd>
                            Same as --admin-daemon, but auto-find admin socket
    daemonperf {type.id | path} [stat-pats] [priority] [<interval>] [<count>]
    daemonperf {type.id | path} list|ls [stat-pats] [priority]
                            Get selected perf stats from daemon/admin socket
                            Optional shell-glob comma-delim match string stat-pats
                            Optional selection priority (can abbreviate name):
                             critical, interesting, useful, noninteresting, debug
                            List shows a table of all available stats
                            Run <count> times (default forever),
                             once per <interval> seconds (default 1)
    Monitor commands:
    =================

    auth add <entity> {<caps> [<caps>...]}  add auth info for <entity> from input file, or random key if no input is given, and/or any caps specified in the command
    auth caps <entity> <caps> [<caps>...]  update caps for <name> from caps specified in the command
    auth export {<entity>}  write keyring for requested entity, or master keyring if none given
    auth get <entity>  write keyring file with requested key
    auth get-key <entity>  display requested key
    auth get-or-create <entity> {<caps> [<caps>...]}  add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
    auth get-or-create-key <entity> {<caps> [<caps>...]}  get, or add, key for <name> from system/caps pairs specified in the command. If key already exists, any given caps must match the existing caps for that key.
    auth import  auth import: read keyring file from -i <file>
    auth ls  list authentication state
    auth print-key <entity>  display requested key
    auth print_key <entity>  display requested key
    auth rm <entity>  remove all caps for <name>
    balancer dump <plan>  Show an optimization plan
    balancer eval {<option>}  Evaluate data distribution for the current cluster or specific pool or specific plan
    balancer eval-verbose {<option>}  Evaluate data distribution for the current cluster or specific pool or specific plan (verbosely)
    balancer execute <plan>  Execute an optimization plan
    balancer ls  List all plans
    balancer mode none|crush-compat|upmap  Set balancer mode
    balancer off  Disable automatic balancing
    balancer on  Enable automatic balancing
    balancer optimize <plan> {<pools> [<pools>...]}  Run optimizer to create a new plan
    balancer reset  Discard all optimization plans
    balancer rm <plan>  Discard an optimization plan
    balancer show <plan>  Show details of an optimization plan
    balancer sleep <interval>  Set balancer sleep interval
    balancer status  Show balancer status
    config assimilate-conf  Assimilate options from a conf, and return a new, minimal conf file
    config dump  Show all configuration option(s)
    config get <who> {<key>}  Show configuration option(s) for an entity
    config help <key>  Describe a configuration option
    config log {<num>}  Show recent history of config changes
    config reset <num>  Revert configuration to previous state
    config rm <who> <name>  Clear a configuration option for one or more entities
    config set <who> <name> <value>  Set a configuration option for one or more entities
    config show <who> {<key>}  Show running configuration
    config show-with-defaults <who>  Show running configuration (including compiled-in defaults)
    config-key dump {<key>}  dump keys and values (with optional prefix)
    config-key exists <key>  check for <key>'s existence
    config-key get <key>  get <key>
    config-key ls  list keys
    config-key rm <key>  rm <key>
    config-key set <key> {<val>}  set <key> to value <val>
    crash info <id>  show crash dump metadata
    crash json_report <hours>  Crashes in the last <hours> hours
    crash ls  Show saved crash dumps
    crash post  Add a crash dump (use -i <jsonfile>)
    crash prune <keep>  Remove crashes older than <keep> days
    crash rm <id>  Remove a saved crash <id>
    crash self-test  Run a self test of the crash module
    crash stat  Summarize recorded crashes
    dashboard create-self-signed-cert  Create self signed certificate
    dashboard get-enable-browsable-api  Get the ENABLE_BROWSABLE_API option value
    dashboard get-rest-requests-timeout  Get the REST_REQUESTS_TIMEOUT option value
    dashboard get-rgw-api-access-key  Get the RGW_API_ACCESS_KEY option value
    dashboard get-rgw-api-admin-resource  Get the RGW_API_ADMIN_RESOURCE option value
    dashboard get-rgw-api-host  Get the RGW_API_HOST option value
    dashboard get-rgw-api-port  Get the RGW_API_PORT option value
    dashboard get-rgw-api-scheme  Get the RGW_API_SCHEME option value
    dashboard get-rgw-api-secret-key  Get the RGW_API_SECRET_KEY option value
    dashboard get-rgw-api-ssl-verify  Get the RGW_API_SSL_VERIFY option value
    dashboard get-rgw-api-user-id  Get the RGW_API_USER_ID option value
    dashboard set-enable-browsable-api <value>  Set the ENABLE_BROWSABLE_API option value
    dashboard set-login-credentials <username> <password>  Set the login credentials
    dashboard set-rest-requests-timeout <value>  Set the REST_REQUESTS_TIMEOUT option value
    dashboard set-rgw-api-access-key <value>  Set the RGW_API_ACCESS_KEY option value
    dashboard set-rgw-api-admin-resource <value>  Set the RGW_API_ADMIN_RESOURCE option value
    dashboard set-rgw-api-host <value>  Set the RGW_API_HOST option value
    dashboard set-rgw-api-port <value>  Set the RGW_API_PORT option value
    dashboard set-rgw-api-scheme <value>  Set the RGW_API_SCHEME option value
    dashboard set-rgw-api-secret-key <value>  Set the RGW_API_SECRET_KEY option value
    dashboard set-rgw-api-ssl-verify <value>  Set the RGW_API_SSL_VERIFY option value
    dashboard set-rgw-api-user-id <value>  Set the RGW_API_USER_ID option value
    dashboard set-session-expire <seconds>  Set the session expire timeout
    df {detail}  show cluster free space stats
    features  report of connected features
    fs add_data_pool <fs_name> <pool>  add data pool <pool>
    fs authorize <filesystem> <entity> <caps> [<caps>...]  add auth for <entity> to access file system <filesystem> based on following directory and permissions pairs
    fs dump {<epoch>}  dump all CephFS status, optionally from epoch
    fs flag set enable_multiple <val> {--yes-i-really-mean-it}  Set a global CephFS flag
    fs get <fs_name>  get info about one filesystem
    fs ls  list filesystems
    fs new <fs_name> <metadata> <data> {--force} {--allow-dangerous-metadata-overlay}  make new filesystem using named pools <metadata> and <data>
    fs reset <fs_name> {--yes-i-really-mean-it}  disaster recovery only: reset to a single-MDS map
    fs rm <fs_name> {--yes-i-really-mean-it}  disable the named filesystem
    fs rm_data_pool <fs_name> <pool>  remove data pool <pool>
    fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|down|joinable|min_compat_client <val> {<confirm>}  set fs parameter <var> to <val>
    fs set-default <fs_name>  set the default to the named filesystem
    fs status {<fs>}  Show the status of a CephFS filesystem
    fsid  show cluster FSID/UUID
    health {detail}  show cluster health
    heap dump|start_profiler|stop_profiler|release|stats  show heap usage info (available only if compiled with tcmalloc)
    hello {<person_name>}  Prints hello world to mgr.x.log
    influx config-set <key> <value>  Set a configuration value
    influx config-show  Show current configuration
    influx self-test  debug the module
    influx send  Force sending data to Influx
    injectargs <injected_args> [<injected_args>...]  inject config arguments into monitor
    iostat  Get IO rates
    iostat self-test  Run a self test the iostat module
    log <logtext> [<logtext>...]  log supplied text to the monitor log
    log last {<num>} {debug|info|sec|warn|error} {*|cluster|audit}  print last few lines of the cluster log
    mds compat rm_compat <feature>  remove compatible feature
    mds compat rm_incompat <feature>  remove incompatible feature
    mds compat show  show mds compatibility settings
    mds count-metadata <property>  count MDSs by metadata field property
    mds fail <role_or_gid>  Mark MDS failed: trigger a failover if a standby is available
    mds metadata {<who>}  fetch metadata for mds <role>
    mds repaired <role>  mark a damaged MDS rank as no longer damaged
    mds rm <gid>  remove nonactive mds
    mds rmfailed <role> {<confirm>}  remove failed mds
    mds set_state <gid> <state>  set mds state of <gid> to <numeric-state>
    mds stat  show MDS status
    mds versions  check running versions of MDSs
    mgr count-metadata <property>  count ceph-mgr daemons by metadata field property
    mgr dump {<epoch>}  dump the latest MgrMap
    mgr fail <who>  treat the named manager daemon as failed
    mgr metadata {<who>}  dump metadata for all daemons or a specific daemon
    mgr module disable <module>  disable mgr module
    mgr module enable <module> {--force}  enable mgr module
    mgr module ls  list active mgr modules
    mgr self-test background start <workload>  Activate a background workload (one of command_spam, throw_exception)
    mgr self-test background stop  Stop background workload if any is running
    mgr self-test config get <key>  Peek at a configuration value
    mgr self-test config get_localized <key>  Peek at a configuration value (localized variant)
    mgr self-test remote  Test inter-module calls
    mgr self-test run  Run mgr python interface tests
    mgr services  list service endpoints provided by mgr modules
    mgr versions  check running versions of ceph-mgr daemons
    mon add <name> <IPaddr[:port]>  add new monitor named <name> at <addr>
    mon compact  cause compaction of monitor's leveldb/rocksdb storage
    mon count-metadata <property>  count mons by metadata field property
    mon dump {<epoch>}  dump formatted monmap (optionally from epoch)
    mon feature ls {--with-value}  list available mon map features to be set/unset
    mon feature set <feature_name> {--yes-i-really-mean-it}  set provided feature on mon map
    mon getmap {<epoch>}  get monmap
    mon metadata {<id>}  fetch metadata for mon <id>
    mon rm <name>  remove monitor named <name>
    mon scrub  scrub the monitor stores
    mon stat  summarize monitor status
    mon sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}  force sync of and clear monitor store
    mon versions  check running versions of monitors
    mon_status  report status of monitors
    node ls {all|osd|mon|mds|mgr}  list all nodes in cluster [type]
    osd add-nodown <ids> [<ids>...]  mark osd(s) <id> [<id>...] as nodown, or use <all|any> to mark all osds as nodown
    osd add-noin <ids> [<ids>...]  mark osd(s) <id> [<id>...] as noin, or use <all|any> to mark all osds as noin
    osd add-noout <ids> [<ids>...]  mark osd(s) <id> [<id>...] as noout, or use <all|any> to mark all osds as noout
    osd add-noup <ids> [<ids>...]  mark osd(s) <id> [<id>...] as noup, or use <all|any> to mark all osds as noup
    osd blacklist add|rm <EntityAddr> {<float[0.0-]>}  add (optionally until <expire> seconds from now) or remove <addr> from blacklist
    osd blacklist clear  clear all blacklisted clients
    osd blacklist ls  show blacklisted clients
    osd blocked-by  print histogram of which OSDs are blocking their peers
    osd count-metadata <property>  count OSDs by metadata field property
    osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]  add or update crushmap position and weight for <name> with <weight> and location <args>
    osd crush add-bucket <name> <type> {<args> [<args>...]}  add no-parent (probably root) crush bucket <name> of type <type> to location <args>
    osd crush class ls  list all crush device classes
    osd crush class ls-osd <class>  list all osds belonging to the specific <class>
    osd crush class rename <srcname> <dstname>  rename crush device class <srcname> to <dstname>
    osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]  create entry or move existing entry for <name> <weight> at/to location <args>
    osd crush dump  dump crush map
    osd crush get-tunable straw_calc_version  get crush tunable <tunable>
    osd crush link <name> <args> [<args>...]  link existing entry for <name> under location <args>
    osd crush ls <node>  list items beneath a node in the CRUSH tree
    osd crush move <name> <args> [<args>...]  move existing entry for <name> to location <args>
    osd crush rename-bucket <srcname> <dstname>  rename bucket <srcname> to <dstname>
    osd crush reweight <name> <float[0.0-]>  change <name>'s weight to <weight> in crush map
    osd crush reweight-all  recalculate the weights for the tree to ensure they sum correctly
    osd crush reweight-subtree <name> <weight>  change all leaf items beneath <name> to <weight> in crush map
    osd crush rm <name> {<ancestor>}  remove <name> from crush map (everywhere, or just at <ancestor>)
    osd crush rm-device-class <ids> [<ids>...]  remove class of the osd(s) <id> [<id>...], or use <all|any> to remove all
    osd crush rule create-erasure <name> {<profile>}  create crush rule <name> for erasure coded pool created with <profile> (default default)
    osd crush rule create-replicated <name> <root> <type> {<class>}  create crush rule <name> for replicated pool to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools)
    osd crush rule create-simple <name> <root> <type> {firstn|indep}  create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools)
    osd crush rule dump {<name>}  dump crush rule <name> (default all)
    osd crush rule ls  list crush rules
    osd crush rule ls-by-class <class>  list all crush rules that reference the same <class>
    osd crush rule rename <srcname> <dstname>  rename crush rule <srcname> to <dstname>
    osd crush rule rm <name>  remove crush rule <name>
    osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]  update crushmap position and weight for <name> to <weight> with location <args>
    osd crush set {<prior_version>}  set crush map from input file
    osd crush set-all-straw-buckets-to-straw2  convert all CRUSH current straw buckets to use the straw2 algorithm
    osd crush set-device-class <class> <ids> [<ids>...]  set the <class> of the osd(s) <id> [<id>...], or use <all|any> to set all
    osd crush set-tunable straw_calc_version <int>  set crush tunable <tunable> to <value>
    osd crush show-tunables  show current crush tunables
    osd crush swap-bucket <source> <dest> {--yes-i-really-mean-it}  swap existing bucket contents from (orphan) bucket <source> and <dest>
    osd crush tree {--show-shadow}  dump crush buckets and items in a tree view
    osd crush tunables legacy|argonaut|bobtail|firefly|hammer|jewel|optimal|default  set crush tunables values to <profile>
    osd crush unlink <name> {<ancestor>}  unlink <name> from crush map (everywhere, or just at <ancestor>)
    osd crush weight-set create <pool> flat|positional  create a weight-set for a given pool
    osd crush weight-set create-compat  create a default backward-compatible weight-set
    osd crush weight-set dump  dump crush weight sets
    osd crush weight-set ls  list crush weight sets
    osd crush weight-set reweight <pool> <item> <float[0.0-]> [<float[0.0-]>...]  set weight for an item (bucket or osd) in a pool's weight-set
    osd crush weight-set reweight-compat <item> <float[0.0-]> [<float[0.0-]>...]  set weight for an item (bucket or osd) in the backward-compatible weight-set
    osd crush weight-set rm <pool>  remove the weight-set for a given pool
    osd crush weight-set rm-compat  remove the backward-compatible weight-set
    osd deep-scrub <who>  initiate deep scrub on osd <who>, or use <all|any> to deep scrub all
    osd destroy <osdname (id|osd.id)> {--yes-i-really-mean-it}  mark osd as being destroyed. Keeps the ID intact (allowing reuse), but removes cephx keys, config-key data and lockbox keys, rendering data permanently unreadable.
    osd df {plain|tree}  show OSD utilization
    osd down <ids> [<ids>...]  set osd(s) <id> [<id>...] down, or use <all|any> to set all osds down
    osd dump {<epoch>}  print summary of OSD map
    osd erasure-code-profile get <name>  get erasure code profile <name>
    osd erasure-code-profile ls  list all erasure code profiles
    osd erasure-code-profile rm <name>  remove erasure code profile <name>
    osd erasure-code-profile set <name> {<profile> [<profile>...]}  create erasure code profile <name> with [<key[=value]> ...] pairs. Add a --force at the end to override an existing profile (VERY DANGEROUS)
    osd find <osdname (id|osd.id)>  find osd <id> in the CRUSH map and show its location
    osd force-create-pg <pgid> {--yes-i-really-mean-it}  force creation of pg <pgid>
    osd get-require-min-compat-client  get the minimum client version we will maintain compatibility with
    osd getcrushmap {<epoch>}  get CRUSH map
    osd getmap {<epoch>}  get OSD map
    osd getmaxosd  show largest OSD id
    osd in <ids> [<ids>...]  set osd(s) <id> [<id>...] in, can use <all|any> to automatically set all previously out osds in
    osd last-stat-seq <osdname (id|osd.id)>  get the last pg stats sequence number reported for this osd
    osd lost <osdname (id|osd.id)> {--yes-i-really-mean-it}  mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL
    osd ls {<epoch>}  show all OSD ids
    osd ls-tree {<epoch>} <name>  show OSD ids under bucket <name> in the CRUSH map
    osd lspools {<auid>}  list pools
    osd map <pool> <object> {<nspace>}  find pg for <object> in <pool> with [namespace]
    osd metadata {<osdname (id|osd.id)>}  fetch metadata for osd {id} (default all)
    osd new <uuid> {<osdname (id|osd.id)>}  Create a new OSD. If supplied, the `id` to be replaced needs to exist and have been previously destroyed. Reads secrets from JSON file via `-i <file>` (see man page).
    osd ok-to-stop <ids> [<ids>...]  check whether osd(s) can be safely stopped without reducing immediate data availability
    osd out <ids> [<ids>...]  set osd(s) <id> [<id>...] out, or use <all|any> to set all osds out
    osd pause  pause osd
    osd perf  print dump of OSD perf summary stats
    osd pg-temp <pgid> {<osdname (id|osd.id)> [<osdname (id|osd.id)>...]}  set pg_temp mapping pgid:[<id> [<id>...]] (developers only)
    osd pg-upmap <pgid> <osdname (id|osd.id)> [<osdname (id|osd.id)>...]  set pg_upmap mapping <pgid>:[<id> [<id>...]] (developers only)
    osd pg-upmap-items <pgid> <osdname (id|osd.id)> [<osdname (id|osd.id)>...]  set pg_upmap_items mapping <pgid>:{<id> to <id>, [...]} (developers only)
    osd pool application disable <poolname> <app> {--yes-i-really-mean-it}  disables use of an application <app> on pool <poolname>
    osd pool application enable <poolname> <app> {--yes-i-really-mean-it}  enable use of an application <app> [cephfs,rbd,rgw] on pool <poolname>
    osd pool application get {<poolname>} {<app>} {<key>}  get value of key <key> of application <app> on pool <poolname>
    osd pool application rm <poolname> <app> <key>  removes application <app> metadata key <key> on pool <poolname>
    osd pool application set <poolname> <app> <key> <value>  sets application <app> metadata key <key> to <value> on pool <poolname>
    osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {<rule>} {<int>}  create pool
    osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote|all|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites  get pool parameter <var>
    osd pool get-quota <poolname>  obtain object or byte limits for pool
    osd pool ls {detail}  list pools
    osd pool mksnap <poolname> <snap>  make snapshot <snap> in <pool>
    osd pool rename <poolname> <poolname>  rename <srcpool> to <destpool>
    osd pool rm <poolname> {<poolname>} {--yes-i-really-really-mean-it}  remove pool
    osd pool set <poolname> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites <val> {--yes-i-really-mean-it}  set pool parameter <var> to <val>
    osd pool set-quota <poolname> max_objects|max_bytes <val>  set object or byte limit on pool
    osd pool stats {<poolname>}  obtain stats from all pools, or from specified pool
    osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>  adjust osd primary-affinity from 0.0 <= <weight> <= 1.0
    osd primary-temp <pgid> <osdname (id|osd.id)>  set primary_temp mapping pgid:<id>|-1 (developers only)
    osd purge <osdname (id|osd.id)> {--yes-i-really-mean-it}  purge all osd data from the monitors. Combines `osd destroy`, `osd rm`, and `osd crush rm`.
    osd purge-new <osdname (id|osd.id)> {--yes-i-really-mean-it}  purge all traces of an OSD that was partially created but never started
    osd repair <who>  initiate repair on osd <who>, or use <all|any> to repair all
    osd require-osd-release luminous|mimic {--yes-i-really-mean-it}  set the minimum allowed OSD release to participate in the cluster
    osd reweight <osdname (id|osd.id)> <float[0.0-1.0]>  reweight osd to 0.0 < <weight> < 1.0
    osd reweight-by-pg {<int>} {<float>} {<int>} {<poolname> [<poolname>...]}  reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
    osd reweight-by-utilization {<int>} {<float>} {<int>} {--no-increasing}  reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
    osd reweightn <weights>  reweight osds with {<id>: <weight>,...}
    osd rm <ids> [<ids>...]  remove osd(s) <id> [<id>...], or use <all|any> to remove all osds
    osd rm-nodown <ids> [<ids>...]  allow osd(s) <id> [<id>...] to be marked down (if they are currently marked as nodown), can use <all|any> to automatically filter out all nodown osds
    osd rm-noin <ids> [<ids>...]  allow osd(s) <id> [<id>...] to be marked in (if they are currently marked as noin), can use <all|any> to automatically filter out all noin osds
    osd rm-noout <ids> [<ids>...]  allow osd(s) <id> [<id>...] to be marked out (if they are currently marked as noout), can use <all|any> to automatically filter out all noout osds
    osd rm-noup <ids> [<ids>...]  allow osd(s) <id> [<id>...] to be marked up (if they are currently marked as noup), can use <all|any> to automatically filter out all noup osds
    osd rm-pg-upmap <pgid>  clear pg_upmap mapping for <pgid> (developers only)
    osd rm-pg-upmap-items <pgid>  clear pg_upmap_items mapping for <pgid> (developers only)
    osd safe-to-destroy <ids> [<ids>...]  check whether osd(s) can be safely destroyed without reducing data durability
    osd scrub <who>  initiate scrub on osd <who>, or use <all|any> to scrub all
    osd set full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds|pglog_hardlimit {--yes-i-really-mean-it}  set <key>
    osd set-backfillfull-ratio <float[0.0-1.0]>  set usage ratio at which OSDs are marked too full to backfill
    osd set-full-ratio <float[0.0-1.0]>  set usage ratio at which OSDs are marked full
    osd set-nearfull-ratio <float[0.0-1.0]>  set usage ratio at which OSDs are marked near-full
    osd set-require-min-compat-client <version> {--yes-i-really-mean-it}  set the minimum client version we will maintain compatibility with
    osd setcrushmap {<prior_version>}  set crush map from input file
    osd setmaxosd <int[0-]>  set new maximum osd value
    osd smart get <osd_id>  Get smart data for osd.id
    osd stat  print summary of OSD map
    osd status {<bucket>}  Show the status of OSDs within a bucket, or all
    osd test-reweight-by-pg {<int>} {<float>} {<int>} {<poolname> [<poolname>...]}  dry run of reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
    osd test-reweight-by-utilization {<int>} {<float>} {<int>} {--no-increasing}  dry run of reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
    osd tier add <poolname> <poolname> {--force-nonempty}  add the tier <tierpool> (the second one) to base pool <pool> (the first one)
    osd tier add-cache <poolname> <poolname> <int[0-]>  add a cache <tierpool> (the second one) of size <size> to existing pool <pool> (the first one)
    osd tier cache-mode <poolname> none|writeback|forward|readonly|readforward|proxy|readproxy {--yes-i-really-mean-it}  specify the caching mode for cache tier <pool>
    osd tier rm <poolname> <poolname>  remove the tier <tierpool> (the second one) from base pool <pool> (the first one)
    osd tier rm-overlay <poolname>  remove the overlay pool for base pool <pool>
    osd tier set-overlay <poolname> <poolname>  set the overlay pool for base pool <pool> to be <overlaypool>
    osd tree {<epoch>} {up|down|in|out|destroyed [up|down|in|out|destroyed...]}  print OSD tree
    osd tree-from {<epoch>} <bucket> {up|down|in|out|destroyed [up|down|in|out|destroyed...]}  print OSD tree in bucket
    osd unpause  unpause osd
    osd unset full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim  unset <key>
    osd utilization  get basic pg distribution stats
    osd versions  check running versions of OSDs
    pg cancel-force-backfill <pgid> [<pgid>...]  restore normal backfill priority of <pgid>
    pg cancel-force-recovery <pgid> [<pgid>...]  restore normal recovery priority of <pgid>
    pg debug unfound_objects_exist|degraded_pgs_exist  show debug info about pgs
    pg deep-scrub <pgid>  start deep-scrub on <pgid>
    pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}  show human-readable versions of pg map (only 'all' valid with plain)
    pg dump_json {all|summary|sum|pools|osds|pgs [all|summary|sum|pools|osds|pgs...]}  show human-readable version of pg map in json only
    pg dump_pools_json  show pg pools info in json only
    pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}  show information about stuck pgs
    pg force-backfill <pgid> [<pgid>...]  force backfill of <pgid> first
    pg force-recovery <pgid> [<pgid>...]  force recovery of <pgid> first
    pg getmap  get binary pg map to -o/stdout
    pg ls {<pool>} {<states> [<states>...]}  list pg with specific pool, osd, state
    pg ls-by-osd <osdname (id|osd.id)> {<pool>} {<states> [<states>...]}  list pg on osd [osd]
    pg ls-by-pool <poolstr> {<states> [<states>...]}  list pg with pool = [poolname]
    pg ls-by-primary <osdname (id|osd.id)> {<pool>} {<states> [<states>...]}  list pg with primary = [osd]
    pg map <pgid>  show mapping of pg to osds
    pg repair <pgid>  start repair on <pgid>
    pg scrub <pgid>  start scrub on <pgid>
    pg stat  show placement group status
    prometheus file_sd_config  Return file_sd compatible prometheus config for mgr cluster
    prometheus self-test  Run a self test on the prometheus module
    quorum enter|exit  enter or exit quorum
    quorum_status  report status of monitor quorum
    report {<tags> [<tags>...]}  report full status of cluster, optional title tag strings
    restful create-key <key_name>  Create an API key with this name
    restful create-self-signed-cert  Create localized self signed certificate
    restful delete-key <key_name>  Delete an API key with this name
    restful list-keys  List all API keys
    restful restart  Restart API server
    service dump  dump service map
    service status  dump service state
    status  show cluster status
    telegraf config-set <key> <value>  Set a configuration value
    telegraf config-show  Show current configuration
    telegraf self-test  debug the module
    telegraf send  Force sending data to Telegraf
    telemetry config-set <key> <value>  Set a configuration value
    telemetry config-show  Show current configuration
    telemetry self-test  Perform a self-test
    telemetry send  Force sending data to Ceph telemetry
    telemetry show  Show last report or report to be sent
    tell <name (type.id)> <args> [<args>...]  send a command to a specific daemon
    time-sync-status  show time sync status
    version  show mon daemon version
    versions  check running versions of ceph daemons
    zabbix config-set <key> <value>  Set a configuration value
    zabbix config-show  Show current configuration
    zabbix self-test  Run a self-test on the Zabbix module
    zabbix send  Force sending data to Zabbix
    That's all. Thanks for reading.

      November 26, 2022

    Original article: https://blog.csdn.net/zhangbeizhen18/article/details/128058658