CEPH(8)                              Ceph                              CEPH(8)
2
3
4
NAME
ceph - ceph administration tool

SYNOPSIS
ceph auth [ add | caps | del | export | get | get-key | get-or-create | get-or-create-key | import | list | print-key | print_key ] ...
10
11 ceph compact
12
13 ceph config [ dump | ls | help | get | show | show-with-defaults | set | rm | log | reset | assimilate-conf | generate-minimal-conf ] ...
14
15 ceph config-key [ rm | exists | get | ls | dump | set ] ...
16
17 ceph daemon <name> | <path> <command> ...
18
19 ceph daemonperf <name> | <path> [ interval [ count ] ]
20
21 ceph df {detail}
22
23 ceph fs [ ls | new | reset | rm | authorize ] ...
24
25 ceph fsid
26
27 ceph health {detail}
28
29 ceph injectargs <injectedargs> [ <injectedargs>... ]
30
31 ceph log <logtext> [ <logtext>... ]
32
33 ceph mds [ compat | fail | rm | rmfailed | set_state | stat | repaired ] ...
34
35 ceph mon [ add | dump | getmap | remove | stat ] ...
36
37 ceph osd [ blocklist | blocked-by | create | new | deep-scrub | df | down | dump | erasure-code-profile | find | getcrushmap | getmap | getmaxosd | in | ls | lspools | map | metadata | ok-to-stop | out | pause | perf | pg-temp | force-create-pg | primary-affinity | primary-temp | repair | reweight | reweight-by-pg | rm | destroy | purge | safe-to-destroy | scrub | set | setcrushmap | setmaxosd | stat | tree | unpause | unset ] ...
38
39 ceph osd crush [ add | add-bucket | create-or-move | dump | get-tunable | link | move | remove | rename-bucket | reweight | reweight-all | reweight-subtree | rm | rule | set | set-tunable | show-tunables | tunables | unlink ] ...
40
41 ceph osd pool [ create | delete | get | get-quota | ls | mksnap | rename | rmsnap | set | set-quota | stats ] ...
42
43 ceph osd pool application [ disable | enable | get | rm | set ] ...
44
45 ceph osd tier [ add | add-cache | cache-mode | remove | remove-overlay | set-overlay ] ...
46
47 ceph pg [ debug | deep-scrub | dump | dump_json | dump_pools_json | dump_stuck | getmap | ls | ls-by-osd | ls-by-pool | ls-by-primary | map | repair | scrub | stat ] ...
48
49 ceph quorum_status
50
51 ceph report { <tags> [ <tags>... ] }
52
53 ceph status
54
55 ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
56
57 ceph tell <name (type.id)> <command> [options...]
58
59 ceph version
60
61
DESCRIPTION
ceph is a control utility which is used for manual deployment and maintenance of a Ceph cluster. It provides a diverse set of commands that allow deployment of monitors, OSDs, placement groups, and MDS daemons, as well as overall maintenance and administration of the cluster.
67
COMMANDS
auth
Manage authentication keys. It is used for adding, removing, exporting, or updating authentication keys for a particular entity such as a monitor or OSD. It uses some additional subcommands.
73
Subcommand add adds authentication info for a particular entity from an input file, or generates a random key if no input is given, and applies any caps specified in the command.
77
78 Usage:
79
80 ceph auth add <entity> {<caps> [<caps>...]}
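
For example, to create a key for a hypothetical client named client.foo with read access to the monitors and read/write access to an illustrative pool named foo_pool (both names are assumptions, not defaults):

Example:

ceph auth add client.foo mon 'allow r' osd 'allow rw pool=foo_pool'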
81
Subcommand caps updates the caps for <entity> to those specified in the command.
84
85 Usage:
86
87 ceph auth caps <entity> <caps> [<caps>...]
88
89 Subcommand del deletes all caps for name.
90
91 Usage:
92
93 ceph auth del <entity>
94
95 Subcommand export writes keyring for requested entity, or master
96 keyring if none given.
97
98 Usage:
99
100 ceph auth export {<entity>}
101
102 Subcommand get writes keyring file with requested key.
103
104 Usage:
105
106 ceph auth get <entity>
107
108 Subcommand get-key displays requested key.
109
110 Usage:
111
112 ceph auth get-key <entity>
113
Subcommand get-or-create adds authentication info for a particular entity from an input file, or generates a random key if no input is given, and applies any caps specified in the command.
117
118 Usage:
119
120 ceph auth get-or-create <entity> {<caps> [<caps>...]}
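
For example, to fetch (or create, if absent) a key for a hypothetical client.foo with illustrative caps, writing the resulting keyring to a file with the -o option described later in this page (the entity, pool, and path are assumptions):

Example:

ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=foo_pool' -o /etc/ceph/ceph.client.foo.keyring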
121
Subcommand get-or-create-key gets or adds a key for <entity> from the system/caps pairs specified in the command. If the key already exists, any given caps must match the existing caps for that key.
125
126 Usage:
127
128 ceph auth get-or-create-key <entity> {<caps> [<caps>...]}
129
130 Subcommand import reads keyring from input file.
131
132 Usage:
133
134 ceph auth import
135
136 Subcommand ls lists authentication state.
137
138 Usage:
139
140 ceph auth ls
141
142 Subcommand print-key displays requested key.
143
144 Usage:
145
146 ceph auth print-key <entity>
147
148 Subcommand print_key displays requested key.
149
150 Usage:
151
152 ceph auth print_key <entity>
153
154 compact
Causes compaction of the monitor's leveldb storage.
156
157 Usage:
158
159 ceph compact
160
161 config
Configure the cluster. By default, Ceph daemons and clients retrieve their configuration options from the monitor when they start, and are updated if any of the tracked options is changed at run time. It uses the following additional subcommands.
166
167 Subcommand dump to dump all options for the cluster
168
169 Usage:
170
171 ceph config dump
172
173 Subcommand ls to list all option names for the cluster
174
175 Usage:
176
177 ceph config ls
178
179 Subcommand help to describe the specified configuration option
180
181 Usage:
182
183 ceph config help <option>
184
185 Subcommand get to dump the option(s) for the specified entity.
186
187 Usage:
188
189 ceph config get <who> {<option>}
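
For example, to show the value stored in the monitor's configuration database for the osd_max_backfills option of osd.0 (the daemon name and option are illustrative):

Example:

ceph config get osd.0 osd_max_backfills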
190
Subcommand show to display the running configuration of the specified entity. Please note, unlike get, which only shows the options managed by the monitor, show displays all the configuration options being actively used. These options are pulled from several sources, for instance, the compiled-in default value, the monitor's configuration database, and the ceph.conf file on the host. The options can even be overridden at runtime. So, there is a chance that the configuration options in the output of show could be different from those in the output of get.
199
200 Usage:
201
202 ceph config show {<who>}
203
204 Subcommand show-with-defaults to display the running configuration
205 along with the compiled-in defaults of the specified entity
206
207 Usage:
208
ceph config show-with-defaults {<who>}
210
211 Subcommand set to set an option for one or more specified entities
212
213 Usage:
214
215 ceph config set <who> <option> <value> {--force}
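
For example, to set the illustrative option osd_max_backfills to 2 for all OSDs (the option and value are assumptions chosen for illustration):

Example:

ceph config set osd osd_max_backfills 2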
216
217 Subcommand rm to clear an option for one or more entities
218
219 Usage:
220
221 ceph config rm <who> <option>
222
Subcommand log to show recent history of config changes. If the count option is omitted, it defaults to 10.
225
226 Usage:
227
228 ceph config log {<count>}
229
230 Subcommand reset to revert configuration to the specified historical
231 version
232
233 Usage:
234
235 ceph config reset <version>
236
237 Subcommand assimilate-conf to assimilate options from stdin, and return
238 a new, minimal conf file
239
240 Usage:
241
242 ceph config assimilate-conf -i <input-config-path> > <output-config-path>
243 ceph config assimilate-conf < <input-config-path>
244
245 Subcommand generate-minimal-conf to generate a minimal ceph.conf file,
246 which can be used for bootstrapping a daemon or a client.
247
248 Usage:
249
250 ceph config generate-minimal-conf > <minimal-config-path>
251
252 config-key
Manage configuration keys. config-key is a general purpose key/value service offered by the monitors. This service is mainly used by Ceph tools and daemons to persist various settings; among other things, ceph-mgr modules use it to store their options. It uses some additional subcommands.
258
259 Subcommand rm deletes configuration key.
260
261 Usage:
262
263 ceph config-key rm <key>
264
Subcommand exists checks for a configuration key's existence.
266
267 Usage:
268
269 ceph config-key exists <key>
270
271 Subcommand get gets the configuration key.
272
273 Usage:
274
275 ceph config-key get <key>
276
277 Subcommand ls lists configuration keys.
278
279 Usage:
280
281 ceph config-key ls
282
283 Subcommand dump dumps configuration keys and values.
284
285 Usage:
286
287 ceph config-key dump
288
289 Subcommand set puts configuration key and value.
290
291 Usage:
292
293 ceph config-key set <key> {<val>}
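
For example, to store an arbitrary value under a purely illustrative key name (both key and value below are assumptions, not keys used by any particular module):

Example:

ceph config-key set example/key example-value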
294
295 daemon
296 Submit admin-socket commands.
297
298 Usage:
299
300 ceph daemon {daemon_name|socket_path} {command} ...
301
302 Example:
303
304 ceph daemon osd.0 help
305
306 daemonperf
307 Watch performance counters from a Ceph daemon.
308
309 Usage:
310
311 ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
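
For example, to sample the performance counters of osd.0 every two seconds, ten times (the daemon name, interval, and count are illustrative):

Example:

ceph daemonperf osd.0 2 10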
312
313 df
314 Show cluster's free space status.
315
316 Usage:
317
318 ceph df {detail}
319
320 features
Show the releases and features of all daemons and clients connected to the cluster, along with the number of them in each bucket, grouped by the corresponding features/releases. Each release of Ceph supports a different set of features, expressed by the features bitmask. New cluster features require that clients support the feature, or else they are not allowed to connect to the cluster. As new features or capabilities are enabled after an upgrade, older clients are prevented from connecting.
329
330 Usage:
331
332 ceph features
333
334 fs
335 Manage cephfs file systems. It uses some additional subcommands.
336
337 Subcommand ls to list file systems
338
339 Usage:
340
341 ceph fs ls
342
343 Subcommand new to make a new file system using named pools <metadata>
344 and <data>
345
346 Usage:
347
348 ceph fs new <fs_name> <metadata> <data>
349
350 Subcommand reset is used for disaster recovery only: reset to a sin‐
351 gle-MDS map
352
353 Usage:
354
355 ceph fs reset <fs_name> {--yes-i-really-mean-it}
356
357 Subcommand rm to disable the named file system
358
359 Usage:
360
361 ceph fs rm <fs_name> {--yes-i-really-mean-it}
362
363 Subcommand authorize creates a new client that will be authorized for
364 the given path in <fs_name>. Pass / to authorize for the entire FS.
365 <perms> below can be r, rw or rwp.
366
367 Usage:
368
369 ceph fs authorize <fs_name> client.<client_id> <path> <perms> [<path> <perms>...]
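
For example, to create a client named client.foo with read/write access to the root of an illustrative file system named cephfs (both names are assumptions):

Example:

ceph fs authorize cephfs client.foo / rw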
370
371 fsid
372 Show cluster's FSID/UUID.
373
374 Usage:
375
376 ceph fsid
377
378 health
379 Show cluster's health.
380
381 Usage:
382
383 ceph health {detail}
384
385 heap
386 Show heap usage info (available only if compiled with tcmalloc)
387
388 Usage:
389
390 ceph tell <name (type.id)> heap dump|start_profiler|stop_profiler|stats
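
For example, to print heap statistics for an illustrative daemon osd.0 (assuming it was built with tcmalloc):

Example:

ceph tell osd.0 heap stats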
391
Subcommand release makes TCMalloc release no-longer-used memory back to the kernel at once.
394
395 Usage:
396
397 ceph tell <name (type.id)> heap release
398
Subcommand (get|set)_release_rate gets or sets the TCMalloc memory release rate. TCMalloc releases no-longer-used memory back to the kernel gradually; the rate controls how quickly this happens. Increase this setting to make TCMalloc return unused memory more frequently. A value of 0 means never return memory to the system; a value of 1 means wait for 1000 pages after releasing a page to the system. The default is 1.0.
405
406 Usage:
407
408 ceph tell <name (type.id)> heap get_release_rate|set_release_rate {<val>}
409
410 injectargs
Inject configuration arguments into the monitor.
412
413 Usage:
414
415 ceph injectargs <injected_args> [<injected_args>...]
416
417 log
418 Log supplied text to the monitor log.
419
420 Usage:
421
422 ceph log <logtext> [<logtext>...]
423
424 mds
425 Manage metadata server configuration and administration. It uses some
426 additional subcommands.
427
428 Subcommand compat manages compatible features. It uses some additional
429 subcommands.
430
431 Subcommand rm_compat removes compatible feature.
432
433 Usage:
434
435 ceph mds compat rm_compat <int[0-]>
436
437 Subcommand rm_incompat removes incompatible feature.
438
439 Usage:
440
441 ceph mds compat rm_incompat <int[0-]>
442
443 Subcommand show shows mds compatibility settings.
444
445 Usage:
446
447 ceph mds compat show
448
Subcommand fail forces an MDS to the failed state.
450
451 Usage:
452
453 ceph mds fail <role|gid>
454
Subcommand rm removes an inactive MDS.
456
457 Usage:
458
ceph mds rm <int[0-]> <name (type.id)>
460
461 Subcommand rmfailed removes failed mds.
462
463 Usage:
464
465 ceph mds rmfailed <int[0-]>
466
467 Subcommand set_state sets mds state of <gid> to <numeric-state>.
468
469 Usage:
470
471 ceph mds set_state <int[0-]> <int[0-20]>
472
473 Subcommand stat shows MDS status.
474
475 Usage:
476
477 ceph mds stat
478
Subcommand repaired marks a damaged MDS rank as no longer damaged.
480
481 Usage:
482
483 ceph mds repaired <role>
484
485 mon
486 Manage monitor configuration and administration. It uses some addi‐
487 tional subcommands.
488
Subcommand add adds a new monitor named <name> at <addr>.
490
491 Usage:
492
493 ceph mon add <name> <IPaddr[:port]>
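
For example, to add a monitor named c at an illustrative address (both the name and the address below are assumptions):

Example:

ceph mon add c 10.0.0.3:6789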
494
495 Subcommand dump dumps formatted monmap (optionally from epoch)
496
497 Usage:
498
499 ceph mon dump {<int[0-]>}
500
501 Subcommand getmap gets monmap.
502
503 Usage:
504
505 ceph mon getmap {<int[0-]>}
506
507 Subcommand remove removes monitor named <name>.
508
509 Usage:
510
511 ceph mon remove <name>
512
513 Subcommand stat summarizes monitor status.
514
515 Usage:
516
517 ceph mon stat
518
519 mgr
520 Ceph manager daemon configuration and management.
521
522 Subcommand dump dumps the latest MgrMap, which describes the active and
523 standby manager daemons.
524
525 Usage:
526
527 ceph mgr dump
528
Subcommand fail will mark a manager daemon as failed, removing it from the manager map. If it is the active manager daemon, a standby will take its place.
532
533 Usage:
534
535 ceph mgr fail <name>
536
537 Subcommand module ls will list currently enabled manager modules (plug‐
538 ins).
539
540 Usage:
541
542 ceph mgr module ls
543
544 Subcommand module enable will enable a manager module. Available mod‐
545 ules are included in MgrMap and visible via mgr dump.
546
547 Usage:
548
549 ceph mgr module enable <module>
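
For example, to enable the dashboard module (assuming it appears in the MgrMap reported by mgr dump):

Example:

ceph mgr module enable dashboard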
550
551 Subcommand module disable will disable an active manager module.
552
553 Usage:
554
555 ceph mgr module disable <module>
556
557 Subcommand metadata will report metadata about all manager daemons or,
558 if the name is specified, a single manager daemon.
559
560 Usage:
561
562 ceph mgr metadata [name]
563
564 Subcommand versions will report a count of running daemon versions.
565
566 Usage:
567
568 ceph mgr versions
569
570 Subcommand count-metadata will report a count of any daemon metadata
571 field.
572
573 Usage:
574
575 ceph mgr count-metadata <field>
576
577 osd
578 Manage OSD configuration and administration. It uses some additional
579 subcommands.
580
Subcommand blocklist manages blocklisted clients. It uses some additional subcommands.
583
Subcommand add adds <addr> to the blocklist (optionally until <expire> seconds from now).
586
587 Usage:
588
589 ceph osd blocklist add <EntityAddr> {<float[0.0-]>}
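
For example, to blocklist an illustrative client address for one hour (the address and nonce are assumptions):

Example:

ceph osd blocklist add 192.168.0.122:0/3214321 3600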
590
Subcommand ls shows blocklisted clients.
592
593 Usage:
594
595 ceph osd blocklist ls
596
Subcommand rm removes <addr> from the blocklist.
598
599 Usage:
600
601 ceph osd blocklist rm <EntityAddr>
602
603 Subcommand blocked-by prints a histogram of which OSDs are blocking
604 their peers
605
606 Usage:
607
608 ceph osd blocked-by
609
610 Subcommand create creates new osd (with optional UUID and ID).
611
612 This command is DEPRECATED as of the Luminous release, and will be re‐
613 moved in a future release.
614
615 Subcommand new should instead be used.
616
617 Usage:
618
619 ceph osd create {<uuid>} {<id>}
620
Subcommand new can be used to create a new OSD or to recreate a previously destroyed OSD with a specific id. The new OSD will have the specified uuid, and the command expects a JSON file containing the base64 cephx key for auth entity client.osd.<id>, as well as an optional base64 cephx key for dm-crypt lockbox access and a dm-crypt key. Specifying a dm-crypt key requires specifying the accompanying lockbox cephx key.
627
628 Usage:
629
630 ceph osd new {<uuid>} {<id>} -i {<params.json>}
631
The parameters JSON file is optional but, if provided, is expected to take one of the following forms:
634
635 {
636 "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
637 "crush_device_class": "myclass"
638 }
639
640 Or:
641
642 {
643 "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
644 "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
645 "dmcrypt_key": "<dm-crypt key>",
646 "crush_device_class": "myclass"
647 }
648
649 Or:
650
651 {
652 "crush_device_class": "myclass"
653 }
654
655 The "crush_device_class" property is optional. If specified, it will
656 set the initial CRUSH device class for the new OSD.
657
658 Subcommand crush is used for CRUSH management. It uses some additional
659 subcommands.
660
661 Subcommand add adds or updates crushmap position and weight for <name>
662 with <weight> and location <args>.
663
664 Usage:
665
666 ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
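
For example, to add osd.5 with a weight of 1.0 under an illustrative host bucket named node1 (assuming such a bucket exists in the crush map):

Example:

ceph osd crush add osd.5 1.0 host=node1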
667
Subcommand add-bucket adds a no-parent (probably root) crush bucket <name> of type <type>.
670
671 Usage:
672
673 ceph osd crush add-bucket <name> <type>
674
675 Subcommand create-or-move creates entry or moves existing entry for
676 <name> <weight> at/to location <args>.
677
678 Usage:
679
680 ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
681 [<args>...]
682
683 Subcommand dump dumps crush map.
684
685 Usage:
686
687 ceph osd crush dump
688
Subcommand get-tunable gets the crush tunable straw_calc_version.
690
691 Usage:
692
693 ceph osd crush get-tunable straw_calc_version
694
695 Subcommand link links existing entry for <name> under location <args>.
696
697 Usage:
698
699 ceph osd crush link <name> <args> [<args>...]
700
701 Subcommand move moves existing entry for <name> to location <args>.
702
703 Usage:
704
705 ceph osd crush move <name> <args> [<args>...]
706
707 Subcommand remove removes <name> from crush map (everywhere, or just at
708 <ancestor>).
709
710 Usage:
711
712 ceph osd crush remove <name> {<ancestor>}
713
714 Subcommand rename-bucket renames bucket <srcname> to <dstname>
715
716 Usage:
717
718 ceph osd crush rename-bucket <srcname> <dstname>
719
Subcommand reweight changes <name>'s weight to <weight> in the crush map.
721
722 Usage:
723
724 ceph osd crush reweight <name> <float[0.0-]>
725
Subcommand reweight-all recalculates the weights for the tree to ensure they sum correctly.
728
729 Usage:
730
731 ceph osd crush reweight-all
732
Subcommand reweight-subtree changes all leaf items beneath <name> to <weight> in the crush map.
735
736 Usage:
737
738 ceph osd crush reweight-subtree <name> <weight>
739
740 Subcommand rm removes <name> from crush map (everywhere, or just at
741 <ancestor>).
742
743 Usage:
744
745 ceph osd crush rm <name> {<ancestor>}
746
747 Subcommand rule is used for creating crush rules. It uses some addi‐
748 tional subcommands.
749
750 Subcommand create-erasure creates crush rule <name> for erasure coded
751 pool created with <profile> (default default).
752
753 Usage:
754
755 ceph osd crush rule create-erasure <name> {<profile>}
756
757 Subcommand create-simple creates crush rule <name> to start from
758 <root>, replicate across buckets of type <type>, using a choose mode of
759 <firstn|indep> (default firstn; indep best for erasure pools).
760
761 Usage:
762
763 ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
764
765 Subcommand dump dumps crush rule <name> (default all).
766
767 Usage:
768
769 ceph osd crush rule dump {<name>}
770
771 Subcommand ls lists crush rules.
772
773 Usage:
774
775 ceph osd crush rule ls
776
777 Subcommand rm removes crush rule <name>.
778
779 Usage:
780
781 ceph osd crush rule rm <name>
782
Subcommand set, used alone, sets the crush map from the input file.
784
785 Usage:
786
787 ceph osd crush set
788
Subcommand set with an osdname/osd.id updates the crushmap position and weight for <name> to <weight> with location <args>.
791
792 Usage:
793
794 ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
795
Subcommand set-tunable sets crush tunable <tunable> to <value>. The only tunable that can be set is straw_calc_version.
798
799 Usage:
800
801 ceph osd crush set-tunable straw_calc_version <value>
802
803 Subcommand show-tunables shows current crush tunables.
804
805 Usage:
806
807 ceph osd crush show-tunables
808
809 Subcommand tree shows the crush buckets and items in a tree view.
810
811 Usage:
812
813 ceph osd crush tree
814
815 Subcommand tunables sets crush tunables values to <profile>.
816
817 Usage:
818
819 ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default
820
821 Subcommand unlink unlinks <name> from crush map (everywhere, or just at
822 <ancestor>).
823
824 Usage:
825
826 ceph osd crush unlink <name> {<ancestor>}
827
828 Subcommand df shows OSD utilization
829
830 Usage:
831
832 ceph osd df {plain|tree}
833
834 Subcommand deep-scrub initiates deep scrub on specified osd.
835
836 Usage:
837
838 ceph osd deep-scrub <who>
839
840 Subcommand down sets osd(s) <id> [<id>...] down.
841
842 Usage:
843
844 ceph osd down <ids> [<ids>...]
845
846 Subcommand dump prints summary of OSD map.
847
848 Usage:
849
850 ceph osd dump {<int[0-]>}
851
852 Subcommand erasure-code-profile is used for managing the erasure code
853 profiles. It uses some additional subcommands.
854
855 Subcommand get gets erasure code profile <name>.
856
857 Usage:
858
859 ceph osd erasure-code-profile get <name>
860
861 Subcommand ls lists all erasure code profiles.
862
863 Usage:
864
865 ceph osd erasure-code-profile ls
866
867 Subcommand rm removes erasure code profile <name>.
868
869 Usage:
870
871 ceph osd erasure-code-profile rm <name>
872
873 Subcommand set creates erasure code profile <name> with [<key[=value]>
874 ...] pairs. Add a --force at the end to override an existing profile
875 (IT IS RISKY).
876
877 Usage:
878
879 ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
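
For example, to create an illustrative profile named myprofile with 4 data chunks, 2 coding chunks, and a host failure domain (the profile name and values are assumptions):

Example:

ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host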
880
Subcommand find finds osd <id> in the CRUSH map and shows its location.
882
883 Usage:
884
885 ceph osd find <int[0-]>
886
887 Subcommand getcrushmap gets CRUSH map.
888
889 Usage:
890
891 ceph osd getcrushmap {<int[0-]>}
892
893 Subcommand getmap gets OSD map.
894
895 Usage:
896
897 ceph osd getmap {<int[0-]>}
898
899 Subcommand getmaxosd shows largest OSD id.
900
901 Usage:
902
903 ceph osd getmaxosd
904
905 Subcommand in sets osd(s) <id> [<id>...] in.
906
907 Usage:
908
909 ceph osd in <ids> [<ids>...]
910
911 Subcommand lost marks osd as permanently lost. THIS DESTROYS DATA IF NO
912 MORE REPLICAS EXIST, BE CAREFUL.
913
914 Usage:
915
916 ceph osd lost <int[0-]> {--yes-i-really-mean-it}
917
918 Subcommand ls shows all OSD ids.
919
920 Usage:
921
922 ceph osd ls {<int[0-]>}
923
924 Subcommand lspools lists pools.
925
926 Usage:
927
928 ceph osd lspools {<int>}
929
930 Subcommand map finds pg for <object> in <pool>.
931
932 Usage:
933
934 ceph osd map <poolname> <objectname>
935
936 Subcommand metadata fetches metadata for osd <id>.
937
938 Usage:
939
940 ceph osd metadata {int[0-]} (default all)
941
942 Subcommand out sets osd(s) <id> [<id>...] out.
943
944 Usage:
945
946 ceph osd out <ids> [<ids>...]
947
Subcommand ok-to-stop checks whether the list of OSD(s) can be stopped without immediately making data unavailable. That is, all data should remain readable and writeable, although data redundancy may be reduced as some PGs may end up in a degraded (but active) state. It will return a success code if it is okay to stop the OSD(s), or an error code and informative message if it is not or if no conclusion can be drawn at the current time. When --max <num> is provided, up to <num> OSD IDs that can all be stopped simultaneously will be returned (including the provided OSDs). This allows larger sets of stoppable OSDs to be generated easily by providing a single starting OSD and a max. Additional OSDs are drawn from adjacent locations in the CRUSH hierarchy.
959
960 Usage:
961
962 ceph osd ok-to-stop <id> [<ids>...] [--max <num>]
963
964 Subcommand pause pauses osd.
965
966 Usage:
967
968 ceph osd pause
969
970 Subcommand perf prints dump of OSD perf summary stats.
971
972 Usage:
973
974 ceph osd perf
975
Subcommand pg-temp sets the pg_temp mapping pgid:[<id> [<id>...]] (developers only).
978
979 Usage:
980
981 ceph osd pg-temp <pgid> {<id> [<id>...]}
982
983 Subcommand force-create-pg forces creation of pg <pgid>.
984
985 Usage:
986
987 ceph osd force-create-pg <pgid>
988
989 Subcommand pool is used for managing data pools. It uses some addi‐
990 tional subcommands.
991
992 Subcommand create creates pool.
993
994 Usage:
995
996 ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure}
997 {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>}
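
For example, to create a replicated pool named mypool with 64 placement groups (the pool name and pg counts are illustrative):

Example:

ceph osd pool create mypool 64 64 replicated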
998
999 Subcommand delete deletes pool.
1000
1001 Usage:
1002
1003 ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}
1004
1005 Subcommand get gets pool parameter <var>.
1006
1007 Usage:
1008
1009 ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed
1010
1011 Only for tiered pools:
1012
1013 ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
1014 target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
1015 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
1016 min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n
1017
1018 Only for erasure coded pools:
1019
1020 ceph osd pool get <poolname> erasure_code_profile
1021
1022 Use all to get all pool parameters that apply to the pool's type:
1023
1024 ceph osd pool get <poolname> all
1025
1026 Subcommand get-quota obtains object or byte limits for pool.
1027
1028 Usage:
1029
1030 ceph osd pool get-quota <poolname>
1031
Subcommand ls lists pools.
1033
1034 Usage:
1035
1036 ceph osd pool ls {detail}
1037
1038 Subcommand mksnap makes snapshot <snap> in <pool>.
1039
1040 Usage:
1041
1042 ceph osd pool mksnap <poolname> <snap>
1043
1044 Subcommand rename renames <srcpool> to <destpool>.
1045
1046 Usage:
1047
1048 ceph osd pool rename <poolname> <poolname>
1049
1050 Subcommand rmsnap removes snapshot <snap> from <pool>.
1051
1052 Usage:
1053
1054 ceph osd pool rmsnap <poolname> <snap>
1055
1056 Subcommand set sets pool parameter <var> to <val>.
1057
1058 Usage:
1059
1060 ceph osd pool set <poolname> size|min_size|pg_num|
1061 pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
1062 hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
1063 target_max_bytes|target_max_objects|cache_target_dirty_ratio|
1064 cache_target_dirty_high_ratio|
1065 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
1066 min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
1067 hit_set_search_last_n
1068 <val> {--yes-i-really-mean-it}
1069
1070 Subcommand set-quota sets object or byte limit on pool.
1071
1072 Usage:
1073
1074 ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
1075
Subcommand stats obtains stats from all pools, or from the specified pool.
1077
1078 Usage:
1079
1080 ceph osd pool stats {<name>}
1081
1082 Subcommand application is used for adding an annotation to the given
1083 pool. By default, the possible applications are object, block, and file
1084 storage (corresponding app-names are "rgw", "rbd", and "cephfs"). How‐
1085 ever, there might be other applications as well. Based on the applica‐
1086 tion, there may or may not be some processing conducted.
1087
1088 Subcommand disable disables the given application on the given pool.
1089
1090 Usage:
1091
1092 ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}
1093
1094 Subcommand enable adds an annotation to the given pool for the men‐
1095 tioned application.
1096
1097 Usage:
1098
1099 ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}
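
For example, to tag an illustrative pool named mypool for use by RBD:

Example:

ceph osd pool application enable mypool rbd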
1100
1101 Subcommand get displays the value for the given key that is associated
1102 with the given application of the given pool. Not passing the optional
1103 arguments would display all key-value pairs for all applications for
1104 all pools.
1105
1106 Usage:
1107
1108 ceph osd pool application get {<pool-name>} {<app>} {<key>}
1109
1110 Subcommand rm removes the key-value pair for the given key in the given
1111 application of the given pool.
1112
1113 Usage:
1114
1115 ceph osd pool application rm <pool-name> <app> <key>
1116
Subcommand set associates a key-value pair with the given application for the given pool, or updates the pair if it already exists.
1119
1120 Usage:
1121
1122 ceph osd pool application set <pool-name> <app> <key> <value>
1123
Subcommand primary-affinity adjusts the osd primary-affinity, where 0.0 <= <weight> <= 1.0.
1126
1127 Usage:
1128
1129 ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
1130
1131 Subcommand primary-temp sets primary_temp mapping pgid:<id>|-1 (devel‐
1132 opers only).
1133
1134 Usage:
1135
1136 ceph osd primary-temp <pgid> <id>
1137
1138 Subcommand repair initiates repair on a specified osd.
1139
1140 Usage:
1141
1142 ceph osd repair <who>
1143
1144 Subcommand reweight reweights osd to 0.0 < <weight> < 1.0.
1145
1146 Usage:
1147
ceph osd reweight <int[0-]> <float[0.0-1.0]>
1149
Subcommand reweight-by-pg reweights OSDs by PG distribution [overload-percentage-for-consideration, default 120].
1152
1153 Usage:
1154
1155 ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
1156 {--no-increasing}
1157
Subcommand reweight-by-utilization reweights OSDs by utilization. It only reweights outlier OSDs whose utilization exceeds the average; e.g., the default 120% limits reweighting to those OSDs that are more than 20% over the average. [overload-threshold, default 120 [max_weight_change, default 0.05 [max_osds_to_adjust, default 4]]]
1163
1164 Usage:
1165
1166 ceph osd reweight-by-utilization {<int[100-]> {<float[0.0-]> {<int[0-]>}}}
1167 {--no-increasing}
1168
1169 Subcommand rm removes osd(s) <id> [<id>...] from the OSD map.
1170
1171 Usage:
1172
1173 ceph osd rm <ids> [<ids>...]
1174
1175 Subcommand destroy marks OSD id as destroyed, removing its cephx en‐
1176 tity's keys and all of its dm-crypt and daemon-private config key en‐
1177 tries.
1178
This command will not remove the OSD from crush, nor will it remove the OSD from the OSD map. Instead, once the command successfully completes, the OSD will be shown as destroyed.
1182
1183 In order to mark an OSD as destroyed, the OSD must first be marked as
1184 lost.
1185
1186 Usage:
1187
1188 ceph osd destroy <id> {--yes-i-really-mean-it}
1189
1190 Subcommand purge performs a combination of osd destroy, osd rm and osd
1191 crush remove.
1192
1193 Usage:
1194
1195 ceph osd purge <id> {--yes-i-really-mean-it}
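
For example, to purge the OSD with id 3 (the id is illustrative):

Example:

ceph osd purge 3 --yes-i-really-mean-it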
1196
1197 Subcommand safe-to-destroy checks whether it is safe to remove or de‐
1198 stroy an OSD without reducing overall data redundancy or durability.
1199 It will return a success code if it is definitely safe, or an error
1200 code and informative message if it is not or if no conclusion can be
1201 drawn at the current time.
1202
1203 Usage:
1204
1205 ceph osd safe-to-destroy <id> [<ids>...]
1206
1207 Subcommand scrub initiates scrub on specified osd.
1208
1209 Usage:
1210
1211 ceph osd scrub <who>
1212
Subcommand set sets cluster-wide <flag> by updating the OSD map. The full flag has not been honored since the Mimic release, and ceph osd set full is not supported as of the Octopus release.
1216
1217 Usage:
1218
1219 ceph osd set pause|noup|nodown|noout|noin|nobackfill|
1220 norebalance|norecover|noscrub|nodeep-scrub|notieragent
1221
1222 Subcommand setcrushmap sets crush map from input file.
1223
1224 Usage:
1225
1226 ceph osd setcrushmap
1227
1228 Subcommand setmaxosd sets new maximum osd value.
1229
1230 Usage:
1231
1232 ceph osd setmaxosd <int[0-]>
1233
Subcommand set-require-min-compat-client requires the cluster to remain backward compatible with the specified client version. This subcommand prevents you from making any changes (e.g., to crush tunables, or using new features) that would violate the current setting. Please note, this subcommand will fail if any connected daemon or client is not compatible with the features offered by the given <version>. To see the features and releases of all clients connected to the cluster, please see ceph features.
1242
1243 Usage:
1244
1245 ceph osd set-require-min-compat-client <version>
1246
1247 Subcommand stat prints summary of OSD map.
1248
1249 Usage:
1250
1251 ceph osd stat
1252
1253 Subcommand tier is used for managing tiers. It uses some additional
1254 subcommands.
1255
1256 Subcommand add adds the tier <tierpool> (the second one) to base pool
1257 <pool> (the first one).
1258
1259 Usage:
1260
1261 ceph osd tier add <poolname> <poolname> {--force-nonempty}
1262
1263 Subcommand add-cache adds a cache <tierpool> (the second one) of size
1264 <size> to existing pool <pool> (the first one).
1265
1266 Usage:
1267
1268 ceph osd tier add-cache <poolname> <poolname> <int[0-]>
1269
1270 Subcommand cache-mode specifies the caching mode for cache tier <pool>.
1271
1272 Usage:
1273
1274 ceph osd tier cache-mode <poolname> writeback|readproxy|readonly|none
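
For example, to put an illustrative cache pool named cachepool into writeback mode:

Example:

ceph osd tier cache-mode cachepool writeback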
1275
1276 Subcommand remove removes the tier <tierpool> (the second one) from
1277 base pool <pool> (the first one).
1278
1279 Usage:
1280
1281 ceph osd tier remove <poolname> <poolname>
1282
1283 Subcommand remove-overlay removes the overlay pool for base pool
1284 <pool>.
1285
1286 Usage:
1287
1288 ceph osd tier remove-overlay <poolname>
1289
Subcommand set-overlay sets the overlay pool for base pool <pool> to be <overlaypool>.
1292
1293 Usage:
1294
1295 ceph osd tier set-overlay <poolname> <poolname>
1296
1297 Subcommand tree prints OSD tree.
1298
1299 Usage:
1300
1301 ceph osd tree {<int[0-]>}
1302
1303 Subcommand unpause unpauses osd.
1304
1305 Usage:
1306
1307 ceph osd unpause
1308
1309 Subcommand unset unsets cluster-wide <flag> by updating OSD map.
1310
1311 Usage:
1312
1313 ceph osd unset pause|noup|nodown|noout|noin|nobackfill|
1314 norebalance|norecover|noscrub|nodeep-scrub|notieragent
1315
1316 pg
1317 It is used for managing the placement groups in OSDs. It uses some ad‐
1318 ditional subcommands.
1319
1320 Subcommand debug shows debug info about pgs.
1321
1322 Usage:
1323
1324 ceph pg debug unfound_objects_exist|degraded_pgs_exist
1325
1326 Subcommand deep-scrub starts deep-scrub on <pgid>.
1327
1328 Usage:
1329
1330 ceph pg deep-scrub <pgid>
1331
1332 Subcommand dump shows human-readable versions of pg map (only 'all'
1333 valid with plain).
1334
1335 Usage:
1336
1337 ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1338
1339 Subcommand dump_json shows human-readable version of pg map in json
1340 only.
1341
1342 Usage:
1343
1344 ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1345
1346 Subcommand dump_pools_json shows pg pools info in json only.
1347
1348 Usage:
1349
1350 ceph pg dump_pools_json
1351
1352 Subcommand dump_stuck shows information about stuck pgs.
1353
1354 Usage:
1355
1356 ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
1357 {<int>}
1358
1359 Subcommand getmap gets binary pg map to -o/stdout.
1360
1361 Usage:
1362
1363 ceph pg getmap
1364
Subcommand ls lists pgs with a specific pool, osd, or state.
1366
1367 Usage:
1368
1369 ceph pg ls {<int>} {<pg-state> [<pg-state>...]}
1370
1371 Subcommand ls-by-osd lists pg on osd [osd]
1372
1373 Usage:
1374
1375 ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
1376 {<pg-state> [<pg-state>...]}
1377
1378 Subcommand ls-by-pool lists pg with pool = [poolname]
1379
1380 Usage:
1381
1382 ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}
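
For example, to list the placement groups of an illustrative pool named mypool:

Example:

ceph pg ls-by-pool mypool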
1383
1384 Subcommand ls-by-primary lists pg with primary = [osd]
1385
1386 Usage:
1387
1388 ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
1389 {<pg-state> [<pg-state>...]}
1390
1391 Subcommand map shows mapping of pg to osds.
1392
1393 Usage:
1394
1395 ceph pg map <pgid>
1396
1397 Subcommand repair starts repair on <pgid>.
1398
1399 Usage:
1400
1401 ceph pg repair <pgid>
1402
1403 Subcommand scrub starts scrub on <pgid>.
1404
1405 Usage:
1406
1407 ceph pg scrub <pgid>
1408
1409 Subcommand stat shows placement group status.
1410
1411 Usage:
1412
1413 ceph pg stat
1414
1415 quorum
1416 Cause a specific MON to enter or exit quorum.
1417
1418 Usage:
1419
1420 ceph tell mon.<id> quorum enter|exit
1421
1422 quorum_status
1423 Reports status of monitor quorum.
1424
1425 Usage:
1426
1427 ceph quorum_status
1428
1429 report
Reports the full status of the cluster, with optional title tag strings.
1431
1432 Usage:
1433
1434 ceph report {<tags> [<tags>...]}
1435
1436 status
1437 Shows cluster status.
1438
1439 Usage:
1440
1441 ceph status
1442
1443 tell
1444 Sends a command to a specific daemon.
1445
1446 Usage:
1447
1448 ceph tell <name (type.id)> <command> [options...]
1449
1450 List all available commands.
1451
1452 Usage:
1453
1454 ceph tell <name (type.id)> help
1455
1456 version
1457 Show mon daemon version
1458
1459 Usage:
1460
1461 ceph version
1462
OPTIONS
-i infile
will specify an input file to be passed along as a payload with the command to the monitor cluster. This is only used for specific monitor commands.
1468
1469 -o outfile
1470 will write any payload returned by the monitor cluster with its
1471 reply to outfile. Only specific monitor commands (e.g. osd
1472 getmap) return a payload.
1473
1474 --setuser user
1475 will apply the appropriate user ownership to the file specified
1476 by the option '-o'.
1477
1478 --setgroup group
1479 will apply the appropriate group ownership to the file specified
1480 by the option '-o'.
1481
1482 -c ceph.conf, --conf=ceph.conf
1483 Use ceph.conf configuration file instead of the default
1484 /etc/ceph/ceph.conf to determine monitor addresses during
1485 startup.
1486
1487 --id CLIENT_ID, --user CLIENT_ID
1488 Client id for authentication.
1489
1490 --name CLIENT_NAME, -n CLIENT_NAME
1491 Client name for authentication.
1492
1493 --cluster CLUSTER
1494 Name of the Ceph cluster.
1495
1496 --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME
1497 Submit admin-socket commands via admin sockets in /var/run/ceph.
1498
1499 --admin-socket ADMIN_SOCKET_NOPE
1500 You probably mean --admin-daemon
1501
1502 -s, --status
1503 Show cluster status.
1504
1505 -w, --watch
1506 Watch live cluster changes on the default 'cluster' channel
1507
1508 -W, --watch-channel
1509 Watch live cluster changes on any channel (cluster, audit,
1510 cephadm, or * for all)
1511
1512 --watch-debug
1513 Watch debug events.
1514
1515 --watch-info
1516 Watch info events.
1517
1518 --watch-sec
1519 Watch security events.
1520
1521 --watch-warn
1522 Watch warning events.
1523
1524 --watch-error
1525 Watch error events.
1526
1527 --version, -v
1528 Display version.
1529
1530 --verbose
1531 Make verbose.
1532
1533 --concise
1534 Make less verbose.
1535
1536 -f {json,json-pretty,xml,xml-pretty,plain,yaml}, --format
1537 Format of output. Note: yaml is only valid for orch commands.
1538
1539 --connect-timeout CLUSTER_TIMEOUT
1540 Set a timeout for connecting to the cluster.
1541
1542 --no-increasing
--no-increasing is off by default, so increasing an OSD's weight is allowed when using the reweight-by-utilization or test-reweight-by-utilization commands. If this option is used with these commands, it prevents the OSD weight from being increased even if the OSD is underutilized.
1548
1549 --block
1550 block until completion (scrub and deep-scrub only)
1551
AVAILABILITY
ceph is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at https://docs.ceph.com for more information.
1556
SEE ALSO
ceph-mon(8), ceph-osd(8), ceph-mds(8)
1559
COPYRIGHT
2010-2022, Inktank Storage, Inc. and contributors. Licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)
1563
1564
1565
1566
dev                              Oct 18, 2022                          CEPH(8)