CEPH(8)                                Ceph                               CEPH(8)

NAME
   ceph - ceph administration tool

SYNOPSIS
   ceph auth [ add | caps | del | export | get | get-key | get-or-create | get-or-create-key | import | list | print-key | print_key ] ...

   ceph compact

   ceph config [ dump | ls | help | get | show | show-with-defaults | set | rm | log | reset | assimilate-conf | generate-minimal-conf ] ...

   ceph config-key [ rm | exists | get | ls | dump | set ] ...

   ceph daemon <name> | <path> <command> ...

   ceph daemonperf <name> | <path> [ interval [ count ] ]

   ceph df {detail}

   ceph fs [ ls | new | reset | rm | authorize ] ...

   ceph fsid

   ceph health {detail}

   ceph injectargs <injectedargs> [ <injectedargs>... ]

   ceph log <logtext> [ <logtext>... ]

   ceph mds [ compat | fail | rm | rmfailed | set_state | stat | repaired ] ...

   ceph mon [ add | dump | enable_stretch_mode | getmap | remove | stat ] ...

   ceph osd [ blocklist | blocked-by | create | new | deep-scrub | df | down | dump | erasure-code-profile | find | getcrushmap | getmap | getmaxosd | in | ls | lspools | map | metadata | ok-to-stop | out | pause | perf | pg-temp | force-create-pg | primary-affinity | primary-temp | repair | reweight | reweight-by-pg | rm | destroy | purge | safe-to-destroy | scrub | set | setcrushmap | setmaxosd | stat | tree | unpause | unset ] ...

   ceph osd crush [ add | add-bucket | create-or-move | dump | get-tunable | link | move | remove | rename-bucket | reweight | reweight-all | reweight-subtree | rm | rule | set | set-tunable | show-tunables | tunables | unlink ] ...

   ceph osd pool [ create | delete | get | get-quota | ls | mksnap | rename | rmsnap | set | set-quota | stats ] ...

   ceph osd pool application [ disable | enable | get | rm | set ] ...

   ceph osd tier [ add | add-cache | cache-mode | remove | remove-overlay | set-overlay ] ...

   ceph pg [ debug | deep-scrub | dump | dump_json | dump_pools_json | dump_stuck | getmap | ls | ls-by-osd | ls-by-pool | ls-by-primary | map | repair | scrub | stat ] ...

   ceph quorum_status

   ceph report { <tags> [ <tags>... ] }

   ceph status

   ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

   ceph tell <name (type.id)> <command> [options...]

   ceph version

DESCRIPTION
ceph is a control utility used for manual deployment and maintenance of a Ceph cluster. It provides a diverse set of commands that allow deployment of monitors, OSDs, placement groups and MDS daemons, as well as overall maintenance and administration of the cluster.

COMMANDS
auth
Manage authentication keys. Used for adding, removing, exporting or updating authentication keys for a particular entity such as a monitor or OSD. It uses some additional subcommands.

Subcommand add adds authentication info for a particular entity from an input file, or generates a random key if no input is given, along with any caps specified in the command.

Usage:

   ceph auth add <entity> {<caps> [<caps>...]}

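For example, to add a key for a client with monitor read and pool-scoped OSD capabilities (the client name, pool name and caps here are illustrative):

   ceph auth add client.george mon 'allow r' osd 'allow rw pool=data'
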
Subcommand caps updates caps for name from the caps specified in the command.

Usage:

   ceph auth caps <entity> <caps> [<caps>...]

Subcommand del deletes all caps for name.

Usage:

   ceph auth del <entity>

Subcommand export writes the keyring for the requested entity, or the master keyring if none is given.

Usage:

   ceph auth export {<entity>}

Subcommand get writes a keyring file with the requested key.

Usage:

   ceph auth get <entity>

Subcommand get-key displays the requested key.

Usage:

   ceph auth get-key <entity>

Subcommand get-or-create adds authentication info for a particular entity from an input file, or generates a random key if no input is given, along with any caps specified in the command.

Usage:

   ceph auth get-or-create <entity> {<caps> [<caps>...]}

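For example, to fetch the key for an entity, creating it with the given caps if it does not yet exist (entity name, pool name and caps are illustrative):

   ceph auth get-or-create client.cinder mon 'allow r' osd 'allow rwx pool=volumes'
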
Subcommand get-or-create-key gets or adds a key for name from the system/caps pairs specified in the command. If the key already exists, any given caps must match the existing caps for that key.

Usage:

   ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand import reads a keyring from the input file.

Usage:

   ceph auth import

Subcommand ls lists authentication state.

Usage:

   ceph auth ls

Subcommand print-key displays the requested key.

Usage:

   ceph auth print-key <entity>

Subcommand print_key displays the requested key.

Usage:

   ceph auth print_key <entity>

compact
Causes compaction of the monitor's leveldb storage.

Usage:

   ceph compact

config
Configure the cluster. By default, Ceph daemons and clients retrieve their configuration options from the monitor when they start, and are updated if any of the tracked options is changed at run time. It uses the following additional subcommands.

Subcommand dump dumps all options for the cluster.

Usage:

   ceph config dump

Subcommand ls lists all option names for the cluster.

Usage:

   ceph config ls

Subcommand help describes the specified configuration option.

Usage:

   ceph config help <option>

Subcommand get dumps the option(s) for the specified entity.

Usage:

   ceph config get <who> {<option>}

Subcommand show displays the running configuration of the specified entity. Note that, unlike get, which only shows the options managed by the monitor, show displays all the configuration options being actively used. These options are pulled from several sources, for instance, the compiled-in default value, the monitor's configuration database, and the ceph.conf file on the host. Options can even be overridden at runtime, so the configuration options in the output of show may differ from those in the output of get.

Usage:

   ceph config show {<who>}

Subcommand show-with-defaults displays the running configuration along with the compiled-in defaults of the specified entity.

Usage:

   ceph config show-with-defaults <who>

Subcommand set sets an option for one or more specified entities.

Usage:

   ceph config set <who> <option> <value> {--force}

Subcommand rm clears an option for one or more entities.

Usage:

   ceph config rm <who> <option>

Subcommand log shows the recent history of config changes. If the count option is omitted it defaults to 10.

Usage:

   ceph config log {<count>}

Subcommand reset reverts configuration to the specified historical version.

Usage:

   ceph config reset <version>

Subcommand assimilate-conf assimilates options from stdin and returns a new, minimal conf file.

Usage:

   ceph config assimilate-conf -i <input-config-path> > <output-config-path>
   ceph config assimilate-conf < <input-config-path>

Subcommand generate-minimal-conf generates a minimal ceph.conf file, which can be used for bootstrapping a daemon or a client.

Usage:

   ceph config generate-minimal-conf > <minimal-config-path>

config-key
Manage configuration keys. config-key is a general purpose key/value service offered by the monitors. This service is mainly used by Ceph tools and daemons for persisting various settings; for example, ceph-mgr modules use it to store their options. It uses some additional subcommands.

Subcommand rm deletes a configuration key.

Usage:

   ceph config-key rm <key>

Subcommand exists checks for a configuration key's existence.

Usage:

   ceph config-key exists <key>

Subcommand get gets the configuration key.

Usage:

   ceph config-key get <key>

Subcommand ls lists configuration keys.

Usage:

   ceph config-key ls

Subcommand dump dumps configuration keys and values.

Usage:

   ceph config-key dump

Subcommand set puts a configuration key and value.

Usage:

   ceph config-key set <key> {<val>}

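For example, storing a key/value pair (the key and value here are illustrative):

   ceph config-key set mgr/dashboard/server_port 8443
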
daemon
Submit admin-socket commands.

Usage:

   ceph daemon {daemon_name|socket_path} {command} ...

Example:

   ceph daemon osd.0 help

daemonperf
Watch performance counters from a Ceph daemon.

Usage:

   ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]

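For example, to sample osd.0's counters every two seconds, ten times (the daemon name and numbers are illustrative):

   ceph daemonperf osd.0 2 10
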
df
Show cluster's free space status.

Usage:

   ceph df {detail}

features
Show the releases and features of all daemons and clients connected to the cluster, along with the counts of each in buckets grouped by the corresponding features/releases. Each release of Ceph supports a different set of features, expressed by the features bitmask. New cluster features require that clients support the feature, or else they are not allowed to connect to the cluster. As new features or capabilities are enabled after an upgrade, older clients are prevented from connecting.

Usage:

   ceph features

fs
Manage cephfs file systems. It uses some additional subcommands.

Subcommand ls lists file systems.

Usage:

   ceph fs ls

Subcommand new makes a new file system using the named pools <metadata> and <data>.

Usage:

   ceph fs new <fs_name> <metadata> <data>

Subcommand reset is used for disaster recovery only: reset to a single-MDS map.

Usage:

   ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand rm disables the named file system.

Usage:

   ceph fs rm <fs_name> {--yes-i-really-mean-it}

Subcommand authorize creates a new client that will be authorized for the given path in <fs_name>. Pass / to authorize for the entire FS. <perms> below can be r, rw or rwp.

Usage:

   ceph fs authorize <fs_name> client.<client_id> <path> <perms> [<path> <perms>...]

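For example, granting a client read/write access to the root of a file system (the file system and client names are illustrative):

   ceph fs authorize cephfs client.foo / rw
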
fsid
Show cluster's FSID/UUID.

Usage:

   ceph fsid

health
Show cluster's health.

Usage:

   ceph health {detail}

heap
Show heap usage info (available only if compiled with tcmalloc).

Usage:

   ceph tell <name (type.id)> heap dump|start_profiler|stop_profiler|stats

Subcommand release makes TCMalloc release no-longer-used memory back to the kernel at once.

Usage:

   ceph tell <name (type.id)> heap release

Subcommand (get|set)_release_rate gets or sets the TCMalloc memory release rate. TCMalloc releases no-longer-used memory back to the kernel gradually; the rate controls how quickly this happens. Increase this setting to make TCMalloc return unused memory more frequently. 0 means never return memory to the system, 1 means wait for 1000 pages after releasing a page to the system. It is 1.0 by default.

Usage:

   ceph tell <name (type.id)> heap get_release_rate|set_release_rate {<val>}

injectargs
Inject configuration arguments into the monitor.

Usage:

   ceph injectargs <injected_args> [<injected_args>...]

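For example, raising the monitor debug level at runtime (the option and value are illustrative):

   ceph injectargs '--debug_mon 10'
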
log
Log supplied text to the monitor log.

Usage:

   ceph log <logtext> [<logtext>...]

mds
Manage metadata server configuration and administration. It uses some additional subcommands.

Subcommand compat manages compatible features. It uses some additional subcommands.

Subcommand rm_compat removes a compatible feature.

Usage:

   ceph mds compat rm_compat <int[0-]>

Subcommand rm_incompat removes an incompatible feature.

Usage:

   ceph mds compat rm_incompat <int[0-]>

Subcommand show shows mds compatibility settings.

Usage:

   ceph mds compat show

Subcommand fail forces an mds to status fail.

Usage:

   ceph mds fail <role|gid>

Subcommand rm removes an inactive mds.

Usage:

   ceph mds rm <int[0-]> <name (type.id)>

Subcommand rmfailed removes a failed mds.

Usage:

   ceph mds rmfailed <int[0-]>

Subcommand set_state sets the mds state of <gid> to <numeric-state>.

Usage:

   ceph mds set_state <int[0-]> <int[0-20]>

Subcommand stat shows MDS status.

Usage:

   ceph mds stat

Subcommand repaired marks a damaged MDS rank as no longer damaged.

Usage:

   ceph mds repaired <role>

mon
Manage monitor configuration and administration. It uses some additional subcommands.

Subcommand add adds a new monitor named <name> at <addr>.

Usage:

   ceph mon add <name> <IPaddr[:port]>

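For example, adding a monitor named c (the name and address are illustrative):

   ceph mon add c 192.168.0.12:6789
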
Subcommand dump dumps the formatted monmap (optionally from epoch).

Usage:

   ceph mon dump {<int[0-]>}

Subcommand getmap gets the monmap.

Usage:

   ceph mon getmap {<int[0-]>}

Subcommand enable_stretch_mode enables stretch mode, changing the peering rules and failure handling on all pools. For a given PG to successfully peer and be marked active, min_size replicas will now need to be active under all (currently two) CRUSH buckets of type <dividing_bucket>.

<tiebreaker_mon> is the tiebreaker mon to use if a network split happens.

<dividing_bucket> is the bucket type across which to stretch. This will typically be datacenter or another CRUSH hierarchy bucket type that denotes physically or logically distant subdivisions.

<new_crush_rule> will be set as the CRUSH rule for all pools.

Usage:

   ceph mon enable_stretch_mode <tiebreaker_mon> <new_crush_rule> <dividing_bucket>

Subcommand remove removes the monitor named <name>.

Usage:

   ceph mon remove <name>

Subcommand stat summarizes monitor status.

Usage:

   ceph mon stat

mgr
Ceph manager daemon configuration and management.

Subcommand dump dumps the latest MgrMap, which describes the active and standby manager daemons.

Usage:

   ceph mgr dump

Subcommand fail will mark a manager daemon as failed, removing it from the manager map. If it is the active manager daemon a standby will take its place.

Usage:

   ceph mgr fail <name>

Subcommand module ls will list currently enabled manager modules (plugins).

Usage:

   ceph mgr module ls

Subcommand module enable will enable a manager module. Available modules are included in MgrMap and visible via mgr dump.

Usage:

   ceph mgr module enable <module>

Subcommand module disable will disable an active manager module.

Usage:

   ceph mgr module disable <module>

Subcommand metadata will report metadata about all manager daemons or, if the name is specified, a single manager daemon.

Usage:

   ceph mgr metadata [name]

Subcommand versions will report a count of running daemon versions.

Usage:

   ceph mgr versions

Subcommand count-metadata will report a count of any daemon metadata field.

Usage:

   ceph mgr count-metadata <field>

osd
Manage OSD configuration and administration. It uses some additional subcommands.

Subcommand blocklist manages blocklisted clients. It uses some additional subcommands.

Subcommand add adds <addr> to the blocklist (optionally until <expire> seconds from now).

Usage:

   ceph osd blocklist add <EntityAddr> {<float[0.0-]>}

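For example, blocklisting a client address for one hour (the address and duration are illustrative):

   ceph osd blocklist add 192.168.0.45:0/3421412456 3600
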
Subcommand ls shows blocklisted clients.

Usage:

   ceph osd blocklist ls

Subcommand rm removes <addr> from the blocklist.

Usage:

   ceph osd blocklist rm <EntityAddr>

Subcommand blocked-by prints a histogram of which OSDs are blocking their peers.

Usage:

   ceph osd blocked-by

Subcommand create creates a new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in a future release.

Subcommand new should instead be used.

Usage:

   ceph osd create {<uuid>} {<id>}

Subcommand new can be used to create a new OSD or to recreate a previously destroyed OSD with a specific id. The new OSD will have the specified uuid, and the command expects a JSON file containing the base64 cephx key for auth entity client.osd.<id>, as well as an optional base64 cephx key for dm-crypt lockbox access and a dm-crypt key. Specifying a dm-crypt key requires specifying the accompanying lockbox cephx key.

Usage:

   ceph osd new {<uuid>} {<id>} -i {<params.json>}

The parameters JSON file is optional but if provided, is expected to maintain a form of the following format:

   {
       "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
       "crush_device_class": "myclass"
   }

Or:

   {
       "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
       "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
       "dmcrypt_key": "<dm-crypt key>",
       "crush_device_class": "myclass"
   }

Or:

   {
       "crush_device_class": "myclass"
   }

The "crush_device_class" property is optional. If specified, it will set the initial CRUSH device class for the new OSD.

Subcommand crush is used for CRUSH management. It uses some additional subcommands.

Subcommand add adds or updates the crushmap position and weight for <name> with <weight> and location <args>.

Usage:

   ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand add-bucket adds a no-parent (probably root) crush bucket <name> of type <type>.

Usage:

   ceph osd crush add-bucket <name> <type>

Subcommand create-or-move creates an entry or moves the existing entry for <name> <weight> at/to location <args>.

Usage:

   ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand dump dumps the crush map.

Usage:

   ceph osd crush dump

Subcommand get-tunable gets the crush tunable straw_calc_version.

Usage:

   ceph osd crush get-tunable straw_calc_version

Subcommand link links the existing entry for <name> under location <args>.

Usage:

   ceph osd crush link <name> <args> [<args>...]

Subcommand move moves the existing entry for <name> to location <args>.

Usage:

   ceph osd crush move <name> <args> [<args>...]

Subcommand remove removes <name> from the crush map (everywhere, or just at <ancestor>).

Usage:

   ceph osd crush remove <name> {<ancestor>}

Subcommand rename-bucket renames bucket <srcname> to <dstname>.

Usage:

   ceph osd crush rename-bucket <srcname> <dstname>

Subcommand reweight changes <name>'s weight to <weight> in the crush map.

Usage:

   ceph osd crush reweight <name> <float[0.0-]>

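For example, setting an OSD's CRUSH weight (the OSD name and weight are illustrative):

   ceph osd crush reweight osd.0 1.5
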
Subcommand reweight-all recalculates the weights for the tree to ensure they sum correctly.

Usage:

   ceph osd crush reweight-all

Subcommand reweight-subtree changes all leaf items beneath <name> to <weight> in the crush map.

Usage:

   ceph osd crush reweight-subtree <name> <weight>

Subcommand rm removes <name> from the crush map (everywhere, or just at <ancestor>).

Usage:

   ceph osd crush rm <name> {<ancestor>}

Subcommand rule is used for creating crush rules. It uses some additional subcommands.

Subcommand create-erasure creates crush rule <name> for an erasure coded pool created with <profile> (default default).

Usage:

   ceph osd crush rule create-erasure <name> {<profile>}

Subcommand create-simple creates crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools).

Usage:

   ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

Subcommand dump dumps crush rule <name> (default all).

Usage:

   ceph osd crush rule dump {<name>}

Subcommand ls lists crush rules.

Usage:

   ceph osd crush rule ls

Subcommand rm removes crush rule <name>.

Usage:

   ceph osd crush rule rm <name>

Subcommand set, used alone, sets the crush map from the input file.

Usage:

   ceph osd crush set

Subcommand set with an osdname/osd.id updates the crushmap position and weight for <name> to <weight> with location <args>.

Usage:

   ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand set-tunable sets crush tunable <tunable> to <value>. The only tunable that can be set is straw_calc_version.

Usage:

   ceph osd crush set-tunable straw_calc_version <value>

Subcommand show-tunables shows current crush tunables.

Usage:

   ceph osd crush show-tunables

Subcommand tree shows the crush buckets and items in a tree view.

Usage:

   ceph osd crush tree

Subcommand tunables sets crush tunable values to <profile>.

Usage:

   ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand unlink unlinks <name> from the crush map (everywhere, or just at <ancestor>).

Usage:

   ceph osd crush unlink <name> {<ancestor>}

Subcommand df shows OSD utilization.

Usage:

   ceph osd df {plain|tree}

Subcommand deep-scrub initiates a deep scrub on the specified osd.

Usage:

   ceph osd deep-scrub <who>

Subcommand down sets osd(s) <id> [<id>...] down.

Usage:

   ceph osd down <ids> [<ids>...]

Subcommand dump prints a summary of the OSD map.

Usage:

   ceph osd dump {<int[0-]>}

Subcommand erasure-code-profile is used for managing erasure code profiles. It uses some additional subcommands.

Subcommand get gets erasure code profile <name>.

Usage:

   ceph osd erasure-code-profile get <name>

Subcommand ls lists all erasure code profiles.

Usage:

   ceph osd erasure-code-profile ls

Subcommand rm removes erasure code profile <name>.

Usage:

   ceph osd erasure-code-profile rm <name>

Subcommand set creates erasure code profile <name> with [<key[=value]> ...] pairs. Add --force at the end to override an existing profile (THIS IS RISKY).

Usage:

   ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

Subcommand find finds osd <id> in the CRUSH map and shows its location.

Usage:

   ceph osd find <int[0-]>

Subcommand getcrushmap gets the CRUSH map.

Usage:

   ceph osd getcrushmap {<int[0-]>}

Subcommand getmap gets the OSD map.

Usage:

   ceph osd getmap {<int[0-]>}

Subcommand getmaxosd shows the largest OSD id.

Usage:

   ceph osd getmaxosd

Subcommand in sets osd(s) <id> [<id>...] in.

Usage:

   ceph osd in <ids> [<ids>...]

Subcommand lost marks an osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL.

Usage:

   ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ls shows all OSD ids.

Usage:

   ceph osd ls {<int[0-]>}

Subcommand lspools lists pools.

Usage:

   ceph osd lspools {<int>}

Subcommand map finds the pg for <object> in <pool>.

Usage:

   ceph osd map <poolname> <objectname>

Subcommand metadata fetches metadata for osd <id>.

Usage:

   ceph osd metadata {int[0-]} (default all)

Subcommand out sets osd(s) <id> [<id>...] out.

Usage:

   ceph osd out <ids> [<ids>...]

Subcommand ok-to-stop checks whether the listed OSD(s) can be stopped without immediately making data unavailable. That is, all data should remain readable and writeable, although data redundancy may be reduced as some PGs may end up in a degraded (but active) state. It will return a success code if it is okay to stop the OSD(s), or an error code and informative message if it is not or if no conclusion can be drawn at the current time. When --max <num> is provided, up to <num> OSD IDs that can all be stopped simultaneously will be returned (including the provided OSDs). This allows larger sets of stoppable OSDs to be generated easily by providing a single starting OSD and a max. Additional OSDs are drawn from adjacent locations in the CRUSH hierarchy.

Usage:

   ceph osd ok-to-stop <id> [<ids>...] [--max <num>]

Subcommand pause pauses osd.

Usage:

   ceph osd pause

Subcommand perf prints a dump of OSD perf summary stats.

Usage:

   ceph osd perf

Subcommand pg-temp sets the pg_temp mapping pgid:[<id> [<id>...]] (developers only).

Usage:

   ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand force-create-pg forces creation of pg <pgid>.

Usage:

   ceph osd force-create-pg <pgid>

Subcommand pool is used for managing data pools. It uses some additional subcommands.

Subcommand create creates a pool.

Usage:

   ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure}
   {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>}

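For example, creating a replicated pool with 64 placement groups (the pool name and PG counts are illustrative):

   ceph osd pool create mypool 64 64 replicated
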
Subcommand delete deletes a pool.

Usage:

   ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand get gets pool parameter <var>.

Usage:

   ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed

Only for tiered pools:

   ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
   target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
   cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
   min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools:

   ceph osd pool get <poolname> erasure_code_profile

Use all to get all pool parameters that apply to the pool's type:

   ceph osd pool get <poolname> all

Subcommand get-quota obtains object or byte limits for the pool.

Usage:

   ceph osd pool get-quota <poolname>

Subcommand ls lists pools.

Usage:

   ceph osd pool ls {detail}

Subcommand mksnap makes snapshot <snap> in <pool>.

Usage:

   ceph osd pool mksnap <poolname> <snap>

Subcommand rename renames <srcpool> to <destpool>.

Usage:

   ceph osd pool rename <poolname> <poolname>

Subcommand rmsnap removes snapshot <snap> from <pool>.

Usage:

   ceph osd pool rmsnap <poolname> <snap>

Subcommand set sets pool parameter <var> to <val>.

Usage:

   ceph osd pool set <poolname> size|min_size|pg_num|
   pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
   hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
   target_max_bytes|target_max_objects|cache_target_dirty_ratio|
   cache_target_dirty_high_ratio|
   cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
   min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
   hit_set_search_last_n
   <val> {--yes-i-really-mean-it}

Subcommand set-quota sets an object or byte limit on the pool.

Usage:

   ceph osd pool set-quota <poolname> max_objects|max_bytes <val>

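For example, capping a pool at 10 GiB (the pool name and size are illustrative):

   ceph osd pool set-quota mypool max_bytes 10737418240
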
Subcommand stats obtains stats from all pools, or from the specified pool.

Usage:

   ceph osd pool stats {<name>}

Subcommand application is used for adding an annotation to the given pool. By default, the possible applications are object, block, and file storage (the corresponding app-names are "rgw", "rbd", and "cephfs"). However, there might be other applications as well. Based on the application, there may or may not be some processing conducted.

Subcommand disable disables the given application on the given pool.

Usage:

   ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}

Subcommand enable adds an annotation to the given pool for the mentioned application.

Usage:

   ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}

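For example, tagging a pool for use by RBD (the pool name is illustrative):

   ceph osd pool application enable mypool rbd
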
Subcommand get displays the value for the given key that is associated with the given application of the given pool. Not passing the optional arguments would display all key-value pairs for all applications for all pools.

Usage:

   ceph osd pool application get {<pool-name>} {<app>} {<key>}

Subcommand rm removes the key-value pair for the given key in the given application of the given pool.

Usage:

   ceph osd pool application rm <pool-name> <app> <key>

Subcommand set associates or updates, if it already exists, a key-value pair with the given application for the given pool.

Usage:

   ceph osd pool application set <pool-name> <app> <key> <value>

Subcommand primary-affinity adjusts the osd primary-affinity, 0.0 <= <weight> <= 1.0.

Usage:

   ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand primary-temp sets the primary_temp mapping pgid:<id>|-1 (developers only).

Usage:

   ceph osd primary-temp <pgid> <id>

Subcommand repair initiates repair on a specified osd.

Usage:

   ceph osd repair <who>

Subcommand reweight reweights an osd to 0.0 < <weight> < 1.0.

Usage:

   ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand reweight-by-pg reweights OSDs by PG distribution [overload-percentage-for-consideration, default 120].

Usage:

   ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
   {--no-increasing}

Subcommand reweight-by-utilization reweights OSDs by utilization. It only reweights outlier OSDs whose utilization exceeds the average, e.g. the default 120% limits reweighting to those OSDs that are more than 20% over the average. [overload-threshold, default 120 [max_weight_change, default 0.05 [max_osds_to_adjust, default 4]]]

Usage:

   ceph osd reweight-by-utilization {<int[100-]> {<float[0.0-]> {<int[0-]>}}}
   {--no-increasing}

1188 Subcommand rm removes osd(s) <id> [<id>...] from the OSD map.
1189
1190 Usage:
1191
1192 ceph osd rm <ids> [<ids>...]
1193
1194 Subcommand destroy marks OSD id as destroyed, removing its cephx en‐
1195 tity's keys and all of its dm-crypt and daemon-private config key en‐
1196 tries.
1197
1198 This command will not remove the OSD from crush, nor will it remove the
1199 OSD from the OSD map. Instead, once the command successfully completes,
1200 the OSD will show marked as destroyed.
1201
1202 In order to mark an OSD as destroyed, the OSD must first be marked as
1203 lost.
1204
1205 Usage:
1206
1207 ceph osd destroy <id> {--yes-i-really-mean-it}
1208
1209 Subcommand purge performs a combination of osd destroy, osd rm and osd
1210 crush remove.
1211
1212 Usage:
1213
1214 ceph osd purge <id> {--yes-i-really-mean-it}
1215
1216 Subcommand safe-to-destroy checks whether it is safe to remove or de‐
1217 stroy an OSD without reducing overall data redundancy or durability.
1218 It will return a success code if it is definitely safe, or an error
1219 code and informative message if it is not or if no conclusion can be
1220 drawn at the current time.
1221
1222 Usage:
1223
1224 ceph osd safe-to-destroy <id> [<ids>...]
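
       For example, one might verify that osd.7 (an illustrative id) can
       be removed without risk and then purge it:

              ceph osd safe-to-destroy 7
              ceph osd purge 7 --yes-i-really-mean-it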
1225
1226   Subcommand scrub initiates a scrub on the specified OSD.
1227
1228 Usage:
1229
1230 ceph osd scrub <who>
1231
1232   Subcommand set sets cluster-wide <flag> by updating the OSD map. The
1233   full flag has not been honored since the Mimic release, and ceph osd
1234   set full is not supported as of the Octopus release.
1235
1236 Usage:
1237
1238 ceph osd set pause|noup|nodown|noout|noin|nobackfill|
1239 norebalance|norecover|noscrub|nodeep-scrub|notieragent
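
       For example, to keep OSDs from being marked out during planned
       maintenance, and to restore normal behavior afterwards:

              ceph osd set noout
              ceph osd unset noout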
1240
1241 Subcommand setcrushmap sets crush map from input file.
1242
1243 Usage:
1244
1245 ceph osd setcrushmap
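
       The map is read from the file given with -i. A typical round trip
       edits a decompiled copy of the current map and re-injects it (file
       names are illustrative; crushtool(8) compiles the text form):

              ceph osd getcrushmap -o crush.bin
              crushtool -d crush.bin -o crush.txt
              crushtool -c crush.txt -o crush.new
              ceph osd setcrushmap -i crush.new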
1246
1247 Subcommand setmaxosd sets new maximum osd value.
1248
1249 Usage:
1250
1251 ceph osd setmaxosd <int[0-]>
1252
1253   Subcommand set-require-min-compat-client configures the cluster to
1254   remain backward compatible with the specified client version. This
1255   subcommand prevents you from making any changes (e.g., to crush
1256   tunables, or using new features) that would violate the current
1257   setting. Please note that this subcommand will fail if any connected
1258   daemon or client is not compatible with the features offered by the
1259   given <version>. To see the features and releases of all clients
1260   connected to the cluster, see ceph features.
1261
1262 Usage:
1263
1264 ceph osd set-require-min-compat-client <version>
1265
1266 Subcommand stat prints summary of OSD map.
1267
1268 Usage:
1269
1270 ceph osd stat
1271
1272 Subcommand tier is used for managing tiers. It uses some additional
1273 subcommands.
1274
1275 Subcommand add adds the tier <tierpool> (the second one) to base pool
1276 <pool> (the first one).
1277
1278 Usage:
1279
1280 ceph osd tier add <poolname> <poolname> {--force-nonempty}
1281
1282 Subcommand add-cache adds a cache <tierpool> (the second one) of size
1283 <size> to existing pool <pool> (the first one).
1284
1285 Usage:
1286
1287 ceph osd tier add-cache <poolname> <poolname> <int[0-]>
1288
1289 Subcommand cache-mode specifies the caching mode for cache tier <pool>.
1290
1291 Usage:
1292
1293 ceph osd tier cache-mode <poolname> writeback|proxy|readproxy|readonly|none
1294
1295 Subcommand remove removes the tier <tierpool> (the second one) from
1296 base pool <pool> (the first one).
1297
1298 Usage:
1299
1300 ceph osd tier remove <poolname> <poolname>
1301
1302 Subcommand remove-overlay removes the overlay pool for base pool
1303 <pool>.
1304
1305 Usage:
1306
1307 ceph osd tier remove-overlay <poolname>
1308
1309   Subcommand set-overlay sets the overlay pool for base pool <pool> to
1310   be <overlaypool>.
1311
1312 Usage:
1313
1314 ceph osd tier set-overlay <poolname> <poolname>
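
       For example, the tier subcommands are typically combined to attach
       a cache pool to a base pool (pool names here are illustrative):

              ceph osd tier add cold-pool hot-pool
              ceph osd tier cache-mode hot-pool writeback
              ceph osd tier set-overlay cold-pool hot-pool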
1315
1316 Subcommand tree prints OSD tree.
1317
1318 Usage:
1319
1320 ceph osd tree {<int[0-]>}
1321
1322   Subcommand unpause unpauses the OSDs by clearing the pause flags.
1323
1324 Usage:
1325
1326 ceph osd unpause
1327
1328 Subcommand unset unsets cluster-wide <flag> by updating OSD map.
1329
1330 Usage:
1331
1332 ceph osd unset pause|noup|nodown|noout|noin|nobackfill|
1333 norebalance|norecover|noscrub|nodeep-scrub|notieragent
1334
1335 pg
1336 It is used for managing the placement groups in OSDs. It uses some ad‐
1337 ditional subcommands.
1338
1339 Subcommand debug shows debug info about pgs.
1340
1341 Usage:
1342
1343 ceph pg debug unfound_objects_exist|degraded_pgs_exist
1344
1345 Subcommand deep-scrub starts deep-scrub on <pgid>.
1346
1347 Usage:
1348
1349 ceph pg deep-scrub <pgid>
1350
1351   Subcommand dump shows human-readable versions of the pg map (only
1352   'all' is valid with the plain format).
1353
1354 Usage:
1355
1356 ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1357
1358   Subcommand dump_json shows the pg map in JSON format only.
1360
1361 Usage:
1362
1363 ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1364
1365   Subcommand dump_pools_json shows pg pool info in JSON format only.
1366
1367 Usage:
1368
1369 ceph pg dump_pools_json
1370
1371 Subcommand dump_stuck shows information about stuck pgs.
1372
1373 Usage:
1374
1375 ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
1376 {<int>}
1377
1378   Subcommand getmap writes the binary pg map to the -o file or stdout.
1379
1380 Usage:
1381
1382 ceph pg getmap
1383
1384   Subcommand ls lists PGs, optionally filtered by pool, OSD, or state.
1385
1386 Usage:
1387
1388 ceph pg ls {<int>} {<pg-state> [<pg-state>...]}
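
       For example, to list the PGs of pool 1 that are in the degraded
       state (pool id and state are illustrative):

              ceph pg ls 1 degraded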
1389
1390   Subcommand ls-by-osd lists PGs on the specified OSD.
1391
1392 Usage:
1393
1394 ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
1395 {<pg-state> [<pg-state>...]}
1396
1397   Subcommand ls-by-pool lists PGs in the specified pool.
1398
1399 Usage:
1400
1401 ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}
1402
1403   Subcommand ls-by-primary lists PGs whose primary is the specified
1404   OSD.
1404
1405 Usage:
1406
1407 ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
1408 {<pg-state> [<pg-state>...]}
1409
1410 Subcommand map shows mapping of pg to osds.
1411
1412 Usage:
1413
1414 ceph pg map <pgid>
1415
1416 Subcommand repair starts repair on <pgid>.
1417
1418 Usage:
1419
1420 ceph pg repair <pgid>
1421
1422 Subcommand scrub starts scrub on <pgid>.
1423
1424 Usage:
1425
1426 ceph pg scrub <pgid>
1427
1428 Subcommand stat shows placement group status.
1429
1430 Usage:
1431
1432 ceph pg stat
1433
1434 quorum
1435 Cause a specific MON to enter or exit quorum.
1436
1437 Usage:
1438
1439 ceph tell mon.<id> quorum enter|exit
1440
1441 quorum_status
1442 Reports status of monitor quorum.
1443
1444 Usage:
1445
1446 ceph quorum_status
1447
1448   Reports the full status of the cluster. Optional tag strings may be
1449   supplied and are included in the report.
1450
1451 Usage:
1452
1453 ceph report {<tags> [<tags>...]}
1454
1455 status
1456 Shows cluster status.
1457
1458 Usage:
1459
1460 ceph status
1461
1462 tell
1463 Sends a command to a specific daemon.
1464
1465 Usage:
1466
1467 ceph tell <name (type.id)> <command> [options...]
1468
1469   Lists all available commands.
1470
1471 Usage:
1472
1473 ceph tell <name (type.id)> help
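
       For example, to ask a specific OSD daemon for its version (osd.0
       is illustrative):

              ceph tell osd.0 version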
1474
1475 version
1476   Shows the mon daemon version.
1477
1478 Usage:
1479
1480 ceph version
1481
1482OPTIONS
1483 -i infile
1484 will specify an input file to be passed along as a payload with
1485 the command to the monitor cluster. This is only used for spe‐
1486 cific monitor commands.
1487
1488 -o outfile
1489 will write any payload returned by the monitor cluster with its
1490 reply to outfile. Only specific monitor commands (e.g. osd
1491 getmap) return a payload.
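
              For example, to save the binary OSD map to a file (the
              path is illustrative):

                     ceph osd getmap -o /tmp/osdmap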
1492
1493 --setuser user
1494 will apply the appropriate user ownership to the file specified
1495 by the option '-o'.
1496
1497 --setgroup group
1498 will apply the appropriate group ownership to the file specified
1499 by the option '-o'.
1500
1501 -c ceph.conf, --conf=ceph.conf
1502 Use ceph.conf configuration file instead of the default
1503 /etc/ceph/ceph.conf to determine monitor addresses during
1504 startup.
1505
1506 --id CLIENT_ID, --user CLIENT_ID
1507 Client id for authentication.
1508
1509 --name CLIENT_NAME, -n CLIENT_NAME
1510 Client name for authentication.
1511
1512 --cluster CLUSTER
1513 Name of the Ceph cluster.
1514
1515 --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME
1516 Submit admin-socket commands via admin sockets in /var/run/ceph.
1517
1518 --admin-socket ADMIN_SOCKET_NOPE
1519 You probably mean --admin-daemon
1520
1521 -s, --status
1522 Show cluster status.
1523
1524   -w, --watch
1525          Watch live cluster changes on the default 'cluster' channel.
1526
1527   -W, --watch-channel
1528          Watch live cluster changes on any channel (cluster, audit,
1529          cephadm, or * for all).
1530
1531 --watch-debug
1532 Watch debug events.
1533
1534 --watch-info
1535 Watch info events.
1536
1537 --watch-sec
1538 Watch security events.
1539
1540 --watch-warn
1541 Watch warning events.
1542
1543 --watch-error
1544 Watch error events.
1545
1546 --version, -v
1547 Display version.
1548
1549 --verbose
1550 Make verbose.
1551
1552 --concise
1553 Make less verbose.
1554
1555 -f {json,json-pretty,xml,xml-pretty,plain,yaml}, --format
1556 Format of output. Note: yaml is only valid for orch commands.
1557
1558 --connect-timeout CLUSTER_TIMEOUT
1559 Set a timeout for connecting to the cluster.
1560
1561   --no-increasing
1562          --no-increasing is off by default, so increasing an OSD's
1563          weight is allowed with the reweight-by-utilization and
1564          test-reweight-by-utilization commands. When this option is
1565          supplied, these commands will not increase an OSD's weight
1566          even if the OSD is underutilized.
1567
1568   --block
1569          Block until completion (scrub and deep-scrub only).
1570
1571AVAILABILITY
1572 ceph is part of Ceph, a massively scalable, open-source, distributed
1573 storage system. Please refer to the Ceph documentation at
1574 https://docs.ceph.com for more information.
1575
1576SEE ALSO
1577 ceph-mon(8), ceph-osd(8), ceph-mds(8)
1578
1579COPYRIGHT
1580 2010-2023, Inktank Storage, Inc. and contributors. Licensed under Cre‐
1581 ative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)
1582
1583
1584
1585
1586dev Nov 15, 2023 CEPH(8)