CEPH(8)                              Ceph                              CEPH(8)

NAME
       ceph - ceph administration tool

SYNOPSIS
       ceph auth [ add | caps | del | export | get | get-key | get-or-create | get-or-create-key | import | list | print-key | print_key ] ...

       ceph compact

       ceph config [ dump | ls | help | get | show | show-with-defaults | set | rm | log | reset | assimilate-conf | generate-minimal-conf ] ...

       ceph config-key [ rm | exists | get | ls | dump | set ] ...

       ceph daemon <name> | <path> <command> ...

       ceph daemonperf <name> | <path> [ interval [ count ] ]

       ceph df {detail}

       ceph fs [ ls | new | reset | rm ] ...

       ceph fsid

       ceph health {detail}

       ceph injectargs <injectedargs> [ <injectedargs>... ]

       ceph log <logtext> [ <logtext>... ]

       ceph mds [ compat | fail | rm | rmfailed | set_state | stat | repaired ] ...

       ceph mon [ add | dump | getmap | remove | stat ] ...

       ceph osd [ blacklist | blocked-by | create | new | deep-scrub | df | down | dump | erasure-code-profile | find | getcrushmap | getmap | getmaxosd | in | ls | lspools | map | metadata | ok-to-stop | out | pause | perf | pg-temp | force-create-pg | primary-affinity | primary-temp | repair | reweight | reweight-by-pg | rm | destroy | purge | safe-to-destroy | scrub | set | setcrushmap | setmaxosd | stat | tree | unpause | unset ] ...

       ceph osd crush [ add | add-bucket | create-or-move | dump | get-tunable | link | move | remove | rename-bucket | reweight | reweight-all | reweight-subtree | rm | rule | set | set-tunable | show-tunables | tunables | unlink ] ...

       ceph osd pool [ create | delete | get | get-quota | ls | mksnap | rename | rmsnap | set | set-quota | stats ] ...

       ceph osd pool application [ disable | enable | get | rm | set ] ...

       ceph osd tier [ add | add-cache | cache-mode | remove | remove-overlay | set-overlay ] ...

       ceph pg [ debug | deep-scrub | dump | dump_json | dump_pools_json | dump_stuck | getmap | ls | ls-by-osd | ls-by-pool | ls-by-primary | map | repair | scrub | stat ] ...

       ceph quorum_status

       ceph report { <tags> [ <tags>... ] }

       ceph status

       ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

       ceph tell <name (type.id)> <command> [options...]

       ceph version

DESCRIPTION
       ceph is a control utility used for manual deployment and maintenance
       of a Ceph cluster. It provides a diverse set of commands for
       deploying monitors, OSDs, and placement groups, for managing MDS
       daemons, and for overall maintenance and administration of the
       cluster.

COMMANDS
   auth
       Manage authentication keys. It is used for adding, removing,
       exporting or updating authentication keys for a particular entity
       such as a monitor or OSD. It uses some additional subcommands.

       Subcommand add adds authentication info for a particular entity from
       an input file, or generates a random key if no input is given, along
       with any caps specified in the command.

       Usage:

          ceph auth add <entity> {<caps> [<caps>...]}

       Subcommand caps updates caps for name from the caps specified in the
       command.

       Usage:

          ceph auth caps <entity> <caps> [<caps>...]

       Subcommand del deletes all caps for name.

       Usage:

          ceph auth del <entity>

       Subcommand export writes the keyring for the requested entity, or
       the master keyring if none is given.

       Usage:

          ceph auth export {<entity>}

       Subcommand get writes a keyring file with the requested key.

       Usage:

          ceph auth get <entity>

       Subcommand get-key displays the requested key.

       Usage:

          ceph auth get-key <entity>

       Subcommand get-or-create adds authentication info for a particular
       entity from an input file, or generates a random key if no input is
       given, along with any caps specified in the command.

       Usage:

          ceph auth get-or-create <entity> {<caps> [<caps>...]}

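       For instance, a sketch of provisioning a key for a hypothetical
       client (the entity name client.foo and pool name cephfs_data are
       illustrative):

```shell
# Create (or fetch, if it already exists) a key for client.foo that can
# read the monitors and read/write objects in pool "cephfs_data".
ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=cephfs_data'
```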
       Subcommand get-or-create-key gets or adds a key for name from the
       system/caps pairs specified in the command. If the key already
       exists, any given caps must match the existing caps for that key.

       Usage:

          ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

       Subcommand import reads a keyring from the input file.

       Usage:

          ceph auth import

       Subcommand ls lists the authentication state.

       Usage:

          ceph auth ls

       Subcommand print-key displays the requested key.

       Usage:

          ceph auth print-key <entity>

       Subcommand print_key displays the requested key.

       Usage:

          ceph auth print_key <entity>

   compact
       Causes compaction of the monitor's leveldb storage.

       Usage:

          ceph compact

   config
       Configure the cluster. By default, Ceph daemons and clients retrieve
       their configuration options from the monitor when they start, and
       are updated if any of the tracked options is changed at run time. It
       uses the following additional subcommands.

       Subcommand dump to dump all options for the cluster.

       Usage:

          ceph config dump

       Subcommand ls to list all option names for the cluster.

       Usage:

          ceph config ls

       Subcommand help to describe the specified configuration option.

       Usage:

          ceph config help <option>

       Subcommand get to dump the option(s) for the specified entity.

       Usage:

          ceph config get <who> {<option>}

       Subcommand show to display the running configuration of the
       specified entity. Note that unlike get, which only shows the options
       managed by the monitor, show displays all configuration values
       actively in use. These options are pulled from several sources: the
       compiled-in default value, the monitor's configuration database, and
       the ceph.conf file on the host. Options can also be overridden at
       runtime, so the configuration in the output of show may differ from
       that in the output of get.

       Usage:

          ceph config show {<who>}

       Subcommand show-with-defaults to display the running configuration
       along with the compiled-in defaults of the specified entity.

       Usage:

          ceph config show-with-defaults {<who>}

       Subcommand set to set an option for one or more specified entities.

       Usage:

          ceph config set <who> <option> <value> {--force}

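       As a sketch (the option name and value here are only illustrative):

```shell
# Store an option for all OSDs in the monitor's configuration database,
# then read it back.
ceph config set osd osd_max_backfills 2
ceph config get osd osd_max_backfills
```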
       Subcommand rm to clear an option for one or more entities.

       Usage:

          ceph config rm <who> <option>

       Subcommand log to show the recent history of config changes. If the
       count option is omitted it defaults to 10.

       Usage:

          ceph config log {<count>}

       Subcommand reset to revert configuration to the specified historical
       version.

       Usage:

          ceph config reset <version>

       Subcommand assimilate-conf to assimilate options from stdin, and
       return a new, minimal conf file.

       Usage:

          ceph config assimilate-conf -i <input-config-path> > <output-config-path>
          ceph config assimilate-conf < <input-config-path>

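       For example, a sketch of migrating a host's local ceph.conf into the
       monitors' configuration database (paths are illustrative); options
       that cannot be stored centrally end up in the new minimal file:

```shell
ceph config assimilate-conf -i /etc/ceph/ceph.conf > /etc/ceph/ceph.conf.new
```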
       Subcommand generate-minimal-conf to generate a minimal ceph.conf
       file, which can be used for bootstrapping a daemon or a client.

       Usage:

          ceph config generate-minimal-conf > <minimal-config-path>

   config-key
       Manage configuration keys. config-key is a general-purpose key/value
       service offered by the monitors. This service is mainly used by Ceph
       tools and daemons for persisting various settings; among other
       things, ceph-mgr modules use it for storing their options. It uses
       some additional subcommands.

       Subcommand rm deletes the configuration key.

       Usage:

          ceph config-key rm <key>

       Subcommand exists checks for a configuration key's existence.

       Usage:

          ceph config-key exists <key>

       Subcommand get gets the configuration key.

       Usage:

          ceph config-key get <key>

       Subcommand ls lists configuration keys.

       Usage:

          ceph config-key ls

       Subcommand dump dumps configuration keys and values.

       Usage:

          ceph config-key dump

       Subcommand set puts a configuration key and value.

       Usage:

          ceph config-key set <key> {<val>}

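       A sketch of round-tripping an arbitrary value through the key/value
       service (the key name is illustrative):

```shell
ceph config-key set example/greeting "hello"   # store a value
ceph config-key get example/greeting           # read it back
ceph config-key rm example/greeting            # clean up
```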
   daemon
       Submit admin-socket commands.

       Usage:

          ceph daemon {daemon_name|socket_path} {command} ...

       Example:

          ceph daemon osd.0 help

   daemonperf
       Watch performance counters from a Ceph daemon.

       Usage:

          ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]

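       For instance, a sketch that samples a daemon's counters every two
       seconds, ten times (the daemon name osd.0 is illustrative):

```shell
ceph daemonperf osd.0 2 10
```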
   df
       Show the cluster's free space status.

       Usage:

          ceph df {detail}

   features
       Show the releases and features of all daemons and clients connected
       to the cluster, along with the number of each in buckets grouped by
       the corresponding features/releases. Each release of Ceph supports a
       different set of features, expressed by the features bitmask. New
       cluster features require client support; clients lacking them are
       not allowed to connect once those features are enabled. As new
       features or capabilities are enabled after an upgrade, older clients
       are prevented from connecting.

       Usage:

          ceph features

   fs
       Manage CephFS file systems. It uses some additional subcommands.

       Subcommand ls to list file systems.

       Usage:

          ceph fs ls

       Subcommand new to make a new file system using named pools
       <metadata> and <data>.

       Usage:

          ceph fs new <fs_name> <metadata> <data>

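       As a sketch, creating a file system from scratch (the pool names, PG
       counts, and fs name are illustrative):

```shell
# Create the metadata and data pools first, then the file system.
ceph osd pool create cephfs_metadata 32
ceph osd pool create cephfs_data 64
ceph fs new cephfs cephfs_metadata cephfs_data
```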
       Subcommand reset is used for disaster recovery only: reset to a
       single-MDS map.

       Usage:

          ceph fs reset <fs_name> {--yes-i-really-mean-it}

       Subcommand rm to disable the named file system.

       Usage:

          ceph fs rm <fs_name> {--yes-i-really-mean-it}

   fsid
       Show the cluster's FSID/UUID.

       Usage:

          ceph fsid

   health
       Show the cluster's health.

       Usage:

          ceph health {detail}

   heap
       Show heap usage info (available only if compiled with tcmalloc).

       Usage:

          ceph tell <name (type.id)> heap dump|start_profiler|stop_profiler|stats

       Subcommand release makes TCMalloc release no-longer-used memory back
       to the kernel at once.

       Usage:

          ceph tell <name (type.id)> heap release

       Subcommand (get|set)_release_rate gets or sets the TCMalloc memory
       release rate. TCMalloc releases no-longer-used memory back to the
       kernel gradually; the rate controls how quickly this happens.
       Increase this setting to make TCMalloc return unused memory more
       frequently. 0 means never return memory to the system; 1 means wait
       for 1000 pages after releasing a page to the system. The default is
       1.0.

       Usage:

          ceph tell <name (type.id)> heap get_release_rate|set_release_rate {<val>}

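       For instance, a sketch against a hypothetical osd.0:

```shell
ceph tell osd.0 heap stats     # show current heap usage
ceph tell osd.0 heap release   # hand unused pages back to the kernel
```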
   injectargs
       Inject configuration arguments into the monitor.

       Usage:

          ceph injectargs <injected_args> [<injected_args>...]

   log
       Log the supplied text to the monitor log.

       Usage:

          ceph log <logtext> [<logtext>...]

   mds
       Manage metadata server configuration and administration. It uses
       some additional subcommands.

       Subcommand compat manages compatible features. It uses some
       additional subcommands.

       Subcommand rm_compat removes a compatible feature.

       Usage:

          ceph mds compat rm_compat <int[0-]>

       Subcommand rm_incompat removes an incompatible feature.

       Usage:

          ceph mds compat rm_incompat <int[0-]>

       Subcommand show shows mds compatibility settings.

       Usage:

          ceph mds compat show

       Subcommand fail forces an mds to the failed state.

       Usage:

          ceph mds fail <role|gid>

       Subcommand rm removes an inactive mds.

       Usage:

          ceph mds rm <int[0-]> <name (type.id)>

       Subcommand rmfailed removes a failed mds.

       Usage:

          ceph mds rmfailed <int[0-]>

       Subcommand set_state sets the mds state of <gid> to <numeric-state>.

       Usage:

          ceph mds set_state <int[0-]> <int[0-20]>

       Subcommand stat shows MDS status.

       Usage:

          ceph mds stat

       Subcommand repaired marks a damaged MDS rank as no longer damaged.

       Usage:

          ceph mds repaired <role>

   mon
       Manage monitor configuration and administration. It uses some
       additional subcommands.

       Subcommand add adds a new monitor named <name> at <addr>.

       Usage:

          ceph mon add <name> <IPaddr[:port]>

       Subcommand dump dumps the formatted monmap (optionally from epoch).

       Usage:

          ceph mon dump {<int[0-]>}

       Subcommand getmap gets the monmap.

       Usage:

          ceph mon getmap {<int[0-]>}

       Subcommand remove removes the monitor named <name>.

       Usage:

          ceph mon remove <name>

       Subcommand stat summarizes monitor status.

       Usage:

          ceph mon stat

   mgr
       Ceph manager daemon configuration and management.

       Subcommand dump dumps the latest MgrMap, which describes the active
       and standby manager daemons.

       Usage:

          ceph mgr dump

       Subcommand fail will mark a manager daemon as failed, removing it
       from the manager map. If it is the active manager daemon a standby
       will take its place.

       Usage:

          ceph mgr fail <name>

       Subcommand module ls will list currently enabled manager modules
       (plugins).

       Usage:

          ceph mgr module ls

       Subcommand module enable will enable a manager module. Available
       modules are included in MgrMap and visible via mgr dump.

       Usage:

          ceph mgr module enable <module>

       Subcommand module disable will disable an active manager module.

       Usage:

          ceph mgr module disable <module>

       Subcommand metadata will report metadata about all manager daemons
       or, if the name is specified, a single manager daemon.

       Usage:

          ceph mgr metadata [name]

       Subcommand versions will report a count of running daemon versions.

       Usage:

          ceph mgr versions

       Subcommand count-metadata will report a count of any daemon
       metadata field.

       Usage:

          ceph mgr count-metadata <field>

   osd
       Manage OSD configuration and administration. It uses some
       additional subcommands.

       Subcommand blacklist manages blacklisted clients. It uses some
       additional subcommands.

       Subcommand add adds <addr> to the blacklist (optionally until
       <expire> seconds from now).

       Usage:

          ceph osd blacklist add <EntityAddr> {<float[0.0-]>}

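       As a sketch (the client address and nonce are illustrative):

```shell
# Blacklist a misbehaving client for one hour, then list entries.
ceph osd blacklist add 192.168.0.10:0/3891234567 3600
ceph osd blacklist ls
```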
       Subcommand ls shows blacklisted clients.

       Usage:

          ceph osd blacklist ls

       Subcommand rm removes <addr> from the blacklist.

       Usage:

          ceph osd blacklist rm <EntityAddr>

       Subcommand blocked-by prints a histogram of which OSDs are blocking
       their peers.

       Usage:

          ceph osd blocked-by

       Subcommand create creates a new osd (with optional UUID and ID).

       This command is DEPRECATED as of the Luminous release, and will be
       removed in a future release.

       Subcommand new should instead be used.

       Usage:

          ceph osd create {<uuid>} {<id>}

       Subcommand new can be used to create a new OSD or to recreate a
       previously destroyed OSD with a specific id. The new OSD will have
       the specified uuid, and the command expects a JSON file containing
       the base64 cephx key for auth entity client.osd.<id>, as well as an
       optional base64 cephx key for dm-crypt lockbox access and a dm-crypt
       key. Specifying a dm-crypt key requires specifying the accompanying
       lockbox cephx key.

       Usage:

          ceph osd new {<uuid>} {<id>} -i {<params.json>}

       The parameters JSON file is optional but if provided, is expected to
       maintain a form of the following format:

          {
              "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
              "crush_device_class": "myclass"
          }

       Or:

          {
              "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
              "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
              "dmcrypt_key": "<dm-crypt key>",
              "crush_device_class": "myclass"
          }

       Or:

          {
              "crush_device_class": "myclass"
          }

       The "crush_device_class" property is optional. If specified, it will
       set the initial CRUSH device class for the new OSD.

       Subcommand crush is used for CRUSH management. It uses some
       additional subcommands.

       Subcommand add adds or updates the crushmap position and weight for
       <name> with <weight> and location <args>.

       Usage:

          ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

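       For instance, a sketch placing a hypothetical osd.0 under host node1
       in the default root:

```shell
ceph osd crush add osd.0 1.0 host=node1 root=default
```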
       Subcommand add-bucket adds a no-parent (probably root) crush bucket
       <name> of type <type>.

       Usage:

          ceph osd crush add-bucket <name> <type>

       Subcommand create-or-move creates an entry or moves the existing
       entry for <name> <weight> at/to location <args>.

       Usage:

          ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
          [<args>...]

       Subcommand dump dumps the crush map.

       Usage:

          ceph osd crush dump

       Subcommand get-tunable gets the crush tunable straw_calc_version.

       Usage:

          ceph osd crush get-tunable straw_calc_version

       Subcommand link links the existing entry for <name> under location
       <args>.

       Usage:

          ceph osd crush link <name> <args> [<args>...]

       Subcommand move moves the existing entry for <name> to location
       <args>.

       Usage:

          ceph osd crush move <name> <args> [<args>...]

       Subcommand remove removes <name> from the crush map (everywhere, or
       just at <ancestor>).

       Usage:

          ceph osd crush remove <name> {<ancestor>}

       Subcommand rename-bucket renames bucket <srcname> to <dstname>.

       Usage:

          ceph osd crush rename-bucket <srcname> <dstname>

       Subcommand reweight changes <name>'s weight to <weight> in the
       crush map.

       Usage:

          ceph osd crush reweight <name> <float[0.0-]>

       Subcommand reweight-all recalculates the weights for the tree to
       ensure they sum correctly.

       Usage:

          ceph osd crush reweight-all

       Subcommand reweight-subtree changes all leaf items beneath <name>
       to <weight> in the crush map.

       Usage:

          ceph osd crush reweight-subtree <name> <weight>

       Subcommand rm removes <name> from the crush map (everywhere, or
       just at <ancestor>).

       Usage:

          ceph osd crush rm <name> {<ancestor>}

       Subcommand rule is used for creating crush rules. It uses some
       additional subcommands.

       Subcommand create-erasure creates crush rule <name> for an erasure
       coded pool created with <profile> (default default).

       Usage:

          ceph osd crush rule create-erasure <name> {<profile>}

       Subcommand create-simple creates crush rule <name> to start from
       <root>, replicate across buckets of type <type>, using a choose mode
       of <firstn|indep> (default firstn; indep best for erasure pools).

       Usage:

          ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}

       Subcommand dump dumps crush rule <name> (default all).

       Usage:

          ceph osd crush rule dump {<name>}

       Subcommand ls lists crush rules.

       Usage:

          ceph osd crush rule ls

       Subcommand rm removes crush rule <name>.

       Usage:

          ceph osd crush rule rm <name>

       Subcommand set, used alone, sets the crush map from the input file.

       Usage:

          ceph osd crush set

       Subcommand set with an osdname/osd.id updates the crushmap position
       and weight for <name> to <weight> with location <args>.

       Usage:

          ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

       Subcommand set-tunable sets crush tunable <tunable> to <value>. The
       only tunable that can be set is straw_calc_version.

       Usage:

          ceph osd crush set-tunable straw_calc_version <value>

       Subcommand show-tunables shows the current crush tunables.

       Usage:

          ceph osd crush show-tunables

       Subcommand tree shows the crush buckets and items in a tree view.

       Usage:

          ceph osd crush tree

       Subcommand tunables sets crush tunable values to <profile>.

       Usage:

          ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

       Subcommand unlink unlinks <name> from the crush map (everywhere, or
       just at <ancestor>).

       Usage:

          ceph osd crush unlink <name> {<ancestor>}

       Subcommand df shows OSD utilization.

       Usage:

          ceph osd df {plain|tree}

       Subcommand deep-scrub initiates a deep scrub on the specified osd.

       Usage:

          ceph osd deep-scrub <who>

       Subcommand down sets osd(s) <id> [<id>...] down.

       Usage:

          ceph osd down <ids> [<ids>...]

       Subcommand dump prints a summary of the OSD map.

       Usage:

          ceph osd dump {<int[0-]>}

       Subcommand erasure-code-profile is used for managing the erasure
       code profiles. It uses some additional subcommands.

       Subcommand get gets erasure code profile <name>.

       Usage:

          ceph osd erasure-code-profile get <name>

       Subcommand ls lists all erasure code profiles.

       Usage:

          ceph osd erasure-code-profile ls

       Subcommand rm removes erasure code profile <name>.

       Usage:

          ceph osd erasure-code-profile rm <name>

       Subcommand set creates erasure code profile <name> with
       [<key[=value]> ...] pairs. Add --force at the end to override an
       existing profile (IT IS RISKY).

       Usage:

          ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

       Subcommand find finds osd <id> in the CRUSH map and shows its
       location.

       Usage:

          ceph osd find <int[0-]>

       Subcommand getcrushmap gets the CRUSH map.

       Usage:

          ceph osd getcrushmap {<int[0-]>}

       Subcommand getmap gets the OSD map.

       Usage:

          ceph osd getmap {<int[0-]>}

       Subcommand getmaxosd shows the largest OSD id.

       Usage:

          ceph osd getmaxosd

       Subcommand in sets osd(s) <id> [<id>...] in.

       Usage:

          ceph osd in <ids> [<ids>...]

       Subcommand lost marks an osd as permanently lost. THIS DESTROYS
       DATA IF NO MORE REPLICAS EXIST, BE CAREFUL.

       Usage:

          ceph osd lost <int[0-]> {--yes-i-really-mean-it}

       Subcommand ls shows all OSD ids.

       Usage:

          ceph osd ls {<int[0-]>}

       Subcommand lspools lists pools.

       Usage:

          ceph osd lspools {<int>}

       Subcommand map finds the pg for <object> in <pool>.

       Usage:

          ceph osd map <poolname> <objectname>

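       For instance, a sketch locating a hypothetical object:

```shell
# Show which placement group and OSDs serve object "myobject" in pool
# "rbdpool".
ceph osd map rbdpool myobject
```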
       Subcommand metadata fetches metadata for osd <id>.

       Usage:

          ceph osd metadata {int[0-]} (default all)

       Subcommand out sets osd(s) <id> [<id>...] out.

       Usage:

          ceph osd out <ids> [<ids>...]

       Subcommand ok-to-stop checks whether the list of OSD(s) can be
       stopped without immediately making data unavailable. That is, all
       data should remain readable and writeable, although data redundancy
       may be reduced as some PGs may end up in a degraded (but active)
       state. It will return a success code if it is okay to stop the
       OSD(s), or an error code and informative message if it is not or if
       no conclusion can be drawn at the current time.

       Usage:

          ceph osd ok-to-stop <id> [<ids>...]

       Subcommand pause pauses the osd.

       Usage:

          ceph osd pause

       Subcommand perf prints a dump of OSD perf summary stats.

       Usage:

          ceph osd perf

       Subcommand pg-temp sets the pg_temp mapping pgid:[<id> [<id>...]]
       (developers only).

       Usage:

          ceph osd pg-temp <pgid> {<id> [<id>...]}

       Subcommand force-create-pg forces creation of pg <pgid>.

       Usage:

          ceph osd force-create-pg <pgid>

       Subcommand pool is used for managing data pools. It uses some
       additional subcommands.

       Subcommand create creates a pool.

       Usage:

          ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure}
          {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>}

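       As a sketch (pool names and PG counts are illustrative):

```shell
# A replicated pool with 64 PGs, and an erasure-coded pool using the
# "default" erasure code profile.
ceph osd pool create rbdpool 64 64 replicated
ceph osd pool create ecpool 32 32 erasure default
```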
       Subcommand delete deletes a pool.

       Usage:

          ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

       Subcommand get gets pool parameter <var>.

       Usage:

          ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed

       Only for tiered pools:

          ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
          target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
          cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
          min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

       Only for erasure coded pools:

          ceph osd pool get <poolname> erasure_code_profile

       Use all to get all pool parameters that apply to the pool's type:

          ceph osd pool get <poolname> all

       Subcommand get-quota obtains object or byte limits for the pool.

       Usage:

          ceph osd pool get-quota <poolname>

       Subcommand ls lists pools.

       Usage:

          ceph osd pool ls {detail}

       Subcommand mksnap makes snapshot <snap> in <pool>.

       Usage:

          ceph osd pool mksnap <poolname> <snap>

       Subcommand rename renames <srcpool> to <destpool>.

       Usage:

          ceph osd pool rename <poolname> <poolname>

       Subcommand rmsnap removes snapshot <snap> from <pool>.

       Usage:

          ceph osd pool rmsnap <poolname> <snap>

       Subcommand set sets pool parameter <var> to <val>.

       Usage:

          ceph osd pool set <poolname> size|min_size|pg_num|
          pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
          hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
          target_max_bytes|target_max_objects|cache_target_dirty_ratio|
          cache_target_dirty_high_ratio|
          cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
          min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
          hit_set_search_last_n
          <val> {--yes-i-really-mean-it}

       Subcommand set-quota sets an object or byte limit on the pool.

       Usage:

          ceph osd pool set-quota <poolname> max_objects|max_bytes <val>

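       For instance, a sketch capping a hypothetical pool at 10 GiB:

```shell
ceph osd pool set-quota rbdpool max_bytes 10737418240   # 10 * 1024^3 bytes
ceph osd pool get-quota rbdpool
```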
       Subcommand stats obtains stats from all pools, or from the specified
       pool.

       Usage:

          ceph osd pool stats {<name>}

       Subcommand application is used for adding an annotation to the given
       pool. By default, the possible applications are object, block, and
       file storage (corresponding app-names are "rgw", "rbd", and
       "cephfs"). However, there might be other applications as well.
       Depending on the application, some additional processing may be
       conducted.

       Subcommand disable disables the given application on the given pool.

       Usage:

          ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}

       Subcommand enable adds an annotation to the given pool for the
       mentioned application.

       Usage:

          ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}

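       For instance (the pool name is illustrative):

```shell
# Tag the pool for use by RBD.
ceph osd pool application enable rbdpool rbd
```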
       Subcommand get displays the value for the given key that is
       associated with the given application of the given pool. Not passing
       the optional arguments would display all key-value pairs for all
       applications for all pools.

       Usage:

          ceph osd pool application get {<pool-name>} {<app>} {<key>}

       Subcommand rm removes the key-value pair for the given key in the
       given application of the given pool.

       Usage:

          ceph osd pool application rm <pool-name> <app> <key>

       Subcommand set associates or updates, if it already exists, a
       key-value pair with the given application for the given pool.

       Usage:

          ceph osd pool application set <pool-name> <app> <key> <value>

       Subcommand primary-affinity adjusts the osd primary-affinity in the
       range 0.0 <= <weight> <= 1.0.

       Usage:

          ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

       Subcommand primary-temp sets the primary_temp mapping pgid:<id>|-1
       (developers only).

       Usage:

          ceph osd primary-temp <pgid> <id>

       Subcommand repair initiates repair on a specified osd.

       Usage:

          ceph osd repair <who>

       Subcommand reweight reweights the osd to 0.0 < <weight> < 1.0.

       Usage:

          ceph osd reweight <int[0-]> <float[0.0-1.0]>

       Subcommand reweight-by-pg reweights OSDs by PG distribution
       [overload-percentage-for-consideration, default 120].

       Usage:

          ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
          {--no-increasing}

       Subcommand reweight-by-utilization reweights OSDs by utilization
       [overload-percentage-for-consideration, default 120].

       Usage:

          ceph osd reweight-by-utilization {<int[100-]>}
          {--no-increasing}

       Subcommand rm removes osd(s) <id> [<id>...] from the OSD map.

       Usage:

          ceph osd rm <ids> [<ids>...]

       Subcommand destroy marks OSD id as destroyed, removing its cephx
       entity's keys and all of its dm-crypt and daemon-private config key
       entries.

       This command will not remove the OSD from crush, nor will it remove
       the OSD from the OSD map. Instead, once the command successfully
       completes, the OSD will be shown as destroyed.

       In order to mark an OSD as destroyed, the OSD must first be marked
       as lost.

       Usage:

          ceph osd destroy <id> {--yes-i-really-mean-it}

       Subcommand purge performs a combination of osd destroy, osd rm and
       osd crush remove.

       Usage:

          ceph osd purge <id> {--yes-i-really-mean-it}

       Subcommand safe-to-destroy checks whether it is safe to remove or
       destroy an OSD without reducing overall data redundancy or
       durability. It will return a success code if it is definitely safe,
       or an error code and informative message if it is not or if no
       conclusion can be drawn at the current time.

       Usage:

          ceph osd safe-to-destroy <id> [<ids>...]

       Subcommand scrub initiates a scrub on the specified osd.

       Usage:

          ceph osd scrub <who>

       Subcommand set sets cluster-wide <flag> by updating the OSD map.
       The full flag is not honored anymore since the Mimic release, and
       ceph osd set full is not supported in the Octopus release.

       Usage:

          ceph osd set pause|noup|nodown|noout|noin|nobackfill|
          norebalance|norecover|noscrub|nodeep-scrub|notieragent

       Subcommand setcrushmap sets the crush map from the input file.

       Usage:

          ceph osd setcrushmap

       Subcommand setmaxosd sets the new maximum osd value.

       Usage:

          ceph osd setmaxosd <int[0-]>

       Subcommand set-require-min-compat-client enforces the cluster to be
       backward compatible with the specified client version. This
       subcommand prevents you from making any changes (e.g., crush
       tunables, or using new features) that would violate the current
       setting. Please note, this subcommand will fail if any connected
       daemon or client is not compatible with the features offered by the
       given <version>. To see the features and releases of all clients
       connected to the cluster, please see ceph features.

       Usage:

          ceph osd set-require-min-compat-client <version>

       Subcommand stat prints a summary of the OSD map.

       Usage:

          ceph osd stat

1238 Subcommand tier is used for managing tiers. It uses some additional
1239 subcommands.
1240
1241 Subcommand add adds the tier <tierpool> (the second one) to base pool
1242 <pool> (the first one).
1243
1244 Usage:
1245
1246 ceph osd tier add <poolname> <poolname> {--force-nonempty}
1247
1248 Subcommand add-cache adds a cache <tierpool> (the second one) of size
1249 <size> to existing pool <pool> (the first one).
1250
1251 Usage:
1252
1253 ceph osd tier add-cache <poolname> <poolname> <int[0-]>
1254
1255 Subcommand cache-mode specifies the caching mode for cache tier <pool>.
1256
1257 Usage:
1258
1259 ceph osd tier cache-mode <poolname> writeback|readproxy|readonly|none
1260
1261 Subcommand remove removes the tier <tierpool> (the second one) from
1262 base pool <pool> (the first one).
1263
1264 Usage:
1265
1266 ceph osd tier remove <poolname> <poolname>
1267
1268 Subcommand remove-overlay removes the overlay pool for base pool
1269 <pool>.
1270
1271 Usage:
1272
1273 ceph osd tier remove-overlay <poolname>
1274
1275 Subcommand set-overlay set the overlay pool for base pool <pool> to be
1276 <overlaypool>.
1277
1278 Usage:
1279
1280 ceph osd tier set-overlay <poolname> <poolname>
1281
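       Putting the tier subcommands together, a writeback cache tier is
       typically attached in three steps; the pool names cold-pool and
       hot-pool are examples:

```shell
# Attach hot-pool as a tier of the base pool cold-pool.
ceph osd tier add cold-pool hot-pool
# Absorb writes in the cache and flush them to the base pool later.
ceph osd tier cache-mode hot-pool writeback
# Route client traffic for cold-pool through the cache pool.
ceph osd tier set-overlay cold-pool hot-pool
```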
       Subcommand tree prints the OSD tree.

       Usage:

          ceph osd tree {<int[0-]>}

       Subcommand unpause unpauses the cluster (clears the pause flags).

       Usage:

          ceph osd unpause

       Subcommand unset unsets cluster-wide <flag> by updating the OSD map.

       Usage:

          ceph osd unset pause|noup|nodown|noout|noin|nobackfill|
              norebalance|norecover|noscrub|nodeep-scrub|notieragent

   pg
       It is used for managing the placement groups in OSDs. It uses some
       additional subcommands.

       Subcommand debug shows debug info about pgs.

       Usage:

          ceph pg debug unfound_objects_exist|degraded_pgs_exist

       Subcommand deep-scrub starts a deep-scrub on <pgid>.

       Usage:

          ceph pg deep-scrub <pgid>

       Subcommand dump shows human-readable versions of the pg map (only
       'all' is valid with plain format).

       Usage:

          ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]

       Subcommand dump_json shows a human-readable version of the pg map in
       json only.

       Usage:

          ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]

       Subcommand dump_pools_json shows pg pools info in json only.

       Usage:

          ceph pg dump_pools_json

       Subcommand dump_stuck shows information about stuck pgs.

       Usage:

          ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
              {<int>}

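       For example, to report PGs that have been stuck inactive or unclean
       for more than 300 seconds (the threshold value is an example):

```shell
# States may be combined; the trailing integer is the stuck
# threshold in seconds.
ceph pg dump_stuck inactive unclean 300
```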
       Subcommand getmap gets the binary pg map to -o/stdout.

       Usage:

          ceph pg getmap

       Subcommand ls lists PGs, optionally filtered by pool, OSD, or state.

       Usage:

          ceph pg ls {<int>} {<pg-state> [<pg-state>...]}

       Subcommand ls-by-osd lists PGs on the given OSD.

       Usage:

          ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
              {<pg-state> [<pg-state>...]}

       Subcommand ls-by-pool lists PGs in the given pool.

       Usage:

          ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}

       Subcommand ls-by-primary lists PGs whose primary is the given OSD.

       Usage:

          ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
              {<pg-state> [<pg-state>...]}

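       A few filtering examples (the pool name and OSD id are examples):

```shell
# All PGs stored in pool "rbd".
ceph pg ls-by-pool rbd
# PGs that osd.0 participates in.
ceph pg ls-by-osd osd.0
# PGs for which osd.0 is the primary, restricted to one state.
ceph pg ls-by-primary osd.0 active+clean
```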
       Subcommand map shows the mapping of a pg to OSDs.

       Usage:

          ceph pg map <pgid>

       Subcommand repair starts a repair on <pgid>.

       Usage:

          ceph pg repair <pgid>

       Subcommand scrub starts a scrub on <pgid>.

       Usage:

          ceph pg scrub <pgid>

       Subcommand stat shows placement group status.

       Usage:

          ceph pg stat

   quorum
       Cause a specific MON to enter or exit quorum.

       Usage:

          ceph tell mon.<id> quorum enter|exit

   quorum_status
       Reports the status of the monitor quorum.

       Usage:

          ceph quorum_status

   report
       Reports the full status of the cluster, with optional title tag
       strings.

       Usage:

          ceph report {<tags> [<tags>...]}

   status
       Shows cluster status.

       Usage:

          ceph status

   tell
       Sends a command to a specific daemon.

       Usage:

          ceph tell <name (type.id)> <command> [options...]

       List all available commands.

       Usage:

          ceph tell <name (type.id)> help

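       For example, tell can list a daemon's supported commands or adjust its
       settings at runtime (the debug value is an example):

```shell
# Show every command osd.0 understands.
ceph tell osd.0 help
# Change a config option on the running daemon.
ceph tell osd.0 injectargs --debug-osd 10
```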
   version
       Show the mon daemon version.

       Usage:

          ceph version

OPTIONS
       -i infile
              will specify an input file to be passed along as a payload
              with the command to the monitor cluster. This is only used for
              specific monitor commands.

       -o outfile
              will write any payload returned by the monitor cluster with
              its reply to outfile. Only specific monitor commands (e.g. osd
              getmap) return a payload.

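       Commands that return a binary payload are typically combined with -o;
       for example, fetching the OSD map and inspecting it with
       osdmaptool(8) (the output file name is an example):

```shell
# Save the binary OSD map, then print it in human-readable form.
ceph osd getmap -o /tmp/osdmap
osdmaptool --print /tmp/osdmap
```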
       --setuser user
              will apply the appropriate user ownership to the file
              specified by the option '-o'.

       --setgroup group
              will apply the appropriate group ownership to the file
              specified by the option '-o'.

       -c ceph.conf, --conf=ceph.conf
              Use ceph.conf configuration file instead of the default
              /etc/ceph/ceph.conf to determine monitor addresses during
              startup.

       --id CLIENT_ID, --user CLIENT_ID
              Client id for authentication.

       --name CLIENT_NAME, -n CLIENT_NAME
              Client name for authentication.

       --cluster CLUSTER
              Name of the Ceph cluster.

       --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME
              Submit admin-socket commands via admin sockets in
              /var/run/ceph.

       --admin-socket ADMIN_SOCKET_NOPE
              You probably mean --admin-daemon

       -s, --status
              Show cluster status.

       -w, --watch
              Watch live cluster changes on the default 'cluster' channel.

       -W, --watch-channel
              Watch live cluster changes on any channel (cluster, audit,
              cephadm, or * for all).

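       For instance, to follow a single channel or all of them:

```shell
# Only audit events.
ceph -W audit
# Everything, on all channels.
ceph -W '*'
```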
       --watch-debug
              Watch debug events.

       --watch-info
              Watch info events.

       --watch-sec
              Watch security events.

       --watch-warn
              Watch warning events.

       --watch-error
              Watch error events.

       --version, -v
              Display version.

       --verbose
              Make verbose.

       --concise
              Make less verbose.

       -f {json,json-pretty,xml,xml-pretty,plain}, --format
              Format of output.

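       The machine-readable formats pair well with external tools; jq here
       is an external JSON processor, and the field path below is an
       assumption about the status schema:

```shell
# Print just the overall health state.
ceph status -f json | jq -r '.health.status'
```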
       --connect-timeout CLUSTER_TIMEOUT
              Set a timeout for connecting to the cluster.

       --no-increasing
              Off by default, so the reweight-by-utilization and
              test-reweight-by-utilization commands are allowed to increase
              OSD weights. When this option is given, those commands will
              never increase an OSD's weight, even if the OSD is
              underutilized.

       --block
              Block until completion (scrub and deep-scrub only).

AVAILABILITY
       ceph is part of Ceph, a massively scalable, open-source, distributed
       storage system. Please refer to the Ceph documentation at
       http://ceph.com/docs for more information.

SEE ALSO
       ceph-mon(8), ceph-osd(8), ceph-mds(8)

COPYRIGHT
       2010-2021, Inktank Storage, Inc. and contributors. Licensed under
       Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)



dev                              Mar 18, 2021                        CEPH(8)