CEPH(8)                              Ceph                              CEPH(8)
2
3
4
NAME
ceph - ceph administration tool
7
SYNOPSIS
ceph auth [ add | caps | del | export | get | get-key | get-or-create | get-or-create-key | import | list | print-key | print_key ] ...
10
11 ceph compact
12
13 ceph config-key [ del | exists | get | list | dump | put ] ...
14
15 ceph daemon <name> | <path> <command> ...
16
17 ceph daemonperf <name> | <path> [ interval [ count ] ]
18
19 ceph df {detail}
20
21 ceph fs [ ls | new | reset | rm ] ...
22
23 ceph fsid
24
25 ceph health {detail}
26
27 ceph heap [ dump | start_profiler | stop_profiler | release | stats ] ...
28
29 ceph injectargs <injectedargs> [ <injectedargs>... ]
30
31 ceph log <logtext> [ <logtext>... ]
32
33 ceph mds [ compat | deactivate | fail | rm | rmfailed | set_state | stat | tell ] ...
34
35 ceph mon [ add | dump | getmap | remove | stat ] ...
36
37 ceph mon_status
38
39 ceph osd [ blacklist | blocked-by | create | new | deep-scrub | df | down | dump | erasure-code-profile | find | getcrushmap | getmap | getmaxosd | in | lspools | map | metadata | ok-to-stop | out | pause | perf | pg-temp | force-create-pg | primary-affinity | primary-temp | repair | reweight | reweight-by-pg | rm | destroy | purge | safe-to-destroy | scrub | set | setcrushmap | setmaxosd | stat | tree | unpause | unset ] ...
40
41 ceph osd crush [ add | add-bucket | create-or-move | dump | get-tunable | link | move | remove | rename-bucket | reweight | reweight-all | reweight-subtree | rm | rule | set | set-tunable | show-tunables | tunables | unlink ] ...
42
43 ceph osd pool [ create | delete | get | get-quota | ls | mksnap | rename | rmsnap | set | set-quota | stats ] ...
44
45 ceph osd tier [ add | add-cache | cache-mode | remove | remove-overlay | set-overlay ] ...
46
47 ceph pg [ debug | deep-scrub | dump | dump_json | dump_pools_json | dump_stuck | force_create_pg | getmap | ls | ls-by-osd | ls-by-pool | ls-by-primary | map | repair | scrub | set_full_ratio | set_nearfull_ratio | stat ] ...
48
49 ceph quorum [ enter | exit ]
50
51 ceph quorum_status
52
53 ceph report { <tags> [ <tags>... ] }
54
55 ceph scrub
56
57 ceph status
58
59 ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
60
61 ceph tell <name (type.id)> <args> [<args>...]
62
63 ceph version
64
65
DESCRIPTION
ceph is a control utility used for manual deployment and maintenance of a
Ceph cluster. It provides a diverse set of commands for deploying monitors,
OSDs, placement groups, and MDS daemons, as well as for overall maintenance
and administration of the cluster.
71
COMMANDS
auth
Manage authentication keys. It is used for adding, removing, exporting or
updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.
77
Subcommand add adds authentication info for a particular entity from an
input file, or generates a random key if no input is given, and/or adds any
caps specified in the command.
81
82 Usage:
83
84 ceph auth add <entity> {<caps> [<caps>...]}
85
86 Subcommand caps updates caps for name from caps specified in the com‐
87 mand.
88
89 Usage:
90
91 ceph auth caps <entity> <caps> [<caps>...]
92
93 Subcommand del deletes all caps for name.
94
95 Usage:
96
97 ceph auth del <entity>
98
99 Subcommand export writes keyring for requested entity, or master
100 keyring if none given.
101
102 Usage:
103
104 ceph auth export {<entity>}
105
106 Subcommand get writes keyring file with requested key.
107
108 Usage:
109
110 ceph auth get <entity>
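
For example, the following writes the keyring for the default client.admin
entity to a file (the output path is illustrative):

ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring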
111
112 Subcommand get-key displays requested key.
113
114 Usage:
115
116 ceph auth get-key <entity>
117
Subcommand get-or-create adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, and/or
adds any caps specified in the command.
121
122 Usage:
123
124 ceph auth get-or-create <entity> {<caps> [<caps>...]}
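
For example, the following creates (or fetches) a key for a hypothetical
client named client.foo with read access to the monitors and read/write
access to a pool named bar (both names are illustrative):

ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=bar'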
125
126 Subcommand get-or-create-key gets or adds key for name from system/caps
127 pairs specified in the command. If key already exists, any given caps
128 must match the existing caps for that key.
129
130 Usage:
131
132 ceph auth get-or-create-key <entity> {<caps> [<caps>...]}
133
134 Subcommand import reads keyring from input file.
135
136 Usage:
137
138 ceph auth import
139
140 Subcommand ls lists authentication state.
141
142 Usage:
143
144 ceph auth ls
145
146 Subcommand print-key displays requested key.
147
148 Usage:
149
150 ceph auth print-key <entity>
151
152 Subcommand print_key displays requested key.
153
154 Usage:
155
156 ceph auth print_key <entity>
157
158 compact
159 Causes compaction of monitor's leveldb storage.
160
161 Usage:
162
163 ceph compact
164
165 config-key
Manage configuration keys. It uses some additional subcommands.
167
168 Subcommand del deletes configuration key.
169
170 Usage:
171
172 ceph config-key del <key>
173
Subcommand exists checks for a configuration key's existence.
175
176 Usage:
177
178 ceph config-key exists <key>
179
180 Subcommand get gets the configuration key.
181
182 Usage:
183
184 ceph config-key get <key>
185
Subcommand ls lists configuration keys.
187
188 Usage:
189
190 ceph config-key ls
191
192 Subcommand dump dumps configuration keys and values.
193
194 Usage:
195
196 ceph config-key dump
197
Subcommand set sets a configuration key to the given value.
199
200 Usage:
201
202 ceph config-key set <key> {<val>}
203
204 daemon
205 Submit admin-socket commands.
206
207 Usage:
208
209 ceph daemon {daemon_name|socket_path} {command} ...
210
211 Example:
212
213 ceph daemon osd.0 help
214
215 daemonperf
216 Watch performance counters from a Ceph daemon.
217
218 Usage:
219
220 ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
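
For example, the following samples the counters of osd.0 every 2 seconds, 10
times (the daemon name, interval and count are illustrative):

ceph daemonperf osd.0 2 10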
221
222 df
223 Show cluster's free space status.
224
225 Usage:
226
227 ceph df {detail}
228
229 features
Show the releases and features of all daemons and clients connected to the
cluster, along with a count of each in buckets grouped by the corresponding
feature/release. Each release of Ceph supports a different set of features,
expressed by the features bitmask. New cluster features require that clients
support the feature, or else they are not allowed to connect to the cluster.
As new features or capabilities are enabled after an upgrade, older clients
are prevented from connecting.
238
239 Usage:
240
241 ceph features
242
243 fs
244 Manage cephfs filesystems. It uses some additional subcommands.
245
Subcommand ls lists filesystems.
247
248 Usage:
249
250 ceph fs ls
251
Subcommand new creates a new filesystem using named pools <metadata> and
<data>.
254
255 Usage:
256
257 ceph fs new <fs_name> <metadata> <data>
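
For example, assuming two pools created for this purpose (pool names and PG
counts are illustrative), a filesystem named cephfs could be created with:

ceph osd pool create cephfs_metadata 64
ceph osd pool create cephfs_data 64
ceph fs new cephfs cephfs_metadata cephfs_data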
258
Subcommand reset is used for disaster recovery only: it resets the
filesystem to a single-MDS map.
261
262 Usage:
263
264 ceph fs reset <fs_name> {--yes-i-really-mean-it}
265
Subcommand rm disables the named filesystem.
267
268 Usage:
269
270 ceph fs rm <fs_name> {--yes-i-really-mean-it}
271
272 fsid
273 Show cluster's FSID/UUID.
274
275 Usage:
276
277 ceph fsid
278
279 health
280 Show cluster's health.
281
282 Usage:
283
284 ceph health {detail}
285
286 heap
287 Show heap usage info (available only if compiled with tcmalloc)
288
289 Usage:
290
291 ceph heap dump|start_profiler|stop_profiler|release|stats
292
293 injectargs
Inject configuration arguments into the monitor.
295
296 Usage:
297
298 ceph injectargs <injected_args> [<injected_args>...]
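
For example, the following raises the monitor debug level (the option and
level shown are illustrative):

ceph injectargs '--debug-mon 10/10'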
299
300 log
301 Log supplied text to the monitor log.
302
303 Usage:
304
305 ceph log <logtext> [<logtext>...]
306
307 mds
308 Manage metadata server configuration and administration. It uses some
309 additional subcommands.
310
311 Subcommand compat manages compatible features. It uses some additional
312 subcommands.
313
314 Subcommand rm_compat removes compatible feature.
315
316 Usage:
317
318 ceph mds compat rm_compat <int[0-]>
319
320 Subcommand rm_incompat removes incompatible feature.
321
322 Usage:
323
324 ceph mds compat rm_incompat <int[0-]>
325
326 Subcommand show shows mds compatibility settings.
327
328 Usage:
329
330 ceph mds compat show
331
332 Subcommand deactivate stops mds.
333
334 Usage:
335
336 ceph mds deactivate <who>
337
Subcommand fail forces an MDS to the failed state.
339
340 Usage:
341
342 ceph mds fail <who>
343
344 Subcommand rm removes inactive mds.
345
346 Usage:
347
ceph mds rm <int[0-]> <name (type.id)>
349
350 Subcommand rmfailed removes failed mds.
351
352 Usage:
353
354 ceph mds rmfailed <int[0-]>
355
356 Subcommand set_state sets mds state of <gid> to <numeric-state>.
357
358 Usage:
359
360 ceph mds set_state <int[0-]> <int[0-20]>
361
362 Subcommand stat shows MDS status.
363
364 Usage:
365
366 ceph mds stat
367
368 Subcommand tell sends command to particular mds.
369
370 Usage:
371
372 ceph mds tell <who> <args> [<args>...]
373
374 mon
375 Manage monitor configuration and administration. It uses some addi‐
376 tional subcommands.
377
378 Subcommand add adds new monitor named <name> at <addr>.
379
380 Usage:
381
382 ceph mon add <name> <IPaddr[:port]>
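
For example, to add a monitor named c at an illustrative address:

ceph mon add c 10.0.0.3:6789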
383
384 Subcommand dump dumps formatted monmap (optionally from epoch)
385
386 Usage:
387
388 ceph mon dump {<int[0-]>}
389
390 Subcommand getmap gets monmap.
391
392 Usage:
393
394 ceph mon getmap {<int[0-]>}
395
396 Subcommand remove removes monitor named <name>.
397
398 Usage:
399
400 ceph mon remove <name>
401
402 Subcommand stat summarizes monitor status.
403
404 Usage:
405
406 ceph mon stat
407
408 mon_status
409 Reports status of monitors.
410
411 Usage:
412
413 ceph mon_status
414
415 mgr
416 Ceph manager daemon configuration and management.
417
418 Subcommand dump dumps the latest MgrMap, which describes the active and
419 standby manager daemons.
420
421 Usage:
422
423 ceph mgr dump
424
Subcommand fail will mark a manager daemon as failed, removing it from the
manager map. If it is the active manager daemon, a standby will take its
place.
428
429 Usage:
430
431 ceph mgr fail <name>
432
433 Subcommand module ls will list currently enabled manager modules (plug‐
434 ins).
435
436 Usage:
437
438 ceph mgr module ls
439
440 Subcommand module enable will enable a manager module. Available mod‐
441 ules are included in MgrMap and visible via mgr dump.
442
443 Usage:
444
445 ceph mgr module enable <module>
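
For example, to enable the dashboard module:

ceph mgr module enable dashboard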
446
447 Subcommand module disable will disable an active manager module.
448
449 Usage:
450
451 ceph mgr module disable <module>
452
453 Subcommand metadata will report metadata about all manager daemons or,
454 if the name is specified, a single manager daemon.
455
456 Usage:
457
458 ceph mgr metadata [name]
459
460 Subcommand versions will report a count of running daemon versions.
461
462 Usage:
463
464 ceph mgr versions
465
466 Subcommand count-metadata will report a count of any daemon metadata
467 field.
468
469 Usage:
470
471 ceph mgr count-metadata <field>
472
473 osd
474 Manage OSD configuration and administration. It uses some additional
475 subcommands.
476
Subcommand blacklist manages blacklisted clients. It uses some additional
subcommands.
479
Subcommand add adds <addr> to the blacklist (optionally until <expire>
seconds from now).
482
483 Usage:
484
485 ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
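
For example, the following blacklists an illustrative client address for one
hour:

ceph osd blacklist add 198.51.100.7:0/0 3600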
486
Subcommand ls shows blacklisted clients.
488
489 Usage:
490
491 ceph osd blacklist ls
492
Subcommand rm removes <addr> from the blacklist.
494
495 Usage:
496
497 ceph osd blacklist rm <EntityAddr>
498
499 Subcommand blocked-by prints a histogram of which OSDs are blocking
500 their peers
501
502 Usage:
503
504 ceph osd blocked-by
505
506 Subcommand create creates new osd (with optional UUID and ID).
507
508 This command is DEPRECATED as of the Luminous release, and will be
509 removed in a future release.
510
511 Subcommand new should instead be used.
512
513 Usage:
514
515 ceph osd create {<uuid>} {<id>}
516
Subcommand new can be used to create a new OSD or to recreate a previously
destroyed OSD with a specific id. The new OSD will have the specified uuid,
and the command expects a JSON file containing the base64 cephx key for auth
entity client.osd.<id>, as well as an optional base64 cephx key for dm-crypt
lockbox access and a dm-crypt key. Specifying a dm-crypt key requires
specifying the accompanying lockbox cephx key.
523
524 Usage:
525
526 ceph osd new {<uuid>} {<id>} -i {<params.json>}
527
The parameters JSON file is optional but, if provided, is expected to follow
one of the following formats:
530
531 {
532 "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
533 "crush_device_class": "myclass"
534 }
535
536 Or:
537
538 {
539 "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
540 "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
541 "dmcrypt_key": "<dm-crypt key>",
542 "crush_device_class": "myclass"
543 }
544
545 Or:
546
547 {
548 "crush_device_class": "myclass"
549 }
550
551 The "crush_device_class" property is optional. If specified, it will
552 set the initial CRUSH device class for the new OSD.
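
For example, to recreate a previously destroyed OSD with id 12 (the id, uuid
and file name are illustrative) from a parameters file in one of the formats
above:

ceph osd new $(uuidgen) 12 -i params.json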
553
554 Subcommand crush is used for CRUSH management. It uses some additional
555 subcommands.
556
557 Subcommand add adds or updates crushmap position and weight for <name>
558 with <weight> and location <args>.
559
560 Usage:
561
562 ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
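
For example, the following places osd.5 with weight 1.0 under an illustrative
host and root in the CRUSH map:

ceph osd crush add osd.5 1.0 host=node1 root=default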
563
564 Subcommand add-bucket adds no-parent (probably root) crush bucket
565 <name> of type <type>.
566
567 Usage:
568
569 ceph osd crush add-bucket <name> <type>
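
For example, to add a bucket with the illustrative name rack1 of the standard
type rack:

ceph osd crush add-bucket rack1 rack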
570
571 Subcommand create-or-move creates entry or moves existing entry for
572 <name> <weight> at/to location <args>.
573
574 Usage:
575
576 ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
577 [<args>...]
578
579 Subcommand dump dumps crush map.
580
581 Usage:
582
583 ceph osd crush dump
584
Subcommand get-tunable gets the crush tunable straw_calc_version.
586
587 Usage:
588
589 ceph osd crush get-tunable straw_calc_version
590
591 Subcommand link links existing entry for <name> under location <args>.
592
593 Usage:
594
595 ceph osd crush link <name> <args> [<args>...]
596
597 Subcommand move moves existing entry for <name> to location <args>.
598
599 Usage:
600
601 ceph osd crush move <name> <args> [<args>...]
602
603 Subcommand remove removes <name> from crush map (everywhere, or just at
604 <ancestor>).
605
606 Usage:
607
608 ceph osd crush remove <name> {<ancestor>}
609
Subcommand rename-bucket renames bucket <srcname> to <dstname>.
611
612 Usage:
613
614 ceph osd crush rename-bucket <srcname> <dstname>
615
Subcommand reweight changes <name>'s weight to <weight> in the crush map.
617
618 Usage:
619
620 ceph osd crush reweight <name> <float[0.0-]>
621
Subcommand reweight-all recalculates the weights for the tree to ensure they
sum correctly.
624
625 Usage:
626
627 ceph osd crush reweight-all
628
Subcommand reweight-subtree changes all leaf items beneath <name> to
<weight> in the crush map.
631
632 Usage:
633
634 ceph osd crush reweight-subtree <name> <weight>
635
636 Subcommand rm removes <name> from crush map (everywhere, or just at
637 <ancestor>).
638
639 Usage:
640
641 ceph osd crush rm <name> {<ancestor>}
642
643 Subcommand rule is used for creating crush rules. It uses some addi‐
644 tional subcommands.
645
646 Subcommand create-erasure creates crush rule <name> for erasure coded
647 pool created with <profile> (default default).
648
649 Usage:
650
651 ceph osd crush rule create-erasure <name> {<profile>}
652
653 Subcommand create-simple creates crush rule <name> to start from
654 <root>, replicate across buckets of type <type>, using a choose mode of
655 <firstn|indep> (default firstn; indep best for erasure pools).
656
657 Usage:
658
659 ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
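
For example, the following creates a rule with the illustrative name myrule
that starts from the default root and replicates across buckets of type host:

ceph osd crush rule create-simple myrule default host firstn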
660
661 Subcommand dump dumps crush rule <name> (default all).
662
663 Usage:
664
665 ceph osd crush rule dump {<name>}
666
667 Subcommand ls lists crush rules.
668
669 Usage:
670
671 ceph osd crush rule ls
672
673 Subcommand rm removes crush rule <name>.
674
675 Usage:
676
677 ceph osd crush rule rm <name>
678
Subcommand set, used alone, sets the crush map from the input file.
680
681 Usage:
682
683 ceph osd crush set
684
Subcommand set with an osdname/osd.id updates the crushmap position and
weight for <name> to <weight> with location <args>.
687
688 Usage:
689
690 ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
691
Subcommand set-tunable sets crush tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.
694
695 Usage:
696
697 ceph osd crush set-tunable straw_calc_version <value>
698
699 Subcommand show-tunables shows current crush tunables.
700
701 Usage:
702
703 ceph osd crush show-tunables
704
705 Subcommand tree shows the crush buckets and items in a tree view.
706
707 Usage:
708
709 ceph osd crush tree
710
Subcommand tunables sets crush tunable values to <profile>.
712
713 Usage:
714
715 ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default
716
717 Subcommand unlink unlinks <name> from crush map (everywhere, or just at
718 <ancestor>).
719
720 Usage:
721
722 ceph osd crush unlink <name> {<ancestor>}
723
724 Subcommand df shows OSD utilization
725
726 Usage:
727
728 ceph osd df {plain|tree}
729
730 Subcommand deep-scrub initiates deep scrub on specified osd.
731
732 Usage:
733
734 ceph osd deep-scrub <who>
735
736 Subcommand down sets osd(s) <id> [<id>...] down.
737
738 Usage:
739
740 ceph osd down <ids> [<ids>...]
741
742 Subcommand dump prints summary of OSD map.
743
744 Usage:
745
746 ceph osd dump {<int[0-]>}
747
748 Subcommand erasure-code-profile is used for managing the erasure code
749 profiles. It uses some additional subcommands.
750
751 Subcommand get gets erasure code profile <name>.
752
753 Usage:
754
755 ceph osd erasure-code-profile get <name>
756
757 Subcommand ls lists all erasure code profiles.
758
759 Usage:
760
761 ceph osd erasure-code-profile ls
762
763 Subcommand rm removes erasure code profile <name>.
764
765 Usage:
766
767 ceph osd erasure-code-profile rm <name>
768
Subcommand set creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add --force at the end to override an existing profile (THIS IS
RISKY).
772
773 Usage:
774
775 ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
776
Subcommand find finds osd <id> in the CRUSH map and shows its location.
778
779 Usage:
780
781 ceph osd find <int[0-]>
782
783 Subcommand getcrushmap gets CRUSH map.
784
785 Usage:
786
787 ceph osd getcrushmap {<int[0-]>}
788
789 Subcommand getmap gets OSD map.
790
791 Usage:
792
793 ceph osd getmap {<int[0-]>}
794
795 Subcommand getmaxosd shows largest OSD id.
796
797 Usage:
798
799 ceph osd getmaxosd
800
801 Subcommand in sets osd(s) <id> [<id>...] in.
802
803 Usage:
804
805 ceph osd in <ids> [<ids>...]
806
807 Subcommand lost marks osd as permanently lost. THIS DESTROYS DATA IF NO
808 MORE REPLICAS EXIST, BE CAREFUL.
809
810 Usage:
811
812 ceph osd lost <int[0-]> {--yes-i-really-mean-it}
813
814 Subcommand ls shows all OSD ids.
815
816 Usage:
817
818 ceph osd ls {<int[0-]>}
819
820 Subcommand lspools lists pools.
821
822 Usage:
823
824 ceph osd lspools {<int>}
825
826 Subcommand map finds pg for <object> in <pool>.
827
828 Usage:
829
830 ceph osd map <poolname> <objectname>
831
832 Subcommand metadata fetches metadata for osd <id>.
833
834 Usage:
835
836 ceph osd metadata {int[0-]} (default all)
837
838 Subcommand out sets osd(s) <id> [<id>...] out.
839
840 Usage:
841
842 ceph osd out <ids> [<ids>...]
843
844 Subcommand ok-to-stop checks whether the list of OSD(s) can be stopped
845 without immediately making data unavailable. That is, all data should
846 remain readable and writeable, although data redundancy may be reduced
847 as some PGs may end up in a degraded (but active) state. It will
848 return a success code if it is okay to stop the OSD(s), or an error
849 code and informative message if it is not or if no conclusion can be
850 drawn at the current time.
851
852 Usage:
853
854 ceph osd ok-to-stop <id> [<ids>...]
855
856 Subcommand pause pauses osd.
857
858 Usage:
859
860 ceph osd pause
861
862 Subcommand perf prints dump of OSD perf summary stats.
863
864 Usage:
865
866 ceph osd perf
867
Subcommand pg-temp sets the pg_temp mapping pgid:[<id> [<id>...]]
(developers only).
870
871 Usage:
872
873 ceph osd pg-temp <pgid> {<id> [<id>...]}
874
875 Subcommand force-create-pg forces creation of pg <pgid>.
876
877 Usage:
878
879 ceph osd force-create-pg <pgid>
880
881 Subcommand pool is used for managing data pools. It uses some addi‐
882 tional subcommands.
883
884 Subcommand create creates pool.
885
886 Usage:
887
888 ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
889 {<erasure_code_profile>} {<rule>} {<int>}
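
For example, the following creates a replicated pool with an illustrative
name and 64 placement groups:

ceph osd pool create mypool 64 64 replicated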
890
891 Subcommand delete deletes pool.
892
893 Usage:
894
895 ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}
896
897 Subcommand get gets pool parameter <var>.
898
899 Usage:
900
901 ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
902 pgp_num|crush_rule|auid|write_fadvise_dontneed
903
904 Only for tiered pools:
905
906 ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
907 target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
908 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
909 min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n
910
911 Only for erasure coded pools:
912
913 ceph osd pool get <poolname> erasure_code_profile
914
915 Use all to get all pool parameters that apply to the pool's type:
916
917 ceph osd pool get <poolname> all
918
919 Subcommand get-quota obtains object or byte limits for pool.
920
921 Usage:
922
923 ceph osd pool get-quota <poolname>
924
Subcommand ls lists pools.
926
927 Usage:
928
929 ceph osd pool ls {detail}
930
931 Subcommand mksnap makes snapshot <snap> in <pool>.
932
933 Usage:
934
935 ceph osd pool mksnap <poolname> <snap>
936
937 Subcommand rename renames <srcpool> to <destpool>.
938
939 Usage:
940
941 ceph osd pool rename <poolname> <poolname>
942
943 Subcommand rmsnap removes snapshot <snap> from <pool>.
944
945 Usage:
946
947 ceph osd pool rmsnap <poolname> <snap>
948
949 Subcommand set sets pool parameter <var> to <val>.
950
951 Usage:
952
953 ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
954 pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
955 hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
956 target_max_bytes|target_max_objects|cache_target_dirty_ratio|
957 cache_target_dirty_high_ratio|
958 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
959 min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
960 hit_set_search_last_n
961 <val> {--yes-i-really-mean-it}
962
963 Subcommand set-quota sets object or byte limit on pool.
964
965 Usage:
966
967 ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
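
For example, the following limits an illustrative pool to 100 GiB (the value
is given in bytes):

ceph osd pool set-quota mypool max_bytes 107374182400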
968
Subcommand stats obtains stats from all pools, or from the specified pool.
970
971 Usage:
972
973 ceph osd pool stats {<name>}
974
Subcommand primary-affinity adjusts the osd primary-affinity, where
0.0 <= <weight> <= 1.0.
977
978 Usage:
979
980 ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
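
For example, the following halves the likelihood that osd.4 (an illustrative
id) is chosen as a primary:

ceph osd primary-affinity osd.4 0.5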
981
982 Subcommand primary-temp sets primary_temp mapping pgid:<id>|-1 (devel‐
983 opers only).
984
985 Usage:
986
987 ceph osd primary-temp <pgid> <id>
988
989 Subcommand repair initiates repair on a specified osd.
990
991 Usage:
992
993 ceph osd repair <who>
994
995 Subcommand reweight reweights osd to 0.0 < <weight> < 1.0.
996
997 Usage:
998
ceph osd reweight <int[0-]> <float[0.0-1.0]>
1000
Subcommand reweight-by-pg reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].
1003
1004 Usage:
1005
ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname>...]}
1007 {--no-increasing}
1008
Subcommand reweight-by-utilization reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].
1011
1012 Usage:
1013
1014 ceph osd reweight-by-utilization {<int[100-]>}
1015 {--no-increasing}
1016
1017 Subcommand rm removes osd(s) <id> [<id>...] from the OSD map.
1018
1019 Usage:
1020
1021 ceph osd rm <ids> [<ids>...]
1022
1023 Subcommand destroy marks OSD id as destroyed, removing its cephx
1024 entity's keys and all of its dm-crypt and daemon-private config key
1025 entries.
1026
This command will not remove the OSD from crush, nor will it remove the OSD
from the OSD map. Instead, once the command successfully completes, the OSD
will be shown as destroyed in the OSD map.
1030
1031 In order to mark an OSD as destroyed, the OSD must first be marked as
1032 lost.
1033
1034 Usage:
1035
1036 ceph osd destroy <id> {--yes-i-really-mean-it}
1037
1038 Subcommand purge performs a combination of osd destroy, osd rm and osd
1039 crush remove.
1040
1041 Usage:
1042
1043 ceph osd purge <id> {--yes-i-really-mean-it}
1044
1045 Subcommand safe-to-destroy checks whether it is safe to remove or
1046 destroy an OSD without reducing overall data redundancy or durability.
1047 It will return a success code if it is definitely safe, or an error
1048 code and informative message if it is not or if no conclusion can be
1049 drawn at the current time.
1050
1051 Usage:
1052
1053 ceph osd safe-to-destroy <id> [<ids>...]
1054
1055 Subcommand scrub initiates scrub on specified osd.
1056
1057 Usage:
1058
1059 ceph osd scrub <who>
1060
1061 Subcommand set sets <key>.
1062
1063 Usage:
1064
1065 ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
1066 norebalance|norecover|noscrub|nodeep-scrub|notieragent
1067
1068 Subcommand setcrushmap sets crush map from input file.
1069
1070 Usage:
1071
1072 ceph osd setcrushmap
1073
1074 Subcommand setmaxosd sets new maximum osd value.
1075
1076 Usage:
1077
1078 ceph osd setmaxosd <int[0-]>
1079
Subcommand set-require-min-compat-client enforces that the cluster remain
backward compatible with the specified client version. Once set, it prevents
you from making any changes (e.g., to crush tunables, or enabling new
features) that would violate the setting. Note that this subcommand will
fail if any connected daemon or client is not compatible with the features
offered by the given <version>. To see the features and releases of all
clients connected to the cluster, see ceph features.
1088
1089 Usage:
1090
1091 ceph osd set-require-min-compat-client <version>
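
For example, to require that all clients support at least the jewel release:

ceph osd set-require-min-compat-client jewel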
1092
1093 Subcommand stat prints summary of OSD map.
1094
1095 Usage:
1096
1097 ceph osd stat
1098
1099 Subcommand tier is used for managing tiers. It uses some additional
1100 subcommands.
1101
1102 Subcommand add adds the tier <tierpool> (the second one) to base pool
1103 <pool> (the first one).
1104
1105 Usage:
1106
1107 ceph osd tier add <poolname> <poolname> {--force-nonempty}
1108
1109 Subcommand add-cache adds a cache <tierpool> (the second one) of size
1110 <size> to existing pool <pool> (the first one).
1111
1112 Usage:
1113
1114 ceph osd tier add-cache <poolname> <poolname> <int[0-]>
1115
1116 Subcommand cache-mode specifies the caching mode for cache tier <pool>.
1117
1118 Usage:
1119
1120 ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
1121 readforward|readproxy
1122
1123 Subcommand remove removes the tier <tierpool> (the second one) from
1124 base pool <pool> (the first one).
1125
1126 Usage:
1127
1128 ceph osd tier remove <poolname> <poolname>
1129
1130 Subcommand remove-overlay removes the overlay pool for base pool
1131 <pool>.
1132
1133 Usage:
1134
1135 ceph osd tier remove-overlay <poolname>
1136
Subcommand set-overlay sets the overlay pool for base pool <pool> to be
<overlaypool>.
1139
1140 Usage:
1141
1142 ceph osd tier set-overlay <poolname> <poolname>
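
For example, a writeback cache tier for an illustrative base pool named
coldpool, backed by an illustrative cache pool named hotpool, could be set
up with:

ceph osd tier add coldpool hotpool
ceph osd tier cache-mode hotpool writeback
ceph osd tier set-overlay coldpool hotpool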
1143
1144 Subcommand tree prints OSD tree.
1145
1146 Usage:
1147
1148 ceph osd tree {<int[0-]>}
1149
1150 Subcommand unpause unpauses osd.
1151
1152 Usage:
1153
1154 ceph osd unpause
1155
1156 Subcommand unset unsets <key>.
1157
1158 Usage:
1159
1160 ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
1161 norebalance|norecover|noscrub|nodeep-scrub|notieragent
1162
1163 pg
1164 It is used for managing the placement groups in OSDs. It uses some
1165 additional subcommands.
1166
1167 Subcommand debug shows debug info about pgs.
1168
1169 Usage:
1170
1171 ceph pg debug unfound_objects_exist|degraded_pgs_exist
1172
1173 Subcommand deep-scrub starts deep-scrub on <pgid>.
1174
1175 Usage:
1176
1177 ceph pg deep-scrub <pgid>
1178
1179 Subcommand dump shows human-readable versions of pg map (only 'all'
1180 valid with plain).
1181
1182 Usage:
1183
1184 ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1185
1186 Subcommand dump_json shows human-readable version of pg map in json
1187 only.
1188
1189 Usage:
1190
1191 ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
1192
1193 Subcommand dump_pools_json shows pg pools info in json only.
1194
1195 Usage:
1196
1197 ceph pg dump_pools_json
1198
1199 Subcommand dump_stuck shows information about stuck pgs.
1200
1201 Usage:
1202
1203 ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
1204 {<int>}
1205
1206 Subcommand getmap gets binary pg map to -o/stdout.
1207
1208 Usage:
1209
1210 ceph pg getmap
1211
Subcommand ls lists PGs, optionally filtered by pool, OSD, and/or state.
1213
1214 Usage:
1215
1216 ceph pg ls {<int>} {active|clean|down|replay|splitting|
1217 scrubbing|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
1219 deep_scrub|backfill|backfill_toofull|recovery_wait|
1220 undersized [active|clean|down|replay|splitting|
1221 scrubbing|degraded|inconsistent|peering|repair|
1222 recovery|backfill_wait|incomplete|stale|remapped|
1223 deep_scrub|backfill|backfill_toofull|recovery_wait|
1224 undersized...]}
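
For example, the following lists degraded PGs in the pool with an
illustrative id of 2:

ceph pg ls 2 degraded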
1225
1226 Subcommand ls-by-osd lists pg on osd [osd]
1227
1228 Usage:
1229
1230 ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
1231 {active|clean|down|replay|splitting|
1232 scrubbing|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
1234 deep_scrub|backfill|backfill_toofull|recovery_wait|
1235 undersized [active|clean|down|replay|splitting|
1236 scrubbing|degraded|inconsistent|peering|repair|
1237 recovery|backfill_wait|incomplete|stale|remapped|
1238 deep_scrub|backfill|backfill_toofull|recovery_wait|
1239 undersized...]}
1240
1241 Subcommand ls-by-pool lists pg with pool = [poolname]
1242
1243 Usage:
1244
1245 ceph pg ls-by-pool <poolstr> {<int>} {active|
1246 clean|down|replay|splitting|
1247 scrubbing|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
1249 deep_scrub|backfill|backfill_toofull|recovery_wait|
1250 undersized [active|clean|down|replay|splitting|
1251 scrubbing|degraded|inconsistent|peering|repair|
1252 recovery|backfill_wait|incomplete|stale|remapped|
1253 deep_scrub|backfill|backfill_toofull|recovery_wait|
1254 undersized...]}
1255
1256 Subcommand ls-by-primary lists pg with primary = [osd]
1257
1258 Usage:
1259
1260 ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
1261 {active|clean|down|replay|splitting|
1262 scrubbing|degraded|inconsistent|peering|repair|
recovery|backfill_wait|incomplete|stale|remapped|
1264 deep_scrub|backfill|backfill_toofull|recovery_wait|
1265 undersized [active|clean|down|replay|splitting|
1266 scrubbing|degraded|inconsistent|peering|repair|
1267 recovery|backfill_wait|incomplete|stale|remapped|
1268 deep_scrub|backfill|backfill_toofull|recovery_wait|
1269 undersized...]}
1270
1271 Subcommand map shows mapping of pg to osds.
1272
1273 Usage:
1274
1275 ceph pg map <pgid>
1276
1277 Subcommand repair starts repair on <pgid>.
1278
1279 Usage:
1280
1281 ceph pg repair <pgid>
1282
1283 Subcommand scrub starts scrub on <pgid>.
1284
1285 Usage:
1286
1287 ceph pg scrub <pgid>
1288
1289 Subcommand set_full_ratio sets ratio at which pgs are considered full.
1290
1291 Usage:
1292
1293 ceph pg set_full_ratio <float[0.0-1.0]>
1294
1295 Subcommand set_backfillfull_ratio sets ratio at which pgs are consid‐
1296 ered too full to backfill.
1297
1298 Usage:
1299
1300 ceph pg set_backfillfull_ratio <float[0.0-1.0]>
1301
1302 Subcommand set_nearfull_ratio sets ratio at which pgs are considered
1303 nearly full.
1304
1305 Usage:
1306
1307 ceph pg set_nearfull_ratio <float[0.0-1.0]>
1308
1309 Subcommand stat shows placement group status.
1310
1311 Usage:
1312
1313 ceph pg stat
1314
1315 quorum
Causes a MON to enter or exit quorum.
1317
1318 Usage:
1319
1320 ceph quorum enter|exit
1321
1322 Note: this only works on the MON to which the ceph command is con‐
1323 nected. If you want a specific MON to enter or exit quorum, use this
1324 syntax:
1325
1326 ceph tell mon.<id> quorum enter|exit
1327
1328 quorum_status
1329 Reports status of monitor quorum.
1330
1331 Usage:
1332
1333 ceph quorum_status
1334
1335 report
Reports the full status of the cluster, with optional title tag strings.
1337
1338 Usage:
1339
1340 ceph report {<tags> [<tags>...]}
1341
1342 scrub
1343 Scrubs the monitor stores.
1344
1345 Usage:
1346
1347 ceph scrub
1348
1349 status
1350 Shows cluster status.
1351
1352 Usage:
1353
1354 ceph status
1355
1356 sync force
Forces a sync of and clears the monitor store.
1358
1359 Usage:
1360
1361 ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
1362
1363 tell
1364 Sends a command to a specific daemon.
1365
1366 Usage:
1367
1368 ceph tell <name (type.id)> <args> [<args>...]
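
For example, to ask osd.0 (an illustrative daemon) for its version:

ceph tell osd.0 version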
1369
1370 List all available commands.
1371
1372 Usage:
1373
1374 ceph tell <name (type.id)> help
1375
1376 version
1377 Show mon daemon version
1378
1379 Usage:
1380
1381 ceph version
1382
OPTIONS
-i infile
1385 will specify an input file to be passed along as a payload with
1386 the command to the monitor cluster. This is only used for spe‐
1387 cific monitor commands.
1388
1389 -o outfile
1390 will write any payload returned by the monitor cluster with its
1391 reply to outfile. Only specific monitor commands (e.g. osd
1392 getmap) return a payload.
1393
1394 --setuser user
1395 will apply the appropriate user ownership to the file specified
1396 by the option '-o'.
1397
1398 --setgroup group
1399 will apply the appropriate group ownership to the file specified
1400 by the option '-o'.
1401
1402 -c ceph.conf, --conf=ceph.conf
1403 Use ceph.conf configuration file instead of the default
1404 /etc/ceph/ceph.conf to determine monitor addresses during
1405 startup.
1406
1407 --id CLIENT_ID, --user CLIENT_ID
1408 Client id for authentication.
1409
1410 --name CLIENT_NAME, -n CLIENT_NAME
1411 Client name for authentication.
1412
1413 --cluster CLUSTER
1414 Name of the Ceph cluster.
1415
1416 --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME
1417 Submit admin-socket commands via admin sockets in /var/run/ceph.
1418
1419 --admin-socket ADMIN_SOCKET_NOPE
1420 You probably mean --admin-daemon
1421
1422 -s, --status
1423 Show cluster status.
1424
1425 -w, --watch
1426 Watch live cluster changes.
1427
1428 --watch-debug
1429 Watch debug events.
1430
1431 --watch-info
1432 Watch info events.
1433
1434 --watch-sec
1435 Watch security events.
1436
1437 --watch-warn
1438 Watch warning events.
1439
1440 --watch-error
1441 Watch error events.
1442
1443 --version, -v
1444 Display version.
1445
1446 --verbose
1447 Make verbose.
1448
1449 --concise
1450 Make less verbose.
1451
1452 -f {json,json-pretty,xml,xml-pretty,plain}, --format
1453 Format of output.
1454
1455 --connect-timeout CLUSTER_TIMEOUT
1456 Set a timeout for connecting to the cluster.
1457
1458 --no-increasing
--no-increasing is off by default, so increasing an OSD's weight is allowed
when using the reweight-by-utilization or test-reweight-by-utilization
commands. If this option is used with these commands, it prevents OSD
weights from being increased even if the OSD is underutilized.
1464
AVAILABILITY
ceph is part of Ceph, a massively scalable, open-source, distributed storage
system. Please refer to the Ceph documentation at http://ceph.com/docs for
more information.
1469
SEE ALSO
ceph-mon(8), ceph-osd(8), ceph-mds(8)
1472
COPYRIGHT
2010-2014, Inktank Storage, Inc. and contributors. Licensed under Creative
Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)
1476
1477
1478
1479
dev                                Apr 14, 2019                        CEPH(8)