OSDMAPTOOL(8)                          Ceph                         OSDMAPTOOL(8)


NAME
       osdmaptool - ceph osd cluster map manipulation tool

SYNOPSIS
       osdmaptool mapfilename [--print] [--createsimple numosd
       [--pgbits bitsperosd]] [--clobber]

DESCRIPTION
       osdmaptool is a utility that lets you create, view, and manipulate
       OSD cluster maps from the Ceph distributed storage system. Notably,
       it lets you extract the embedded CRUSH map or import a new CRUSH map.

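       For example, the embedded CRUSH map can be exported, edited with
       crushtool(8), and loaded back into the OSD map (a sketch: osdmap is
       an existing OSD map file and crush.bin an arbitrary output name):

          osdmaptool osdmap --export-crush crush.bin
          osdmaptool osdmap --import-crush crush.bin --clobber
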
OPTIONS
       --print
              will simply make the tool print a plaintext dump of the map,
              after any modifications are made.

       --clobber
              will allow osdmaptool to overwrite mapfilename if changes are
              made.

       --import-crush mapfile
              will load the CRUSH map from mapfile and embed it in the OSD
              map.

       --export-crush mapfile
              will extract the CRUSH map from the OSD map and write it to
              mapfile.

       --createsimple numosd [--pgbits bitsperosd]
              will create a relatively generic OSD map with numosd devices.
              If --pgbits is specified, the initial placement group counts
              will be set with bitsperosd bits per OSD; that is, the pg_num
              map attribute will be set to numosd shifted left by bitsperosd
              bits (see the example following this list).

       --test-map-pgs [--pool poolid]
              will print out the mappings from placement groups to OSDs.

       --test-map-pgs-dump [--pool poolid]
              will print out a summary of all placement groups and their
              mappings to OSDs.

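       As a sketch of the --pgbits arithmetic (the map file name osdmap is
       arbitrary), creating a map with 16 OSDs and 6 bits per OSD sets
       pg_num to 16 shifted left by 6 bits:

          osdmaptool osdmap --createsimple 16 --pgbits 6 --clobber

       so pg_num = 16 << 6 = 1024.
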
EXAMPLE
       To create a simple map with 16 devices:

          osdmaptool --createsimple 16 osdmap --clobber

       To view the result:

          osdmaptool --print osdmap

       To view the mappings of placement groups for pool 0:

          osdmaptool --test-map-pgs-dump rbd --pool 0

          pool 0 pg_num 8
          0.0   [0,2,1]  0
          0.1   [2,0,1]  2
          0.2   [0,1,2]  0
          0.3   [2,0,1]  2
          0.4   [0,2,1]  0
          0.5   [0,2,1]  0
          0.6   [0,1,2]  0
          0.7   [1,0,2]  1
          #osd   count  first  primary  c wt  wt
          osd.0      8      5        5     1   1
          osd.1      8      1        1     1   1
          osd.2      8      2        2     1   1
          in 3
          avg 8 stddev 0 (0x) (expected 2.3094 0.288675x))
          min osd.0 8
          max osd.0 8
          size 0  0
          size 1  0
          size 2  0
          size 3  8

       In this output:

       1. pool 0 has 8 placement groups, and two tables follow:

       2. A table of placement groups. Each row represents one placement
          group, with columns of:

          · placement group id,

          · acting set, and

          · primary OSD.

       3. A table of all OSDs. Each row represents one OSD, with columns of:

          · count of placement groups mapped to this OSD,

          · count of placement groups where this OSD is the first one in
            their acting sets,

          · count of placement groups where this OSD is the primary,

          · the CRUSH weight of this OSD, and

          · the weight of this OSD.

       4. Statistics for the number of placement groups held by the 3 OSDs
          (see the note following this list for how the expected figures
          relate to these numbers):

          · average, standard deviation, stddev/average, expected stddev,
            and expected stddev/average

          · min and max

       5. The number of placement groups mapped to n OSDs. In this case,
          all 8 placement groups are mapped to 3 different OSDs.

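       The expected figures appear to be the binomial standard deviation of
       spreading all PG-to-OSD mappings uniformly over the in OSDs. With
       pg_num × pool size = 8 × 3 = 24 mappings and n = 3 OSDs:

          expected stddev = sqrt(24 × (1/3) × (1 − 1/3)) ≈ 2.3094
          expected stddev / average = 2.3094 / 8 ≈ 0.288675x
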

       In a less-balanced cluster, we could have the following output for
       the statistics of placement group distribution, whose standard
       deviation is 1.41421:

          #osd   count  first  primary  c wt        wt
          osd.0     33      9        9  0.0145874    1
          osd.1     34     14       14  0.0145874    1
          osd.2     31      7        7  0.0145874    1
          osd.3     31     13       13  0.0145874    1
          osd.4     30     14       14  0.0145874    1
          osd.5     33      7        7  0.0145874    1
          in 6
          avg 32 stddev 1.41421 (0.0441942x) (expected 5.16398 0.161374x))
          min osd.4 30
          max osd.1 34
          size 0  0
          size 1  0
          size 2  0
          size 3  64

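       The reported standard deviation here appears to be the population
       standard deviation of the per-OSD counts (mean 32):

          stddev = sqrt(((33-32)^2 + (34-32)^2 + (31-32)^2 + (31-32)^2
                          + (30-32)^2 + (33-32)^2) / 6)
                 = sqrt(12 / 6) ≈ 1.41421
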
AVAILABILITY
       osdmaptool is part of Ceph, a massively scalable, open-source,
       distributed storage system. Please refer to the Ceph documentation at
       http://ceph.com/docs for more information.

SEE ALSO
       ceph(8), crushtool(8)

COPYRIGHT
       2010-2014, Inktank Storage, Inc. and contributors. Licensed under
       Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)

dev                                Apr 14, 2019                     OSDMAPTOOL(8)