OSDMAPTOOL(8)                        Ceph                        OSDMAPTOOL(8)

NAME

       osdmaptool - ceph osd cluster map manipulation tool

SYNOPSIS

       osdmaptool mapfilename [--print] [--createsimple numosd
       [--pg-bits bitsperosd]] [--clobber]

DESCRIPTION

       osdmaptool is a utility that lets you create, view, and manipulate
       OSD cluster maps from the Ceph distributed storage system. Notably,
       it lets you extract the embedded CRUSH map or import a new CRUSH
       map.

OPTIONS

       --print
              will simply make the tool print a plaintext dump of the map,
              after any modifications are made.

       --dump <format>
              displays the map in plain text when <format> is 'plain', and
              in JSON if the specified format is not supported. This is an
              alternative to the --print option.
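
              For example, to dump the map as JSON (osdmap here is a
              placeholder file name):

                 osdmaptool osdmap --dump json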

       --clobber
              will allow osdmaptool to overwrite mapfilename if changes are
              made.

       --import-crush mapfile
              will load the CRUSH map from mapfile and embed it in the OSD
              map.

       --export-crush mapfile
              will extract the CRUSH map from the OSD map and write it to
              mapfile.
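
              A typical round trip, sketched here with placeholder file
              names, exports the CRUSH map, decompiles and recompiles it
              with crushtool(8), and embeds the result back into the OSD
              map:

                 osdmaptool osdmap --export-crush crush.bin
                 crushtool -d crush.bin -o crush.txt
                 # edit crush.txt, then recompile and re-import
                 crushtool -c crush.txt -o crush.bin
                 osdmaptool osdmap --import-crush crush.bin --clobber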

       --createsimple numosd [--pg-bits bitsperosd] [--pgp-bits bits]
              will create a relatively generic OSD map with numosd devices.
              If --pg-bits is specified, the initial placement group counts
              will be set with bitsperosd bits per OSD. That is, the pg_num
              map attribute will be set to numosd shifted left by
              bitsperosd. If --pgp-bits is specified, then the pgp_num map
              attribute will be set to numosd shifted left by bits.
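
              As a sketch of the arithmetic (osdmap is a placeholder file
              name): with 4 OSDs and --pg-bits 6, pg_num would be set to
              4 << 6 = 256:

                 osdmaptool --createsimple 4 --pg-bits 6 osdmap --clobber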

       --create-from-conf
              creates an OSD map using the default Ceph configuration.

       --test-map-pgs [--pool poolid] [--range-first <first> --range-last
       <last>]
              will print out the mappings from placement groups to OSDs.
              If a range is specified, then it iterates from first to last
              in the directory specified by the argument to osdmaptool.
              E.g., osdmaptool --test-map-pgs --range-first 0 --range-last
              2 osdmap_dir will iterate through the files named 0, 1, 2 in
              osdmap_dir.

       --test-map-pgs-dump [--pool poolid] [--range-first <first>
       --range-last <last>]
              will print out the summary of all placement groups and their
              mappings to the mapped OSDs. If a range is specified, then it
              iterates from first to last in the directory specified by the
              argument to osdmaptool. E.g., osdmaptool --test-map-pgs-dump
              --range-first 0 --range-last 2 osdmap_dir will iterate
              through the files named 0, 1, 2 in osdmap_dir.

       --test-map-pgs-dump-all [--pool poolid] [--range-first <first>
       --range-last <last>]
              will print out the summary of all placement groups and their
              mappings to all the OSDs. If a range is specified, then it
              iterates from first to last in the directory specified by the
              argument to osdmaptool. E.g., osdmaptool
              --test-map-pgs-dump-all --range-first 0 --range-last 2
              osdmap_dir will iterate through the files named 0, 1, 2 in
              osdmap_dir.

       --test-random
              does a random mapping of placement groups to the OSDs.

       --test-map-pg <pgid>
              maps a particular placement group (specified by pgid) to the
              OSDs.
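
              For example, to map placement group 7 of pool 0 (pgids take
              the form <pool>.<pg>, and osdmap is a placeholder file name):

                 osdmaptool osdmap --test-map-pg 0.7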

       --test-map-object <objectname> [--pool <poolid>]
              maps a particular object (specified by objectname) to its
              placement group and the OSDs.
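
              For example, to see where an object named foo in pool 0
              would land (foo and osdmap are placeholder names):

                 osdmaptool osdmap --test-map-object foo --pool 0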

       --test-crush [--range-first <first> --range-last <last>]
              maps placement groups to acting OSDs. If a range is
              specified, then it iterates from first to last in the
              directory specified by the argument to osdmaptool. E.g.,
              osdmaptool --test-crush --range-first 0 --range-last 2
              osdmap_dir will iterate through the files named 0, 1, 2 in
              osdmap_dir.

       --mark-up-in
              marks OSDs up and in (but does not persist the change).
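
              OSDs in a freshly created map may not be up and in yet, so
              this flag is often useful before the mapping tests; a sketch
              (osdmap is a placeholder file name):

                 osdmaptool osdmap --mark-up-in --test-map-pgs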

       --tree Displays a hierarchical tree of the map.

       --clear-temp
              clears pg_temp and primary_temp variables.

EXAMPLE

       To create a simple map with 16 devices:

          osdmaptool --createsimple 16 osdmap --clobber

       To view the result:

          osdmaptool --print osdmap

       To view the mappings of placement groups for pool 0 (here the OSD
       map is stored in a file named rbd):

          osdmaptool --test-map-pgs-dump rbd --pool 0

          pool 0 pg_num 8
          0.0     [0,2,1] 0
          0.1     [2,0,1] 2
          0.2     [0,1,2] 0
          0.3     [2,0,1] 2
          0.4     [0,2,1] 0
          0.5     [0,2,1] 0
          0.6     [0,1,2] 0
          0.7     [1,0,2] 1
          #osd    count   first   primary c wt    wt
          osd.0   8       5       5       1       1
          osd.1   8       1       1       1       1
          osd.2   8       2       2       1       1
           in 3
           avg 8 stddev 0 (0x) (expected 2.3094 0.288675x))
           min osd.0 8
           max osd.0 8
          size 0  0
          size 1  0
          size 2  0
          size 3  8

       In this output:

              1. pool 0 has 8 placement groups, and two tables follow.

              2. The first is a table of placement groups. Each row
                 represents a placement group, with columns of:

                 · placement group id,

                 · acting set, and

                 · primary OSD.

              3. The second is a table of all OSDs. Each row represents an
                 OSD, with columns of:

                 · count of placement groups mapped to this OSD,

                 · count of placement groups where this OSD is the first
                   one in their acting sets,

                 · count of placement groups where this OSD is the
                   primary,

                 · the CRUSH weight of this OSD, and

                 · the weight of this OSD.

              4. Looking at the number of placement groups held by the 3
                 OSDs, we have:

                 · average, stddev, stddev/average, expected stddev, and
                   expected stddev/average, and

                 · min and max.

              5. The number of placement groups mapped to n OSDs. In this
                 case, all 8 placement groups are mapped to 3 different
                 OSDs.

       In a less-balanced cluster, we could have the following output for
       the statistics of the placement group distribution, whose standard
       deviation is 1.41421 (the arithmetic is worked through after the
       output):

          #osd    count   first    primary c wt    wt
          osd.0   33      9        9       0.0145874     1
          osd.1   34      14       14      0.0145874     1
          osd.2   31      7        7       0.0145874     1
          osd.3   31      13       13      0.0145874     1
          osd.4   30      14       14      0.0145874     1
          osd.5   33      7        7       0.0145874     1
           in 6
           avg 32 stddev 1.41421 (0.0441942x) (expected 5.16398 0.161374x))
           min osd.4 30
           max osd.1 34
          size 0  0
          size 1  0
          size 2  0
          size 3  64
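
       As a quick check of these statistics (a worked sketch, not tool
       output): the six placement group counts are 33, 34, 31, 31, 30 and
       33, so the average is 192/6 = 32 and the standard deviation is

          sqrt(((33-32)^2 + (34-32)^2 + (31-32)^2 + (31-32)^2
                + (30-32)^2 + (33-32)^2) / 6) = sqrt(12/6) ~= 1.41421

       The expected stddev appears to be the binomial standard deviation
       sqrt(n*p*(1-p)) with n = 192 placements (64 placement groups times
       size 3) and p = 1/6: sqrt(192 * (1/6) * (5/6)) ~= 5.16398.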

AVAILABILITY

       osdmaptool is part of Ceph, a massively scalable, open-source,
       distributed storage system. Please refer to the Ceph documentation
       at http://ceph.com/docs for more information.

SEE ALSO

       ceph(8), crushtool(8)


COPYRIGHT

       2010-2014, Inktank Storage, Inc. and contributors. Licensed under
       Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)

dev                              Apr 29, 2019                    OSDMAPTOOL(8)