CEPH-BLUESTORE-TOOL(8)               Ceph               CEPH-BLUESTORE-TOOL(8)

NAME

       ceph-bluestore-tool - bluestore administrative tool

SYNOPSIS

       ceph-bluestore-tool command
       [ --dev device ... ]
       [ --path osd path ]
       [ --out-dir dir ]
       [ --log-file | -l filename ]
       [ --deep ]
       ceph-bluestore-tool fsck|repair --path osd path [ --deep ]
       ceph-bluestore-tool show-label --dev device ...
       ceph-bluestore-tool prime-osd-dir --dev device --path osd path
       ceph-bluestore-tool bluefs-export --path osd path --out-dir dir
       ceph-bluestore-tool bluefs-bdev-new-wal --path osd path --dev-target new-device
       ceph-bluestore-tool bluefs-bdev-new-db --path osd path --dev-target new-device
       ceph-bluestore-tool bluefs-bdev-migrate --path osd path --dev-target new-device --devs-source device1 [--devs-source device2]
       ceph-bluestore-tool free-dump|free-score --path osd path [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]


DESCRIPTION

       ceph-bluestore-tool is a utility to perform low-level administrative
       operations on a BlueStore instance.

COMMANDS

       help
          show help

       fsck [ --deep ]
          Run a consistency check on BlueStore metadata. If --deep is
          specified, also read all object data and verify checksums.

       repair
          Run a consistency check and repair any errors it can.

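A common workflow is to run a shallow fsck first and attempt a repair only if it reports errors. The sketch below models that workflow; the OSD id, the path layout, and the systemctl service name are assumptions, and the `run` helper only prints each command (a dry run) rather than executing it:

```shell
#!/bin/sh
# Dry-run sketch: "run" echoes each command instead of executing it.
# Remove the echo (or call the tools directly) to run for real.
run() { echo "$@"; }

OSD_ID=0                                   # assumed OSD id
OSD_PATH=/var/lib/ceph/osd/ceph-$OSD_ID    # assumed path layout

# The OSD must be stopped so the tool can open the store exclusively.
run systemctl stop ceph-osd@$OSD_ID

# Shallow fsck first; add --deep to also verify object data checksums.
if ! run ceph-bluestore-tool fsck --path $OSD_PATH; then
    # fsck reported errors: attempt a repair, then re-check deeply.
    run ceph-bluestore-tool repair --path $OSD_PATH
    run ceph-bluestore-tool fsck --path $OSD_PATH --deep
fi

run systemctl start ceph-osd@$OSD_ID
```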
       bluefs-export
          Export the contents of BlueFS (i.e., RocksDB files) to an output
          directory.

       bluefs-bdev-sizes --path osd path
          Print the device sizes, as understood by BlueFS, to stdout.

       bluefs-bdev-expand --path osd path
          Instruct BlueFS to check the size of its block devices and, if
          they have expanded, make use of the additional space. Note that
          only new files created by BlueFS will be allocated on the
          preferred block device if it has enough free space; existing
          files that have spilled over to the slow device will be gradually
          removed as RocksDB performs compaction. In other words, any data
          that has spilled over to the slow device will be moved to the
          fast device over time.

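In practice the underlying volume has to be grown first, after which bluefs-bdev-expand picks up the new size. The sketch below shows one plausible sequence, assuming an LVM-backed DB volume; the VG/LV names and sizes are hypothetical, and `run` only echoes the commands (dry run):

```shell
#!/bin/sh
# Dry-run sketch of growing a DB volume and letting BlueFS use it.
# The VG/LV names below are made up for illustration.
run() { echo "$@"; }

OSD_ID=0
OSD_PATH=/var/lib/ceph/osd/ceph-$OSD_ID

run systemctl stop ceph-osd@$OSD_ID
# Grow the underlying logical volume first (hypothetical VG/LV names).
run lvextend -L +20G /dev/ceph-db-vg/osd-0-db
# Show what BlueFS currently believes the device sizes are...
run ceph-bluestore-tool bluefs-bdev-sizes --path $OSD_PATH
# ...then tell it to pick up the extra space.
run ceph-bluestore-tool bluefs-bdev-expand --path $OSD_PATH
run systemctl start ceph-osd@$OSD_ID
```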
       bluefs-bdev-new-wal --path osd path --dev-target new-device
          Adds a WAL device to BlueFS; fails if a WAL device already exists.

       bluefs-bdev-new-db --path osd path --dev-target new-device
          Adds a DB device to BlueFS; fails if a DB device already exists.

       bluefs-bdev-migrate --dev-target new-device --devs-source device1
       [--devs-source device2]
          Moves BlueFS data from the source device(s) to the target device;
          source devices (except the main one) are removed on success. The
          target device can be an already attached device or a new one. In
          the latter case it is added to the OSD, replacing one of the
          source devices. The following replacement rules apply (in order
          of precedence; stop on the first match):

              · If the source list has a DB volume, the target device
                replaces it.

              · If the source list has a WAL volume, the target device
                replaces it.

              · If the source list has only the slow volume, the operation
                is not permitted and requires explicit allocation via the
                new-db/new-wal commands.

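The precedence rules above can be modeled as a first-match decision. This small function is an illustration of the documented rules only; it does not query a real OSD:

```shell
#!/bin/sh
# Model of the replacement-rule precedence for bluefs-bdev-migrate.
# Input: the volume types present in the source list (db, wal, slow).
# Output: which volume the target device would replace, or "denied".
replacement_for() {
    case " $* " in
        *" db "*)   echo "db" ;;      # a DB volume wins first
        *" wal "*)  echo "wal" ;;     # then a WAL volume
        *" slow "*) echo "denied" ;;  # slow-only: not permitted; use
                                      # new-db/new-wal explicitly instead
        *)          echo "none" ;;
    esac
}

replacement_for db wal slow    # -> db
replacement_for wal slow       # -> wal
replacement_for slow           # -> denied
```

The `case` statement's first-match semantics mirror the "stop on the first match" rule directly.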
       show-label --dev device [...]
          Show device label(s).

       free-dump --path osd path [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]
          Dump all free regions in the allocator.

       free-score --path osd path [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]
          Give a [0-1] number that represents the quality of fragmentation
          in the allocator. 0 represents the case when all free space is in
          one chunk; 1 represents the worst possible fragmentation.

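To build intuition for what such a score measures, here is a toy metric over a list of free-extent sizes. This is NOT BlueStore's actual formula, merely a simple score with the same endpoint behavior: 0 when all free space is one chunk, approaching 1 as the space shatters into many small pieces:

```shell
#!/bin/sh
# Toy [0-1] fragmentation score over free-extent sizes (bytes).
# Illustrative only; BlueStore computes its score differently.
frag_score() {
    printf '%s\n' "$@" | awk '
        { total += $1; if ($1 > largest) largest = $1 }
        END { if (total == 0) print "0"; else print 1 - largest / total }'
}

frag_score 1048576                # one chunk         -> 0
frag_score 4096 4096 4096 4096    # four equal chunks -> 0.75
```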

OPTIONS

       --dev *device*
              Add device to the list of devices to consider.

       --devs-source *device*
              Add device to the list of devices to consider as sources for
              the migrate operation.

       --dev-target *device*
              Specify the target device for the migrate operation, or the
              device to add when creating a new DB/WAL.

       --path *osd path*
              Specify an osd path. In most cases, the device list is
              inferred from the symlinks present in osd path. This is
              usually simpler than explicitly specifying the device(s)
              with --dev.

       --out-dir *dir*
              Output directory for bluefs-export.

       -l, --log-file *log file*
              File to log to.

       --log-level *num*
              Debug log level. Default is 30 (extremely verbose); 20 is
              very verbose, 10 is verbose, and 1 is not very verbose.

       --deep
              Deep scrub/repair (read and validate object data, not just
              metadata).

       --allocator *name*
              Useful for the free-dump and free-score actions. Selects the
              allocator(s).


DEVICE LABELS

       Every BlueStore block device has a single block label at the
       beginning of the device. You can dump the contents of the label
       with:

          ceph-bluestore-tool show-label --dev *device*

       The main device will have a lot of metadata, including information
       that used to be stored in small files in the OSD data directory.
       The auxiliary devices (db and wal) will only have the minimum
       required fields (OSD UUID, size, device type, birth time).


OSD DIRECTORY PRIMING

       You can generate the content for an OSD data directory that can
       start up a BlueStore OSD with the prime-osd-dir command:

          ceph-bluestore-tool prime-osd-dir --dev *main device* --path /var/lib/ceph/osd/ceph-*id*


BLUEFS LOG RESCUE

       Some versions of BlueStore were susceptible to the BlueFS log
       growing extremely large, beyond the point where booting the OSD
       became impossible. This state is indicated by a boot that takes a
       very long time and fails in the _replay function.

       This can be fixed by:

          ceph-bluestore-tool fsck --path osd path --bluefs_replay_recovery=true

       It is advised to first check whether the rescue process would be
       successful:

          ceph-bluestore-tool fsck --path osd path --bluefs_replay_recovery=true --bluefs_replay_recovery_disable_compact=true

       If the above fsck is successful, the fix procedure can be applied.


AVAILABILITY

       ceph-bluestore-tool is part of Ceph, a massively scalable,
       open-source, distributed storage system. Please refer to the Ceph
       documentation at http://ceph.com/docs for more information.


SEE ALSO

       ceph-osd(8)

       2010-2021, Inktank Storage, Inc. and contributors. Licensed under
       Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)

dev                              Mar 18, 2021           CEPH-BLUESTORE-TOOL(8)