BTRFS-DEVICE(8)                   Btrfs Manual                  BTRFS-DEVICE(8)

NAME

       btrfs-device - manage devices of btrfs filesystems

SYNOPSIS

       btrfs device <subcommand> <args>

DESCRIPTION

       The btrfs device command group is used to manage devices of btrfs
       filesystems.

DEVICE MANAGEMENT

       A btrfs filesystem can be created on top of a single block device or
       multiple block devices. Data and metadata are organized in allocation
       profiles with various redundancy policies. There’s some similarity
       with traditional RAID levels, but this could be confusing to users
       familiar with the traditional meaning. Due to the similarity, the
       RAID terminology is widely used in the documentation. See
       mkfs.btrfs(8) for more details and the exact profile capabilities and
       constraints.

       The device management works on a mounted filesystem. Devices can be
       added, removed or replaced by commands provided by btrfs device and
       btrfs replace.

       The profiles can also be changed, provided there’s enough workspace
       to do the conversion, using the btrfs balance command, namely the
       convert filter.

       Profile
           A profile describes an allocation policy based on the
           redundancy/replication constraints in connection with the number
           of devices. The profile applies to data and metadata block groups
           separately.

       RAID level
           Where applicable, the level refers to a profile that matches
           constraints of the standard RAID levels. At the moment the
           supported ones are: RAID0, RAID1, RAID10, RAID5 and RAID6.

       See the section TYPICAL USECASES for some examples.

SUBCOMMAND

       add [-Kf] <device> [<device>...] <path>
           Add device(s) to the filesystem identified by <path>.

           If applicable, a whole device discard (TRIM) operation is
           performed prior to adding the device. A device with an existing
           filesystem detected by blkid(8) will prevent device addition and
           has to be forced. Alternatively the filesystem can be wiped from
           the device using e.g. the wipefs(8) tool.

           The operation is instant and does not affect existing data. The
           operation merely adds the device to the filesystem structures and
           creates some block group headers.

           Options

           -K|--nodiscard
               do not perform discard (TRIM) by default

           -f|--force
               force overwrite of an existing filesystem on the given
               disk(s)

           --enqueue
               wait if there’s another exclusive operation running,
               otherwise continue

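           For example, assuming the new device is /dev/sdb, the filesystem
           is mounted at /mnt, and the device still contains an old
           filesystem that should be overwritten (hence --force):

               $ btrfs device add --force /dev/sdb /mnt
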
       remove [options] <device>|<devid> [<device>|<devid>...] <path>
           Remove device(s) from a filesystem identified by <path>.

           Device removal must satisfy the profile constraints, otherwise
           the command fails. The filesystem must be converted to profile(s)
           that would allow the removal. This can typically happen when
           going down from 2 devices to 1 and using the RAID1 profile. See
           the TYPICAL USECASES section below.

           The operation can take a long time as it needs to move all data
           from the device.

           It is possible to delete the device that was used to mount the
           filesystem. The device entry in the mount table will be replaced
           by another device name with the lowest device id.

           If the filesystem is mounted in degraded mode (-o degraded), the
           special term missing can be used for <device>. In that case, the
           first device that is described by the filesystem metadata but not
           present at mount time will be removed, as shown in the example
           below.

               Note
               In most cases, there is only one missing device in degraded
               mode, otherwise the mount fails. If there are two or more
               devices missing (e.g. possible in RAID6), you need to specify
               missing as many times as the number of missing devices to
               remove all of them.
           Options

           --enqueue
               wait if there’s another exclusive operation running,
               otherwise continue

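           For example, one way to deal with a failed device in a RAID1
           filesystem is to mount degraded, add a replacement and then
           remove the missing device (the device names and mount point are
           only illustrative):

               $ mount -o degraded /dev/sda /mnt
               $ btrfs device add /dev/sdc /mnt
               $ btrfs device remove missing /mnt
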
       delete <device>|<devid> [<device>|<devid>...] <path>
           Alias of remove kept for backward compatibility.

       ready <device>
           Wait until all devices of a multiple-device filesystem are
           scanned and registered within the kernel module. This is to
           provide a way for automatic filesystem mounting tools to wait
           before the mount can start. The device scan is only one of the
           preconditions and the mount can fail for other reasons. Normal
           users usually do not need this command and may safely ignore it.

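           For example, a mounting script could check a device of the
           filesystem before attempting the mount (the device path and
           mount point are only illustrative):

               $ btrfs device ready /dev/sdb && mount /dev/sdb /mnt
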
       scan [options] [<device> [<device>...]]
           Scan devices for a btrfs filesystem and register them with the
           kernel module. This allows mounting of a multiple-device
           filesystem by specifying just one device from the whole group.

           If no devices are passed, all block devices that blkid reports to
           contain btrfs are scanned.

           The options --all-devices or -d can be used as a fallback in case
           blkid is not available. If used, the behavior is the same as if
           no devices are passed.

           The command can be run repeatedly. Devices that have already been
           registered remain as such. Reloading the kernel module will drop
           this information. There’s an alternative way of mounting a
           multiple-device filesystem without the need for prior scanning.
           See the mount option device.

           Options

           -d|--all-devices
               Enumerate and register all devices, use as a fallback in case
               blkid is not available.

           -u|--forget
               Unregister a given device, or all stale devices if no path is
               given; the device must be unmounted, otherwise it’s an error.

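           For example, to register all detected btrfs devices with the
           kernel module, and later unregister one of them again (the device
           path is only illustrative):

               $ btrfs device scan
               $ btrfs device scan --forget /dev/sdb
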
       stats [options] <path>|<device>
           Read and print the device IO error statistics for all devices of
           the given filesystem identified by <path> or for a single
           <device>. The filesystem must be mounted. See section DEVICE
           STATS for more information about the reported statistics and
           their meaning.

           Options

           -z|--reset
               Print the stats and reset the values to zero afterwards.

           -c|--check
               Check if the stats are all zeros and return 0 if it is so.
               Set bit 6 of the return code if any of the statistics is
               non-zero. The error value is 65 if reading stats from at
               least one device failed, otherwise it is 64.

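           For example, to print the counters of all devices of the
           filesystem mounted at /mnt (the mount point is only
           illustrative):

               $ btrfs device stats /mnt
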
       usage [options] <path> [<path>...]
           Show detailed information about internal allocations in devices.

           Options

           -b|--raw
               raw numbers in bytes, without the B suffix

           -h|--human-readable
               print human friendly numbers, base 1024, this is the default

           -H
               print human friendly numbers, base 1000

           --iec
               select the 1024 base for the following options, according to
               the IEC standard

           --si
               select the 1000 base for the following options, according to
               the SI standard

           -k|--kbytes
               show sizes in KiB, or kB with --si

           -m|--mbytes
               show sizes in MiB, or MB with --si

           -g|--gbytes
               show sizes in GiB, or GB with --si

           -t|--tbytes
               show sizes in TiB, or TB with --si

       If conflicting options are passed, the last one takes precedence.

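       For example, assuming a filesystem mounted at /mnt, the per-device
       allocations can be shown with sizes in GiB:

           $ btrfs device usage -g /mnt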

TYPICAL USECASES

   STARTING WITH A SINGLE-DEVICE FILESYSTEM
       Assume we’ve created a filesystem on a block device /dev/sda with
       profile single/single (data/metadata), the device size is 50GiB and
       we’ve used the whole device for the filesystem. The mount point is
       /mnt.

       The amount of data stored is 16GiB, metadata have allocated 2GiB.

       ADD NEW DEVICE
           We want to increase the total size of the filesystem and keep the
           profiles. The size of the new device /dev/sdb is 100GiB.

               $ btrfs device add /dev/sdb /mnt

           The amount of free data space increases by less than 100GiB, some
           space is allocated for metadata.

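           The new allocations can be verified with the usage subcommand
           described above (the mount point is only an illustration):

               $ btrfs device usage /mnt
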
       CONVERT TO RAID1
           Now we want to increase the redundancy level of both data and
           metadata, but we’ll do that in steps. Note that the device sizes
           are not equal and we’ll use that to show the capabilities of
           split data/metadata and independent profiles.

           The constraint for RAID1 gives us at most 50GiB of usable space
           and exactly 2 copies will be stored on the devices.

           First we’ll convert the metadata. As the metadata occupy less
           than 50GiB and there’s enough workspace for the conversion
           process, we can do:

               $ btrfs balance start -mconvert=raid1 /mnt

           This operation can take a while, because all metadata have to be
           moved and all block pointers updated. Depending on the physical
           locations of the old and new blocks, the disk seeking is the key
           factor affecting performance.

           You’ll note that the system block group has also been converted
           to RAID1; this normally happens as the system block group also
           holds metadata (the physical to logical mappings).

           What changed:

           ·   available data space decreased by 3GiB, usable roughly (50 -
               3) + (100 - 3) = 144 GiB

           ·   metadata redundancy increased

           IOW, the unequal device sizes allow for combined space for data
           yet improved redundancy for metadata. If we decide to increase
           redundancy of data as well, we’re going to lose 50GiB of the
           second device for obvious reasons.

               $ btrfs balance start -dconvert=raid1 /mnt

           The balance process needs some workspace (i.e. free device space
           without any data or metadata block groups), so the command could
           fail if there’s too much data or the block groups occupy the
           whole first device.

           The device size of /dev/sdb as seen by the filesystem remains
           unchanged, but the logical space from 50-100GiB will be unused.

       REMOVE DEVICE
           Device removal must satisfy the profile constraints, otherwise
           the command fails. For example:

               $ btrfs device remove /dev/sda /mnt
               ERROR: error removing device '/dev/sda': unable to go below two devices on raid1

           In order to remove a device, you need to convert the profile in
           this case:

               $ btrfs balance start -mconvert=dup -dconvert=single /mnt
               $ btrfs device remove /dev/sda /mnt

DEVICE STATS

       The device stats keep a persistent record of several error classes
       related to doing IO. The current values are printed at mount time and
       updated during filesystem lifetime or from a scrub run.

           $ btrfs device stats /dev/sda3
           [/dev/sda3].write_io_errs   0
           [/dev/sda3].read_io_errs    0
           [/dev/sda3].flush_io_errs   0
           [/dev/sda3].corruption_errs 0
           [/dev/sda3].generation_errs 0

       write_io_errs
           Failed writes to the block devices; this means that the layers
           beneath the filesystem were not able to satisfy the write
           request.

       read_io_errs
           Read request analogy to write_io_errs.

       flush_io_errs
           Number of failed writes with the FLUSH flag set. The flushing is
           a method of forcing a particular order between write requests and
           is crucial for implementing crash consistency. In case of btrfs,
           all the metadata blocks must be permanently stored on the block
           device before the superblock is written.

       corruption_errs
           A block checksum mismatched or a corrupted metadata header was
           found.

       generation_errs
           The block generation does not match the expected value (e.g.
           stored in the parent node).

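       Once the cause of the errors has been dealt with (e.g. the device has
       been replaced), the counters can be printed one more time and reset
       using the -z option of btrfs device stats (the device path is only
       illustrative):

           $ btrfs device stats -z /dev/sda3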

EXIT STATUS

       btrfs device returns a zero exit status if it succeeds. Non-zero is
       returned in case of failure.

       If the -c option is used, btrfs device stats will add 64 to the exit
       status if any of the error counters is non-zero.

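       For example, a script can use the --check option to detect errors
       without parsing the output (the mount point /mnt is only
       illustrative); the exit status is 0 when all counters are zero:

           $ btrfs device stats --check /mnt
           $ echo $?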

AVAILABILITY

       btrfs is part of btrfs-progs. Please refer to the btrfs wiki
       http://btrfs.wiki.kernel.org for further details.

SEE ALSO

       mkfs.btrfs(8), btrfs-replace(8), btrfs-balance(8)

Btrfs v5.10                       01/18/2021                   BTRFS-DEVICE(8)