BTRFS-DEVICE(8)                  Btrfs Manual                  BTRFS-DEVICE(8)

NAME
       btrfs-device - manage devices of btrfs filesystems

SYNOPSIS
       btrfs device <subcommand> <args>
DESCRIPTION
       The btrfs device command group is used to manage devices of btrfs
       filesystems.

       A btrfs filesystem can be created on top of single or multiple block
       devices. Data and metadata are organized in allocation profiles with
       various redundancy policies. There's some similarity with traditional
       RAID levels, but this could be confusing to users familiar with the
       traditional meaning. Due to the similarity, the RAID terminology is
       widely used in the documentation. See mkfs.btrfs(8) for more details
       and the exact profile capabilities and constraints.

       Device management works on a mounted filesystem. Devices can be
       added, removed or replaced by the commands provided by btrfs device
       and btrfs replace.

       The profiles can also be changed, provided there's enough workspace
       to do the conversion, using the btrfs balance command, namely the
       convert filter.

       Profile
           A profile describes an allocation policy based on the
           redundancy/replication constraints in connection with the number
           of devices. The profile applies to data and metadata block groups
           separately.

       RAID level
           Where applicable, the level refers to a profile that matches the
           constraints of the standard RAID levels. At the moment the
           supported ones are: RAID0, RAID1, RAID10, RAID5 and RAID6.

       See the section TYPICAL USECASES below for some examples.

SUBCOMMAND
       add [-Kf] <device> [<device>...] <path>
           Add device(s) to the filesystem identified by <path>.

           If applicable, a whole device discard (TRIM) operation is
           performed prior to adding the device. A device with an existing
           filesystem detected by blkid(8) will prevent the device addition
           and has to be forced. Alternatively, the filesystem can be wiped
           from the device, e.g. using the wipefs(8) tool.

           The operation is instant and does not affect existing data. It
           merely adds the device to the filesystem structures and creates
           some block group headers.

           Options

           -K|--nodiscard
               do not perform discard (TRIM) by default

           -f|--force
               force overwrite of an existing filesystem on the given
               disk(s)
       remove <device>|<devid> [<device>|<devid>...] <path>
           Remove device(s) from a filesystem identified by <path>.

           Device removal must satisfy the profile constraints, otherwise
           the command fails. The filesystem must be converted to profile(s)
           that would allow the removal. This can typically happen when
           going down from 2 devices to 1 while using the RAID1 profile.
           See the TYPICAL USECASES section below.

           The operation can take a long time, as it needs to move all data
           from the device.

           It is possible to delete the device that was used to mount the
           filesystem. The device entry in the mount table will be replaced
           by another device name with the lowest device id.

           If the filesystem is mounted in degraded mode (-o degraded), the
           special term missing can be used for <device>. In that case, the
           first device that is described by the filesystem metadata but
           not present at mount time will be removed.

           Note
               In most cases, there is only one missing device in degraded
               mode, otherwise the mount fails. If two or more devices are
               missing (e.g. possible with RAID6), you need to specify
               missing as many times as there are missing devices to remove
               all of them.
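       The note above can be sketched as a dry-run loop; missing_count is an
       assumed value and the commands are only printed, never executed.

```shell
# Dry-run sketch: with N devices absent (e.g. a RAID6 mounted -o degraded),
# the special term 'missing' has to be passed once per absent device.
missing_count=2    # assumption for the sketch: two devices are missing
i=1
while [ "$i" -le "$missing_count" ]; do
    echo "btrfs device remove missing /mnt"    # printed, not executed
    i=$((i + 1))
done
```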

       delete <device>|<devid> [<device>|<devid>...] <path>
           Alias of remove, kept for backward compatibility.

       ready <device>
           Wait until all devices of a multiple-device filesystem are
           scanned and registered within the kernel module. This provides a
           way for automatic filesystem mounting tools to wait before the
           mount can start. The device scan is only one of the
           preconditions, and the mount can fail for other reasons. Normal
           users usually do not need this command and may safely ignore it.

       scan [options] [<device> [<device>...]]
           Scan devices for a btrfs filesystem and register them with the
           kernel module. This allows mounting a multiple-device filesystem
           by specifying just one device from the whole group.

           If no devices are passed, all block devices that blkid reports
           to contain btrfs are scanned.

           The options --all-devices or -d can be used as a fallback in
           case blkid is not available. If used, the behavior is the same
           as if no devices were passed.

           The command can be run repeatedly. Devices that have already
           been registered remain as such. Reloading the kernel module will
           drop this information. There's an alternative way of mounting a
           multiple-device filesystem without the need for prior scanning;
           see the mount option device.

           Options

           -d|--all-devices
               Enumerate and register all devices; use as a fallback in
               case blkid is not available.

           -u|--forget
               Unregister a given device, or all stale devices if no path
               is given; the device must be unmounted, otherwise it's an
               error.
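       Conceptually, a scan with no arguments registers every device that
       blkid reports as btrfs. The sketch below stubs the blkid call so it
       runs anywhere; list_btrfs_devices and its output are hypothetical.

```shell
# Conceptual sketch of 'btrfs device scan' with no arguments.
# list_btrfs_devices is a stand-in; a real system would use:
#   blkid -o device -t TYPE=btrfs
list_btrfs_devices() {
    printf '%s\n' /dev/sda /dev/sdb    # stub output for the sketch
}

for dev in $(list_btrfs_devices); do
    echo "registering $dev with the kernel module"
done
```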

       stats [options] <path>|<device>
           Read and print the device IO error statistics for all devices of
           the given filesystem identified by <path>, or for a single
           <device>. The filesystem must be mounted. See the section DEVICE
           STATS for more information about the reported statistics and
           their meaning.

           Options

           -z|--reset
               Print the stats and reset the values to zero afterwards.

           -c|--check
               Check if the stats are all zeros and return 0 if so. Set bit
               6 of the return code if any of the statistics is non-zero.
               The exit value is 65 if reading stats from at least one
               device failed, otherwise it's 64.
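       The return-code convention above can be decoded with plain shell
       arithmetic. The rc value below is a stand-in (in real use it would
       come from $? after running the command), and reading bit 0 as "stats
       could not be read" is an interpretation of the 64-vs-65 rule.

```shell
# Decode the exit code of 'btrfs device stats -c'.
# rc is hard-coded for the sketch; in real use:
#   btrfs device stats -c /mnt; rc=$?
rc=65
if [ "$rc" -eq 0 ]; then
    echo "all error counters are zero"
fi
if [ $((rc & 64)) -ne 0 ]; then
    echo "bit 6 set: at least one error counter is non-zero"
fi
if [ $((rc & 1)) -ne 0 ]; then
    echo "65 rather than 64: reading stats failed on at least one device"
fi
```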

       usage [options] <path> [<path>...]
           Show detailed information about internal allocations in devices.

           Options

           -b|--raw
               raw numbers in bytes, without the B suffix

           -h|--human-readable
               print human friendly numbers, base 1024, this is the default

           -H
               print human friendly numbers, base 1000

           --iec
               select the 1024 base for the following options, according to
               the IEC standard

           --si
               select the 1000 base for the following options, according to
               the SI standard

           -k|--kbytes
               show sizes in KiB, or kB with --si

           -m|--mbytes
               show sizes in MiB, or MB with --si

           -g|--gbytes
               show sizes in GiB, or GB with --si

           -t|--tbytes
               show sizes in TiB, or TB with --si

           If conflicting options are passed, the last one takes precedence.
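       The difference between the --iec (base 1024) and --si (base 1000)
       bases can be shown with raw byte arithmetic; the 50GiB figure is
       just an example value.

```shell
# Base 1024 (IEC: KiB/MiB/GiB) vs base 1000 (SI: kB/MB/GB) for one size.
bytes=$((50 * 1024 * 1024 * 1024))    # 50 GiB expressed in bytes
echo "raw bytes:            $bytes"
echo "GiB (IEC, base 1024): $((bytes / 1024 / 1024 / 1024))"
echo "GB  (SI,  base 1000): $((bytes / 1000 / 1000 / 1000))"
```

       The same byte count prints as 50 with the IEC base but 53 with the
       SI base, which is why mixing the two in reports is confusing.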

TYPICAL USECASES
   STARTING WITH A SINGLE-DEVICE FILESYSTEM
       Assume we've created a filesystem on a block device /dev/sda with
       profile single/single (data/metadata), the device size is 50GiB and
       we've used the whole device for the filesystem. The mount point is
       /mnt.

       The amount of data stored is 16GiB, while metadata have allocated
       2GiB.

   ADD NEW DEVICE
       We want to increase the total size of the filesystem and keep the
       profiles. The size of the new device /dev/sdb is 100GiB.

           $ btrfs device add /dev/sdb /mnt

       The amount of free data space increases by less than 100GiB, as some
       space is allocated for metadata.

   CONVERT TO RAID1
       Now we want to increase the redundancy level of both data and
       metadata, but we'll do that in steps. Note that the device sizes are
       not equal and we'll use that to show the capabilities of split
       data/metadata and independent profiles.

       The constraint for RAID1 gives us at most 50GiB of usable space, and
       exactly 2 copies will be stored on the devices.

       First we'll convert the metadata. As the metadata occupy less than
       50GiB and there's enough workspace for the conversion process, we
       can do:

           $ btrfs balance start -mconvert=raid1 /mnt

       This operation can take a while, because all metadata have to be
       moved and all block pointers updated. Depending on the physical
       locations of the old and new blocks, disk seeking is the key factor
       affecting performance.

       You'll note that the system block group has also been converted to
       RAID1; this normally happens as the system block group also holds
       metadata (the physical to logical mappings).

       What changed:

       ·   available data space decreased by 3GiB, usable roughly (50 - 3)
           + (100 - 3) = 144 GiB

       ·   metadata redundancy increased

       In other words, the unequal device sizes allow for combined space
       for data yet improved redundancy for metadata. If we decide to
       increase the redundancy of data as well, we're going to lose 50GiB
       of the second device for obvious reasons.

           $ btrfs balance start -dconvert=raid1 /mnt

       The balance process needs some workspace (i.e. free device space
       without any data or metadata block groups), so the command could
       fail if there's too much data or the block groups occupy the whole
       first device.

       The device size of /dev/sdb as seen by the filesystem remains
       unchanged, but the logical space from 50-100GiB will be unused.
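       The space reasoning in this example can be sketched as
       back-of-the-envelope arithmetic, ignoring metadata overhead: under
       RAID1 every chunk needs a copy on two different devices, so data
       capacity is bounded by the smaller device.

```shell
# Rough usable-space arithmetic for the 50GiB + 100GiB example above,
# ignoring metadata overhead.
sda=50
sdb=100
echo "single profile: $((sda + sdb)) GiB of raw data space"
if [ "$sda" -lt "$sdb" ]; then raid1=$sda; else raid1=$sdb; fi
echo "raid1 profile:  $raid1 GiB usable (bounded by the smaller device)"
```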

   REMOVE DEVICE
       Device removal must satisfy the profile constraints, otherwise the
       command fails. For example:

           $ btrfs device remove /dev/sda /mnt
           ERROR: error removing device '/dev/sda': unable to go below two devices on raid1

       In order to remove a device, you need to convert the profile first,
       in this case:

           $ btrfs balance start -mconvert=dup -dconvert=single /mnt
           $ btrfs device remove /dev/sda /mnt

DEVICE STATS
       The device stats keep a persistent record of several error classes
       related to doing IO. The current values are printed at mount time
       and updated during filesystem lifetime or from a scrub run.

           $ btrfs device stats /dev/sda3
           [/dev/sda3].write_io_errs   0
           [/dev/sda3].read_io_errs    0
           [/dev/sda3].flush_io_errs   0
           [/dev/sda3].corruption_errs 0
           [/dev/sda3].generation_errs 0

       write_io_errs
           Failed writes to the block devices; this means that the layers
           beneath the filesystem were not able to satisfy the write
           request.

       read_io_errs
           The read request analogy of write_io_errs.

       flush_io_errs
           Number of failed writes with the FLUSH flag set. Flushing is a
           method of forcing a particular order between write requests and
           is crucial for implementing crash consistency. In the case of
           btrfs, all the metadata blocks must be permanently stored on the
           block device before the superblock is written.

       corruption_errs
           A block checksum mismatched or a corrupted metadata header was
           found.

       generation_errs
           The block generation does not match the expected value (e.g.
           stored in the parent node).
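       A quick way to reduce the per-device counters to a single health
       number is to sum the second column of the output. The sample is
       inlined from the example above so the sketch runs anywhere; in real
       use the command's output would be piped into awk instead.

```shell
# Sum all error counters from 'btrfs device stats'-style output.
# The sample is inlined; in real use:
#   btrfs device stats /mnt | awk '{ sum += $2 } END { print sum }'
sample='[/dev/sda3].write_io_errs 0
[/dev/sda3].read_io_errs 0
[/dev/sda3].flush_io_errs 0
[/dev/sda3].corruption_errs 0
[/dev/sda3].generation_errs 0'

total=$(printf '%s\n' "$sample" | awk '{ sum += $2 } END { print sum }')
echo "total errors: $total"
```

       A non-zero total is a cue to inspect the individual counters and the
       kernel log.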

EXIT STATUS
       btrfs device returns a zero exit status if it succeeds. Non-zero is
       returned in case of failure.

       If the -c option is used, btrfs device stats will add 64 to the exit
       status if any of the error counters is non-zero.

AVAILABILITY
       btrfs is part of btrfs-progs. Please refer to the btrfs wiki
       http://btrfs.wiki.kernel.org for further details.

SEE ALSO
       mkfs.btrfs(8), btrfs-replace(8), btrfs-balance(8)

Btrfs v5.4                        12/03/2019                   BTRFS-DEVICE(8)