BTRFS-BALANCE(8)                  Btrfs Manual                 BTRFS-BALANCE(8)

NAME
   btrfs-balance - balance block groups on a btrfs filesystem

SYNOPSIS
   btrfs balance <subcommand> <args>

DESCRIPTION
   The primary purpose of the balance feature is to spread block groups
   across all devices so they match constraints defined by the respective
   profiles. See mkfs.btrfs(8), section PROFILES, for more details. The
   scope of the balancing process can be further tuned by filters that
   select the block groups to process. Balance works only on a mounted
   filesystem.

   The balance operation is cancellable by the user. The on-disk state of
   the filesystem is always consistent, so an unexpected interruption
   (e.g. system crash, reboot) does not corrupt the filesystem. The
   progress of the balance operation is temporarily stored as an internal
   state and will be resumed upon mount, unless the mount option
   skip_balance is specified.
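
   For example, an interrupted balance can be prevented from resuming
   automatically by mounting with the skip_balance option (the device and
   mount point below are illustrative):

       # mount -o skip_balance /dev/sdx /mnt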

       Warning
       Running balance without filters will take a lot of time, as it
       basically rewrites the entire filesystem and needs to update all
       block pointers.

   The filters can be used to perform the following actions:

   ·   convert block group profiles (filter convert)

   ·   make block group usage more compact (filter usage)

   ·   perform actions only on a given device (filters devid, drange)

   The filters can be applied to a combination of block group types
   (data, metadata, system). Note that changing the system type needs the
   force option.
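
   As a sketch, all three block group types can be converted in one run;
   acting on system chunks requires the force option (the example assumes
   a filesystem with at least two devices, and the path is illustrative):

       # btrfs balance start -f -sconvert=raid1 -mconvert=raid1 -dconvert=raid1 /mnt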

       Note
       The balance operation needs enough work space, i.e. space that is
       completely unused in the filesystem, otherwise this may lead to
       ENOSPC reports. See the section ENOSPC for more details.

COMPATIBILITY

       Note
       The balance subcommand also exists under the btrfs filesystem
       namespace. This still works for backward compatibility but is
       deprecated and should not be used any more.

       Note
       A short syntax, btrfs balance <path>, works due to backward
       compatibility but is deprecated and should not be used any more.
       Use the btrfs balance start command instead.

PERFORMANCE IMPLICATIONS
   Balancing operations are very IO intensive and can also be quite CPU
   intensive, impacting other ongoing filesystem operations. Typically
   large amounts of data are copied from one location to another, with
   corresponding metadata updates.

   Depending on the block group layout, it can also be seek heavy.
   Performance on rotational devices is noticeably worse compared to SSDs
   or fast arrays.

SUBCOMMAND
   cancel <path>
       cancel a running or paused balance; the command will block and
       wait until the block group currently being processed completes

   pause <path>
       pause a running balance operation; this stores the state of the
       balance progress and the filters used to the filesystem

   resume <path>
       resume an interrupted balance; the balance state must be stored on
       the filesystem from a previous run, e.g. after it was forcibly
       interrupted and the filesystem mounted again with skip_balance
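
       For example, a running balance can be paused and later resumed on
       an illustrative mount point as follows:

           # btrfs balance pause /mnt
           # btrfs balance resume /mnt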

   start [options] <path>
       start the balance operation according to the specified filters;
       with no filters, the entire filesystem is rewritten. The process
       runs in the foreground.

           Note
           The balance command without filters will basically rewrite
           everything in the filesystem. The run time is potentially very
           long, depending on the filesystem size. To prevent starting a
           full balance by accident, the user is warned and has a few
           seconds to cancel the operation before it starts. The warning
           and delay can be skipped with the --full-balance option.

       Please note that the filters must be written together with the -d,
       -m and -s options, because they are optional and a bare -d etc.
       also works and means no filters.
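
       For example, filter values are attached directly to the type
       option, with no space in between (the usage values and path are
       illustrative):

           # btrfs balance start -dusage=50 -musage=50 /mnt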

       Options

       -d[<filters>]
           act on data block groups, see the FILTERS section for details
           about filters

       -m[<filters>]
           act on metadata chunks, see the FILTERS section for details
           about filters

       -s[<filters>]
           act on system chunks (requires -f), see the FILTERS section
           for details about filters

       -v
           be verbose and print balance filter arguments

       -f
           force a reduction of metadata integrity, e.g. when going from
           raid1 to single

       --background|--bg
           run the balance operation asynchronously in the background;
           uses fork(2) to start the process that calls the kernel ioctl

   status [-v] <path>
       Show the status of a running or paused balance.

       If the -v option is given, the output will be verbose.
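
       For example, a filtered balance can be started in the background
       and its progress checked afterwards (the usage value and path are
       illustrative):

           # btrfs balance start --bg -dusage=20 /mnt
           # btrfs balance status -v /mnt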

FILTERS
   From kernel 3.3 onwards, btrfs balance can limit its action to a
   subset of the whole filesystem, and can be used to change the
   replication configuration (e.g. moving data from single to RAID1).
   This functionality is accessed through the -d, -m or -s options to
   btrfs balance start, which filter on data, metadata and system blocks
   respectively.

   A filter has the following structure: type[=params][,type=...]

   The available types are:

   profiles=<profiles>
       Balances only block groups with the given profiles. Parameters
       are a list of profile names separated by "|" (pipe).
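
       For example, to act only on data block groups that currently use
       the single or DUP profile (the pipe must be quoted for the shell;
       the path is illustrative):

           # btrfs balance start -dprofiles='single|dup' /mnt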

   usage=<percent>, usage=<range>
       Balances only block groups with usage under the given percentage.
       The value of 0 is allowed and will clean up completely unused
       block groups; this should not require any new work space to be
       allocated. You may want to use usage=0 in case balance is
       returning ENOSPC and your filesystem is not too full.

       The argument may be a single value or a range. The single value N
       means at most N percent used, equivalent to the ..N range syntax.
       Kernels prior to 4.4 accept only the single value format. The
       minimum range boundary is inclusive, the maximum is exclusive.
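
       For example, on kernels 4.4 and newer a range can be given; the
       following processes data block groups that are between 10% and
       50% used (the values and path are illustrative):

           # btrfs balance start -dusage=10..50 /mnt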

   devid=<id>
       Balances only block groups which have at least one chunk on the
       given device. To list devices with ids use btrfs filesystem show.
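
       For example, to balance only block groups that have a chunk on
       the device with id 2 (the id and path are illustrative):

           # btrfs balance start -ddevid=2 /mnt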

   drange=<range>
       Balance only block groups which overlap with the given byte range
       on any device. Use in conjunction with devid to filter on a
       specific device. The parameter is a range specified as start..end.
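
       As a sketch, devid and drange can be combined to target roughly
       the first 10GiB of device 1 (all values and the path are
       illustrative):

           # btrfs balance start -ddevid=1,drange=0..10737418240 /mnt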

   vrange=<range>
       Balance only block groups which overlap with the given byte range
       in the filesystem’s internal virtual address space. This is the
       address space that most reports from btrfs in the kernel log use.
       The parameter is a range specified as start..end.

   convert=<profile>
       Convert each selected block group to the given profile name
       identified by parameters.

           Note
           Starting with kernel 4.5, data chunks can be converted to/from
           the DUP profile on a single device.

           Note
           Starting with kernel 4.6, all profiles can be converted
           to/from DUP on multi-device filesystems.
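
       For example, after adding a second device, the data and metadata
       profiles can be converted to RAID1 (the path is illustrative):

           # btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt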

   limit=<number>, limit=<range>
       Process only the given number of chunks, after all filters are
       applied. This can be used to specifically target a chunk in
       connection with other filters (drange, vrange) or to simply limit
       the amount of work done by a single balance run.

       The argument may be a single value or a range. The single value N
       means at most N chunks, equivalent to the ..N range syntax.
       Kernels prior to 4.4 accept only the single value format. The
       range minimum and maximum are inclusive.
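
       For example, to relocate at most 10 half-empty data chunks in a
       single run (the values and path are illustrative):

           # btrfs balance start -dusage=50,limit=10 /mnt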

   stripes=<range>
       Balance only block groups which have the given number of stripes.
       The parameter is a range specified as start..end. This makes sense
       for block group profiles that utilize striping, i.e. RAID0/10/5/6.
       The range minimum and maximum are inclusive.
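
       For example, to act only on data block groups striped across one
       or two devices (the values and path are illustrative):

           # btrfs balance start -dstripes=1..2 /mnt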

   soft
       Takes no parameters. Only has meaning when converting between
       profiles. When converting from one profile to another with soft
       mode on, chunks that already have the target profile are left
       untouched. This is useful e.g. when half of the filesystem was
       converted earlier but the conversion got cancelled.

       The soft mode switch is (like every other filter) per-type. For
       example, this means that metadata chunks can be converted the
       "hard" way while data chunks are converted selectively with the
       soft switch.
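
       For example, to finish a previously cancelled conversion while
       skipping data chunks that are already RAID1 (the path is
       illustrative):

           # btrfs balance start -dconvert=raid1,soft /mnt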

   Profile names, used in profiles and convert, are one of: raid0,
   raid1, raid10, raid5, raid6, dup, single. The mixed data/metadata
   profiles can be converted in the same way, but conversion between
   mixed and non-mixed profiles is not implemented. For the constraints
   of the profiles please refer to mkfs.btrfs(8), section PROFILES.

ENOSPC
   The way balance operates, it usually needs to temporarily create a new
   block group and move the old data there, before the old block group
   can be removed. For that it needs work space, otherwise it fails with
   ENOSPC. This is not the same ENOSPC as when the free space is
   exhausted. This refers to the space on the level of block groups,
   which are bigger parts of the filesystem that contain many file
   extents.

   The free work space can be calculated from the output of the btrfs
   filesystem show command:

       Label: 'BTRFS' uuid: 8a9d72cd-ead3-469d-b371-9c7203276265
               Total devices 2 FS bytes used 77.03GiB
               devid    1 size 53.90GiB used 51.90GiB path /dev/sdc2
               devid    2 size 53.90GiB used 51.90GiB path /dev/sde1

       size - used = free work space
       53.90GiB - 51.90GiB = 2.00GiB

   An example of a filter that does not require work space is usage=0.
   This will scan through all unused block groups of a given type and
   will reclaim the space. After that it might be possible to run other
   filters.
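
   For example, fully unused data and metadata block groups can be
   reclaimed first, before retrying other filters (the path is
   illustrative):

       # btrfs balance start -dusage=0 -musage=0 /path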

   CONVERSIONS ON MULTIPLE DEVICES
       Conversion to profiles based on striping (RAID0, RAID5/6) requires
       work space on each device. An interrupted balance may leave
       partially filled block groups that consume the work space.

EXAMPLES
   A more comprehensive example of going from one to multiple devices,
   and back, can be found in the section TYPICAL USECASES of
   btrfs-device(8).

   MAKING BLOCK GROUP LAYOUT MORE COMPACT
       The layout of block groups is not normally visible; most tools
       report only summarized numbers of free or used space, but there
       are still some hints provided.

       Let’s use the following real life example and start with the
       output:

           $ btrfs filesystem df /path
           Data, single: total=75.81GiB, used=64.44GiB
           System, RAID1: total=32.00MiB, used=20.00KiB
           Metadata, RAID1: total=15.87GiB, used=8.84GiB
           GlobalReserve, single: total=512.00MiB, used=0.00B

       Roughly calculating for data, 75G - 64G = 11G, the used/total
       ratio is about 85%. How can we interpret that:

       ·   chunks are filled by 85% on average, i.e. the usage filter
           with anything smaller than 85 will likely not affect anything

       ·   in a more realistic scenario, the space is distributed
           unevenly; we can assume there are completely used chunks and
           the remaining are partially filled

       Compacting the layout could be used in both cases. In the former
       case it would spread the data of a given chunk into the others and
       then remove the emptied chunk. Here we can estimate that roughly
       850 MiB of data have to be moved (85% of a 1 GiB chunk).

       In the latter case, targeting the partially used chunks will have
       to move less data and thus will be faster. A typical filter
       command would look like:

           # btrfs balance start -dusage=50 /path
           Done, had to relocate 2 out of 97 chunks

           $ btrfs filesystem df /path
           Data, single: total=74.03GiB, used=64.43GiB
           System, RAID1: total=32.00MiB, used=20.00KiB
           Metadata, RAID1: total=15.87GiB, used=8.84GiB
           GlobalReserve, single: total=512.00MiB, used=0.00B

       As you can see, the total amount of data decreased by just 1 GiB,
       which is an expected result. Let’s see what happens when we
       increase the estimated usage filter.

           # btrfs balance start -dusage=85 /path
           Done, had to relocate 13 out of 95 chunks

           $ btrfs filesystem df /path
           Data, single: total=68.03GiB, used=64.43GiB
           System, RAID1: total=32.00MiB, used=20.00KiB
           Metadata, RAID1: total=15.87GiB, used=8.85GiB
           GlobalReserve, single: total=512.00MiB, used=0.00B

       Now the used/total ratio is about 94% and we moved about
       74G - 68G = 6G of data to the remaining block groups, i.e. the
       6GiB are now free of filesystem structures and can be reused for
       new data or metadata block groups.

       We can do a similar exercise with the metadata block groups, but
       this should not typically be necessary, unless the used/total
       ratio is really off. Here the ratio is roughly 50%, but the
       difference as an absolute number is "a few gigabytes", which can
       be considered normal for a workload with snapshots or reflinks
       updated frequently.

           # btrfs balance start -musage=50 /path
           Done, had to relocate 4 out of 89 chunks

           $ btrfs filesystem df /path
           Data, single: total=68.03GiB, used=64.43GiB
           System, RAID1: total=32.00MiB, used=20.00KiB
           Metadata, RAID1: total=14.87GiB, used=8.85GiB
           GlobalReserve, single: total=512.00MiB, used=0.00B

       Just a 1 GiB decrease, which possibly means there are block groups
       with good utilization. Making the metadata layout more compact
       would in turn require updating more metadata structures, i.e. lots
       of IO. As running out of metadata space is a more severe problem,
       it’s not necessary to keep the utilization ratio too high. For the
       purpose of this example, let’s see the effects of further
       compaction:

           # btrfs balance start -musage=70 /path
           Done, had to relocate 13 out of 88 chunks

           $ btrfs filesystem df .
           Data, single: total=68.03GiB, used=64.43GiB
           System, RAID1: total=32.00MiB, used=20.00KiB
           Metadata, RAID1: total=11.97GiB, used=8.83GiB
           GlobalReserve, single: total=512.00MiB, used=0.00B

   GETTING RID OF COMPLETELY UNUSED BLOCK GROUPS
       Normally the balance operation needs work space to temporarily
       move the data before the old block groups get removed. If there’s
       no work space, it fails with ENOSPC.

       There’s a special case when the block groups are completely
       unused; such block groups may be left behind after removing lots
       of files or deleting snapshots. Removing empty block groups is
       automatic since 3.18. The same can be achieved manually, with the
       notable exception that this operation does not require the work
       space. Thus it can be used to reclaim unused block groups and make
       the space available again.

           # btrfs balance start -dusage=0 /path

       This should lead to a decrease in the total numbers in the btrfs
       filesystem df output.

EXIT STATUS
   Unless indicated otherwise below, all btrfs balance subcommands return
   a zero exit status if they succeed, and non-zero in case of failure.

   The pause, cancel, and resume subcommands exit with a status of 2 if
   they fail because a balance operation was not running.

   The status subcommand exits with a status of 0 if a balance operation
   is not running, 1 if the command-line usage is incorrect or a balance
   operation is still running, and 2 on other errors.
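
   For example, the exit status can be inspected from a shell script (the
   path is illustrative):

       # btrfs balance status /mnt
       # echo $?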

AVAILABILITY
   btrfs is part of btrfs-progs. Please refer to the btrfs wiki
   http://btrfs.wiki.kernel.org for further details.

SEE ALSO
   mkfs.btrfs(8), btrfs-device(8)

Btrfs v5.4                        12/03/2019                  BTRFS-BALANCE(8)