BTRFS-BALANCE(8)                 Btrfs Manual                 BTRFS-BALANCE(8)


NAME
   btrfs-balance - balance block groups on a btrfs filesystem

SYNOPSIS
   btrfs balance <subcommand> <args>

DESCRIPTION
   The primary purpose of the balance feature is to spread block groups
   across all devices so they match constraints defined by the
   respective profiles. See mkfs.btrfs(8) section PROFILES for more
   details. The scope of the balancing process can be further tuned by
   use of filters that can select the block groups to process. Balance
   works only on a mounted filesystem.

   The balance operation is cancellable by the user. The on-disk state
   of the filesystem is always consistent, so an unexpected interruption
   (eg. system crash, reboot) does not corrupt the filesystem. The
   progress of the balance operation is temporarily stored and will be
   resumed upon mount, unless the mount option skip_balance is
   specified.

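   For instance, a hedged sketch of interrupting and later resuming a
   balance (the device and mount point below are hypothetical; the
   commands require a real btrfs filesystem):

```shell
# Mount with the stored balance temporarily disabled, inspect, then
# resume. /dev/sdx and /mnt are placeholders.
mount -o skip_balance /dev/sdx /mnt
btrfs balance status /mnt     # shows the interrupted/paused state
btrfs balance resume /mnt     # continue from the stored progress
```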
   Warning
       running balance without filters will take a lot of time as it
       basically rewrites the entire filesystem and needs to update all
       block pointers.

   The filters can be used to perform the following actions:

   ·   convert block group profiles (filter convert)

   ·   make block group usage more compact (filter usage)

   ·   perform actions only on a given device (filters devid, drange)

   The filters can be applied to a combination of block group types
   (data, metadata, system). Note that changing the profile of the
   system type needs the force option (-f).

   Note
       the balance operation needs enough work space, ie. space that is
       completely unused in the filesystem, otherwise this may lead to
       ENOSPC reports. See the section ENOSPC for more details.

   Note
       The balance subcommand also exists under the btrfs filesystem
       namespace. This still works for backward compatibility but is
       deprecated and should not be used anymore.

   Note
       A short syntax btrfs balance <path> works due to backward
       compatibility but is deprecated and should not be used anymore.
       Use the btrfs balance start command instead.

PERFORMANCE IMPLICATIONS
   The balance operation is intense mainly with respect to IO, but it
   can also be CPU intensive. It affects other actions on the
   filesystem. Typically, large amounts of data are copied from one
   location to another and many metadata blocks are updated.

   Depending on the actual block group layout, it can also be
   seek-heavy. The performance on rotational devices is noticeably worse
   than on SSDs or fast arrays.

SUBCOMMAND
   cancel <path>
       cancel a running or paused balance, the command will block and
       wait until the currently processed block group is finished

   pause <path>
       pause a running balance operation, this will store the state of
       the balance progress and the used filters to the filesystem

   resume <path>
       resume an interrupted balance, the balance state must be stored
       on the filesystem from a previous run, eg. after it was forcibly
       interrupted and mounted again with skip_balance

   start [options] <path>
       start the balance operation according to the specified filters;
       with no filters the entire filesystem is rewritten. The process
       runs in the foreground.

       Note
           the balance command without filters will basically rewrite
           everything in the filesystem. The run time is potentially
           very long, depending on the filesystem size. To prevent
           starting a full balance by accident, the user is warned and
           has a few seconds to cancel the operation before it starts.
           The warning and delay can be skipped with the --full-balance
           option.

       Please note that filters must be written immediately after the
       -d, -m and -s options, without a space, because the filter
       argument is optional and a bare -d etc. also works and means no
       filters.
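   For example (the mount point /mnt is hypothetical; both forms need a
   mounted btrfs filesystem):

```shell
# Correct: the filter string is attached directly to the option.
btrfs balance start -dusage=50 /mnt

# Wrong: after a space, the bare -d means "data, no filters" and
# usage=50 is not treated as a filter:
#   btrfs balance start -d usage=50 /mnt
```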

       Options

       -d[<filters>]
           act on data block groups, see the FILTERS section for details
           about filters

       -m[<filters>]
           act on metadata chunks, see the FILTERS section for details
           about filters

       -s[<filters>]
           act on system chunks (requires -f), see the FILTERS section
           for details about filters

       -v
           be verbose and print the balance filter arguments

       -f
           force reducing of metadata integrity, eg. when going from
           raid1 to single

       --background|--bg
           run the balance operation asynchronously in the background,
           uses fork(2) to start the process that calls the kernel ioctl

   status [-v] <path>
       Show the status of a running or paused balance.

       If the -v option is given, the output will be verbose.

FILTERS
   From kernel 3.3 onwards, btrfs balance can limit its action to a
   subset of the whole filesystem, and can be used to change the
   replication configuration (e.g. moving data from single to RAID1).
   This functionality is accessed through the -d, -m or -s options to
   btrfs balance start, which filter on data, metadata and system blocks
   respectively.

   A filter has the following structure: type[=params][,type=...]

   The available types are:

   profiles=<profiles>
       Balances only block groups with the given profiles. Parameters
       are a list of profile names separated by "|" (pipe).

   usage=<percent>, usage=<range>
       Balances only block groups with usage under the given percentage.
       The value of 0 is allowed and will clean up completely unused
       block groups; this should not require any new work space to be
       allocated. You may want to use usage=0 in case balance is
       returning ENOSPC and your filesystem is not too full.

       The argument may be a single value or a range. The single value N
       means at most N percent used, equivalent to the ..N range syntax.
       Kernels prior to 4.4 accept only the single value format. The
       minimum range boundary is inclusive, the maximum is exclusive.

   devid=<id>
       Balances only block groups which have at least one chunk on the
       given device. To list devices with their ids, use btrfs fi show.

   drange=<range>
       Balances only block groups which overlap with the given byte
       range on any device. Use in conjunction with devid to filter on a
       specific device. The parameter is a range specified as
       start..end.

   vrange=<range>
       Balances only block groups which overlap with the given byte
       range in the filesystem’s internal virtual address space. This is
       the address space that most reports from btrfs in the kernel log
       use. The parameter is a range specified as start..end.

   convert=<profile>
       Converts each selected block group to the profile given as the
       parameter.

       Note
           starting with kernel 4.5, the data chunks can be converted
           to/from the DUP profile on a single device.

       Note
           starting with kernel 4.6, all profiles can be converted
           to/from DUP on multi-device filesystems.

   limit=<number>, limit=<range>
       Process only the given number of chunks, after all other filters
       are applied. This can be used to specifically target a chunk in
       connection with other filters (drange, vrange) or simply to limit
       the amount of work done by a single balance run.

       The argument may be a single value or a range. The single value N
       means at most N chunks, equivalent to the ..N range syntax.
       Kernels prior to 4.4 accept only the single value format. The
       range minimum and maximum are inclusive.

   stripes=<range>
       Balances only block groups which have the given number of
       stripes. The parameter is a range specified as start..end. Makes
       sense for block group profiles that utilize striping, ie.
       RAID0/10/5/6. The range minimum and maximum are inclusive.

   soft
       Takes no parameters. Only has meaning when converting between
       profiles. When doing a convert from one profile to another and
       soft mode is on, chunks that already have the target profile are
       left untouched. This is useful e.g. when half of the filesystem
       was converted earlier but the operation got cancelled.

       The soft mode switch is (like every other filter) per-type. For
       example, this means that we can convert metadata chunks the
       "hard" way while converting data chunks selectively with the soft
       switch.

   Profile names, used in profiles and convert, are one of: raid0,
   raid1, raid10, raid5, raid6, dup, single. The mixed data/metadata
   profiles can be converted in the same way, but conversion between
   mixed and non-mixed profiles is not implemented. For the constraints
   of the profiles please refer to mkfs.btrfs(8), section PROFILES.
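   To illustrate the filter syntax described above, a few hedged
   examples (the mount point and device id are hypothetical; all of
   these require a mounted btrfs filesystem):

```shell
# Data block groups at most 30% full (range syntax, kernel 4.4+).
btrfs balance start -dusage=0..30 /mnt

# Block groups with the single or raid0 profile; quote the pipe so
# the shell does not interpret it.
btrfs balance start -dprofiles='single|raid0' /mnt

# Convert data to raid1, skip chunks already converted (soft), and
# process at most 10 chunks in this run.
btrfs balance start -dconvert=raid1,soft,limit=10 /mnt

# Only block groups with at least one chunk on device 2.
btrfs balance start -ddevid=2 /mnt
```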

ENOSPC
   The way balance operates, it usually needs to temporarily create a
   new block group and move the old data there. For that it needs work
   space, otherwise it fails with ENOSPC. This is not the same ENOSPC
   as when the free space is exhausted; it refers to space at the level
   of block groups.

   The free work space can be calculated from the output of the btrfs
   filesystem show command:

       Label: 'BTRFS'  uuid: 8a9d72cd-ead3-469d-b371-9c7203276265
               Total devices 2 FS bytes used 77.03GiB
               devid    1 size 53.90GiB used 51.90GiB path /dev/sdc2
               devid    2 size 53.90GiB used 51.90GiB path /dev/sde1

   size - used = free work space: 53.90GiB - 51.90GiB = 2.00GiB

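   The size - used arithmetic can also be scripted. A minimal sketch,
   fed with the sample lines shown above rather than a live filesystem
   (a real script would parse actual btrfs fi show output):

```shell
# Compute free work space (size - used) per device from the sample
# "btrfs fi show" lines of the example above.
free=$(printf '%s\n' \
    'devid 1 size 53.90GiB used 51.90GiB path /dev/sdc2' \
    'devid 2 size 53.90GiB used 51.90GiB path /dev/sde1' |
  awk '{ gsub(/GiB/, "", $4); gsub(/GiB/, "", $6)
         printf "devid %s free %.2fGiB\n", $2, $4 - $6 }')
echo "$free"
```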
   An example of a filter that does not require work space is usage=0.
   This will scan through all unused block groups of a given type and
   will reclaim the space. After that it might be possible to run other
   filters.

   CONVERSIONS ON MULTIPLE DEVICES
       Conversion to profiles based on striping (RAID0, RAID5/6)
       requires work space on each device. An interrupted balance may
       leave partially filled block groups that might consume the work
       space.
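       As a hedged sketch (the mount point /mnt is hypothetical and a
       second device must already have been added), a typical
       multi-device conversion might be:

```shell
# Convert existing data and metadata chunks to raid1 across devices.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```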

EXAMPLES
   A more comprehensive example when going from one to multiple devices,
   and back, can be found in section TYPICAL USECASES of
   btrfs-device(8).

   MAKING BLOCK GROUP LAYOUT MORE COMPACT
       The layout of block groups is not normally visible; most tools
       report only summarized numbers of free or used space, but there
       are still some hints provided.

       Let’s use the following real life example and start with the
       output:

           $ btrfs fi df /path
           Data, single: total=75.81GiB, used=64.44GiB
           System, RAID1: total=32.00MiB, used=20.00KiB
           Metadata, RAID1: total=15.87GiB, used=8.84GiB
           GlobalReserve, single: total=512.00MiB, used=0.00B

       Roughly calculating for data, 75G - 64G = 11G, the used/total
       ratio is about 85%. How can we interpret that:

       ·   chunks are filled by 85% on average, ie. the usage filter
           with anything smaller than 85 will likely not affect anything

       ·   in a more realistic scenario, the space is distributed
           unevenly; we can assume there are completely used chunks and
           that the remaining ones are partially filled

       Compacting the layout can help in both cases. In the former case
       it would spread the data of a given chunk to the others and then
       remove it. Here we can estimate that roughly 850 MiB of data have
       to be moved (85% of a 1 GiB chunk).
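       The 85% figure can be reproduced directly from the numbers in the
       example output above:

```shell
# Average data block group usage from the example df output above.
ratio=$(awk 'BEGIN { printf "%.0f", 64.44 / 75.81 * 100 }')
echo "average data block group usage: ${ratio}%"
```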

       In the latter case, targeting the partially used chunks will have
       to move less data and thus will be faster. A typical filter
       command would look like this:

           # btrfs balance start -dusage=50 /path
           Done, had to relocate 2 out of 97 chunks

           $ btrfs fi df /path
           Data, single: total=74.03GiB, used=64.43GiB
           System, RAID1: total=32.00MiB, used=20.00KiB
           Metadata, RAID1: total=15.87GiB, used=8.84GiB
           GlobalReserve, single: total=512.00MiB, used=0.00B

       As you can see, the total amount of data has decreased by nearly
       2 GiB (75.81GiB to 74.03GiB), which is an expected result. Let’s
       see what will happen when we increase the estimated usage filter.

           # btrfs balance start -dusage=85 /path
           Done, had to relocate 13 out of 95 chunks

           $ btrfs fi df /path
           Data, single: total=68.03GiB, used=64.43GiB
           System, RAID1: total=32.00MiB, used=20.00KiB
           Metadata, RAID1: total=15.87GiB, used=8.85GiB
           GlobalReserve, single: total=512.00MiB, used=0.00B

       Now the used/total ratio is about 94% and we have moved about
       74G - 68G = 6G of data to the remaining block groups, ie. the
       6GiB are now free of filesystem structures and can be reused for
       new data or metadata block groups.

       We can do a similar exercise with the metadata block groups, but
       this should not typically be necessary, unless the used/total
       ratio is really off. Here the ratio is roughly 50%, but the
       difference as an absolute number is "a few gigabytes", which can
       be considered normal for a workload with snapshots or reflinks
       updated frequently.

           # btrfs balance start -musage=50 /path
           Done, had to relocate 4 out of 89 chunks

           $ btrfs fi df /path
           Data, single: total=68.03GiB, used=64.43GiB
           System, RAID1: total=32.00MiB, used=20.00KiB
           Metadata, RAID1: total=14.87GiB, used=8.85GiB
           GlobalReserve, single: total=512.00MiB, used=0.00B

       Just a 1 GiB decrease, which possibly means there are block
       groups with good utilization. Making the metadata layout more
       compact would in turn require updating more metadata structures,
       ie. lots of IO. As running out of metadata space is a more severe
       problem, it’s not necessary to keep the utilization ratio too
       high. For the purpose of this example, let’s see the effects of
       further compaction:

           # btrfs balance start -musage=70 /path
           Done, had to relocate 13 out of 88 chunks

           $ btrfs fi df .
           Data, single: total=68.03GiB, used=64.43GiB
           System, RAID1: total=32.00MiB, used=20.00KiB
           Metadata, RAID1: total=11.97GiB, used=8.83GiB
           GlobalReserve, single: total=512.00MiB, used=0.00B

   GETTING RID OF COMPLETELY UNUSED BLOCK GROUPS
       Normally the balance operation needs work space to temporarily
       move the data before the old block groups get removed. If there
       is no work space, it fails with ENOSPC.

       There’s a special case when the block groups are completely
       unused, possibly left after removing lots of files or deleting
       snapshots. Removing empty block groups is automatic since kernel
       3.18. The same can be achieved manually, with the notable
       property that this operation does not require any work space.
       Thus it can be used to reclaim unused block groups and make the
       space available again.

           # btrfs balance start -dusage=0 /path

       This should lead to a decrease of the total numbers in the btrfs
       fi df output.

EXIT STATUS
   btrfs balance returns a zero exit status if it succeeds. A non-zero
   value is returned in case of failure.
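   In scripts, the exit status can drive a fallback. A hedged sketch
   (/mnt is hypothetical; requires a mounted btrfs filesystem), using
   the usage=0 reclaim described in the ENOSPC section:

```shell
# If the compacting balance fails (eg. with ENOSPC), retry with
# usage=0, which reclaims completely unused block groups without
# needing work space.
if ! btrfs balance start -dusage=50 /mnt; then
    echo "balance failed; retrying with usage=0" >&2
    btrfs balance start -dusage=0 /mnt
fi
```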

AVAILABILITY
   btrfs is part of btrfs-progs. Please refer to the btrfs wiki
   http://btrfs.wiki.kernel.org for further details.

SEE ALSO
   mkfs.btrfs(8), btrfs-device(8)


Btrfs v4.9.1                      08/06/2017                 BTRFS-BALANCE(8)