MKFS.BTRFS(8)                    Btrfs Manual                    MKFS.BTRFS(8)

NAME

       mkfs.btrfs - create a btrfs filesystem

SYNOPSIS

       mkfs.btrfs [options] <device> [<device>...]

DESCRIPTION

       mkfs.btrfs is used to create the btrfs filesystem on a single device
       or on multiple devices. <device> is typically a block device but can
       be a file-backed image as well. Multiple devices are grouped by the
       UUID of the filesystem.

       Before mounting such a filesystem, the kernel module must know all
       the devices either via a preceding execution of btrfs device scan or
       using the device mount option. See section MULTIPLE DEVICES for more
       details.

       The default block group profiles for data and metadata depend on the
       number of devices and possibly other factors. It's recommended to
       use specific profiles, but the defaults should be OK and allow
       future conversion to other profiles. Please see options -d and -m
       for further details and btrfs-balance(8) for profile conversion
       after mkfs.
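       The description above can be illustrated with a few invocations (a
       sketch, not part of the original manual; device and file names are
       placeholders):

```shell
# Create a filesystem with explicit metadata (-m) and data (-d) profiles
# on two devices (requires root and btrfs-progs; device names are
# placeholders).
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc

# For experimentation, a file-backed image works too and needs no root:
truncate -s 1G /tmp/btrfs.img
mkfs.btrfs -m dup -d single /tmp/btrfs.img
```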

OPTIONS

       -b|--byte-count <size>
           Specify the size of the filesystem. If this option is not used,
           then mkfs.btrfs uses the entire device space for the filesystem.

       --csum <type>, --checksum <type>
           Specify the checksum algorithm. Default is crc32c. Valid values
           are crc32c, xxhash, sha256 or blake2. To mount such a
           filesystem, the kernel must support the checksum algorithm as
           well. See section CHECKSUM ALGORITHMS in btrfs(5).

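       For example (a sketch; the device name is a placeholder):

```shell
# Create a filesystem using the xxhash checksum algorithm; mounting it
# requires a kernel that supports xxhash checksums (/dev/sdx is a
# placeholder).
mkfs.btrfs --csum xxhash /dev/sdx
```
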
       -d|--data <profile>
           Specify the profile for the data block groups. Valid values are
           raid0, raid1, raid1c3, raid1c4, raid5, raid6, raid10, single or
           dup (case does not matter).

           See DUP PROFILES ON A SINGLE DEVICE for more details.

           On multiple devices, the default was raid0 until version 5.7;
           since version 5.8 it is single. You can still select raid0
           manually, but it was not suitable as a default.

       -m|--metadata <profile>
           Specify the profile for the metadata block groups. Valid values
           are raid0, raid1, raid1c3, raid1c4, raid5, raid6, raid10, single
           or dup (case does not matter).

           Default on a single device filesystem is DUP, unless an SSD is
           detected, in which case it will default to single. The detection
           is based on the value of /sys/block/DEV/queue/rotational, where
           DEV is the short name of the device.

           Note that the rotational status can be arbitrarily set by the
           underlying block device driver and may not reflect the true
           status (network block device, memory-backed SCSI devices etc).
           It's recommended to set the --data/--metadata options explicitly
           to avoid confusion.

           See DUP PROFILES ON A SINGLE DEVICE for more details.

           On multiple devices, the default is raid1.

       -M|--mixed
           Normally the data and metadata block groups are isolated. The
           mixed mode removes the isolation and stores both types in the
           same block groups. This helps to utilize the free space
           regardless of its purpose and is suitable for small devices.
           With separate block groups, space reserved for one block group
           type is not available for allocation by the other, which can
           lead to an ENOSPC state.

           The strong recommendation is to use the mixed mode for
           filesystems smaller than 1GiB; the soft recommendation is to use
           it for filesystems smaller than 5GiB. The mixed mode may lead to
           degraded performance on larger filesystems, but is otherwise
           usable, even on multiple devices.

           The nodesize and sectorsize must be equal, and the block group
           types must match.

               Note
               versions up to 4.2.x forced the mixed mode for devices
               smaller than 1GiB. This has been removed in 4.3+ as it
               caused some usability issues.

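           As a sketch (paths are placeholders), a small mixed-mode
           filesystem could be created like this:

```shell
# Create a small (512MiB) file-backed image and format it with mixed
# block groups, the recommended mode for filesystems under 1GiB
# (requires btrfs-progs; the path is a placeholder).
truncate -s 512M /tmp/small.img
mkfs.btrfs --mixed /tmp/small.img
```
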
       -l|--leafsize <size>
           Alias for --nodesize. Deprecated.

       -n|--nodesize <size>
           Specify the nodesize, the tree block size in which btrfs stores
           metadata. The default value is 16KiB (16384) or the page size,
           whichever is bigger. Must be a multiple of the sectorsize and a
           power of 2, but not larger than 64KiB (65536). Leafsize always
           equals nodesize and the options are aliases.

           Smaller node size increases fragmentation but leads to taller
           b-trees which in turn leads to lower locking contention. Higher
           node sizes give better packing and less fragmentation at the
           cost of more expensive memory operations while updating the
           metadata blocks.

               Note
               versions up to 3.11 set the nodesize to 4KiB.

       -s|--sectorsize <size>
           Specify the sectorsize, the minimum data block allocation unit.

           The default value is the page size and is autodetected. If the
           sectorsize differs from the page size, the created filesystem
           may not be mountable by the running kernel. Therefore it is not
           recommended to use this option unless you are going to mount it
           on a system with the appropriate page size.

       -L|--label <string>
           Specify a label for the filesystem. The string should be less
           than 256 bytes and must not contain newline characters.

       -K|--nodiscard
           Do not perform a whole device TRIM operation on devices that
           are capable of that. This does not affect the discard/trim
           operation when the filesystem is mounted. Please see the mount
           option discard in btrfs(5).

       -r|--rootdir <rootdir>
           Populate the toplevel subvolume with files from rootdir. This
           does not require root permissions to write the new files or to
           mount the filesystem.

               Note
               This option may enlarge the image or file to ensure it's
               big enough to contain the files from rootdir. Since version
               4.14.1 the filesystem size is not minimized. Please see
               option --shrink if you need that functionality.

       --shrink
           Shrink the filesystem to its minimal size; only works with the
           --rootdir option.

           If the destination block device is a regular file, this option
           will also truncate the file to the minimal size. Otherwise it
           will reduce the filesystem available space. Extra space will
           not be usable unless the filesystem is mounted and resized
           using btrfs filesystem resize.

               Note
               prior to version 4.14.1, the shrinking was done
               automatically.

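           The two options are typically combined to build a pre-populated,
           minimally sized image (a sketch; paths are placeholders):

```shell
# Build an image pre-populated from an existing directory tree and shrink
# it to its minimal size; no root permissions or mounting are required
# (requires btrfs-progs; paths are placeholders).
truncate -s 1G /tmp/rootfs.img
mkfs.btrfs --rootdir /path/to/rootdir --shrink /tmp/rootfs.img
```
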
       -O|--features <feature1>[,<feature2>...]
           A list of filesystem features turned on at mkfs time. Not all
           features are supported by old kernels. To disable a feature,
           prefix it with ^.

           See section FILESYSTEM FEATURES for more details. To see all
           available features that mkfs.btrfs supports run:

           mkfs.btrfs -O list-all

       -R|--runtime-features <feature1>[,<feature2>...]
           A list of features that can be enabled at mkfs time; otherwise
           they would have to be turned on on a mounted filesystem. No
           runtime feature is enabled by default. To disable a feature,
           prefix it with ^.

           See section RUNTIME FEATURES for more details. To see all
           available runtime features that mkfs.btrfs supports run:

           mkfs.btrfs -R list-all

       -f|--force
           Forcibly overwrite the block devices when an existing
           filesystem is detected. By default, mkfs.btrfs will use
           libblkid to check for any known filesystem on the devices.
           Alternatively you can use the wipefs utility to clear the
           devices.

       -q|--quiet
           Print only error or warning messages. Options --features or
           --help are unaffected.

       -U|--uuid <UUID>
           Create the filesystem with the given UUID. The UUID must not
           exist on any currently present filesystem.

       -V|--version
           Print the mkfs.btrfs version and exit.

       --help
           Print help.

SIZE UNITS

       The default unit is bytes. All size parameters accept suffixes in
       the 1024 base. The recognized suffixes are: k, m, g, t, p, e, both
       uppercase and lowercase.
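       For illustration, the 1024-base multipliers behind the suffixes can
       be reproduced with shell arithmetic (a sketch; mkfs.btrfs does this
       parsing internally):

```shell
# 1024-base multipliers corresponding to the k, m and g suffixes
K=$((1024))
M=$((1024 * 1024))
G=$((1024 * 1024 * 1024))

# e.g. "-b 10g" specifies 10 * G = 10737418240 bytes
echo "10g = $((10 * G)) bytes"
```

       So mkfs.btrfs -b 10g and mkfs.btrfs -b 10737418240 are equivalent.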

MULTIPLE DEVICES

       Before mounting a multiple device filesystem, the kernel module
       must know the association of the block devices that are attached to
       the filesystem UUID.

       There is typically no action needed from the user. On a system that
       utilizes a udev-like daemon, any new block device is automatically
       registered. The rules call btrfs device scan.

       The same command can be used to trigger the device scanning if the
       btrfs kernel module is reloaded (naturally, all previous
       information about the device registration is lost).

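       For example, after a module reload the registration can be repeated
       manually (a sketch; requires root, device names are placeholders):

```shell
# Re-register all detected btrfs devices with the kernel module
btrfs device scan

# Or scan only the given devices (placeholder names)
btrfs device scan /dev/sdb /dev/sdc
```
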
       Another possibility is to use the mount option device to specify
       the list of devices to scan at the time of mount.

           # mount -o device=/dev/sdb,device=/dev/sdc /dev/sda /mnt

           Note
           that this means only scanning; if the devices do not exist in
           the system, the mount will fail anyway. This can happen on
           systems without initramfs/initrd where the root partition was
           created with RAID1/10/5/6 profiles. The mount can be attempted
           before all block devices are discovered; the waiting is usually
           done in initramfs/initrd systems.

       As of kernel 4.14, RAID5/6 is still considered experimental and
       shouldn't be employed for production use.

FILESYSTEM FEATURES

       Features that can be enabled during creation time. See also
       btrfs(5) section FILESYSTEM FEATURES.

       mixed-bg
           (kernel support since 2.6.37)

           mixed data and metadata block groups, also set by option --mixed

       extref
           (default since btrfs-progs 3.12, kernel support since 3.7)

           increased hardlink limit per file in a directory to 65536;
           older kernels supported a varying number of hardlinks depending
           on the sum of all file name sizes that can be stored into one
           metadata block

       raid56
           (kernel support since 3.9)

           extended format for RAID5/6, also enabled if raid5 or raid6
           block groups are selected

       skinny-metadata
           (default since btrfs-progs 3.18, kernel support since 3.10)

           reduced-size metadata for extent references, saves a few
           percent of metadata

       no-holes
           (kernel support since 3.14)

           improved representation of file extents where holes are not
           explicitly stored as an extent, saves a few percent of metadata
           if sparse files are used


RUNTIME FEATURES

       Features that are typically enabled on a mounted filesystem, e.g.
       by a mount option or by an ioctl. Some of them can be enabled
       early, at mkfs time. This applies to features that need to be
       enabled only once, after which the status is permanent; it does not
       replace mount options.

       quota
           (kernel support since 3.4)

           Enable quota support (qgroups). The qgroup accounting will be
           consistent and can be used together with --rootdir. See also
           btrfs-quota(8).

       free-space-tree
           (kernel support since 4.5)

           Enable the free space tree (mount option space_cache=v2) for
           persisting the free space cache.


BLOCK GROUPS, CHUNKS, RAID

       The high-level organizational units of a filesystem are block
       groups of three types: data, metadata and system.

       DATA
           store data blocks and nothing else

       METADATA
           store internal metadata in b-trees, can also store file data if
           it fits into the inline limit

       SYSTEM
           store structures that describe the mapping between the physical
           devices and the linear logical space representing the filesystem

       Other terms commonly used:

       block group, chunk
           a logical range of space of a given profile, stores data,
           metadata or both; sometimes the terms are used interchangeably

           A typical size of a metadata block group is 256MiB (for
           filesystems smaller than 50GiB) or 1GiB (for filesystems larger
           than 50GiB); for data it's 1GiB. The system block group size is
           a few megabytes.

       RAID
           a block group profile type that utilizes RAID-like features on
           multiple devices: striping, mirroring, parity

       profile
           when used in connection with block groups, refers to the
           allocation strategy and constraints; see the section PROFILES
           for more details


PROFILES

       The following block group profiles are available:

       ┌────────┬────────────────────────────┬─────────────┬─────────────┐
       │Profile │         Redundancy         │    Space    │   Min/max   │
       │        ├────────┬────────┬──────────┤ utilization │   devices   │
       │        │ Copies │ Parity │ Striping │             │             │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │single  │   1    │        │          │        100% │    1/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │DUP     │ 2 / 1  │        │          │         50% │ 1/any (see  │
       │        │ device │        │          │             │ note 1)     │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID0   │        │        │  1 to N  │        100% │    2/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID1   │   2    │        │          │         50% │    2/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID1C3 │   3    │        │          │         33% │    3/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID1C4 │   4    │        │          │         25% │    4/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID10  │   2    │        │  1 to N  │         50% │    4/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID5   │   1    │   1    │ 2 to N-1 │     (N-1)/N │ 2/any (see  │
       │        │        │        │          │             │ note 2)     │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID6   │   1    │   2    │ 3 to N-2 │     (N-2)/N │ 3/any (see  │
       │        │        │        │          │             │ note 3)     │
       └────────┴────────┴────────┴──────────┴─────────────┴─────────────┘

           Warning
           It's not recommended to create filesystems with RAID0/1/10/5/6
           profiles on partitions from the same device. Neither redundancy
           nor performance will be improved.

       Note 1: DUP may exist on more than 1 device if it starts on a
       single device and another one is added. Since version 4.5.1,
       mkfs.btrfs will let you create DUP on multiple devices without
       restrictions.

       Note 2: It's not recommended to use 2 devices with RAID5. In that
       case, the parity stripe will contain the same data as the data
       stripe, making RAID5 degraded to RAID1 with more overhead.

       Note 3: It's also not recommended to use 3 devices with RAID6,
       unless you want to get effectively 3 copies in a RAID1-like manner
       (but not exactly that).

       Note 4: Since kernel 5.5 it's possible to use RAID1C3 as a
       replacement for RAID6, at a higher space cost but more reliable.

   PROFILE LAYOUT
       For the following examples, assume devices numbered 1, 2, 3 and 4,
       and data or metadata blocks A, B, C, D, with possible stripes e.g.
       A1, A2 that would logically be A, etc. For parity profiles, PA and
       QA are parity and syndrome, associated with the given stripe. The
       simple layouts single or DUP are left out. Actual physical block
       placement on devices depends on the current state of the
       free/allocated space and may appear random. All devices are assumed
       to be present at the time the blocks would have been written.

       RAID1

       ┌─────────┬──────────┬──────────┬──────────┐
       │device 1 │ device 2 │ device 3 │ device 4 │
       ├─────────┼──────────┼──────────┼──────────┤
       │   A     │    D     │          │          │
       ├─────────┼──────────┼──────────┼──────────┤
       │   B     │          │          │    C     │
       ├─────────┼──────────┼──────────┼──────────┤
       │   C     │          │          │          │
       ├─────────┼──────────┼──────────┼──────────┤
       │   D     │    A     │    B     │          │
       └─────────┴──────────┴──────────┴──────────┘

       RAID1C3

       ┌─────────┬──────────┬──────────┬──────────┐
       │device 1 │ device 2 │ device 3 │ device 4 │
       ├─────────┼──────────┼──────────┼──────────┤
       │   A     │    A     │    D     │          │
       ├─────────┼──────────┼──────────┼──────────┤
       │   B     │          │    B     │          │
       ├─────────┼──────────┼──────────┼──────────┤
       │   C     │          │    A     │    C     │
       ├─────────┼──────────┼──────────┼──────────┤
       │   D     │    D     │    C     │    B     │
       └─────────┴──────────┴──────────┴──────────┘

       RAID0

       ┌─────────┬──────────┬──────────┬──────────┐
       │device 1 │ device 2 │ device 3 │ device 4 │
       ├─────────┼──────────┼──────────┼──────────┤
       │   A2    │    C3    │    A3    │    C2    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   B1    │    A1    │    D2    │    B3    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   C1    │    D3    │    B4    │    D1    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   D4    │    B2    │    C4    │    A4    │
       └─────────┴──────────┴──────────┴──────────┘

       RAID5

       ┌─────────┬──────────┬──────────┬──────────┐
       │device 1 │ device 2 │ device 3 │ device 4 │
       ├─────────┼──────────┼──────────┼──────────┤
       │   A2    │    C3    │    A3    │    C2    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   B1    │    A1    │    D2    │    B3    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   C1    │    D3    │    PB    │    D1    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   PD    │    B2    │    PC    │    PA    │
       └─────────┴──────────┴──────────┴──────────┘

       RAID6

       ┌─────────┬──────────┬──────────┬──────────┐
       │device 1 │ device 2 │ device 3 │ device 4 │
       ├─────────┼──────────┼──────────┼──────────┤
       │   A2    │    QC    │    QA    │    C2    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   B1    │    A1    │    D2    │    QB    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   C1    │    QD    │    PB    │    D1    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   PD    │    B2    │    PC    │    PA    │
       └─────────┴──────────┴──────────┴──────────┘

DUP PROFILES ON A SINGLE DEVICE

       The mkfs utility will let the user create a filesystem with
       profiles that write the logical blocks to 2 physical locations.
       Whether there are really 2 physical copies highly depends on the
       underlying device type.

       For example, an SSD drive can remap the blocks internally to a
       single copy, thus deduplicating them. This negates the purpose of
       increased redundancy and just wastes filesystem space without
       providing the expected level of redundancy.

       The duplicated data/metadata may still be useful to statistically
       improve the chances of recovery on a device that might perform some
       internal optimizations. The actual details are not usually
       disclosed by vendors. For example, we could expect that not all
       blocks get deduplicated. This will provide a non-zero probability
       of recovery compared to a zero chance if the single profile is
       used. The user should make the tradeoff decision. The deduplication
       in SSDs is thought to be widely available, so the reason behind the
       mkfs default is to not give a false sense of redundancy.

       As another example, the widely used USB flash or SD cards use a
       translation layer between the logical and physical view of the
       device. The data lifetime may be affected by frequent plugging. The
       memory cells could get damaged, hopefully not destroying both
       copies of particular data in case of DUP.

       The wear levelling techniques can also lead to reduced redundancy,
       even if the device does not do any deduplication. The controllers
       may put data written in a short timespan into the same physical
       storage unit (cell, block etc). In case this unit dies, both copies
       are lost. BTRFS does not add any artificial delay between metadata
       writes.

       The traditional rotational hard drives usually fail at the sector
       level.

       In any case, a device that starts to misbehave and repairs from the
       DUP copy should be replaced! DUP is not backup.

KNOWN ISSUES

   SMALL FILESYSTEMS AND LARGE NODESIZE

       The combination of small filesystem size and large nodesize is not
       recommended in general and can lead to various ENOSPC-related
       issues during mount time or runtime.

       Since mixed block group creation is optional, small filesystem
       instances with differing values for sectorsize and nodesize are
       allowed to be created, which can end up in the following situation:

           # mkfs.btrfs -f -n 65536 /dev/loop0
           btrfs-progs v3.19-rc2-405-g976307c
           See http://btrfs.wiki.kernel.org for more information.

           Performing full device TRIM (512.00MiB) ...
           Label:              (null)
           UUID:               49fab72e-0c8b-466b-a3ca-d1bfe56475f0
           Node size:          65536
           Sector size:        4096
           Filesystem size:    512.00MiB
           Block group profiles:
             Data:             single            8.00MiB
             Metadata:         DUP              40.00MiB
             System:           DUP              12.00MiB
           SSD detected:       no
           Incompat features:  extref, skinny-metadata
           Number of devices:  1
           Devices:
             ID        SIZE  PATH
              1   512.00MiB  /dev/loop0

           # mount /dev/loop0 /mnt/
           mount: mount /dev/loop0 on /mnt failed: No space left on device

       The ENOSPC occurs during the creation of the UUID tree. This is
       caused by the large metadata blocks and a space reservation
       strategy that allocates more than can fit into the filesystem.

AVAILABILITY

       mkfs.btrfs is part of btrfs-progs. Please refer to the btrfs wiki
       http://btrfs.wiki.kernel.org for further details.

SEE ALSO

       btrfs(5), btrfs(8), btrfs-balance(8), wipefs(8)


Btrfs v5.10                       01/18/2021                     MKFS.BTRFS(8)