MKFS.BTRFS(8)                    Btrfs Manual                    MKFS.BTRFS(8)

NAME

       mkfs.btrfs - create a btrfs filesystem


SYNOPSIS

       mkfs.btrfs [options] <device> [<device>...]


DESCRIPTION

       mkfs.btrfs is used to create the btrfs filesystem on a single device
       or on multiple devices. <device> is typically a block device but can
       be a file-backed image as well. Multiple devices are grouped by the
       UUID of the filesystem.

       Before mounting such a filesystem, the kernel module must know all
       the devices either via preceding execution of btrfs device scan or
       using the device mount option. See section MULTIPLE DEVICES for more
       details.


OPTIONS

       -b|--byte-count <size>
           Specify the size of the filesystem. If this option is not used,
           then mkfs.btrfs uses the entire device space for the filesystem.

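           For example (a sketch; the device path /dev/sdx is a
           placeholder), to use only the first 10GiB of a larger device:

               # mkfs.btrfs -b 10G /dev/sdx
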
       --csum <type>, --checksum <type>
           Specify the checksum algorithm. Default is crc32c. Valid values
           are crc32c, xxhash, sha256 or blake2. To mount such a filesystem,
           the kernel must support the selected checksum as well. See
           CHECKSUM ALGORITHMS in btrfs(5).

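           For example (a sketch; the device path is a placeholder), to use
           the xxhash algorithm instead of the default:

               # mkfs.btrfs --csum xxhash /dev/sdx
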
       -d|--data <profile>
           Specify the profile for the data block groups. Valid values are
           raid0, raid1, raid5, raid6, raid10, single or dup (case does not
           matter).

           See DUP PROFILES ON A SINGLE DEVICE for more details.

       -m|--metadata <profile>
           Specify the profile for the metadata block groups. Valid values
           are raid0, raid1, raid5, raid6, raid10, single or dup (case does
           not matter).

           A single device filesystem will default to DUP, unless an SSD is
           detected, in which case it will default to single. The detection
           is based on the value of /sys/block/DEV/queue/rotational, where
           DEV is the short name of the device.

           Note that the rotational status can be arbitrarily set by the
           underlying block device driver and may not reflect the true
           status (network block device, memory-backed SCSI devices etc).
           Use the options --data/--metadata to avoid confusion.

           See DUP PROFILES ON A SINGLE DEVICE for more details.

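           For example (a sketch; the device paths are placeholders), to
           mirror both data and metadata across two devices:

               # mkfs.btrfs -d raid1 -m raid1 /dev/sdx /dev/sdy
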
       -M|--mixed
           Normally the data and metadata block groups are isolated. The
           mixed mode removes the isolation and stores both types in the
           same block groups. This helps to utilize the free space
           regardless of its purpose and is suitable for small devices.
           With separate allocation, space reserved for one block group
           type is not available to the other, which can lead to an ENOSPC
           state.

           The mixed mode is recommended for filesystems smaller than 1GiB,
           with a soft recommendation for filesystems smaller than 5GiB.
           The mixed mode may lead to degraded performance on larger
           filesystems, but is otherwise usable, even on multiple devices.

           The nodesize and sectorsize must be equal, and the block group
           types must match.

               Note
               versions up to 4.2.x forced the mixed mode for devices
               smaller than 1GiB. This has been removed in 4.3+ as it
               caused some usability issues.

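           For example (a sketch; the device path is a placeholder), a
           small filesystem with mixed block groups and matching nodesize
           and sectorsize:

               # mkfs.btrfs --mixed -n 4096 -s 4096 /dev/sdx
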
       -l|--leafsize <size>
           Alias for --nodesize. Deprecated.

       -n|--nodesize <size>
           Specify the nodesize, the tree block size in which btrfs stores
           metadata. The default value is 16KiB (16384) or the page size,
           whichever is bigger. Must be a multiple of the sectorsize and a
           power of 2, but not larger than 64KiB (65536). Leafsize always
           equals nodesize and the options are aliases.

           Smaller node size increases fragmentation but leads to taller
           b-trees which in turn leads to lower locking contention. Higher
           node sizes give better packing and less fragmentation at the
           cost of more expensive memory operations while updating the
           metadata blocks.

               Note
               versions up to 3.11 set the nodesize to 4k.

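           For example (a sketch; the device path is a placeholder), to use
           a 32KiB nodesize, which satisfies the constraints above:

               # mkfs.btrfs -n 32k /dev/sdx
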
       -s|--sectorsize <size>
           Specify the sectorsize, the minimum data block allocation unit.

           The default value is the page size and is autodetected. If the
           sectorsize differs from the page size, the created filesystem
           may not be mountable by the kernel. Therefore it is not
           recommended to use this option unless you are going to mount it
           on a system with the appropriate page size.

       -L|--label <string>
           Specify a label for the filesystem. The string should be less
           than 256 bytes and must not contain newline characters.

       -K|--nodiscard
           Do not perform whole device TRIM operation on devices that are
           capable of that. This does not affect discard/trim operation
           when the filesystem is mounted. Please see the mount option
           discard for that in btrfs(5).

       -r|--rootdir <rootdir>
           Populate the toplevel subvolume with files from rootdir. This
           does not require root permissions to write the new files or to
           mount the filesystem.

               Note
               This option may enlarge the image or file to ensure it’s big
               enough to contain the files from rootdir. Since version
               4.14.1 the filesystem size is not minimized. Please see
               option --shrink if you need that functionality.

       --shrink
           Shrink the filesystem to its minimal size, only works with the
           --rootdir option.

           If the destination is a regular file, this option will also
           truncate the file to the minimal size. Otherwise it will reduce
           the filesystem available space. Extra space will not be usable
           unless the filesystem is mounted and resized using btrfs
           filesystem resize.

               Note
               prior to version 4.14.1, the shrinking was done
               automatically.

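           For example (a sketch; the paths are placeholders and the image
           file must already exist), to build a minimal file-backed image
           from a directory tree:

               # truncate -s 1G image.file
               # mkfs.btrfs -r /path/to/tree --shrink image.file
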
       -O|--features <feature1>[,<feature2>...]
           A list of filesystem features turned on at mkfs time. Not all
           features are supported by old kernels. To disable a feature,
           prefix it with ^.

           See section FILESYSTEM FEATURES for more details. To see all
           available features that mkfs.btrfs supports run:

           mkfs.btrfs -O list-all

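           For example (a sketch; the device path is a placeholder), to
           enable no-holes and disable the otherwise default extref
           feature:

               # mkfs.btrfs -O no-holes,^extref /dev/sdx
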
       -f|--force
           Forcibly overwrite the block devices when an existing filesystem
           is detected. By default, mkfs.btrfs will utilize libblkid to
           check for any known filesystem on the devices. Alternatively you
           can use the wipefs utility to clear the devices.

       -q|--quiet
           Print only error or warning messages. The options --features and
           --help are unaffected.

       -U|--uuid <UUID>
           Create the filesystem with the given UUID. The UUID must not
           exist on any filesystem currently present.

       -V|--version
           Print the mkfs.btrfs version and exit.

       --help
           Print help.

       -A|--alloc-start <offset>
           Deprecated, will be removed. (An option to help debugging the
           chunk allocator.) Specify the (physical) offset from the start
           of the device at which allocations start. The default value is
           zero.


SIZE UNITS

       The default unit is the byte. All size parameters accept suffixes in
       the 1024 base. The recognized suffixes are: k, m, g, t, p, e, both
       uppercase and lowercase.


MULTIPLE DEVICES

       Before mounting a multiple device filesystem, the kernel module must
       know the association of the block devices that are attached to the
       filesystem UUID.

       There is typically no action needed from the user. On a system that
       utilizes a udev-like daemon, any new block device is automatically
       registered. The rules call btrfs device scan.

       The same command can be used to trigger the device scanning if the
       btrfs kernel module is reloaded (naturally all previous information
       about the device registration is lost).

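       For example (a sketch; the device path is a placeholder), to
       register the devices and then mount a multi-device filesystem:

           # btrfs device scan
           # mount /dev/sdx /mnt
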
       Another possibility is to use the mount option device to specify the
       list of devices to scan at the time of mount.

           # mount -o device=/dev/sdb,device=/dev/sdc /dev/sda /mnt

           Note
           This means only scanning; if the devices do not exist in the
           system, the mount will fail anyway. This can happen on systems
           without initramfs/initrd where the root partition was created
           with the RAID1/10/5/6 profiles. The mount action can happen
           before all block devices are discovered. The waiting is usually
           done on initramfs/initrd systems.

       As of kernel 4.14, RAID5/6 is still considered experimental and
       shouldn’t be employed for production use.


FILESYSTEM FEATURES

       Features that can be enabled during creation time. See also btrfs(5)
       section FILESYSTEM FEATURES.

       mixed-bg
           (kernel support since 2.6.37)

           mixed data and metadata block groups, also set by option --mixed

       extref
           (default since btrfs-progs 3.12, kernel support since 3.7)

           increased hardlink limit per file in a directory to 65536, older
           kernels supported a varying number of hardlinks depending on the
           sum of all file name sizes that can be stored into one metadata
           block

       raid56
           (kernel support since 3.9)

           extended format for RAID5/6, also enabled if raid5 or raid6
           block groups are selected

       skinny-metadata
           (default since btrfs-progs 3.18, kernel support since 3.10)

           reduced-size metadata for extent references, saves a few percent
           of metadata

       no-holes
           (kernel support since 3.14)

           improved representation of file extents where holes are not
           explicitly stored as an extent, saves a few percent of metadata
           if sparse files are used


BLOCK GROUPS, CHUNKS, RAID

       The high-level organizational units of a filesystem are block groups
       of three types: data, metadata and system.

       DATA
           store data blocks and nothing else

       METADATA
           store internal metadata in b-trees, can store file data if it
           fits into the inline limit

       SYSTEM
           store structures that describe the mapping between the physical
           devices and the linear logical space representing the filesystem

       Other terms commonly used:

       block group, chunk
           a logical range of space of a given profile, stores data,
           metadata or both; sometimes the terms are used interchangeably

           A typical size of a metadata block group is 256MiB (filesystem
           smaller than 50GiB) or 1GiB (larger than 50GiB), for data it’s
           1GiB. The system block group size is a few megabytes.

       RAID
           a block group profile type that utilizes RAID-like features on
           multiple devices: striping, mirroring, parity

       profile
           when used in connection with block groups refers to the
           allocation strategy and constraints, see the section PROFILES
           for more details


PROFILES

       There are the following block group types available:

       ┌────────┬────────────────────────────┬─────────────┬─────────────┐
       │Profile │         Redundancy         │    Space    │   Min/max   │
       │        ├────────┬────────┬──────────┤ utilization │   devices   │
       │        │ Copies │ Parity │ Striping │             │             │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │single  │   1    │        │          │        100% │    1/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │DUP     │ 2 / 1  │        │          │         50% │ 1/any (see  │
       │        │ device │        │          │             │ note 1)     │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID0   │        │        │  1 to N  │        100% │    2/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID1   │   2    │        │          │         50% │    2/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID1C3 │   3    │        │          │         33% │    3/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID1C4 │   4    │        │          │         25% │    4/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID10  │   2    │        │  1 to N  │         50% │    4/any    │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID5   │   1    │   1    │ 2 to N-1 │     (N-1)/N │ 2/any (see  │
       │        │        │        │          │             │ note 2)     │
       ├────────┼────────┼────────┼──────────┼─────────────┼─────────────┤
       │RAID6   │   1    │   2    │ 3 to N-2 │     (N-2)/N │ 3/any (see  │
       │        │        │        │          │             │ note 3)     │
       └────────┴────────┴────────┴──────────┴─────────────┴─────────────┘

           Warning
           It’s not recommended to create filesystems with RAID0/1/10/5/6
           profiles on partitions from the same device. Neither redundancy
           nor performance will be improved.

       Note 1: DUP may exist on more than 1 device if it starts on a single
       device and another one is added. Since version 4.5.1, mkfs.btrfs
       will let you create DUP on multiple devices without restrictions.

       Note 2: It’s not recommended to use 2 devices with RAID5. In that
       case, the parity stripe will contain the same data as the data
       stripe, effectively degrading RAID5 to RAID1 with more overhead.

       Note 3: It’s also not recommended to use 3 devices with RAID6,
       unless you want to get effectively 3 copies in a RAID1-like manner
       (but not exactly that).

       Note 4: Since kernel 5.5 it’s possible to use RAID1C3 as a
       replacement for RAID6, at a higher space cost but more reliable.

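       For example (a sketch; the device paths are placeholders and
       btrfs-progs/kernel 5.5 or newer are assumed), to use RAID1C3 instead
       of RAID6:

           # mkfs.btrfs -m raid1c3 -d raid1c3 /dev/sdx /dev/sdy /dev/sdz
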
   PROFILE LAYOUT
       For the following examples, assume devices numbered 1, 2, 3 and 4,
       and data or metadata blocks A, B, C, D, with possible stripes e.g.
       A1, A2 that would be logically A, etc. For parity profiles PA and QA
       are parity and syndrome, associated with the given stripe. The
       simple layouts single or DUP are left out. Actual physical block
       placement on devices depends on the current state of the
       free/allocated space and may appear random. All devices are assumed
       to be present at the time the blocks would have been written.

       RAID1

       ┌─────────┬──────────┬──────────┬──────────┐
       │device 1 │ device 2 │ device 3 │ device 4 │
       ├─────────┼──────────┼──────────┼──────────┤
       │   A     │    D     │          │          │
       ├─────────┼──────────┼──────────┼──────────┤
       │   B     │          │          │    C     │
       ├─────────┼──────────┼──────────┼──────────┤
       │   C     │          │          │          │
       ├─────────┼──────────┼──────────┼──────────┤
       │   D     │    A     │    B     │          │
       └─────────┴──────────┴──────────┴──────────┘

       RAID1C3

       ┌─────────┬──────────┬──────────┬──────────┐
       │device 1 │ device 2 │ device 3 │ device 4 │
       ├─────────┼──────────┼──────────┼──────────┤
       │   A     │    A     │    D     │          │
       ├─────────┼──────────┼──────────┼──────────┤
       │   B     │          │    B     │          │
       ├─────────┼──────────┼──────────┼──────────┤
       │   C     │          │    A     │    C     │
       ├─────────┼──────────┼──────────┼──────────┤
       │   D     │    D     │    C     │    B     │
       └─────────┴──────────┴──────────┴──────────┘

       RAID0

       ┌─────────┬──────────┬──────────┬──────────┐
       │device 1 │ device 2 │ device 3 │ device 4 │
       ├─────────┼──────────┼──────────┼──────────┤
       │   A2    │    C3    │    A3    │    C2    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   B1    │    A1    │    D2    │    B3    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   C1    │    D3    │    B4    │    D1    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   D4    │    B2    │    C4    │    A4    │
       └─────────┴──────────┴──────────┴──────────┘

       RAID5

       ┌─────────┬──────────┬──────────┬──────────┐
       │device 1 │ device 2 │ device 3 │ device 4 │
       ├─────────┼──────────┼──────────┼──────────┤
       │   A2    │    C3    │    A3    │    C2    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   B1    │    A1    │    D2    │    B3    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   C1    │    D3    │    PB    │    D1    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   PD    │    B2    │    PC    │    PA    │
       └─────────┴──────────┴──────────┴──────────┘

       RAID6

       ┌─────────┬──────────┬──────────┬──────────┐
       │device 1 │ device 2 │ device 3 │ device 4 │
       ├─────────┼──────────┼──────────┼──────────┤
       │   A2    │    QC    │    QA    │    C2    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   B1    │    A1    │    D2    │    QB    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   C1    │    QD    │    PB    │    D1    │
       ├─────────┼──────────┼──────────┼──────────┤
       │   PD    │    B2    │    PC    │    PA    │
       └─────────┴──────────┴──────────┴──────────┘


DUP PROFILES ON A SINGLE DEVICE

       The mkfs utility will let the user create a filesystem with profiles
       that write the logical blocks to 2 physical locations. Whether there
       are really 2 physical copies highly depends on the underlying device
       type.

       For example, an SSD drive can remap the blocks internally to a
       single copy, thus deduplicating them. This negates the purpose of
       increased redundancy and just wastes filesystem space without
       providing the expected level of redundancy.

       The duplicated data/metadata may still be useful to statistically
       improve the chances of recovery on a device that might perform some
       internal optimizations. The actual details are not usually disclosed
       by vendors. For example, we could expect that not all blocks get
       deduplicated. This will provide a non-zero probability of recovery
       compared to a zero chance if the single profile is used. The user
       should make the tradeoff decision. The deduplication in SSDs is
       thought to be widely available, so the reason behind the mkfs
       default is to not give a false sense of redundancy.

       As another example, the widely used USB flash or SD cards use a
       translation layer between the logical and physical view of the
       device. The data lifetime may be affected by frequent plugging. The
       memory cells could get damaged, hopefully not destroying both copies
       of particular data in case of DUP.

       The wear levelling techniques can also lead to reduced redundancy,
       even if the device does not do any deduplication. The controllers
       may put data written in a short timespan into the same physical
       storage unit (cell, block etc). In case this unit dies, both copies
       are lost. BTRFS does not add any artificial delay between metadata
       writes.

       The traditional rotational hard drives usually fail at the sector
       level.

       In any case, a device that starts to misbehave and repairs from the
       DUP copy should be replaced! DUP is not backup.

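       For example (a sketch; the device path is a placeholder), to force
       DUP for both data and metadata regardless of the SSD detection:

           # mkfs.btrfs -m dup -d dup /dev/sdx
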

KNOWN ISSUES

       SMALL FILESYSTEMS AND LARGE NODESIZE

       The combination of small filesystem size and large nodesize is not
       recommended in general and can lead to various ENOSPC-related issues
       during mount time or runtime.

       Since mixed block group creation is optional, we allow small
       filesystem instances with differing values for sectorsize and
       nodesize to be created and could end up in the following situation:

           # mkfs.btrfs -f -n 65536 /dev/loop0
           btrfs-progs v3.19-rc2-405-g976307c
           See http://btrfs.wiki.kernel.org for more information.

           Performing full device TRIM (512.00MiB) ...
           Label:              (null)
           UUID:               49fab72e-0c8b-466b-a3ca-d1bfe56475f0
           Node size:          65536
           Sector size:        4096
           Filesystem size:    512.00MiB
           Block group profiles:
             Data:             single            8.00MiB
             Metadata:         DUP              40.00MiB
             System:           DUP              12.00MiB
           SSD detected:       no
           Incompat features:  extref, skinny-metadata
           Number of devices:  1
           Devices:
             ID        SIZE  PATH
              1   512.00MiB  /dev/loop0

           # mount /dev/loop0 /mnt/
           mount: mount /dev/loop0 on /mnt failed: No space left on device

       The ENOSPC occurs during the creation of the UUID tree. This is
       caused by the large metadata blocks and a space reservation strategy
       that allocates more than can fit into the filesystem.

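       A possible workaround (a sketch, not the only option) is to keep the
       default 16KiB nodesize on such a small filesystem:

           # mkfs.btrfs -f -n 16384 /dev/loop0
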

AVAILABILITY

       mkfs.btrfs is part of btrfs-progs. Please refer to the btrfs wiki
       http://btrfs.wiki.kernel.org for further details.


SEE ALSO

       btrfs(5), btrfs(8), wipefs(8)



Btrfs v5.4                        12/03/2019                     MKFS.BTRFS(8)