MDADM(8)                    System Manager's Manual                   MDADM(8)


NAME
       mdadm - manage MD devices aka Linux Software RAID

SYNOPSIS
       mdadm [mode] <raiddevice> [options] <component-devices>

DESCRIPTION
       RAID devices are virtual devices created from two or more real block
       devices. This allows multiple devices (typically disk drives or
       partitions thereof) to be combined into a single device to hold (for
       example) a single filesystem. Some RAID levels include redundancy
       and so can survive some degree of device failure.

       Linux Software RAID devices are implemented through the md (Multiple
       Devices) device driver.

       Currently, Linux supports LINEAR md devices, RAID0 (striping), RAID1
       (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH, FAULTY, and
       CONTAINER.

       MULTIPATH is not a Software RAID mechanism, but does involve multiple
       devices: each device is a path to one common physical storage device.
       New installations should not use md/multipath as it is not well
       supported and has no ongoing development. Use the Device Mapper based
       multipath-tools instead.

       FAULTY is also not true RAID, and it only involves one device. It
       provides a layer over a true device that can be used to inject
       faults.

       CONTAINER is different again. A CONTAINER is a collection of devices
       that are managed as a set. This is similar to the set of devices
       connected to a hardware RAID controller. The set of devices may
       contain a number of different RAID arrays each utilising some (or
       all) of the blocks from a number of the devices in the set. For
       example, two devices in a 5-device set might form a RAID1 using the
       whole devices. The remaining three might have a RAID5 over the first
       half of each device, and a RAID0 over the second half.

       With a CONTAINER, there is one set of metadata that describes all of
       the arrays in the container. So when mdadm creates a CONTAINER
       device, the device just represents the metadata. Other normal arrays
       (RAID1 etc) can be created inside the container.

MODES
       mdadm has several major modes of operation:

       Assemble
              Assemble the components of a previously created array into an
              active array. Components can be explicitly given or can be
              searched for. mdadm checks that the components do form a bona
              fide array, and can, on request, fiddle superblock information
              so as to assemble a faulty array.
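
              For example, an array can be assembled from two explicitly
              named components, or all arrays listed in the config file can
              be searched for and assembled (device and array names are
              illustrative):

                     mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
                     mdadm --assemble --scan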

       Build  Build an array that doesn't have per-device metadata
              (superblocks). For these sorts of arrays, mdadm cannot
              differentiate between initial creation and subsequent assembly
              of an array. It also cannot perform any checks that
              appropriate components have been requested. Because of this,
              the Build mode should only be used together with a complete
              understanding of what you are doing.

       Create Create a new array with per-device metadata (superblocks).
              Appropriate metadata is written to each device, and then the
              array comprising those devices is activated. A 'resync'
              process is started to make sure that the array is consistent
              (e.g. both sides of a mirror contain the same data) but the
              content of the device is left otherwise untouched. The array
              can be used as soon as it has been created. There is no need
              to wait for the initial resync to finish.
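
              For example, a minimal two-device mirror could be created with
              (device names are illustrative):

                     mdadm --create /dev/md0 --level=1 --raid-devices=2 \
                           /dev/sdb1 /dev/sdc1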

       Follow or Monitor
              Monitor one or more md devices and act on any state changes.
              This is only meaningful for RAID1, 4, 5, 6, 10 or multipath
              arrays, as only these have interesting state. RAID0 or Linear
              never have missing, spare, or failed drives, so there is
              nothing to monitor.

       Grow   Grow (or shrink) an array, or otherwise reshape it in some
              way. Currently supported growth options include changing the
              active size of component devices and changing the number of
              active devices in Linear and RAID levels 0/1/4/5/6, changing
              the RAID level between 0, 1, 5, and 6, and between 0 and 10,
              changing the chunk size and layout for RAID 0/4/5/6/10, as
              well as adding or removing a write-intent bitmap and changing
              the array's consistency policy.
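
              For example, a RAID5 array could be grown by one device by
              first adding a spare and then increasing the device count
              (device names are illustrative):

                     mdadm /dev/md0 --add /dev/sde1
                     mdadm --grow /dev/md0 --raid-devices=4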

       Incremental Assembly
              Add a single device to an appropriate array. If the addition
              of the device makes the array runnable, the array will be
              started. This provides a convenient interface to a hot-plug
              system. As each device is detected, mdadm has a chance to
              include it in some array as appropriate. Optionally, when the
              --fail flag is passed in we will remove the device from any
              active array instead of adding it.

              If a CONTAINER is passed to mdadm in this mode, then any
              arrays within that container will be assembled and started.
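
              For example, a hot-plug script might hand each newly detected
              device to mdadm like this (device name is illustrative):

                     mdadm --incremental /dev/sdc1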

       Manage This is for doing things to specific components of an array
              such as adding new spares and removing faulty devices.

       Misc   This is an 'everything else' mode that supports operations on
              active arrays, operations on component devices such as erasing
              old superblocks, and information gathering operations.

       Auto-detect
              This mode does not act on a specific device or array, but
              rather it requests the Linux Kernel to activate any
              auto-detected arrays.

OPTIONS
       Options for selecting a mode are:

       -A, --assemble
              Assemble a pre-existing array.

       -B, --build
              Build a legacy array without superblocks.

       -C, --create
              Create a new array.

       -F, --follow, --monitor
              Select Monitor mode.

       -G, --grow
              Change the size or shape of an active array.

       -I, --incremental
              Add/remove a single device to/from an appropriate array, and
              possibly start the array.

       --auto-detect
              Request that the kernel starts any auto-detected arrays. This
              can only work if md is compiled into the kernel — not if it is
              a module. Arrays can be auto-detected by the kernel if all the
              components are in primary MS-DOS partitions with partition
              type FD, and all use v0.90 metadata. In-kernel autodetect is
              not recommended for new installations. Using mdadm to detect
              and assemble arrays — possibly in an initrd — is substantially
              more flexible and should be preferred.

       If a device is given before any options, or if the first option is
       one of --add, --re-add, --add-spare, --fail, --remove, or --replace,
       then the MANAGE mode is assumed. Anything other than these will cause
       the Misc mode to be assumed.

       Options that are not mode-specific are:

       -h, --help
              Display general help message or, after one of the above
              options, a mode-specific help message.

       --help-options
              Display more detailed help about command line parsing and some
              commonly used options.

       -V, --version
              Print version information for mdadm.

       -v, --verbose
              Be more verbose about what is happening. This can be used
              twice to be extra-verbose. The extra verbosity currently only
              affects --detail --scan and --examine --scan.

       -q, --quiet
              Avoid printing purely informative messages. With this, mdadm
              will be silent unless there is something really important to
              report.

       -f, --force
              Be more forceful about certain operations. See the various
              modes for the exact meaning of this option in different
              contexts.

       -c, --config=
              Specify the config file or directory. Default is to use
              /etc/mdadm.conf and /etc/mdadm.conf.d, or if those are missing
              then /etc/mdadm/mdadm.conf and /etc/mdadm/mdadm.conf.d. If the
              config file given is partitions then nothing will be read, but
              mdadm will act as though the config file contained exactly
                     DEVICE partitions containers
              and will read /proc/partitions to find a list of devices to
              scan, and /proc/mdstat to find a list of containers to
              examine. If the word none is given for the config file, then
              mdadm will act as though the config file were empty.

              If the name given is of a directory, then mdadm will collect
              all the files contained in the directory with a name ending in
              .conf, sort them lexically, and process all of those files as
              config files.
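
              For example, to assemble everything that can be found by
              scanning all partitions and containers, ignoring any config
              file:

                     mdadm --assemble --scan --config=partitions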

       -s, --scan
              Scan config file or /proc/mdstat for missing information. In
              general, this option gives mdadm permission to get any missing
              information (like component devices, array devices, array
              identities, and alert destination) from the configuration file
              (see previous option); one exception is MISC mode when using
              --detail or --stop, in which case --scan says to get a list of
              array devices from /proc/mdstat.

       -e, --metadata=
              Declare the style of RAID metadata (superblock) to be used.
              The default is 1.2 for --create, and to guess for other
              operations. The default can be overridden by setting the
              metadata value for the CREATE keyword in mdadm.conf.

              Options are:

              0, 0.90
                     Use the original 0.90 format superblock. This format
                     limits arrays to 28 component devices and limits
                     component devices of levels 1 and greater to 2
                     terabytes. It is also possible for there to be
                     confusion about whether the superblock applies to a
                     whole device or just the last partition, if that
                     partition starts on a 64K boundary.

              1, 1.0, 1.1, 1.2 default
                     Use the new version-1 format superblock. This has fewer
                     restrictions. It can easily be moved between hosts with
                     different endian-ness, and a recovery operation can be
                     checkpointed and restarted. The different sub-versions
                     store the superblock at different locations on the
                     device, either at the end (for 1.0), at the start (for
                     1.1) or 4K from the start (for 1.2). "1" is equivalent
                     to "1.2" (the commonly preferred 1.x format). "default"
                     is equivalent to "1.2".

              ddf    Use the "Industry Standard" DDF (Disk Data Format)
                     format defined by SNIA. When creating a DDF array a
                     CONTAINER will be created, and normal arrays can be
                     created in that container.

              imsm   Use the Intel(R) Matrix Storage Manager metadata
                     format. This creates a CONTAINER which is managed in a
                     similar manner to DDF, and is supported by an
                     option-rom on some platforms:

                     https://www.intel.com/content/www/us/en/support/products/122484/memory-and-storage/ssd-software/intel-virtual-raid-on-cpu-intel-vroc.html
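
              For example, a mirror with the superblock stored at the end of
              each device (version 1.0) could be created with (device names
              are illustrative):

                     mdadm --create /dev/md0 --metadata=1.0 --level=1 \
                           --raid-devices=2 /dev/sdb1 /dev/sdc1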

       --homehost=
              This will override any HOMEHOST setting in the config file and
              provides the identity of the host which should be considered
              the home for any arrays.

              When creating an array, the homehost will be recorded in the
              metadata. For version-1 superblocks, it will be prefixed to
              the array name. For version-0.90 superblocks, part of the SHA1
              hash of the hostname will be stored in the latter half of the
              UUID.

              When reporting information about an array, any array which is
              tagged for the given homehost will be reported as such.

              When using Auto-Assemble, only arrays tagged for the given
              homehost will be allowed to use 'local' names (i.e. not ending
              in '_' followed by a digit string). See below under Auto
              Assembly.

              The special name "any" can be used as a wild card. If an array
              is created with --homehost=any then the name "any" will be
              stored in the array and it can be assembled in the same way on
              any host. If an array is assembled with this option, then the
              homehost recorded on the array will be ignored.

       --prefer=
              When mdadm needs to print the name for a device it normally
              finds the name in /dev which refers to the device and is
              shortest. When a path component is given with --prefer mdadm
              will prefer a longer name if it contains that component. For
              example --prefer=by-uuid will prefer a name in a subdirectory
              of /dev called by-uuid.

              This functionality is currently only provided by --detail and
              --monitor.

       --home-cluster=
              specifies the cluster name for the md device. The md device
              can be assembled only on the cluster which matches the name
              specified. If this option is not provided, mdadm tries to
              detect the cluster name automatically.

       For create, build, or grow:

       -n, --raid-devices=
              Specify the number of active devices in the array. This, plus
              the number of spare devices (see below) must equal the number
              of component-devices (including "missing" devices) that are
              listed on the command line for --create. Setting a value of 1
              is probably a mistake and so requires that --force be
              specified first. A value of 1 will then be allowed for linear,
              multipath, RAID0 and RAID1. It is never allowed for RAID4,
              RAID5 or RAID6. This number can only be changed using --grow
              for RAID1, RAID4, RAID5 and RAID6 arrays, and only on kernels
              which provide the necessary support.

       -x, --spare-devices=
              Specify the number of spare (eXtra) devices in the initial
              array. Spares can also be added and removed later. The number
              of component devices listed on the command line must equal the
              number of RAID devices plus the number of spare devices.

       -z, --size=
              Amount (in Kilobytes) of space to use from each drive in RAID
              levels 1/4/5/6. This must be a multiple of the chunk size, and
              must leave about 128KB of space at the end of the drive for
              the RAID superblock. If this is not specified (as it normally
              is not) the smallest drive (or partition) sets the size,
              though if there is a variance among the drives of greater than
              1%, a warning is issued.

              A suffix of 'K', 'M', 'G' or 'T' can be given to indicate
              Kilobytes, Megabytes, Gigabytes or Terabytes respectively.

              Sometimes a replacement drive can be a little smaller than the
              original drives though this should be minimised by IDEMA
              standards. Such a replacement drive will be rejected by md. To
              guard against this it can be useful to set the initial size
              slightly smaller than the smaller device with the aim that it
              will still be larger than any replacement.

              This value can be set with --grow for RAID level 1/4/5/6
              though DDF arrays may not be able to support this. If the
              array was created with a size smaller than the currently
              active drives, the extra space can be accessed using --grow.
              The size can be given as max which means to choose the largest
              size that fits on all current drives.

              Before reducing the size of the array (with --grow --size=)
              you should make sure that space isn't needed. If the device
              holds a filesystem, you would need to resize the filesystem to
              use less space.

              After reducing the array size you should check that the data
              stored in the device is still available. If the device holds a
              filesystem, then an 'fsck' of the filesystem is a minimum
              requirement. If there are problems the array can be made
              bigger again with no loss with another --grow --size= command.

              This value cannot be used when creating a CONTAINER such as
              with DDF and IMSM metadata, though it is perfectly valid when
              creating an array inside a container.
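
              For example, after replacing all members with larger drives,
              the array could be expanded to use the newly available space
              (array name is illustrative):

                     mdadm --grow /dev/md0 --size=max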

       -Z, --array-size=
              This is only meaningful with --grow and its effect is not
              persistent: when the array is stopped and restarted the
              default array size will be restored.

              Setting the array-size causes the array to appear smaller to
              programs that access the data. This is particularly needed
              before reshaping an array so that it will be smaller. As the
              reshape is not reversible, but setting the size with
              --array-size is, it is required that the array size is reduced
              as appropriate before the number of devices in the array is
              reduced.

              Before reducing the size of the array you should make sure
              that space isn't needed. If the device holds a filesystem, you
              would need to resize the filesystem to use less space.

              After reducing the array size you should check that the data
              stored in the device is still available. If the device holds a
              filesystem, then an 'fsck' of the filesystem is a minimum
              requirement. If there are problems the array can be made
              bigger again with no loss with another --grow --array-size=
              command.

              A suffix of 'K', 'M', 'G' or 'T' can be given to indicate
              Kilobytes, Megabytes, Gigabytes or Terabytes respectively. A
              value of max restores the apparent size of the array to be
              whatever the real amount of available space is.

              Clustered arrays do not support this parameter yet.

       -c, --chunk=
              Specify chunk size in kilobytes. The default when creating an
              array is 512KB. To ensure compatibility with earlier versions,
              the default when building an array with no persistent metadata
              is 64KB. This is only meaningful for RAID0, RAID4, RAID5,
              RAID6, and RAID10.

              RAID4, RAID5, RAID6, and RAID10 require the chunk size to be a
              power of 2. In any case it must be a multiple of 4KB.

              A suffix of 'K', 'M', 'G' or 'T' can be given to indicate
              Kilobytes, Megabytes, Gigabytes or Terabytes respectively.

       --rounding=
              Specify rounding factor for a Linear array. The size of each
              component will be rounded down to a multiple of this size.
              This is a synonym for --chunk but highlights the different
              meaning for Linear as compared to other RAID levels. The
              default is 64K if a kernel earlier than 2.6.16 is in use, and
              is 0K (i.e. no rounding) in later kernels.

       -l, --level=
              Set RAID level. When used with --create, options are: linear,
              raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6,
              6, raid10, 10, multipath, mp, faulty, container. Obviously
              some of these are synonymous.

              When a CONTAINER metadata type is requested, only the
              container level is permitted, and it does not need to be
              explicitly given.

              When used with --build, only linear, stripe, raid0, 0, raid1,
              multipath, mp, and faulty are valid.

              Can be used with --grow to change the RAID level in some
              cases. See LEVEL CHANGES below.

       -p, --layout=
              This option configures the fine details of data layout for
              RAID5, RAID6, and RAID10 arrays, and controls the failure
              modes for faulty. It can also be used for working around a
              kernel bug with RAID0, but generally doesn't need to be used
              explicitly.

              The layout of the RAID5 parity block can be one of
              left-asymmetric, left-symmetric, right-asymmetric,
              right-symmetric, la, ra, ls, rs. The default is
              left-symmetric.

              It is also possible to cause RAID5 to use a RAID4-like layout
              by choosing parity-first, or parity-last.

              Finally for RAID5 there are DDF-compatible layouts,
              ddf-zero-restart, ddf-N-restart, and ddf-N-continue.

              These same layouts are available for RAID6. There are also 4
              layouts that will provide an intermediate stage for converting
              between RAID5 and RAID6. These provide a layout which is
              identical to the corresponding RAID5 layout on the first N-1
              devices, and has the 'Q' syndrome (the second 'parity' block
              used by RAID6) on the last device. These layouts are:
              left-symmetric-6, right-symmetric-6, left-asymmetric-6,
              right-asymmetric-6, and parity-first-6.

              When setting the failure mode for level faulty, the options
              are: write-transient, wt, read-transient, rt,
              write-persistent, wp, read-persistent, rp, write-all,
              read-fixable, rf, clear, flush, none.

              Each failure mode can be followed by a number, which is used
              as a period between fault generation. Without a number, the
              fault is generated once on the first relevant request. With a
              number, the fault will be generated after that many requests,
              and will continue to be generated every time the period
              elapses.

              Multiple failure modes can be in effect simultaneously by
              using the --grow option to set subsequent failure modes.

              "clear" or "none" will remove any pending or periodic failure
              modes, and "flush" will clear any persistent faults.

              The layout options for RAID10 are one of 'n', 'o' or 'f'
              followed by a small number. The default is 'n2'. The supported
              options are:

              'n' signals 'near' copies. Multiple copies of one data block
              are at similar offsets in different devices.

              'o' signals 'offset' copies. Rather than the chunks being
              duplicated within a stripe, whole stripes are duplicated but
              are rotated by one device so duplicate blocks are on different
              devices. Thus subsequent copies of a block are in the next
              drive, and are one chunk further down.

              'f' signals 'far' copies (multiple copies have very different
              offsets). See md(4) for more detail about 'near', 'offset',
              and 'far'.

              The number is the number of copies of each datablock. 2 is
              normal, 3 can be useful. This number can be at most equal to
              the number of devices in the array. It does not need to divide
              evenly into that number (e.g. it is perfectly legal to have an
              'n2' layout for an array with an odd number of devices).
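
              For example, a four-device RAID10 using 'far' copies could be
              created with (device names are illustrative):

                     mdadm --create /dev/md0 --level=10 --layout=f2 \
                           --raid-devices=4 /dev/sd[b-e]1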

              A bug introduced in Linux 3.14 means that RAID0 arrays with
              devices of differing sizes started using a different layout.
              This could lead to data corruption. Since Linux 5.4 (and
              various stable releases that received backports), the kernel
              will not accept such an array unless a layout is explicitly
              set. It can be set to 'original' or 'alternate'. When creating
              a new array, mdadm will select 'original' by default, so the
              layout does not normally need to be set. An array created as
              either 'original' or 'alternate' will not be recognized by an
              (unpatched) kernel prior to 5.4. To create a RAID0 array with
              devices of differing sizes that can be used on an older
              kernel, you can set the layout to 'dangerous'. This will use
              whichever layout the running kernel supports, so the data on
              the array may become corrupt when changing kernel from
              pre-3.14 to a later kernel.

              When an array is converted between RAID5 and RAID6 an
              intermediate RAID6 layout is used in which the second parity
              block (Q) is always on the last device. To convert a RAID5 to
              RAID6 and leave it in this new layout (which does not require
              re-striping) use --layout=preserve. This will try to avoid any
              restriping.

              The converse of this is --layout=normalise which will change a
              non-standard RAID6 layout into a more standard arrangement.

       --parity=
              same as --layout (thus explaining the p of -p).

       -b, --bitmap=
              Specify a file to store a write-intent bitmap in. The file
              should not exist unless --force is also given. The same file
              should be provided when assembling the array. If the word
              internal is given, then the bitmap is stored with the metadata
              on the array, and so is replicated on all devices. If the word
              none is given with --grow mode, then any bitmap that is
              present is removed. If the word clustered is given, the array
              is created for a clustered environment. One bitmap is created
              for each node as defined by the --nodes parameter and all are
              stored internally.

              To help catch typing errors, the filename must contain at
              least one slash ('/') if it is a real file (not 'internal' or
              'none').

              Note: external bitmaps are only known to work on ext2 and
              ext3. Storing bitmap files on other filesystems may result in
              serious problems.

              When creating an array on devices which are 100G or larger,
              mdadm automatically adds an internal bitmap as it will usually
              be beneficial. This can be suppressed with --bitmap=none or by
              selecting a different consistency policy with
              --consistency-policy.
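
              For example, a write-intent bitmap could be added to, and
              later removed from, an active array (array name is
              illustrative):

                     mdadm --grow /dev/md0 --bitmap=internal
                     mdadm --grow /dev/md0 --bitmap=none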

       --bitmap-chunk=
              Set the chunksize of the bitmap. Each bit corresponds to that
              many Kilobytes of storage. When using a file based bitmap, the
              default is to use the smallest size that is at least 4 and
              requires no more than 2^21 chunks. When using an internal
              bitmap, the chunksize defaults to 64MB, or larger if necessary
              to fit the bitmap into the available space.

              A suffix of 'K', 'M', 'G' or 'T' can be given to indicate
              Kilobytes, Megabytes, Gigabytes or Terabytes respectively.

       -W, --write-mostly
              subsequent devices listed in a --build, --create, or --add
              command will be flagged as 'write-mostly'. This is valid for
              RAID1 only and means that the 'md' driver will avoid reading
              from these devices if at all possible. This can be useful if
              mirroring over a slow link.

       --write-behind=
              Specify that write-behind mode should be enabled (valid for
              RAID1 only). If an argument is specified, it will set the
              maximum number of outstanding writes allowed. The default
              value is 256. A write-intent bitmap is required in order to
              use write-behind mode, and write-behind is only attempted on
              drives marked as write-mostly.

       --failfast
              subsequent devices listed in a --create or --add command will
              be flagged as 'failfast'. This is valid for RAID1 and RAID10
              only. IO requests to these devices will be encouraged to fail
              quickly rather than cause long delays due to error handling.
              Also no attempt is made to repair a read error on these
              devices.

              If an array becomes degraded so that the 'failfast' device is
              the only usable device, the 'failfast' flag will then be
              ignored and extended delays will be preferred to complete
              failure.

              The 'failfast' flag is appropriate for storage arrays which
              have a low probability of true failure, but which may
              sometimes cause unacceptable delays due to internal
              maintenance functions.

       --assume-clean
              Tell mdadm that the array pre-existed and is known to be
              clean. It can be useful when trying to recover from a major
              failure as you can be sure that no data will be affected
              unless you actually write to the array. It can also be used
              when creating a RAID1 or RAID10 if you want to avoid the
              initial resync, however this practice — while normally safe —
              is not recommended. Use this only if you really know what you
              are doing.

              When the devices that will be part of a new array were filled
              with zeros before creation the operator knows the array is
              actually clean. If that is the case, such as after running
              badblocks, this argument can be used to tell mdadm the facts
              the operator knows.

              When an array is resized to a larger size with --grow --size=
              the new space is normally resynced in the same way that the
              whole array is resynced at creation. From Linux version 3.0,
              --assume-clean can be used with that command to avoid the
              automatic resync.

       --backup-file=
              This is needed when --grow is used to increase the number of
              raid-devices in a RAID5 or RAID6 if there are no spare devices
              available, or to shrink, change RAID level or layout. See the
              GROW MODE section below on RAID-DEVICES CHANGES. The file must
              be stored on a separate device, not on the RAID array being
              reshaped.

       --data-offset=
              Arrays with 1.x metadata can leave a gap between the start of
              the device and the start of array data. This gap can be used
              for various metadata. The start of data is known as the
              data-offset. Normally an appropriate data offset is computed
              automatically. However it can be useful to set it explicitly
              such as when re-creating an array which was originally created
              using a different version of mdadm which computed a different
              offset.

              Setting the offset explicitly over-rides the default. The
              value given is in Kilobytes unless a suffix of 'K', 'M', 'G'
              or 'T' is used to explicitly indicate Kilobytes, Megabytes,
              Gigabytes or Terabytes respectively.

              Since Linux 3.4, --data-offset can also be used with --grow
              for some RAID levels (initially on RAID10). This allows the
              data-offset to be changed as part of the reshape process. When
              the data offset is changed, no backup file is required as the
              difference in offsets is used to provide the same
              functionality.

              When the new offset is earlier than the old offset, the number
              of devices in the array cannot shrink. When it is after the
              old offset, the number of devices in the array cannot
              increase.

              When creating an array, --data-offset can be specified as
              variable. In that case each member device is expected to have
              an offset appended to the name, separated by a colon. This
              makes it possible to recreate exactly an array which has
              varying data offsets (as can happen when different versions of
              mdadm are used to add different devices).

       --continue
              This option is complementary to the --freeze-reshape option
              for assembly. It is needed when a --grow operation is
              interrupted and is not restarted automatically because
              --freeze-reshape was used during array assembly. This option
              is used together with the -G (--grow) command and the device
              of a pending reshape that is to be continued. All parameters
              required for reshape continuation will be read from the array
              metadata. If the initial --grow command required the
              --backup-file= option, the continuation will require exactly
              the same backup file to be given as well.

              Any other parameter passed together with the --continue option
              will be ignored.

       -N, --name=
              Set a name for the array. This is currently only effective
              when creating an array with a version-1 superblock, or an
              array in a DDF container. The name is a simple textual string
              that can be used to identify array components when assembling.
              If name is needed but not specified, it is taken from the
              basename of the device that is being created. e.g. when
              creating /dev/md/home the name will default to home.

       -R, --run
              Insist that mdadm run the array, even if some of the
              components appear to be active in another array or filesystem.
              Normally mdadm will ask for confirmation before including such
              components in an array. This option causes that question to be
              suppressed.

       -f, --force
              Insist that mdadm accept the geometry and layout specified
              without question. Normally mdadm will not allow creation of an
              array with only one device, and will try to create a RAID5
              array with one missing drive (as this makes the initial resync
              work faster). With --force, mdadm will not try to be so
              clever.

       -o, --readonly
              Start the array read only rather than read-write as normal. No
              writes will be allowed to the array, and no resync, recovery,
              or reshape will be started. It works with Create, Assemble,
              Manage and Misc mode.

       -a, --auto{=yes,md,mdp,part,p}{NN}
              Instruct mdadm how to create the device file if needed,
              possibly allocating an unused minor number. "md" causes a
              non-partitionable array to be used (though since Linux 2.6.28,
              these array devices are in fact partitionable). "mdp", "part"
              or "p" causes a partitionable array (2.6 and later) to be
              used. "yes" requires the named md device to have a 'standard'
              format, and the type and minor number will be determined from
              this. With mdadm 3.0, device creation is normally left up to
              udev so this option is unlikely to be needed. See DEVICE NAMES
              below.

              The argument can also come immediately after "-a". e.g. "-ap".

              If --auto is not given on the command line or in the config
              file, then the default will be --auto=yes.

              If --scan is also given, then any auto= entries in the config
              file will override the --auto instruction given on the command
              line.

              For partitionable arrays, mdadm will create the device file
              for the whole array and for the first 4 partitions. A
              different number of partitions can be specified at the end of
              this option (e.g. --auto=p7). If the device name ends with a
              digit, the partition names add a 'p', and a number, e.g.
              /dev/md/home1p3. If there is no trailing digit, then the
              partition names just have a number added, e.g.
              /dev/md/scratch3.

              If the md device name is in a 'standard' format as described
              in DEVICE NAMES, then it will be created, if necessary, with
              the appropriate device number based on that name. If the
              device name is not in one of these formats, then an unused
              device number will be allocated. The device number will be
              considered unused if there is no active array for that number,
              and there is no entry in /dev for that number and with a
              non-standard name. Names that are not in 'standard' format are
              only allowed in "/dev/md/".

              This is meaningful with --create or --build.

       -a, --add
              This option can be used in Grow mode in two cases.

              If the target array is a Linear array, then --add can be used
              to add one or more devices to the array. They are simply
              catenated on to the end of the array. Once added, the devices
              cannot be removed.

              If the --raid-disks option is being used to increase the
              number of devices in an array, then --add can be used to add
              some extra devices to be included in the array. In most cases
              this is not needed as the extra devices can be added as spares
              first, and then the number of raid-disks can be changed.
              However for RAID0, it is not possible to add spares. So to
              increase the number of devices in a RAID0, it is necessary to
              set the new number of devices, and to add the new devices, in
              the same command.
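
              For example, a two-device RAID0 could be grown to three
              devices in a single command (device names are illustrative):

                     mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdd1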

       --nodes
              Only works when the array is for a clustered environment. It
              specifies the maximum number of nodes in the cluster that will
              use this device simultaneously. If not specified, this
              defaults to 4.

       --write-journal
              Specify journal device for the RAID-4/5/6 array. The journal
              device should be an SSD with reasonable lifetime.

       --symlinks
              Auto creation of symlinks in /dev to /dev/md. The option must
              be given the value 'no' or 'yes' and works with --create and
              --build.

       -k, --consistency-policy=
              Specify how the array maintains consistency in case of
              unexpected shutdown. Only relevant for RAID levels with
              redundancy. Currently supported options are:

              resync Full resync is performed and all redundancy is
                     regenerated when the array is started after unclean
                     shutdown.

              bitmap Resync assisted by a write-intent bitmap. Implicitly
                     selected when using --bitmap.

              journal
                     For RAID levels 4/5/6, journal device is used to log
                     transactions and replay after unclean shutdown.
                     Implicitly selected when using --write-journal.

              ppl    For RAID5 only, Partial Parity Log is used to close the
                     write hole and eliminate resync. PPL is stored in the
                     metadata region of RAID member drives, no additional
                     journal drive is needed.

              Can be used with --grow to change the consistency policy of an
              active array in some cases. See CONSISTENCY POLICY CHANGES
              below.

       For assemble:

       -u, --uuid=
              uuid of array to assemble. Devices which don't have this uuid
              are excluded.

       -m, --super-minor=
              Minor number of device that array was created for. Devices
              which don't have this minor number are excluded. If you create
              an array as /dev/md1, then all superblocks will contain the
              minor number 1, even if the array is later assembled as
              /dev/md2.

              Giving the literal word "dev" for --super-minor will cause
              mdadm to use the minor number of the md device that is being
              assembled. e.g. when assembling /dev/md0, --super-minor=dev
              will look for super blocks with a minor number of 0.

              --super-minor is only relevant for v0.90 metadata, and should
              not normally be used. Using --uuid is much safer.

       -N, --name=
              Specify the name of the array to assemble. This must be the
              name that was specified when creating the array. It must
              either match the name stored in the superblock exactly, or it
              must match with the current homehost prefixed to the start of
              the given name.

       -f, --force
              Assemble the array even if the metadata on some devices
              appears to be out-of-date. If mdadm cannot find enough working
              devices to start the array, but can find some devices that are
              recorded as having failed, then it will mark those devices as
              working so that the array can be started. This works only for
              native metadata. For external metadata it allows starting a
              dirty degraded RAID 4, 5, or 6. An array which requires
              --force to be started may contain data corruption. Use it
              carefully.

       -R, --run
              Attempt to start the array even if fewer drives were given
              than were present last time the array was active. Normally if
              not all the expected drives are found and --scan is not used,
              then the array will be assembled but not started. With --run
              an attempt will be made to start it anyway.

       --no-degraded
              This is the reverse of --run in that it inhibits the startup
              of an array unless all expected drives are present. This is
              only needed with --scan, and can be used if the physical
              connections to devices are not as reliable as you would like.

       -a, --auto{=no,yes,md,mdp,part}
              See this option under Create and Build options.

       -b, --bitmap=
              Specify the bitmap file that was given when the array was
              created. If an array has an internal bitmap, there is no need
              to specify this when assembling the array.

       --backup-file=
              If --backup-file was used while reshaping an array (e.g.
              changing number of devices or chunk size) and the system
              crashed during the critical section, then the same
              --backup-file must be presented to --assemble to allow
              possibly corrupted data to be restored, and the reshape to be
              completed.

       --invalid-backup
              If the file needed for the above option is not available for
              any reason an empty file can be given together with this
              option to indicate that the backup file is invalid. In this
              case the data that was being rearranged at the time of the
              crash could be irrecoverably lost, but the rest of the array
              may still be recoverable. This option should only be used as a
              last resort if there is no way to recover the backup file.

       -U, --update=
              Update the superblock on each device while assembling the
              array. The argument given to this flag can be one of sparc2.2,
              summaries, uuid, name, nodes, homehost, home-cluster, resync,
              byteorder, devicesize, no-bitmap, bbl, no-bbl, ppl, no-ppl,
              layout-original, layout-alternate, layout-unspecified,
              metadata, or super-minor.

              The sparc2.2 option will adjust the superblock of an array
              that was created on a Sparc machine running a patched 2.2
              Linux kernel. This kernel got the alignment of part of the
              superblock wrong. You can use the --examine --sparc2.2 option
              to mdadm to see what effect this would have.

              The super-minor option will update the preferred minor field
              on each superblock to match the minor number of the array
              being assembled. This can be useful if --examine reports a
              different "Preferred Minor" to --detail. In some cases this
              update will be performed automatically by the kernel driver.
              In particular the update happens automatically at the first
              write to an array with redundancy (RAID level 1 or greater) on
              a 2.6 (or later) kernel.

              The uuid option will change the uuid of the array. If a UUID
              is given with the --uuid option that UUID will be used as a
              new UUID and will NOT be used to help identify the devices in
              the array. If no --uuid is given, a random UUID is chosen.

              The name option will change the name of the array as stored in
              the superblock. This is only supported for version-1
              superblocks.

              The nodes option will change the nodes of the array as stored
              in the bitmap superblock. This option only works for a
              clustered environment.

              The homehost option will change the homehost as recorded in
              the superblock. For version-0 superblocks, this is the same as
              updating the UUID. For version-1 superblocks, this involves
              updating the name.

              The home-cluster option will change the cluster name as
              recorded in the superblock and bitmap. This option only works
              for a clustered environment.

              The resync option will cause the array to be marked dirty
              meaning that any redundancy in the array (e.g. parity for
              RAID5, copies for RAID1) may be incorrect. This will cause the
              RAID system to perform a "resync" pass to make sure that all
              redundant information is correct.

              The byteorder option allows arrays to be moved between
              machines with different byte-order, such as from a big-endian
              machine like a Sparc or some MIPS machines, to a little-endian
              x86_64 machine. When assembling such an array for the first
              time after a move, giving --update=byteorder will cause mdadm
              to expect superblocks to have their byteorder reversed, and
              will correct that order before assembling the array. This is
              only valid with original (Version 0.90) superblocks.

              The summaries option will correct the summaries in the
              superblock. That is the counts of total, working, active,
              failed, and spare devices.

              The devicesize option will rarely be of use. It applies to
              version 1.1 and 1.2 metadata only (where the metadata is at
              the start of the device) and is only useful when the component
              device has changed size (typically become larger). The version
              1 metadata records the amount of the device that can be used
              to store data, so if a device in a version 1.1 or 1.2 array
              becomes larger, the metadata will still be visible, but the
              extra space will not. In this case it might be useful to
              assemble the array with --update=devicesize. This will cause
              mdadm to determine the maximum usable amount of space on each
              device and update the relevant field in the metadata.

              The metadata option only works on v0.90 metadata arrays and
              will convert them to v1.0 metadata. The array must not be
              dirty (i.e. it must not need a sync) and it must not have a
              write-intent bitmap.

              The old metadata will remain on the devices, but will appear
              older than the new metadata and so will usually be ignored.
              The old metadata (or indeed the new metadata) can be removed
              by giving the appropriate --metadata= option to
              --zero-superblock.

              The no-bitmap option can be used when an array has an internal
              bitmap which is corrupt in some way so that assembling the
              array normally fails. It will cause any internal bitmap to be
              ignored.

              The bbl option will reserve space in each device for a bad
              block list. This will be 4K in size and positioned near the
              end of any free space between the superblock and the data.

              The no-bbl option will cause any reservation of space for a
              bad block list to be removed. If the bad block list contains
              entries, this will fail, as removing the list could cause data
              corruption.

              The ppl option will enable PPL for a RAID5 array and reserve
              space for PPL on each device. There must be enough free space
              between the data and superblock and a write-intent bitmap or
              journal must not be used.

              The no-ppl option will disable PPL in the superblock.

              The layout-original and layout-alternate options are for RAID0
              arrays with devices of non-uniform sizes that were in use
              before Linux 5.4. If the array was being used with Linux 3.13
              or earlier, then to assemble the array on a new kernel,
              --update=layout-original must be given. If the array was
              created and used with a kernel from Linux 3.14 to Linux 5.3,
              then --update=layout-alternate must be given. This only needs
              to be given once. Subsequent assembly of the array will happen
              normally. For more information, see md(4).

              The layout-unspecified option reverts the effect of
              layout-original or layout-alternate and allows the array to be
              again used on a kernel prior to Linux 5.3. This option should
              be used with great caution.
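
              For example, to force a full resync of redundant information
              the next time the array is started (device names are
              illustrative):

                     mdadm --assemble /dev/md0 --update=resync \
                           /dev/sdb1 /dev/sdc1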

       --freeze-reshape
              This option is intended to be used in start-up scripts during
              the initrd boot phase. When an array under reshape is
              assembled during the initrd phase, this option stops the
              reshape after the reshape-critical section has been restored.
              This happens before the file system pivot operation and avoids
              loss of file system context. Losing file system context would
              cause the reshape to be broken.

              Reshape can be continued later using the --continue option for
              the grow command.

       --symlinks
              See this option under Create and Build options.

       For Manage mode:

       -t, --test
              Unless a more serious error occurred, mdadm will exit with a
              status of 2 if no changes were made to the array and 0 if at
              least one change was made. This can be useful when an indirect
              specifier such as missing, detached or faulty is used in
              requesting an operation on the array. --test will report
              failure if these specifiers didn't find any match.

       -a, --add
              hot-add listed devices. If a device appears to have recently
              been part of the array (possibly it failed or was removed) the
              device is re-added as described in the next point. If that
              fails or the device was never part of the array, the device is
              added as a hot-spare. If the array is degraded, it will
              immediately start to rebuild data onto that spare.

              Note that this and the following options are only meaningful
              on arrays with redundancy. They don't apply to RAID0 or
              Linear.

       --re-add
              re-add a device that was previously removed from an array. If
              the metadata on the device reports that it is a member of the
              array, and the slot that it used is still vacant, then the
              device will be added back to the array in the same position.
              This will normally cause the data for that device to be
              recovered. However based on the event count on the device, the
              recovery may only require sections that are flagged in the
              write-intent bitmap to be recovered or may not require any
              recovery at all.

              When used on an array that has no metadata (i.e. it was built
              with --build) it will be assumed that bitmap-based recovery is
              enough to make the device fully consistent with the array.

              When used with v1.x metadata, --re-add can be accompanied by
              --update=devicesize, --update=bbl, or --update=no-bbl. See the
              description of these options when used in Assemble mode for an
              explanation of their use.

              If the device name given is missing then mdadm will try to
              find any device that looks like it should be part of the array
              but isn't and will try to re-add all such devices.

              If the device name given is faulty then mdadm will find all
              devices in the array that are marked faulty, remove them and
              attempt to immediately re-add them. This can be useful if you
              are certain that the reason for failure has been resolved.

       --add-spare
              Add a device as a spare. This is similar to --add except that
              it does not attempt --re-add first. The device will be added
              as a spare even if it looks like it could be a recent member
              of the array.

       -r, --remove
              remove listed devices. They must not be active, i.e. they
              should be failed or spare devices.

              As well as the name of a device file (e.g. /dev/sda1) the
              words failed, detached and names like set-A can be given to
              --remove. The first causes all failed devices to be removed.
              The second causes any device which is no longer connected to
              the system (i.e. an 'open' returns ENXIO) to be removed. The
              third will remove a set as described below under --fail.

       -f, --fail
              Mark listed devices as faulty. As well as the name of a device
              file, the word detached or a set name like set-A can be given.
              The former will cause any device that has been detached from
              the system to be marked as failed. It can then be removed.

              For RAID10 arrays where the number of copies evenly divides
              the number of devices, the devices can be conceptually divided
              into sets where each set contains a single complete copy of
              the data on the array. Sometimes a RAID10 array will be
              configured so that these sets are on separate controllers. In
              this case all the devices in one set can be failed by giving a
              name like set-A or set-B to --fail. The appropriate set names
              are reported by --detail.

       --set-faulty
              same as --fail.

       --replace
              Mark listed devices as requiring replacement. As soon as a
              spare is available, it will be rebuilt and will replace the
              marked device. This is similar to marking a device as faulty,
              but the device remains in service during the recovery process
              to increase resilience against multiple failures. When the
              replacement process finishes, the replaced device will be
              marked as faulty.

       --with This can follow a list of --replace devices. The devices
              listed after --with will be preferentially used to replace the
              devices listed after --replace. These devices must already be
              spare devices in the array.
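
              For example, a suspect device could be replaced by a specific
              spare already in the array (device names are illustrative):

                     mdadm /dev/md0 --replace /dev/sdb1 --with /dev/sdd1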

       --write-mostly
              Subsequent devices that are added or re-added will have the
              'write-mostly' flag set. This is only valid for RAID1 and
              means that the 'md' driver will avoid reading from these
              devices if possible.

       --readwrite
              Subsequent devices that are added or re-added will have the
              'write-mostly' flag cleared.

       --cluster-confirm
              Confirm the existence of the device. This is issued in
              response to an --add request by a node in a cluster. When a
              node adds a device it sends a message to all nodes in the
              cluster to look for a device with a UUID. This translates to a
              udev notification with the UUID of the device to be added and
              the slot number. The receiving node must acknowledge this
              message with --cluster-confirm. Valid arguments are
              <slot>:<devicename> in case the device is found or
              <slot>:missing in case the device is not found.

       --add-journal
              Add a journal to an existing array, or recreate the journal
              for a RAID-4/5/6 array that lost a journal device. To avoid
              interrupting on-going write operations, --add-journal only
              works for arrays in the Read-Only state.

       --failfast
              Subsequent devices that are added or re-added will have the
              'failfast' flag set. This is only valid for RAID1 and RAID10
              and means that the 'md' driver will avoid long timeouts on
              error handling where possible.

       --nofailfast
              Subsequent devices that are re-added will be re-added without
              the 'failfast' flag set.

       Each of these options requires that the first device listed is the
       array to be acted upon, and the remainder are component devices to be
       added, removed, marked as faulty, etc. Several different operations
       can be specified for different devices, e.g.
              mdadm /dev/md0 --add /dev/sda1 --fail /dev/sdb1 --remove /dev/sdb1
       Each operation applies to all devices listed until the next
       operation.

       If an array is using a write-intent bitmap, then devices which have
       been removed can be re-added in a way that avoids a full
       reconstruction but instead just updates the blocks that have changed
       since the device was removed. For arrays with persistent metadata
       (superblocks) this is done automatically. For arrays created with
       --build mdadm needs to be told with --re-add that the device was
       removed recently.

       Devices can only be removed from an array if they are not in active
       use, i.e. they must be spares or failed devices. To remove an active
       device, it must first be marked as faulty.

       For Misc mode:

       -Q, --query
              Examine a device to see (1) if it is an md device and (2) if
              it is a component of an md array. Information about what is
              discovered is presented.

       -D, --detail
              Print details of one or more md devices.

       --detail-platform
              Print details of the platform's RAID capabilities (firmware /
              hardware topology) for a given metadata format. If used
              without argument, mdadm will scan all controllers looking for
              their capabilities. Otherwise, mdadm will only look at the
              controller specified by the argument in form of an absolute
              filepath or a link, e.g. /sys/devices/pci0000:00/0000:00:1f.2.

       -Y, --export
              When used with --detail, --detail-platform, --examine, or
              --incremental output will be formatted as key=value pairs for
              easy import into the environment.

              With --incremental the value MD_STARTED indicates whether an
              array was started (yes) or not, which may include a reason
              (unsafe, nothing, no). Also the value MD_FOREIGN indicates if
              the array is expected on this host (no), or seems to be from
              elsewhere (yes).
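
              For example, a shell script could import an array's details as
              environment variables (array name is illustrative):

                     eval "$(mdadm --detail --export /dev/md0)"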

       -E, --examine
              Print contents of the metadata stored on the named device(s).
              Note the contrast between --examine and --detail. --examine
              applies to devices which are components of an array, while
              --detail applies to a whole array which is currently active.

       --sparc2.2
              If an array was created on a SPARC machine with a 2.2 Linux
              kernel patched with RAID support, the superblock will have
              been created incorrectly, or at least incompatibly with 2.4
              and later kernels. Using the --sparc2.2 flag with --examine
              will fix the superblock before displaying it. If this appears
              to do the right thing, then the array can be successfully
              assembled using --assemble --update=sparc2.2.

       -X, --examine-bitmap
              Report information about a bitmap file. The argument is either
              an external bitmap file or an array component in case of an
              internal bitmap. Note that running this on an array device
              (e.g. /dev/md0) does not report the bitmap for that array.

       --examine-badblocks
              List the bad-blocks recorded for the device, if a bad-blocks
              list has been configured. Currently only 1.x and IMSM metadata
              support bad-blocks lists.

       --dump=directory

       --restore=directory
              Save metadata from listed devices, or restore metadata to
              listed devices.

       -R, --run
              start a partially assembled array. If --assemble did not find
              enough devices to fully start the array, it might leave it
              partially assembled. If you wish, you can then use --run to
              start the array in degraded mode.

       -S, --stop
              deactivate array, releasing all resources.

       -o, --readonly
              mark array as readonly.

       -w, --readwrite
              mark array as readwrite.

       --zero-superblock
              If the device contains a valid md superblock, the block is
              overwritten with zeros. With --force the block where the
              superblock would be is overwritten even if it doesn't appear
              to be valid.

              Note: Be careful when calling --zero-superblock with clustered
              RAID. Make sure the array is not used or assembled on another
              cluster node before executing it.
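
              For example, to wipe the superblock from a former component so
              it is no longer recognised as part of an array (device name is
              illustrative):

                     mdadm --zero-superblock /dev/sdb1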

       --kill-subarray=
              If the device is a container and the argument to
              --kill-subarray specifies an inactive subarray in the
              container, then the subarray is deleted. Deleting all
              subarrays will leave an 'empty-container' or spare superblock
              on the drives. See --zero-superblock for completely removing a
              superblock. Note that some formats depend on the subarray
              index for generating a UUID; this command will fail if it
              would change the UUID of an active subarray.

       --update-subarray=
              If the device is a container and the argument to
              --update-subarray specifies a subarray in the container, then
              attempt to update the given superblock field in the subarray.
              See below in MISC MODE for details.
1336 -t, --test
1337 When used with --detail, the exit status of mdadm is set to re‐
1338 flect the status of the device. See below in MISC MODE for de‐
1339 tails.
1340
1341
1342 -W, --wait
1343 For each md device given, wait for any resync, recovery, or re‐
1344 shape activity to finish before returning. mdadm will return
1345 with success if it actually waited for every device listed, oth‐
1346 erwise it will return failure.
1347
1348
1349 --wait-clean
1350 For each md device given, or each device in /proc/mdstat if
1351 --scan is given, arrange for the array to be marked clean as
1352 soon as possible. mdadm will return with success if the array
1353 uses external metadata and we successfully waited. For native
1354 arrays this returns immediately as the kernel handles dirty-
1355 clean transitions at shutdown. No action is taken if safe-mode
1356 handling is disabled.
1357
1358
1359 --action=
1360 Set the "sync_action" for all md devices given to one of idle,
1361 frozen, check, repair. Setting to idle will abort any currently
1362 running action though some actions will automatically restart.
1363 Setting to frozen will abort any current action and ensure no
1364 other action starts automatically.
1365
1366              Details of check and repair can be found in md(4) under SCRUB‐
1367 BING AND MISMATCHES.
1368
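              For example, to start a scrub of an array and wait for it to
              finish (a sketch combining --action with --wait):

                 mdadm --action=check /dev/md0
                 mdadm --wait /dev/md0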
1369
1370    For Incremental Assembly mode:
1371 --rebuild-map, -r
1372 Rebuild the map file (/run/mdadm/map) that mdadm uses to help
1373 track which arrays are currently being assembled.
1374
1375
1376 --run, -R
1377 Run any array assembled as soon as a minimal number of devices
1378 are available, rather than waiting until all expected devices
1379 are present.
1380
1381
1382 --scan, -s
1383              Only meaningful with -R, this will scan the map file for arrays
1384 that are being incrementally assembled and will try to start any
1385 that are not already started. If any such array is listed in
1386 mdadm.conf as requiring an external bitmap, that bitmap will be
1387 attached first.
1388
1389
1390 --fail, -f
1391 This allows the hot-plug system to remove devices that have
1392 fully disappeared from the kernel. It will first fail and then
1393 remove the device from any array it belongs to. The device name
1394 given should be a kernel device name such as "sda", not a name
1395 in /dev.
1396
1397
1398 --path=
1399 Only used with --fail. The 'path' given will be recorded so
1400 that if a new device appears at the same location it can be au‐
1401 tomatically added to the same array. This allows the failed de‐
1402 vice to be automatically replaced by a new device without meta‐
1403              data if it appears at the specified path.  This option is normally
1404 only set by a udev script.
1405
1406
1407    For Monitor mode:
1408 -m, --mail
1409 Give a mail address to send alerts to.
1410
1411
1412 -p, --program, --alert
1413 Give a program to be run whenever an event is detected.
1414
1415
1416 -y, --syslog
1417 Cause all events to be reported through 'syslog'. The messages
1418              have a facility of 'daemon' and varying priorities.
1419
1420
1421 -d, --delay
1422 Give a delay in seconds. mdadm polls the md arrays and then
1423 waits this many seconds before polling again. The default is 60
1424 seconds. Since 2.6.16, there is no need to reduce this as the
1425 kernel alerts mdadm immediately when there is any change.
1426
1427
1428 -r, --increment
1429 Give a percentage increment. mdadm will generate RebuildNN
1430 events with the given percentage increment.
1431
1432
1433 -f, --daemonise
1434 Tell mdadm to run as a background daemon if it decides to moni‐
1435 tor anything. This causes it to fork and run in the child, and
1436 to disconnect from the terminal. The process id of the child is
1437 written to stdout. This is useful with --scan which will only
1438 continue monitoring if a mail address or alert program is found
1439 in the config file.
1440
1441
1442 -i, --pid-file
1443 When mdadm is running in daemon mode, write the pid of the dae‐
1444 mon process to the specified file, instead of printing it on
1445 standard output.
1446
1447
1448 -1, --oneshot
1449 Check arrays only once. This will generate NewArray events and
1450 more significantly DegradedArray and SparesMissing events. Run‐
1451 ning
1452 mdadm --monitor --scan -1
1453 from a cron script will ensure regular notification of any de‐
1454 graded arrays.
1455
1456
1457 -t, --test
1458 Generate a TestMessage alert for every array found at startup.
1459 This alert gets mailed and passed to the alert program. This
1460              can be used for testing that alert messages do get through suc‐
1461 cessfully.
1462
1463
1464 --no-sharing
1465 This inhibits the functionality for moving spares between ar‐
1466 rays. Only one monitoring process started with --scan but with‐
1467 out this flag is allowed, otherwise the two could interfere with
1468 each other.
1469
1470
1471 ASSEMBLE MODE
1472 Usage: mdadm --assemble md-device options-and-component-devices...
1473
1474 Usage: mdadm --assemble --scan md-devices-and-options...
1475
1476 Usage: mdadm --assemble --scan options...
1477
1478
1479 This usage assembles one or more RAID arrays from pre-existing compo‐
1480 nents. For each array, mdadm needs to know the md device, the identity
1481 of the array, and a number of component-devices. These can be found in
1482 a number of ways.
1483
1484 In the first usage example (without the --scan) the first device given
1485 is the md device. In the second usage example, all devices listed are
1486 treated as md devices and assembly is attempted. In the third (where
1487 no devices are listed) all md devices that are listed in the configura‐
1488 tion file are assembled. If no arrays are described by the configura‐
1489 tion file, then any arrays that can be found on unused devices will be
1490 assembled.
1491
1492 If precisely one device is listed, but --scan is not given, then mdadm
1493 acts as though --scan was given and identity information is extracted
1494 from the configuration file.
1495
1496    The identity can be given with the --uuid option, the --name option, or
1497    the --super-minor option; otherwise it will be taken from the md-device
1498    record in the config file, or from the superblock of the first
1499    component-device listed on the command line.
1500
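    For example, to assemble one array from explicitly listed components,
    selecting it by identity (the UUID shown is a placeholder):

       mdadm --assemble /dev/md0 --uuid=8f368552:35e6f763:8d5bbfbb:b9d07777 \
             /dev/sdb1 /dev/sdc1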
1501 Devices can be given on the --assemble command line or in the config
1502 file. Only devices which have an md superblock which contains the
1503 right identity will be considered for any array.
1504
1505 The config file is only used if explicitly named with --config or re‐
1506    quested with (a possibly implicit) --scan.  In the latter case,
1507 /etc/mdadm.conf or /etc/mdadm/mdadm.conf is used.
1508
1509 If --scan is not given, then the config file will only be used to find
1510 the identity of md arrays.
1511
1512 Normally the array will be started after it is assembled. However if
1513 --scan is not given and not all expected drives were listed, then the
1514 array is not started (to guard against usage errors). To insist that
1515 the array be started in this case (as may work for RAID1, 4, 5, 6, or
1516 10), give the --run flag.
1517
1518 If udev is active, mdadm does not create any entries in /dev but leaves
1519 that to udev. It does record information in /run/mdadm/map which will
1520 allow udev to choose the correct name.
1521
1522 If mdadm detects that udev is not configured, it will create the de‐
1523 vices in /dev itself.
1524
1525 In Linux kernels prior to version 2.6.28 there were two distinctly dif‐
1526 ferent types of md devices that could be created: one that could be
1527 partitioned using standard partitioning tools and one that could not.
1528    Since 2.6.28 that distinction is no longer relevant as both types of
1529    device can be partitioned.  mdadm will normally create the type that
1530 originally could not be partitioned as it has a well defined major num‐
1531 ber (9).
1532
1533    Prior to 2.6.28, it was important that mdadm chose the correct type of
1534 array device to use. This can be controlled with the --auto option.
1535 In particular, a value of "mdp" or "part" or "p" tells mdadm to use a
1536 partitionable device rather than the default.
1537
1538 In the no-udev case, the value given to --auto can be suffixed by a
1539 number. This tells mdadm to create that number of partition devices
1540 rather than the default of 4.
1541
1542 The value given to --auto can also be given in the configuration file
1543 as a word starting auto= on the ARRAY line for the relevant array.
1544
1545
1546 Auto Assembly
1547 When --assemble is used with --scan and no devices are listed, mdadm
1548 will first attempt to assemble all the arrays listed in the config
1549 file.
1550
1551 If no arrays are listed in the config (other than those marked <ig‐
1552 nore>) it will look through the available devices for possible arrays
1553 and will try to assemble anything that it finds. Arrays which are
1554 tagged as belonging to the given homehost will be assembled and started
1555 normally. Arrays which do not obviously belong to this host are given
1556 names that are expected not to conflict with anything local, and are
1557 started "read-auto" so that nothing is written to any device until the
1558    array is written to, i.e. automatic resync etc. is delayed.
1559
1560 If mdadm finds a consistent set of devices that look like they should
1561 comprise an array, and if the superblock is tagged as belonging to the
1562 given home host, it will automatically choose a device name and try to
1563 assemble the array. If the array uses version-0.90 metadata, then the
1564 minor number as recorded in the superblock is used to create a name in
1565 /dev/md/ so for example /dev/md/3. If the array uses version-1 meta‐
1566 data, then the name from the superblock is used to similarly create a
1567 name in /dev/md/ (the name will have any 'host' prefix stripped first).
1568
1569 This behaviour can be modified by the AUTO line in the mdadm.conf con‐
1570 figuration file. This line can indicate that specific metadata type
1571 should, or should not, be automatically assembled. If an array is
1572 found which is not listed in mdadm.conf and has a metadata format that
1573 is denied by the AUTO line, then it will not be assembled. The AUTO
1574 line can also request that all arrays identified as being for this
1575 homehost should be assembled regardless of their metadata type. See
1576 mdadm.conf(5) for further details.
1577
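    For illustration, an AUTO line such as the following (see mdadm.conf(5)
    for the exact syntax) auto-assembles version-1.x arrays and anything
    tagged for this homehost, but nothing else:

       AUTO +1.x homehost -all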
1578 Note: Auto assembly cannot be used for assembling and activating some
1579 arrays which are undergoing reshape. In particular as the backup-file
1580 cannot be given, any reshape which requires a backup-file to continue
1581 cannot be started by auto assembly. An array which is growing to more
1582 devices and has passed the critical section can be assembled using
1583 auto-assembly.
1584
1585
1586 BUILD MODE
1587 Usage: mdadm --build md-device --chunk=X --level=Y --raid-devices=Z de‐
1588 vices
1589
1590
1591 This usage is similar to --create. The difference is that it creates
1592 an array without a superblock. With these arrays there is no differ‐
1593 ence between initially creating the array and subsequently assembling
1594 the array, except that hopefully there is useful data there in the sec‐
1595 ond case.
1596
1597    The level may be raid0, linear, raid1, raid10, multipath, or faulty, or
1598 one of their synonyms. All devices must be listed and the array will
1599 be started once complete. It will often be appropriate to use --as‐
1600 sume-clean with levels raid1 or raid10.
1601
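    A minimal sketch of building such an array (device names are
    placeholders):

       mdadm --build /dev/md0 --level=raid1 --raid-devices=2 \
             --assume-clean /dev/sdb /dev/sdc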
1602
1603 CREATE MODE
1604 Usage: mdadm --create md-device --chunk=X --level=Y
1605 --raid-devices=Z devices
1606
1607
1608 This usage will initialise a new md array, associate some devices with
1609 it, and activate the array.
1610
1611 The named device will normally not exist when mdadm --create is run,
1612 but will be created by udev once the array becomes active.
1613
1614 As devices are added, they are checked to see if they contain RAID su‐
1615 perblocks or filesystems. They are also checked to see if the variance
1616 in device size exceeds 1%.
1617
1618 If any discrepancy is found, the array will not automatically be run,
1619 though the presence of a --run can override this caution.
1620
1621 To create a "degraded" array in which some devices are missing, simply
1622 give the word "missing" in place of a device name. This will cause
1623 mdadm to leave the corresponding slot in the array empty. For a RAID4
1624 or RAID5 array at most one slot can be "missing"; for a RAID6 array at
1625 most two slots. For a RAID1 array, only one real device needs to be
1626 given. All of the others can be "missing".
1627
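    For example, to create a mirror with one half absent for now (device
    names are placeholders):

       mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing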
1628 When creating a RAID5 array, mdadm will automatically create a degraded
1629 array with an extra spare drive. This is because building the spare
1630 into a degraded array is in general faster than resyncing the parity on
1631 a non-degraded, but not clean, array. This feature can be overridden
1632 with the --force option.
1633
1634 When creating an array with version-1 metadata a name for the array is
1635 required. If this is not given with the --name option, mdadm will
1636 choose a name based on the last component of the name of the device be‐
1637 ing created. So if /dev/md3 is being created, then the name 3 will be
1638 chosen. If /dev/md/home is being created, then the name home will be
1639 used.
1640
1641 When creating a partition based array, using mdadm with version-1.x
1642    metadata, the partition type should be set to 0xDA (non fs-data).  This
1643    type selection allows for greater precision, since using any other type
1644    [RAID auto-detect (0xFD) or a GNU/Linux partition (0x83)] might create
1645    problems in the event of array recovery through a live cdrom.
1646
1647 A new array will normally get a randomly assigned 128bit UUID which is
1648 very likely to be unique. If you have a specific need, you can choose
1649 a UUID for the array by giving the --uuid= option. Be warned that cre‐
1650 ating two arrays with the same UUID is a recipe for disaster. Also,
1651 using --uuid= when creating a v0.90 array will silently override any
1652 --homehost= setting.
1653
1654 If the array type supports a write-intent bitmap, and if the devices in
1655    the array exceed 100G in size, an internal write-intent bitmap will au‐
1656 tomatically be added unless some other option is explicitly requested
1657 with the --bitmap option or a different consistency policy is selected
1658 with the --consistency-policy option. In any case space for a bitmap
1659 will be reserved so that one can be added later with --grow --bit‐
1660 map=internal.
1661
1662 If the metadata type supports it (currently only 1.x and IMSM meta‐
1663 data), space will be allocated to store a bad block list. This allows
1664 a modest number of bad blocks to be recorded, allowing the drive to re‐
1665 main in service while only partially functional.
1666
1667 When creating an array within a CONTAINER mdadm can be given either the
1668 list of devices to use, or simply the name of the container. The for‐
1669 mer case gives control over which devices in the container will be used
1670 for the array. The latter case allows mdadm to automatically choose
1671 which devices to use based on how much spare space is available.
1672
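    As a sketch, using IMSM as an example container format and placeholder
    device names, a container and an array within it might be created as:

       mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 \
             /dev/sdb /dev/sdc
       mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0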
1673 The General Management options that are valid with --create are:
1674
1675 --run insist on running the array even if some devices look like they
1676 might be in use.
1677
1678
1679 --readonly
1680 start the array in readonly mode.
1681
1682
1683 MANAGE MODE
1684 Usage: mdadm device options... devices...
1685
1686 This usage will allow individual devices in an array to be failed, re‐
1687    moved or added.  It is possible to perform multiple operations with one
1688 command. For example:
1689 mdadm /dev/md0 -f /dev/hda1 -r /dev/hda1 -a /dev/hda1
1690 will firstly mark /dev/hda1 as faulty in /dev/md0 and will then remove
1691 it from the array and finally add it back in as a spare. However only
1692 one md array can be affected by a single command.
1693
1694 When a device is added to an active array, mdadm checks to see if it
1695 has metadata on it which suggests that it was recently a member of the
1696 array. If it does, it tries to "re-add" the device. If there have
1697 been no changes since the device was removed, or if the array has a
1698 write-intent bitmap which has recorded whatever changes there were,
1699 then the device will immediately become a full member of the array and
1700 those differences recorded in the bitmap will be resolved.
1701
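    For example, a device that briefly dropped out of an array with a
    write-intent bitmap can be returned with (placeholder names):

       mdadm /dev/md0 --re-add /dev/sdb1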
1702
1703 MISC MODE
1704 Usage: mdadm options ... devices ...
1705
1706 MISC mode includes a number of distinct operations that operate on dis‐
1707 tinct devices. The operations are:
1708
1709 --query
1710 The device is examined to see if it is (1) an active md array,
1711 or (2) a component of an md array. The information discovered
1712 is reported.
1713
1714
1715 --detail
1716 The device should be an active md device. mdadm will display a
1717 detailed description of the array. --brief or --scan will cause
1718 the output to be less detailed and the format to be suitable for
1719 inclusion in mdadm.conf. The exit status of mdadm will normally
1720 be 0 unless mdadm failed to get useful information about the de‐
1721 vice(s); however, if the --test option is given, then the exit
1722 status will be:
1723
1724 0 The array is functioning normally.
1725
1726 1 The array has at least one failed device.
1727
1728 2 The array has multiple failed devices such that it is un‐
1729 usable.
1730
1731 4 There was an error while trying to get information about
1732 the device.
1733
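             A sketch of using these exit codes from a shell script:

                mdadm --detail --test /dev/md0 > /dev/null
                case $? in
                    0) echo "md0 healthy" ;;
                    1) echo "md0 degraded" ;;
                    *) echo "md0 unusable, or error getting information" ;;
                esac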
1734
1735 --detail-platform
1736 Print detail of the platform's RAID capabilities (firmware /
1737 hardware topology). If the metadata is specified with -e or
1738 --metadata= then the return status will be:
1739
1740 0 metadata successfully enumerated its platform components
1741 on this system
1742
1743 1 metadata is platform independent
1744
1745 2 metadata failed to find its platform components on this
1746 system
1747
1748
1749 --update-subarray=
1750 If the device is a container and the argument to --update-subar‐
1751 ray specifies a subarray in the container, then attempt to up‐
1752 date the given superblock field in the subarray. Similar to up‐
1753 dating an array in "assemble" mode, the field to update is se‐
1754 lected by -U or --update= option. The supported options are
1755 name, ppl, no-ppl, bitmap and no-bitmap.
1756
1757              The name option updates the subarray name in the metadata; it
1758              may not affect the device node name or the device node symlink
1759              until the subarray is re-assembled.  If updating name would
1760              change the UUID of an active subarray, this operation is blocked
1761              and the command will end in an error.
1762
1763 The ppl and no-ppl options enable and disable PPL in the meta‐
1764 data. Currently supported only for IMSM subarrays.
1765
1766 The bitmap and no-bitmap options enable and disable write-intent
1767 bitmap in the metadata. Currently supported only for IMSM subar‐
1768 rays.
1769
1770
1771 --examine
1772 The device should be a component of an md array. mdadm will
1773 read the md superblock of the device and display the contents.
1774 If --brief or --scan is given, then multiple devices that are
1775 components of the one array are grouped together and reported in
1776 a single entry suitable for inclusion in mdadm.conf.
1777
1778 Having --scan without listing any devices will cause all devices
1779 listed in the config file to be examined.
1780
1781
1782 --dump=directory
1783 If the device contains RAID metadata, a file will be created in
1784 the directory and the metadata will be written to it. The file
1785 will be the same size as the device and have the metadata writ‐
1786              ten in the file at the same location that it exists in the device.
1787 However the file will be "sparse" so that only those blocks con‐
1788 taining metadata will be allocated. The total space used will be
1789 small.
1790
1791 The file name used in the directory will be the base name of the
1792              device.  Further, if any links appear in /dev/disk/by-id which
1793              point to the device, then hard links to the file will be created
1794              in the directory based on these by-id names.
1795
1796 Multiple devices can be listed and their metadata will all be
1797 stored in the one directory.
1798
1799
1800 --restore=directory
1801 This is the reverse of --dump. mdadm will locate a file in the
1802 directory that has a name appropriate for the given device and
1803 will restore metadata from it. Names that match /dev/disk/by-id
1804 names are preferred, however if two of those refer to different
1805 files, mdadm will not choose between them but will abort the op‐
1806 eration.
1807
1808 If a file name is given instead of a directory then mdadm will
1809 restore from that file to a single device, always provided the
1810 size of the file matches that of the device, and the file con‐
1811 tains valid metadata.
1812
1813 --stop The devices should be active md arrays which will be deacti‐
1814 vated, as long as they are not currently in use.
1815
1816
1817 --run This will fully activate a partially assembled md array.
1818
1819
1820 --readonly
1821 This will mark an active array as read-only, providing that it
1822 is not currently being used.
1823
1824
1825 --readwrite
1826 This will change a readonly array back to being read/write.
1827
1828
1829 --scan For all operations except --examine, --scan will cause the oper‐
1830 ation to be applied to all arrays listed in /proc/mdstat. For
1831 --examine, --scan causes all devices listed in the config file
1832 to be examined.
1833
1834
1835 -b, --brief
1836 Be less verbose. This is used with --detail and --examine. Us‐
1837 ing --brief with --verbose gives an intermediate level of ver‐
1838 bosity.
1839
1840
1841 MONITOR MODE
1842 Usage: mdadm --monitor options... devices...
1843
1844
1845 This usage causes mdadm to periodically poll a number of md arrays and
1846 to report on any events noticed. mdadm will never exit once it decides
1847 that there are arrays to be checked, so it should normally be run in
1848 the background.
1849
1850 As well as reporting events, mdadm may move a spare drive from one ar‐
1851 ray to another if they are in the same spare-group or domain and if the
1852 destination array has a failed drive but no spares.
1853
1854 If any devices are listed on the command line, mdadm will only monitor
1855 those devices. Otherwise all arrays listed in the configuration file
1856 will be monitored. Further, if --scan is given, then any other md de‐
1857 vices that appear in /proc/mdstat will also be monitored.
1858
1859 The result of monitoring the arrays is the generation of events. These
1860 events are passed to a separate program (if specified) and may be
1861 mailed to a given E-mail address.
1862
1863 When passing events to a program, the program is run once for each
1864 event, and is given 2 or 3 command-line arguments: the first is the
1865 name of the event (see below), the second is the name of the md device
1866 which is affected, and the third is the name of a related device if
1867 relevant (such as a component device that has failed).
1868
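    A sketch of a minimal alert program following this convention (the
    logging destination is illustrative):

       #!/bin/sh
       # mdadm invokes this as: program EVENT MD_DEVICE [COMPONENT_DEVICE]
       event="$1" array="$2" component="$3"
       logger -t mdadm-alert "$event on $array ${component:+($component)}"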
1869 If --scan is given, then a program or an E-mail address must be speci‐
1870 fied on the command line or in the config file. If neither are avail‐
1871 able, then mdadm will not monitor anything. Without --scan, mdadm will
1872 continue monitoring as long as something was found to monitor. If no
1873 program or email is given, then each event is reported to stdout.
1874
1875 The different events are:
1876
1877
1878 DeviceDisappeared
1879 An md array which previously was configured appears to no
1880 longer be configured. (syslog priority: Critical)
1881
1882 If mdadm was told to monitor an array which is RAID0 or Lin‐
1883 ear, then it will report DeviceDisappeared with the extra
1884 information Wrong-Level. This is because RAID0 and Linear
1885 do not support the device-failed, hot-spare and resync oper‐
1886 ations which are monitored.
1887
1888
1889 RebuildStarted
1890 An md array started reconstruction (e.g. recovery, resync,
1891 reshape, check, repair). (syslog priority: Warning)
1892
1893
1894 RebuildNN
1895              Where NN is a two-digit number (e.g. 05, 48).  This indicates
1896              that the rebuild has passed that many percent of the total.  The
1897              events are generated at a fixed increment, starting from 0.  The
1898              increment size may be specified with a command-line option (the
1899              default is 20).  (syslog priority: Warning)
1900
1901
1902 RebuildFinished
1903 An md array that was rebuilding, isn't any more, either be‐
1904 cause it finished normally or was aborted. (syslog priority:
1905 Warning)
1906
1907
1908 Fail An active component device of an array has been marked as
1909 faulty. (syslog priority: Critical)
1910
1911
1912 FailSpare
1913 A spare component device which was being rebuilt to replace
1914 a faulty device has failed. (syslog priority: Critical)
1915
1916
1917 SpareActive
1918 A spare component device which was being rebuilt to replace
1919 a faulty device has been successfully rebuilt and has been
1920 made active. (syslog priority: Info)
1921
1922
1923 NewArray
1924 A new md array has been detected in the /proc/mdstat file.
1925 (syslog priority: Info)
1926
1927
1928 DegradedArray
1929 A newly noticed array appears to be degraded. This message
1930 is not generated when mdadm notices a drive failure which
1931 causes degradation, but only when mdadm notices that an ar‐
1932 ray is degraded when it first sees the array. (syslog pri‐
1933 ority: Critical)
1934
1935
1936 MoveSpare
1937 A spare drive has been moved from one array in a spare-group
1938 or domain to another to allow a failed drive to be replaced.
1939 (syslog priority: Info)
1940
1941
1942 SparesMissing
1943 If mdadm has been told, via the config file, that an array
1944 should have a certain number of spare devices, and mdadm de‐
1945 tects that it has fewer than this number when it first sees
1946 the array, it will report a SparesMissing message. (syslog
1947 priority: Warning)
1948
1949
1950 TestMessage
1951 An array was found at startup, and the --test flag was
1952 given. (syslog priority: Info)
1953
1954 Only Fail, FailSpare, DegradedArray, SparesMissing and TestMessage
1955 cause Email to be sent. All events cause the program to be run. The
1956 program is run with two or three arguments: the event name, the array
1957 device and possibly a second device.
1958
1959 Each event has an associated array device (e.g. /dev/md1) and possibly
1960 a second device. For Fail, FailSpare, and SpareActive the second de‐
1961 vice is the relevant component device. For MoveSpare the second device
1962 is the array that the spare was moved from.
1963
1964 For mdadm to move spares from one array to another, the different ar‐
1965 rays need to be labeled with the same spare-group or the spares must be
1966 allowed to migrate through matching POLICY domains in the configuration
1967 file. The spare-group name can be any string; it is only necessary
1968 that different spare groups use different names.
1969
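    A sketch of two ARRAY lines sharing a spare-group in mdadm.conf (the
    UUIDs are placeholders):

       ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd spare-group=group1
       ARRAY /dev/md1 UUID=eeeeeeee:ffffffff:00000000:11111111 spare-group=group1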
1970 When mdadm detects that an array in a spare group has fewer active de‐
1971 vices than necessary for the complete array, and has no spare devices,
1972 it will look for another array in the same spare group that has a full
1973    complement of working drives and a spare.  It will then attempt to re‐
1974    move the spare from the second array and add it to the first.  If the
1975 removal succeeds but the adding fails, then it is added back to the
1976 original array.
1977
1978 If the spare group for a degraded array is not defined, mdadm will look
1979 at the rules of spare migration specified by POLICY lines in mdadm.conf
1980 and then follow similar steps as above if a matching spare is found.
1981
1982
1983 GROW MODE
1984 The GROW mode is used for changing the size or shape of an active ar‐
1985 ray. For this to work, the kernel must support the necessary change.
1986 Various types of growth are being added during 2.6 development.
1987
1988 Currently the supported changes include
1989
1990 • change the "size" attribute for RAID1, RAID4, RAID5 and RAID6.
1991
1992 • increase or decrease the "raid-devices" attribute of RAID0, RAID1,
1993 RAID4, RAID5, and RAID6.
1994
1995 • change the chunk-size and layout of RAID0, RAID4, RAID5, RAID6 and
1996 RAID10.
1997
1998 • convert between RAID1 and RAID5, between RAID5 and RAID6, between
1999 RAID0, RAID4, and RAID5, and between RAID0 and RAID10 (in the
2000 near-2 mode).
2001
2002 • add a write-intent bitmap to any array which supports these bit‐
2003 maps, or remove a write-intent bitmap from such an array.
2004
2005 • change the array's consistency policy.
2006
2007 Using GROW on containers is currently supported only for Intel's IMSM
2008 container format. The number of devices in a container can be in‐
2009 creased - which affects all arrays in the container - or an array in a
2010 container can be converted between levels where those levels are sup‐
2011    ported by the container, and the conversion is one of those listed
2012 above.
2013
2014
2015 Notes:
2016
2017    •   Intel's native checkpointing doesn't use the --backup-file option
2018        and is transparent to the assembly feature.
2019
2020 • Roaming between Windows(R) and Linux systems for IMSM metadata is
2021        not supported during the grow process.
2022
2023 • When growing a raid0 device, the new component disk size (or exter‐
2024 nal backup size) should be larger than LCM(old, new) * chunk-size *
2025 2, where LCM() is the least common multiple of the old and new
2026 count of component disks, and "* 2" comes from the fact that mdadm
2027 refuses to use more than half of a spare device for backup space.
2028
2029
2030 SIZE CHANGES
2031 Normally when an array is built the "size" is taken from the smallest
2032    of the drives.  If all the small drives in an array are, one at a
2033 time, removed and replaced with larger drives, then you could have an
2034 array of large drives with only a small amount used. In this situa‐
2035 tion, changing the "size" with "GROW" mode will allow the extra space
2036 to start being used. If the size is increased in this way, a "resync"
2037 process will start to make sure the new parts of the array are synchro‐
2038 nised.
2039
2040 Note that when an array changes size, any filesystem that may be stored
2041 in the array will not automatically grow or shrink to use or vacate the
2042 space. The filesystem will need to be explicitly told to use the extra
2043 space after growing, or to reduce its size prior to shrinking the ar‐
2044 ray.
2045
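    For example, after replacing all members with larger drives, the array
    and then the filesystem can be grown (resize2fs is shown as an example
    for ext filesystems; use the tool appropriate to your filesystem):

       mdadm --grow /dev/md0 --size=max
       mdadm --wait /dev/md0
       resize2fs /dev/md0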
2046 Also the size of an array cannot be changed while it has an active bit‐
2047 map. If an array has a bitmap, it must be removed before the size can
2048 be changed. Once the change is complete a new bitmap can be created.
2049
2050
2051 Note: --grow --size is not yet supported for external file bitmap.
2052
2053
2054 RAID-DEVICES CHANGES
2055 A RAID1 array can work with any number of devices from 1 upwards
2056    (though 1 is not very useful).  There may be times when you want to
2057 increase or decrease the number of active devices. Note that this is
2058 different to hot-add or hot-remove which changes the number of inactive
2059 devices.
2060
2061 When reducing the number of devices in a RAID1 array, the slots which
2062 are to be removed from the array must already be vacant. That is, the
2063 devices which were in those slots must be failed and removed.
2064
2065 When the number of devices is increased, any hot spares that are
2066 present will be activated immediately.
2067
2068 Changing the number of active devices in a RAID5 or RAID6 is much more
2069 effort. Every block in the array will need to be read and written back
2070 to a new location. From 2.6.17, the Linux Kernel is able to increase
2071 the number of devices in a RAID5 safely, including restarting an inter‐
2072 rupted "reshape". From 2.6.31, the Linux Kernel is able to increase or
2073 decrease the number of devices in a RAID5 or RAID6.
2074
2075    From 2.6.35, the Linux Kernel is able to convert a RAID0 into a RAID4
2076 or RAID5. mdadm uses this functionality and the ability to add devices
2077 to a RAID4 to allow devices to be added to a RAID0. When requested to
2078 do this, mdadm will convert the RAID0 to a RAID4, add the necessary
2079 disks and make the reshape happen, and then convert the RAID4 back to
2080 RAID0.
2081
2082 When decreasing the number of devices, the size of the array will also
2083 decrease. If there was data in the array, it could get destroyed and
2084 this is not reversible, so you should firstly shrink the filesystem on
2085 the array to fit within the new size. To help prevent accidents, mdadm
2086 requires that the size of the array be decreased first with mdadm
2087 --grow --array-size. This is a reversible change which simply makes
2088 the end of the array inaccessible. The integrity of any data can then
2089 be checked before the non-reversible reduction in the number of devices
2090    is requested.
2091
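    A sketch of this shrinking sequence for an ext filesystem (sizes and
    names are placeholders; the backup file must not be on the array):

       e2fsck -f /dev/md0
       resize2fs /dev/md0 400G
       mdadm --grow /dev/md0 --array-size=400G
       mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-backup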
2092 When relocating the first few stripes on a RAID5 or RAID6, it is not
2093 possible to keep the data on disk completely consistent and crash-
2094 proof. To provide the required safety, mdadm disables writes to the
2095 array while this "critical section" is reshaped, and takes a backup of
2096    the data that is in that section.  For grows, this backup may be stored
2097    in any spare devices that the array has; however, it can also be stored
2098    in a separate file specified with the --backup-file option, which is
2099    required for shrinks, RAID level changes and layout changes.  If this
2100    option is used, and the system does crash during the
2101 critical period, the same file must be passed to --assemble to restore
2102 the backup and reassemble the array. When shrinking rather than grow‐
2103 ing the array, the reshape is done from the end towards the beginning,
2104 so the "critical section" is at the end of the reshape.
2105
2106
2107 LEVEL CHANGES
2108 Changing the RAID level of any array happens instantaneously. However
2109 in the RAID5 to RAID6 case this requires a non-standard layout of the
2110 RAID6 data, and in the RAID6 to RAID5 case that non-standard layout is
2111 required before the change can be accomplished. So while the level
2112 change is instant, the accompanying layout change can take quite a long
2113 time. A --backup-file is required. If the array is not simultaneously
2114 being grown or shrunk, so that the array size will remain the same -
2115 for example, reshaping a 3-drive RAID5 into a 4-drive RAID6 - the
2116    backup file will be used not just for a "critical section" but through‐
2117 out the reshape operation, as described below under LAYOUT CHANGES.
2118
2119
2120 CHUNK-SIZE AND LAYOUT CHANGES
2121 Changing the chunk-size or layout without also changing the number of
2122    devices at the same time will involve re-writing all blocks in-place.
2123 To ensure against data loss in the case of a crash, a --backup-file
2124 must be provided for these changes. Small sections of the array will
2125 be copied to the backup file while they are being rearranged. This
2126 means that all the data is copied twice, once to the backup and once to
2127 the new layout on the array, so this type of reshape will go very
2128 slowly.
2129
2130 If the reshape is interrupted for any reason, this backup file must be
2131 made available to mdadm --assemble so the array can be reassembled.
2132 Consequently the file cannot be stored on the device being reshaped.
2133
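    For example (placeholder names; the backup file lives on a separate
    device):

       mdadm --grow /dev/md0 --chunk=128 --backup-file=/root/md0-chunk-backup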
2134
2135
2136 BITMAP CHANGES
2137 A write-intent bitmap can be added to, or removed from, an active ar‐
2138 ray. Either internal bitmaps, or bitmaps stored in a separate file,
2139 can be added. Note that if you add a bitmap stored in a file which is
2140 in a filesystem that is on the RAID array being affected, the system
2141 will deadlock. The bitmap must be on a separate filesystem.
2142
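    For example, to add an internal bitmap to an active array and later
    remove it:

       mdadm --grow /dev/md0 --bitmap=internal
       mdadm --grow /dev/md0 --bitmap=none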
2143
2144 CONSISTENCY POLICY CHANGES
2145 The consistency policy of an active array can be changed by using the
2146 --consistency-policy option in Grow mode. Currently this works only for
2147    the ppl and resync policies and allows enabling or disabling the RAID5
2148 Partial Parity Log (PPL).
2149
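    For example, to enable PPL on a RAID5 array that supports it, and
    later switch back:

       mdadm --grow /dev/md0 --consistency-policy=ppl
       mdadm --grow /dev/md0 --consistency-policy=resync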
2150
2151 INCREMENTAL MODE
2152 Usage: mdadm --incremental [--run] [--quiet] component-device [op‐
2153 tional-aliases-for-device]
2154
2155 Usage: mdadm --incremental --fail component-device
2156
2157 Usage: mdadm --incremental --rebuild-map
2158
2159 Usage: mdadm --incremental --run --scan
2160
2161
2162 This mode is designed to be used in conjunction with a device discovery
2163 system. As devices are found in a system, they can be passed to mdadm
2164 --incremental to be conditionally added to an appropriate array.
2165
2166 Conversely, it can also be used with the --fail flag to do just the op‐
2167 posite and find whatever array a particular device is part of and re‐
2168 move the device from that array.
2169
2170 If the device passed is a CONTAINER device created by a previous call
2171 to mdadm, then rather than trying to add that device to an array, all
2172 the arrays described by the metadata of the container will be started.
2173
2174 mdadm performs a number of tests to determine if the device is part of
2175 an array, and which array it should be part of. If an appropriate ar‐
2176 ray is found, or can be created, mdadm adds the device to the array and
2177 conditionally starts the array.
2178
2179 Note that mdadm will normally only add devices to an array which were
2180 previously working (active or spare) parts of that array. The support
2181 for automatic inclusion of a new drive as a spare in some array re‐
2182    quires configuration through POLICY in the config file.
2183
2184    The tests that mdadm makes are as follows:
2185
2186 + Is the device permitted by mdadm.conf? That is, is it listed in
2187 a DEVICES line in that file. If DEVICES is absent then the de‐
2188           fault is to allow any device.  Similarly if DEVICES contains the
2189 special word partitions then any device is allowed. Otherwise
2190 the device name given to mdadm, or one of the aliases given, or
2191 an alias found in the filesystem, must match one of the names or
2192 patterns in a DEVICES line.
2193
2194 This is the only context where the aliases are used. They are
2195           usually provided by udev rules mentioning $env{DEVLINKS}.
2196
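           A sketch of such a udev rule (distributions ship more elaborate
           rules; the path to mdadm may differ):

              ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", RUN+="/sbin/mdadm --incremental $env{DEVNAME} $env{DEVLINKS}"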
2197
2198 + Does the device have a valid md superblock? If a specific meta‐
2199 data version is requested with --metadata or -e then only that
2200 style of metadata is accepted, otherwise mdadm finds any known
2201 version of metadata. If no md metadata is found, the device may
2202           still be added to an array as a spare if POLICY allows.
2203
2204
2205
2206 mdadm keeps a list of arrays that it has partially assembled in
2207 /run/mdadm/map. If no array exists which matches the metadata on the
2208 new device, mdadm must choose a device name and unit number. It does
2209 this based on any name given in mdadm.conf or any name information
2210 stored in the metadata. If this name suggests a unit number, that num‐
2211 ber will be used, otherwise a free unit number will be chosen. Nor‐
2212 mally mdadm will prefer to create a partitionable array, however if the
2213 CREATE line in mdadm.conf suggests that a non-partitionable array is
2214 preferred, that will be honoured.
2215
2216 If the array is not found in the config file and its metadata does not
2217 identify it as belonging to the "homehost", then mdadm will choose a
2218 name for the array which is certain not to conflict with any array
2219    which does belong to this host.  It does this by adding an underscore
2220 and a small number to the name preferred by the metadata.
2221
2222 Once an appropriate array is found or created and the device is added,
2223 mdadm must decide if the array is ready to be started. It will nor‐
2224 mally compare the number of available (non-spare) devices to the number
2225 of devices that the metadata suggests need to be active. If there are
2226 at least that many, the array will be started. This means that if any
2227 devices are missing the array will not be restarted.
2228
2229 As an alternative, --run may be passed to mdadm in which case the array
2230 will be run as soon as there are enough devices present for the data to
2231 be accessible. For a RAID1, that means one device will start the ar‐
2232 ray. For a clean RAID5, the array will be started as soon as all but
2233 one drive is present.
2234
2235 Note that neither of these approaches is really ideal. If it can be
2236 known that all device discovery has completed, then
2237 mdadm -IRs
2238 can be run which will try to start all arrays that are being incremen‐
2239 tally assembled. They are started in "read-auto" mode in which they
2240 are read-only until the first write request. This means that no meta‐
2241 data updates are made and no attempt at resync or recovery happens.
2242 Further devices that are found before the first write can still be
2243 added safely.
2244
2245
2246 ENVIRONMENT
2247 This section describes environment variables that affect how mdadm op‐
2248 erates.
2249
2250
2251 MDADM_NO_MDMON
2252 Setting this value to 1 will prevent mdadm from automatically
2253 launching mdmon. This variable is intended primarily for debug‐
2254 ging mdadm/mdmon.
2255
2256
2257 MDADM_NO_UDEV
2258 Normally, mdadm does not create any device nodes in /dev, but
2259 leaves that task to udev. If udev appears not to be configured,
2260           or if this environment variable is set to '1', then mdadm will
2261           create any devices that are needed.
2262
2263
2264 MDADM_NO_SYSTEMCTL
2265 If mdadm detects that systemd is in use it will normally request
2266 systemd to start various background tasks (particularly mdmon)
2267 rather than forking and running them in the background. This
2268 can be suppressed by setting MDADM_NO_SYSTEMCTL=1.
2269
2270
2271 IMSM_NO_PLATFORM
2272 A key value of IMSM metadata is that it allows interoperability
2273 with boot ROMs on Intel platforms, and with other major operat‐
2274 ing systems. Consequently, mdadm will only allow an IMSM array
2275           to be created or modified if it detects that it is running on an
2276 Intel platform which supports IMSM, and supports the particular
2277 configuration of IMSM that is being requested (some functional‐
2278 ity requires newer OROM support).
2279
2280 These checks can be suppressed by setting IMSM_NO_PLATFORM=1 in
2281 the environment. This can be useful for testing or for disaster
2282 recovery. You should be aware that interoperability may be com‐
2283 promised by setting this value.
2284
2285
2286 MDADM_GROW_ALLOW_OLD
2287 If an array is stopped while it is performing a reshape and that
2288 reshape was making use of a backup file, then when the array is
2289 re-assembled mdadm will sometimes complain that the backup file
2290 is too old. If this happens and you are certain it is the right
2291 backup file, you can over-ride this check by setting
2292 MDADM_GROW_ALLOW_OLD=1 in the environment.
2293
2294
2295 MDADM_CONF_AUTO
2296 Any string given in this variable is added to the start of the
2297 AUTO line in the config file, or treated as the whole AUTO line
2298 if none is given. It can be used to disable certain metadata
2299 types when mdadm is called from a boot script. For example
2300 export MDADM_CONF_AUTO='-ddf -imsm'
2301 will make sure that mdadm does not automatically assemble any
2302 DDF or IMSM arrays that are found. This can be useful on sys‐
2303 tems configured to manage such arrays with dmraid.
2304
2305
2306
2307 EXAMPLES
2308 mdadm --query /dev/name-of-device
2309 This will find out if a given device is a RAID array, or is part of
2310 one, and will provide brief information about the device.
2311
2312 mdadm --assemble --scan
2313 This will assemble and start all arrays listed in the standard config
2314 file. This command will typically go in a system startup file.
2315
2316 mdadm --stop --scan
2317 This will shut down all arrays that can be shut down (i.e. are not cur‐
2318 rently in use). This will typically go in a system shutdown script.
2319
2320 mdadm --follow --scan --delay=120
2321 If (and only if) there is an Email address or program given in the
2322 standard config file, then monitor the status of all arrays listed in
2323    that file by polling them every 2 minutes.
2324
2325 mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hd[ac]1
2326 Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
2327
2328 echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
2329 mdadm --detail --scan >> mdadm.conf
2330 This will create a prototype config file that describes currently ac‐
2331 tive arrays that are known to be made from partitions of IDE or SCSI
2332 drives. This file should be reviewed before being used as it may con‐
2333 tain unwanted detail.
2334
2335 echo 'DEVICE /dev/hd[a-z] /dev/sd*[a-z]' > mdadm.conf
2336 mdadm --examine --scan --config=mdadm.conf >> mdadm.conf
2337 This will find arrays which could be assembled from existing IDE and
2338 SCSI whole drives (not partitions), and store the information in the
2339 format of a config file. This file is very likely to contain unwanted
2340 detail, particularly the devices= entries. It should be reviewed and
2341 edited before being used as an actual config file.
2342
2343 mdadm --examine --brief --scan --config=partitions
2344 mdadm -Ebsc partitions
2345 Create a list of devices by reading /proc/partitions, scan these for
2346    RAID superblocks, and print out a brief listing of all that were found.
2347
2348 mdadm -Ac partitions -m 0 /dev/md0
2349 Scan all partitions and devices listed in /proc/partitions and assemble
2350 /dev/md0 out of all such devices with a RAID superblock with a minor
2351 number of 0.
2352
2353 mdadm --monitor --scan --daemonise > /run/mdadm/mon.pid
2354    If the config file contains a mail address or alert program, run mdadm in
2355 the background in monitor mode monitoring all md devices. Also write
2356 pid of mdadm daemon to /run/mdadm/mon.pid.
2357
2358 mdadm -Iq /dev/somedevice
2359 Try to incorporate newly discovered device into some array as appropri‐
2360 ate.
2361
2362 mdadm --incremental --rebuild-map --run --scan
2363 Rebuild the array map from any current arrays, and then start any that
2364 can be started.
2365
2366 mdadm /dev/md4 --fail detached --remove detached
2367 Any devices which are components of /dev/md4 will be marked as faulty
2368    and then removed from the array.
2369
2370 mdadm --grow /dev/md4 --level=6 --backup-file=/root/backup-md4
2371 The array /dev/md4 which is currently a RAID5 array will be converted
2372 to RAID6. There should normally already be a spare drive attached to
2373 the array as a RAID6 needs one more drive than a matching RAID5.
2374
2375 mdadm --create /dev/md/ddf --metadata=ddf --raid-disks 6 /dev/sd[a-f]
2376 Create a DDF array over 6 devices.
2377
2378 mdadm --create /dev/md/home -n3 -l5 -z 30000000 /dev/md/ddf
2379 Create a RAID5 array over any 3 devices in the given DDF set. Use only
2380 30 gigabytes of each device.
2381
2382 mdadm -A /dev/md/ddf1 /dev/sd[a-f]
2383    Assemble a pre-existing DDF array.
2384
2385 mdadm -I /dev/md/ddf1
2386    Assemble all arrays contained in the ddf set, assigning names as ap‐
2387 propriate.
2388
2389 mdadm --create --help
2390 Provide help about the Create mode.
2391
2392 mdadm --config --help
2393 Provide help about the format of the config file.
2394
2395 mdadm --help
2396 Provide general help.
2397
2398
2399 FILES
2400 /proc/mdstat
2401 If you're using the /proc filesystem, /proc/mdstat lists all active md
2402 devices with information about them. mdadm uses this to find arrays
2403 when --scan is given in Misc mode, and to monitor array reconstruction
2404    in Monitor mode.
2405
2406
2407 /etc/mdadm.conf
2408 The config file lists which devices may be scanned to see if they con‐
2409    tain an MD superblock, and gives identifying information (e.g. UUID)
2410 about known MD arrays. See mdadm.conf(5) for more details.
2411
2412
2413 /etc/mdadm.conf.d
2414 A directory containing configuration files which are read in lexical
2415 order.
2416
2417
2418 /run/mdadm/map
2419 When --incremental mode is used, this file gets a list of arrays cur‐
2420 rently being created.
2421
2422
2423 DEVICE NAMES
2424    mdadm understands two sorts of names for array devices.
2425
2426 The first is the so-called 'standard' format name, which matches the
2427 names used by the kernel and which appear in /proc/mdstat.
2428
2429 The second sort can be freely chosen, but must reside in /dev/md/.
2430 When giving a device name to mdadm to create or assemble an array, ei‐
2431    ther a full path name such as /dev/md0 or /dev/md/home can be given,
2432    or just the suffix of the second sort of name, such as home.
2433
2434 When mdadm chooses device names during auto-assembly or incremental as‐
2435 sembly, it will sometimes add a small sequence number to the end of the
2436    name to avoid conflicts between multiple arrays that have the same
2437 name. If mdadm can reasonably determine that the array really is meant
2438 for this host, either by a hostname in the metadata, or by the presence
2439 of the array in mdadm.conf, then it will leave off the suffix if possi‐
2440 ble. Also if the homehost is specified as <ignore> mdadm will only use
2441 a suffix if a different array of the same name already exists or is
2442 listed in the config file.
2443
2444 The standard names for non-partitioned arrays (the only sort of md ar‐
2445 ray available in 2.4 and earlier) are of the form
2446
2447 /dev/mdNN
2448
2449 where NN is a number. The standard names for partitionable arrays (as
2450 available from 2.6 onwards) are of the form:
2451
2452 /dev/md_dNN
2453
2454 Partition numbers should be indicated by adding "pMM" to these, thus
2455 "/dev/md/d1p2".
2456
2457 From kernel version 2.6.28 the "non-partitioned array" can actually be
2458 partitioned. So the "md_dNN" names are no longer needed, and parti‐
2459 tions such as "/dev/mdNNpXX" are possible.
2460
2461 From kernel version 2.6.29 standard names can be non-numeric following
2462 the form:
2463
2464 /dev/md_XXX
2465
2466 where XXX is any string. These names are supported by mdadm since ver‐
2467 sion 3.3 provided they are enabled in mdadm.conf.
2468
2469
2470 NOTE
2471 mdadm was previously known as mdctl.
2472
2473
2474 SEE ALSO
2475 For further information on mdadm usage, MD and the various levels of
2476 RAID, see:
2477
2478 https://raid.wiki.kernel.org/
2479
2480 (based upon Jakob Østergaard's Software-RAID.HOWTO)
2481
2482 The latest version of mdadm should always be available from
2483
2484 https://www.kernel.org/pub/linux/utils/raid/mdadm/
2485
2486 Related man pages:
2487
2488 mdmon(8), mdadm.conf(5), md(4).
2489
2490
2491
2492v4.2-rc2 MDADM(8)