MDADM(8)                    System Manager's Manual                    MDADM(8)


NAME
mdadm - manage MD devices aka Linux Software RAID

SYNOPSIS
mdadm [mode] <raiddevice> [options] <component-devices>

DESCRIPTION
RAID devices are virtual devices created from two or more real block
devices. This allows multiple devices (typically disk drives or
partitions thereof) to be combined into a single device to hold (for
example) a single filesystem. Some RAID levels include redundancy and
so can survive some degree of device failure.

Linux Software RAID devices are implemented through the md (Multiple
Devices) device driver.

Currently, Linux supports LINEAR md devices, RAID0 (striping), RAID1
(mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH, FAULTY, and
CONTAINER.

MULTIPATH is not a Software RAID mechanism, but does involve multiple
devices: each device is a path to one common physical storage device.
New installations should not use md/multipath as it is not well
supported and has no ongoing development. Use the Device Mapper based
multipath-tools instead.

FAULTY is also not true RAID, and it only involves one device. It
provides a layer over a true device that can be used to inject faults.

CONTAINER is different again. A CONTAINER is a collection of devices
that are managed as a set. This is similar to the set of devices
connected to a hardware RAID controller. The set of devices may
contain a number of different RAID arrays each utilising some (or all)
of the blocks from a number of the devices in the set. For example,
two devices in a 5-device set might form a RAID1 using the whole
devices. The remaining three might have a RAID5 over the first half of
each device, and a RAID0 over the second half.

With a CONTAINER, there is one set of metadata that describes all of
the arrays in the container. So when mdadm creates a CONTAINER device,
the device just represents the metadata. Other normal arrays (RAID1
etc.) can be created inside the container.

MODES
mdadm has several major modes of operation:

Assemble
Assemble the components of a previously created array into an active
array. Components can be explicitly given or can be searched for.
mdadm checks that the components do form a bona fide array, and can,
on request, fiddle superblock information so as to assemble a faulty
array.

Build
Build an array that doesn't have per-device metadata (superblocks).
For these sorts of arrays, mdadm cannot differentiate between initial
creation and subsequent assembly of an array. It also cannot perform
any checks that appropriate components have been requested. Because of
this, the Build mode should only be used together with a complete
understanding of what you are doing.

Create
Create a new array with per-device metadata (superblocks). Appropriate
metadata is written to each device, and then the array comprising
those devices is activated. A 'resync' process is started to make sure
that the array is consistent (e.g. both sides of a mirror contain the
same data) but the content of the device is left otherwise untouched.
The array can be used as soon as it has been created. There is no need
to wait for the initial resync to finish.
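
As a sketch of this mode (the device names are illustrative), a simple
two-device mirror could be created with:

     mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1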

Follow or Monitor
Monitor one or more md devices and act on any state changes. This is
only meaningful for RAID1, 4, 5, 6, 10 or multipath arrays, as only
these have interesting state. RAID0 or Linear never have missing,
spare, or failed drives, so there is nothing to monitor.

Grow
Grow (or shrink) an array, or otherwise reshape it in some way.
Currently supported growth options include changing the active size of
component devices and changing the number of active devices in Linear
and RAID levels 0/1/4/5/6, changing the RAID level between 0, 1, 5,
and 6, and between 0 and 10, changing the chunk size and layout for
RAID 0/4/5/6/10, as well as adding or removing a write-intent bitmap
and changing the array's consistency policy.

Incremental Assembly
Add a single device to an appropriate array. If the addition of the
device makes the array runnable, the array will be started. This
provides a convenient interface to a hot-plug system. As each device
is detected, mdadm has a chance to include it in some array as
appropriate. Optionally, when the --fail flag is passed in we will
remove the device from any active array instead of adding it.

If a CONTAINER is passed to mdadm in this mode, then any arrays within
that container will be assembled and started.

Manage
This is for doing things to specific components of an array such as
adding new spares and removing faulty devices.

Misc
This is an 'everything else' mode that supports operations on active
arrays, operations on component devices such as erasing old
superblocks, and information-gathering operations.

Auto-detect
This mode does not act on a specific device or array, but rather it
requests the Linux Kernel to activate any auto-detected arrays.

OPTIONS
Options for selecting a mode are:

-A, --assemble
Assemble a pre-existing array.

-B, --build
Build a legacy array without superblocks.

-C, --create
Create a new array.

-F, --follow, --monitor
Select Monitor mode.

-G, --grow
Change the size or shape of an active array.

-I, --incremental
Add/remove a single device to/from an appropriate array, and possibly
start the array.

--auto-detect
Request that the kernel starts any auto-detected arrays. This can only
work if md is compiled into the kernel, not if it is a module. Arrays
can be auto-detected by the kernel if all the components are in
primary MS-DOS partitions with partition type FD, and all use v0.90
metadata. In-kernel autodetect is not recommended for new
installations. Using mdadm to detect and assemble arrays, possibly in
an initrd, is substantially more flexible and should be preferred.

If a device is given before any options, or if the first option is one
of --add, --re-add, --add-spare, --fail, --remove, or --replace, then
the MANAGE mode is assumed. Anything other than these will cause the
Misc mode to be assumed.

Options that are not mode-specific are:

-h, --help
Display a general help message or, after one of the above options, a
mode-specific help message.

--help-options
Display more detailed help about command-line parsing and some
commonly used options.

-V, --version
Print version information for mdadm.

-v, --verbose
Be more verbose about what is happening. This can be used twice to be
extra-verbose. The extra verbosity currently only affects --detail
--scan and --examine --scan.

-q, --quiet
Avoid printing purely informative messages. With this, mdadm will be
silent unless there is something really important to report.

-f, --force
Be more forceful about certain operations. See the various modes for
the exact meaning of this option in different contexts.

-c, --config=
Specify the config file or directory. If not specified, the default
config file and default conf.d directory will be used. See
mdadm.conf(5) for more details.

If the config file given is partitions then nothing will be read, but
mdadm will act as though the config file contained exactly
     DEVICE partitions containers
and will read /proc/partitions to find a list of devices to scan, and
/proc/mdstat to find a list of containers to examine. If the word none
is given for the config file, then mdadm will act as though the config
file were empty.
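
For illustration, assembling every array that can be found on any
partition, without consulting a config file, might look like:

     mdadm --assemble --scan --config=partitions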

If the name given is of a directory, then mdadm will collect all the
files contained in the directory with a name ending in .conf, sort
them lexically, and process all of those files as config files.

-s, --scan
Scan config file or /proc/mdstat for missing information. In general,
this option gives mdadm permission to get any missing information
(like component devices, array devices, array identities, and alert
destination) from the configuration file (see previous option); one
exception is MISC mode when using --detail or --stop, in which case
--scan says to get a list of array devices from /proc/mdstat.

-e, --metadata=
Declare the style of RAID metadata (superblock) to be used. The
default is 1.2 for --create, and to guess for other operations. The
default can be overridden by setting the metadata value for the CREATE
keyword in mdadm.conf.

Options are:

0, 0.90
Use the original 0.90 format superblock. This format limits arrays to
28 component devices and limits component devices of levels 1 and
greater to 2 terabytes. It is also possible for there to be confusion
about whether the superblock applies to a whole device or just the
last partition, if that partition starts on a 64K boundary.

1, 1.0, 1.1, 1.2 default
Use the new version-1 format superblock. This has fewer restrictions.
It can easily be moved between hosts with different endian-ness, and a
recovery operation can be checkpointed and restarted. The different
sub-versions store the superblock at different locations on the
device, either at the end (for 1.0), at the start (for 1.1) or 4K from
the start (for 1.2). "1" is equivalent to "1.2" (the commonly
preferred 1.x format). "default" is equivalent to "1.2".

ddf
Use the "Industry Standard" DDF (Disk Data Format) format defined by
SNIA. When creating a DDF array a CONTAINER will be created, and
normal arrays can be created in that container.

imsm
Use the Intel(R) Matrix Storage Manager metadata format. This creates
a CONTAINER which is managed in a similar manner to DDF, and is
supported by an option-rom on some platforms:

https://www.intel.com/content/www/us/en/support/products/122484/memory-and-storage/ssd-software/intel-virtual-raid-on-cpu-intel-vroc.html
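
For example, to create a mirror whose superblocks are stored at the
end of each device (the 1.0 location), something like the following
could be used; the device names are illustrative:

     mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1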

--homehost=
This will override any HOMEHOST setting in the config file and
provides the identity of the host which should be considered the home
for any arrays.

When creating an array, the homehost will be recorded in the metadata.
For version-1 superblocks, it will be prefixed to the array name. For
version-0.90 superblocks, part of the SHA1 hash of the hostname will
be stored in the latter half of the UUID.

When reporting information about an array, any array which is tagged
for the given homehost will be reported as such.

When using Auto-Assemble, only arrays tagged for the given homehost
will be allowed to use 'local' names (i.e. not ending in '_' followed
by a digit string). See below under Auto-Assembly.

The special name "any" can be used as a wild card. If an array is
created with --homehost=any then the name "any" will be stored in the
array and it can be assembled in the same way on any host. If an
array is assembled with this option, then the homehost recorded on the
array will be ignored.

--prefer=
When mdadm needs to print the name for a device it normally finds the
name in /dev which refers to the device and is the shortest. When a
path component is given with --prefer mdadm will prefer a longer name
if it contains that component. For example --prefer=by-uuid will
prefer a name in a subdirectory of /dev called by-uuid.

This functionality is currently only provided by --detail and
--monitor.

--home-cluster=
specifies the cluster name for the md device. The md device can be
assembled only on the cluster which matches the name specified. If
this option is not provided, mdadm tries to detect the cluster name
automatically.

For create, build, or grow:

-n, --raid-devices=
Specify the number of active devices in the array. This, plus the
number of spare devices (see below) must equal the number of
component-devices (including "missing" devices) that are listed on the
command line for --create. Setting a value of 1 is probably a mistake
and so requires that --force be specified first. A value of 1 will
then be allowed for linear, multipath, RAID0 and RAID1. It is never
allowed for RAID4, RAID5 or RAID6. This number can only be changed
using --grow for RAID1, RAID4, RAID5 and RAID6 arrays, and only on
kernels which provide the necessary support.

-x, --spare-devices=
Specify the number of spare (eXtra) devices in the initial array.
Spares can also be added and removed later. The number of component
devices listed on the command line must equal the number of RAID
devices plus the number of spare devices.
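
For example (illustrative device names), a three-device RAID5 with one
spare needs four component devices on the command line:

     mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1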

-z, --size=
Amount (in Kilobytes) of space to use from each drive in RAID levels
1/4/5/6/10 and for RAID 0 on external metadata. This must be a
multiple of the chunk size, and must leave about 128Kb of space at the
end of the drive for the RAID superblock. If this is not specified
(as it normally is not) the smallest drive (or partition) sets the
size, though if there is a variance among the drives of greater than
1%, a warning is issued.

A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
Megabytes, Gigabytes or Terabytes respectively.

Sometimes a replacement drive can be a little smaller than the
original drives though this should be minimised by IDEMA standards.
Such a replacement drive will be rejected by md. To guard against
this it can be useful to set the initial size slightly smaller than
the smaller device with the aim that it will still be larger than any
replacement.

This option can be used with --create for determining the initial size
of an array. For external metadata, it can be used on a volume, but
not on a container itself. Setting the initial size of a RAID 0 array
is only valid for external metadata.

This value can be set with --grow for RAID level 1/4/5/6/10 though DDF
arrays may not be able to support this. RAID 0 array size cannot be
changed. If the array was created with a size smaller than the
currently active drives, the extra space can be accessed using --grow.
The size can be given as max which means to choose the largest size
that fits on all current drives.

Before reducing the size of the array (with --grow --size=) you should
make sure that space isn't needed. If the device holds a filesystem,
you would need to resize the filesystem to use less space.

After reducing the array size you should check that the data stored in
the device is still available. If the device holds a filesystem, then
an 'fsck' of the filesystem is a minimum requirement. If there are
problems the array can be made bigger again with no loss with another
--grow --size= command.
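
For instance, after replacing all members of an array with larger
devices, the array could be grown to use all available space with:

     mdadm --grow /dev/md0 --size=max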

-Z, --array-size=
This is only meaningful with --grow and its effect is not persistent:
when the array is stopped and restarted the default array size will be
restored.

Setting the array-size causes the array to appear smaller to programs
that access the data. This is particularly needed before reshaping an
array so that it will be smaller. As the reshape is not reversible,
but setting the size with --array-size is, it is required that the
array size is reduced as appropriate before the number of devices in
the array is reduced.

Before reducing the size of the array you should make sure that space
isn't needed. If the device holds a filesystem, you would need to
resize the filesystem to use less space.

After reducing the array size you should check that the data stored in
the device is still available. If the device holds a filesystem, then
an 'fsck' of the filesystem is a minimum requirement. If there are
problems the array can be made bigger again with no loss with another
--grow --array-size= command.

A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
Megabytes, Gigabytes or Terabytes respectively. A value of max
restores the apparent size of the array to be whatever the real amount
of available space is.

Clustered arrays do not support this parameter yet.

-c, --chunk=
Specify chunk size in kilobytes. The default when creating an array
is 512KB. To ensure compatibility with earlier versions, the default
when building an array with no persistent metadata is 64KB. This is
only meaningful for RAID0, RAID4, RAID5, RAID6, and RAID10.

RAID4, RAID5, RAID6, and RAID10 require the chunk size to be a power
of 2, with minimal chunk size being 4KB.

A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
Megabytes, Gigabytes or Terabytes respectively.
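
For example, a two-device stripe with 128KB chunks (illustrative
device names):

     mdadm --create /dev/md0 --level=0 --chunk=128 --raid-devices=2 /dev/sda1 /dev/sdb1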

--rounding=
Specify the rounding factor for a Linear array. The size of each
component will be rounded down to a multiple of this size. This is a
synonym for --chunk but highlights the different meaning for Linear as
compared to other RAID levels. The default is 64K if a kernel earlier
than 2.6.16 is in use, and is 0K (i.e. no rounding) in later kernels.

-l, --level=
Set RAID level. When used with --create, options are: linear, raid0,
0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, raid10, 10,
multipath, mp, faulty, container. Obviously some of these are
synonymous.

When a CONTAINER metadata type is requested, only the container level
is permitted, and it does not need to be explicitly given.

When used with --build, only linear, stripe, raid0, 0, raid1,
multipath, mp, and faulty are valid.

Can be used with --grow to change the RAID level in some cases. See
LEVEL CHANGES below.

-p, --layout=
This option configures the fine details of data layout for RAID5,
RAID6, and RAID10 arrays, and controls the failure modes for faulty.
It can also be used for working around a kernel bug with RAID0, but
generally doesn't need to be used explicitly.

The layout of the RAID5 parity block can be one of left-asymmetric,
left-symmetric, right-asymmetric, right-symmetric, la, ra, ls, rs.
The default is left-symmetric.

It is also possible to cause RAID5 to use a RAID4-like layout by
choosing parity-first, or parity-last.

Finally for RAID5 there are DDF-compatible layouts, ddf-zero-restart,
ddf-N-restart, and ddf-N-continue.

These same layouts are available for RAID6. There are also 4 layouts
that will provide an intermediate stage for converting between RAID5
and RAID6. These provide a layout which is identical to the
corresponding RAID5 layout on the first N-1 devices, and has the 'Q'
syndrome (the second 'parity' block used by RAID6) on the last device.
These layouts are: left-symmetric-6, right-symmetric-6,
left-asymmetric-6, right-asymmetric-6, and parity-first-6.

When setting the failure mode for level faulty, the options are:
write-transient, wt, read-transient, rt, write-persistent, wp,
read-persistent, rp, write-all, read-fixable, rf, clear, flush, none.

Each failure mode can be followed by a number, which is used as a
period between fault generation. Without a number, the fault is
generated once on the first relevant request. With a number, the
fault will be generated after that many requests, and will continue to
be generated every time the period elapses.

Multiple failure modes can be in effect simultaneously by using the
--grow option to set subsequent failure modes.

"clear" or "none" will remove any pending or periodic failure modes,
and "flush" will clear any persistent faults.

The layout options for RAID10 are one of 'n', 'o' or 'f' followed by a
small number signifying the number of copies of each datablock. The
default is 'n2'. The supported options are:

'n' signals 'near' copies. Multiple copies of one data block are at
similar offsets in different devices.

'o' signals 'offset' copies. Rather than the chunks being duplicated
within a stripe, whole stripes are duplicated but are rotated by one
device so duplicate blocks are on different devices. Thus subsequent
copies of a block are in the next drive, and are one chunk further
down.

'f' signals 'far' copies (multiple copies have very different
offsets). See md(4) for more detail about 'near', 'offset', and
'far'.

As for the number of copies of each data block, 2 is normal, 3 can be
useful. This number can be at most equal to the number of devices in
the array. It does not need to divide evenly into that number (e.g.
it is perfectly legal to have an 'n2' layout for an array with an odd
number of devices).
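
As an illustration, a four-device RAID10 with two 'far' copies could
be created with (device names illustrative):

     mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1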

A bug introduced in Linux 3.14 means that RAID0 arrays with devices of
differing sizes started using a different layout. This could lead to
data corruption. Since Linux 5.4 (and various stable releases that
received backports), the kernel will not accept such an array unless a
layout is explicitly set. It can be set to 'original' or 'alternate'.
When creating a new array, mdadm will select 'original' by default, so
the layout does not normally need to be set. An array created for
either 'original' or 'alternate' will not be recognized by an
(unpatched) kernel prior to 5.4. To create a RAID0 array with devices
of differing sizes that can be used on an older kernel, you can set
the layout to 'dangerous'. This will use whichever layout the running
kernel supports, so the data on the array may become corrupt when
changing kernel from pre-3.14 to a later kernel.

When an array is converted between RAID5 and RAID6 an intermediate
RAID6 layout is used in which the second parity block (Q) is always on
the last device. To convert a RAID5 to RAID6 and leave it in this new
layout (which does not require re-striping) use --layout=preserve.
This will try to avoid any restriping.

The converse of this is --layout=normalise which will change a
non-standard RAID6 layout into a more standard arrangement.

--parity=
same as --layout (thus explaining the p of -p).

-b, --bitmap=
Specify a file to store a write-intent bitmap in. The file should not
exist unless --force is also given. The same file should be provided
when assembling the array. If the word internal is given, then the
bitmap is stored with the metadata on the array, and so is replicated
on all devices. If the word none is given with --grow mode, then any
bitmap that is present is removed. If the word clustered is given,
the array is created for a clustered environment; one bitmap is
created for each node, as defined by the --nodes parameter, and the
bitmaps are stored internally.

To help catch typing errors, the filename must contain at least one
slash ('/') if it is a real file (not 'internal' or 'none').

Note: external bitmaps are only known to work on ext2 and ext3.
Storing bitmap files on other filesystems may result in serious
problems.

When creating an array on devices which are 100G or larger, mdadm
automatically adds an internal bitmap as it will usually be
beneficial. This can be suppressed with --bitmap=none or by selecting
a different consistency policy with --consistency-policy.
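
For example, an internal bitmap can be added to, or removed from, an
existing array with:

     mdadm --grow /dev/md0 --bitmap=internal
     mdadm --grow /dev/md0 --bitmap=none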

--bitmap-chunk=
Set the chunk size of the bitmap. Each bit corresponds to that many
Kilobytes of storage. When using a file-based bitmap, the default is
to use the smallest size that is at least 4 and requires no more than
2^21 chunks. When using an internal bitmap, the chunk size defaults
to 64Meg, or larger if necessary to fit the bitmap into the available
space.

A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes,
Megabytes, Gigabytes or Terabytes respectively.

-W, --write-mostly
subsequent devices listed in a --build, --create, or --add command
will be flagged as 'write-mostly'. This is valid for RAID1 only and
means that the 'md' driver will avoid reading from these devices if at
all possible. This can be useful if mirroring over a slow link.

--write-behind=
Specify that write-behind mode should be enabled (valid for RAID1
only). If an argument is specified, it will set the maximum number of
outstanding writes allowed. The default value is 256. A write-intent
bitmap is required in order to use write-behind mode, and write-behind
is only attempted on drives marked as write-mostly.
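
A sketch of a mirror over a slow link, with the remote half marked
write-mostly and write-behind enabled (device names illustrative; note
the required bitmap):

     mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal --write-behind=256 /dev/sda1 --write-mostly /dev/sdb1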

--failfast
subsequent devices listed in a --create or --add command will be
flagged as 'failfast'. This is valid for RAID1 and RAID10 only. IO
requests to these devices will be encouraged to fail quickly rather
than cause long delays due to error handling. Also no attempt is made
to repair a read error on these devices.

If an array becomes degraded so that the 'failfast' device is the only
usable device, the 'failfast' flag will then be ignored and extended
delays will be preferred to complete failure.

The 'failfast' flag is appropriate for storage arrays which have a low
probability of true failure, but which may sometimes cause
unacceptable delays due to internal maintenance functions.

--assume-clean
Tell mdadm that the array pre-existed and is known to be clean. It
can be useful when trying to recover from a major failure as you can
be sure that no data will be affected unless you actually write to the
array. It can also be used when creating a RAID1 or RAID10 if you
want to avoid the initial resync, however this practice, while
normally safe, is not recommended. Use this only if you really know
what you are doing.

If the devices that will be part of a new array were filled with zeros
before creation, the operator knows the array is actually clean. If
that is the case, such as after running badblocks, this argument can
be used to tell mdadm the facts the operator knows.

When an array is resized to a larger size with --grow --size= the new
space is normally resynced in the same way that the whole array is
resynced at creation. From Linux version 3.0, --assume-clean can be
used with that command to avoid the automatic resync.

--write-zeroes
When creating an array, send write zeroes requests to all the block
devices. This should zero the data area on all disks such that the
initial sync is not necessary and, if successful, will behave as if
--assume-clean was specified.

This is intended for use with devices that have hardware offload for
zeroing, but despite this zeroing can still take several minutes for
large disks. Thus a message is printed before and after zeroing and
each disk is zeroed in parallel with the others.

This is only meaningful with --create.

--backup-file=
This is needed when --grow is used to increase the number of raid
devices in a RAID5 or RAID6 if there are no spare devices available,
or to shrink, change RAID level or layout. See the GROW MODE section
below on RAID-DEVICES CHANGES. The file must be stored on a separate
device, not on the RAID array being reshaped.

--data-offset=
Arrays with 1.x metadata can leave a gap between the start of the
device and the start of array data. This gap can be used for various
metadata. The start of data is known as the data-offset. Normally an
appropriate data offset is computed automatically. However it can be
useful to set it explicitly such as when re-creating an array which
was originally created using a different version of mdadm which
computed a different offset.

Setting the offset explicitly over-rides the default. The value given
is in Kilobytes unless a suffix of 'K', 'M', 'G' or 'T' is used to
explicitly indicate Kilobytes, Megabytes, Gigabytes or Terabytes
respectively.

Since Linux 3.4, --data-offset can also be used with --grow for some
RAID levels (initially on RAID10). This allows the data-offset to be
changed as part of the reshape process. When the data offset is
changed, no backup file is required as the difference in offsets is
used to provide the same functionality.

When the new offset is earlier than the old offset, the number of
devices in the array cannot shrink. When it is after the old offset,
the number of devices in the array cannot increase.

When creating an array, --data-offset can be specified as variable.
In that case each member device is expected to have an offset appended
to the name, separated by a colon. This makes it possible to recreate
exactly an array which has varying data offsets (as can happen when
different versions of mdadm are used to add different devices).

--continue
This option is complementary to the --freeze-reshape option for
assembly. It is needed when a --grow operation is interrupted and is
not restarted automatically because --freeze-reshape was used during
array assembly. This option is used together with the -G (--grow)
command and the device with a pending reshape to be continued. All
parameters required for reshape continuation will be read from the
array metadata. If the initial --grow command required the
--backup-file= option to be set, the continuation will require exactly
the same backup file to be given as well.

Any other parameter passed together with the --continue option will be
ignored.

-N, --name=
Set a name for the array. This is currently only effective when
creating an array with a version-1 superblock, or an array in a DDF
container. The name is a simple textual string that can be used to
identify array components when assembling. If name is needed but not
specified, it is taken from the basename of the device that is being
created. e.g. when creating /dev/md/home the name will default to
home.

-R, --run
Insist that mdadm run the array, even if some of the components appear
to be active in another array or filesystem. Normally mdadm will ask
for confirmation before including such components in an array. This
option causes that question to be suppressed.

-f, --force
Insist that mdadm accept the geometry and layout specified without
question. Normally mdadm will not allow the creation of an array with
only one device, and will try to create a RAID5 array with one missing
drive (as this makes the initial resync work faster). With --force,
mdadm will not try to be so clever.

-o, --readonly
Start the array read only rather than read-write as normal. No writes
will be allowed to the array, and no resync, recovery, or reshape will
be started. It works with Create, Assemble, Manage and Misc mode.

-a, --auto{=yes,md,mdp,part,p}{NN}
Instruct mdadm how to create the device file if needed, possibly
allocating an unused minor number. "md" causes a non-partitionable
array to be used (though since Linux 2.6.28, these array devices are
in fact partitionable). "mdp", "part" or "p" causes a partitionable
array (2.6 and later) to be used. "yes" requires the named md device
to have a 'standard' format, and the type and minor number will be
determined from this. With mdadm 3.0, device creation is normally
left up to udev so this option is unlikely to be needed. See DEVICE
NAMES below.

The argument can also come immediately after "-a". e.g. "-ap".

If --auto is not given on the command line or in the config file, then
the default will be --auto=yes.

If --scan is also given, then any auto= entries in the config file
will override the --auto instruction given on the command line.

For partitionable arrays, mdadm will create the device file for the
whole array and for the first 4 partitions. A different number of
partitions can be specified at the end of this option (e.g.
--auto=p7). If the device name ends with a digit, the partition names
add a 'p', and a number, e.g. /dev/md/home1p3. If there is no
trailing digit, then the partition names just have a number added,
e.g. /dev/md/scratch3.

If the md device name is in a 'standard' format as described in DEVICE
NAMES, then it will be created, if necessary, with the appropriate
device number based on that name. If the device name is not in one of
these formats, then an unused device number will be allocated. The
device number will be considered unused if there is no active array
for that number, and there is no entry in /dev for that number and
with a non-standard name. Names that are not in 'standard' format are
only allowed in "/dev/md/".

This is meaningful with --create or --build.

-a, --add
This option can be used in Grow mode in two cases.

If the target array is a Linear array, then --add can be used to add
one or more devices to the array. They are simply catenated on to the
end of the array. Once added, the devices cannot be removed.

If the --raid-disks option is being used to increase the number of
devices in an array, then --add can be used to add some extra devices
to be included in the array. In most cases this is not needed as the
extra devices can be added as spares first, and then the number of
raid disks can be changed. However, for RAID0 it is not possible to
add spares. So to increase the number of devices in a RAID0, it is
necessary to set the new number of devices, and to add the new
devices, in the same command.

--nodes
Only works when the array is created for a clustered environment. It
specifies the maximum number of nodes in the cluster that will use
this device simultaneously. If not specified, this defaults to 4.

--write-journal
Specify journal device for the RAID-4/5/6 array. The journal device
should be an SSD with a reasonable lifetime.

-k, --consistency-policy=
Specify how the array maintains consistency in the case of an
unexpected shutdown. Only relevant for RAID levels with redundancy.
Currently supported options are:

resync
Full resync is performed and all redundancy is regenerated when the
array is started after an unclean shutdown.

bitmap
Resync assisted by a write-intent bitmap. Implicitly selected when
using --bitmap.

journal
For RAID levels 4/5/6, the journal device is used to log transactions
and replay after an unclean shutdown. Implicitly selected when using
--write-journal.

ppl
For RAID5 only, Partial Parity Log is used to close the write hole and
eliminate resync. PPL is stored in the metadata region of RAID member
drives, no additional journal drive is needed.

Can be used with --grow to change the consistency policy of an active
array in some cases. See CONSISTENCY POLICY CHANGES below.

For assemble:

-u, --uuid=
uuid of array to assemble. Devices which don't have this uuid are
excluded.

-m, --super-minor=
Minor number of device that array was created for. Devices which
don't have this minor number are excluded. If you create an array as
/dev/md1, then all superblocks will contain the minor number 1, even
if the array is later assembled as /dev/md2.

Giving the literal word "dev" for --super-minor will cause mdadm to
use the minor number of the md device that is being assembled. e.g.
when assembling /dev/md0, --super-minor=dev will look for super blocks
with a minor number of 0.

--super-minor is only relevant for v0.90 metadata, and should not
normally be used. Using --uuid is much safer.

-N, --name=
Specify the name of the array to assemble. This must be the name that
was specified when creating the array. It must either match the name
stored in the superblock exactly, or it must match with the current
homehost prefixed to the start of the given name.

-f, --force
Assemble the array even if the metadata on some devices appears to be
out-of-date. If mdadm cannot find enough working devices to start the
array, but can find some devices that are recorded as having failed,
then it will mark those devices as working so that the array can be
started. This works only for native metadata. For external metadata
it allows starting a dirty, degraded RAID 4, 5, or 6. An array which
requires --force to be started may contain data corruption. Use it
carefully.

-R, --run
Attempt to start the array even if fewer drives were given than were
present last time the array was active. Normally if not all the
expected drives are found and --scan is not used, then the array will
be assembled but not started. With --run an attempt will be made to
start it anyway.

--no-degraded
This is the reverse of --run in that it inhibits the startup of an
array unless all expected drives are present. This is only needed
with --scan, and can be used if the physical connections to devices
are not as reliable as you would like.

-a, --auto{=no,yes,md,mdp,part}
See this option under Create and Build options.

-b, --bitmap=
Specify the bitmap file that was given when the array was created. If
an array has an internal bitmap, there is no need to specify this when
assembling the array.

--backup-file=
If --backup-file was used while reshaping an array (e.g. changing
number of devices or chunk size) and the system crashed during the
critical section, then the same --backup-file must be presented to
--assemble to allow possibly corrupted data to be restored, and the
reshape to be completed.

--invalid-backup
If the file needed for the above option is not available for any
reason an empty file can be given together with this option to
indicate that the backup file is invalid. In this case the data that
was being rearranged at the time of the crash could be irrecoverably
lost, but the rest of the array may still be recoverable. This option
should only be used as a last resort if there is no way to recover the
backup file.

-U, --update=
Update the superblock on each device while assembling the array. The
argument given to this flag can be one of sparc2.2, summaries, uuid,
name, nodes, homehost, home-cluster, resync, byteorder, devicesize,
no-bitmap, bbl, no-bbl, ppl, no-ppl, layout-original,
layout-alternate, layout-unspecified, metadata, or super-minor.

The sparc2.2 option will adjust the superblock of an array that was
created on a Sparc machine running a patched 2.2 Linux kernel. This
kernel got the alignment of part of the superblock wrong. You can use
the --examine --sparc2.2 option to mdadm to see what effect this would
have.

The super-minor option will update the preferred minor field on each
superblock to match the minor number of the array being assembled.
This can be useful if --examine reports a different "Preferred Minor"
to --detail. In some cases this update will be performed
automatically by the kernel driver. In particular, the update happens
automatically at the first write to an array with redundancy (RAID
level 1 or greater) on a 2.6 (or later) kernel.

The uuid option will change the uuid of the array. If a UUID is given
with the --uuid option that UUID will be used as a new UUID and will
NOT be used to help identify the devices in the array. If no --uuid
is given, a random UUID is chosen.
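
For example, to assemble an array with a fresh random UUID
(illustrative device names):

     mdadm --assemble /dev/md0 --update=uuid /dev/sda1 /dev/sdb1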

The name option will change the name of the array as stored in the
superblock. This is only supported for version-1 superblocks.

The nodes option will change the nodes of the array as stored in the
bitmap superblock. This option only works for a clustered
environment.

The homehost option will change the homehost as recorded in the
superblock. For version-0 superblocks, this is the same as updating
the UUID. For version-1 superblocks, this involves updating the name.

The home-cluster option will change the cluster name as recorded in
the superblock and bitmap. This option only works for a clustered
environment.

The resync option will cause the array to be marked dirty meaning that
any redundancy in the array (e.g. parity for RAID5, copies for RAID1)
may be incorrect. This will cause the RAID system to perform a
"resync" pass to make sure that all redundant information is correct.

The byteorder option allows arrays to be moved between machines with
different byte-order, such as from a big-endian machine like a Sparc
or some MIPS machines, to a little-endian x86_64 machine. When
assembling such an array for the first time after a move, giving
--update=byteorder will cause mdadm to expect superblocks to have
their byteorder reversed, and will correct that order before
assembling the array. This is only valid with original (Version 0.90)
superblocks.

The summaries option will correct the summaries in the superblock.
That is the counts of total, working, active, failed, and spare
devices.

The devicesize option will rarely be of use. It applies to version
1.1 and 1.2 metadata only (where the metadata is at the start of the
device) and is only useful when the component device has changed size
(typically become larger). The version 1 metadata records the amount
of the device that can be used to store data, so if a device in a
version 1.1 or 1.2 array becomes larger, the metadata will still be
visible, but the extra space will not. In this case it might be
useful to assemble the array with --update=devicesize. This will
cause mdadm to determine the maximum usable amount of space on each
device and update the relevant field in the metadata.

The metadata option only works on v0.90 metadata arrays and will
convert them to v1.0 metadata. The array must not be dirty (i.e. it
must not need a sync) and it must not have a write-intent bitmap.

The old metadata will remain on the devices, but will appear older
than the new metadata and so will usually be ignored. The old
metadata (or indeed the new metadata) can be removed by giving the
appropriate --metadata= option to --zero-superblock.

The no-bitmap option can be used when an array has an internal bitmap
which is corrupt in some way so that assembling the array normally
fails. It will cause any internal bitmap to be ignored.

The bbl option will reserve space in each device for a bad block list.
This will be 4K in size and positioned near the end of any free space
between the superblock and the data.

The no-bbl option will cause any reservation of space for a bad block
list to be removed. If the bad block list contains entries, this will
fail, as removing the list could cause data corruption.

The ppl option will enable PPL for a RAID5 array and reserve space for
PPL on each device. There must be enough free space between the data
and superblock and a write-intent bitmap or journal must not be used.

The no-ppl option will disable PPL in the superblock.

The layout-original and layout-alternate options are for RAID0 arrays
with non-uniform device sizes that were in use before Linux 5.4. If
the array was being used with Linux 3.13 or earlier, then to assemble
the array on a new kernel, --update=layout-original must be given. If
the array was created and used with a kernel from Linux 3.14 to Linux
5.3, then --update=layout-alternate must be given. This only needs to
be given once. Subsequent assembly of the array will happen normally.
For more information, see md(4).

The layout-unspecified option reverts the effect of layout-original or
layout-alternate and allows the array to be again used on a kernel
prior to Linux 5.3. This option should be used with great caution.

--freeze-reshape
This option is intended to be used in start-up scripts during the
initrd boot phase. When the array under reshape is assembled during
the initrd phase, this option stops the reshape after the
reshape-critical section has been restored. This happens before the
file system pivot operation and avoids loss of filesystem context.
Losing file system context would cause reshape to be broken.

Reshape can be continued later using the --continue option for the
grow command.

For Manage mode:

-t, --test
Unless a more serious error occurred, mdadm will exit with a status of
2 if no changes were made to the array and 0 if at least one change
was made. This can be useful when an indirect specifier such as
missing, detached or faulty is used in requesting an operation on the
array. --test will report failure if these specifiers didn't find any
match.

-a, --add
hot-add listed devices. If a device appears to have recently been
part of the array (possibly it failed or was removed) the device is
re-added as described in the next point. If that fails or the device
was never part of the array, the device is added as a hot-spare. If
the array is degraded, it will immediately start to rebuild data onto
that spare.

Note that this and the following options are only meaningful on an
array with redundancy. They don't apply to RAID0 or Linear.

--re-add
re-add a device that was previously removed from an array. If the
metadata on the device reports that it is a member of the array, and
the slot that it used is still vacant, then the device will be added
back to the array in the same position. This will normally cause the
data for that device to be recovered. However, based on the event
count on the device, the recovery may only require sections that are
flagged by a write-intent bitmap to be recovered or may not require
any recovery at all.

When used on an array that has no metadata (i.e. it was built with
--build) it will be assumed that bitmap-based recovery is enough to
make the device fully consistent with the array.

--re-add can also be accompanied by --update=devicesize, --update=bbl,
or --update=no-bbl. See descriptions of these options when used in
Assemble mode for an explanation of their use.

If the device name given is missing then mdadm will try to find any
device that looks like it should be part of the array but isn't and
will try to re-add all such devices.

If the device name given is faulty then mdadm will find all devices in
the array that are marked faulty, remove them and attempt to
immediately re-add them. This can be useful if you are certain that
the reason for failure has been resolved.

--add-spare
Add a device as a spare. This is similar to --add except that it does
not attempt --re-add first. The device will be added as a spare even
if it looks like it could be a recent member of the array.

-r, --remove
remove listed devices. They must not be active, i.e. they should be
failed or spare devices.

As well as the name of a device file (e.g. /dev/sda1) the words
failed, detached and names like set-A can be given to --remove. The
first causes all failed devices to be removed. The second causes any
device which is no longer connected to the system (i.e. an 'open'
returns ENXIO) to be removed. The third will remove a set as
described below under --fail.

-f, --fail
Mark listed devices as faulty. As well as the name of a device file,
the word detached or a set name like set-A can be given. The former
will cause any device that has been detached from the system to be
marked as failed. It can then be removed.

For RAID10 arrays where the number of copies evenly divides the number
of devices, the devices can be conceptually divided into sets where
each set contains a single complete copy of the data on the array.
Sometimes a RAID10 array will be configured so that these sets are on
separate controllers. In this case, all the devices in one set can be
failed by giving a name like set-A or set-B to --fail. The
appropriate set names are reported by --detail.

--set-faulty
same as --fail.

--replace
Mark listed devices as requiring replacement. As soon as a spare is
available, it will be rebuilt and will replace the marked device.
This is similar to marking a device as faulty, but the device remains
in service during the recovery process to increase resilience against
multiple failures. When the replacement process finishes, the
replaced device will be marked as faulty.

--with
This can follow a list of --replace devices. The devices listed after
--with will preferentially be used to replace the devices listed after
--replace. These devices must already be spare devices in the array.
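
For example, to replace a worn device using a specific spare
(illustrative device names):

     mdadm /dev/md0 --replace /dev/sdb1 --with /dev/sdc1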

--write-mostly
Subsequent devices that are added or re-added will have the
'write-mostly' flag set. This is only valid for RAID1 and means that
the 'md' driver will avoid reading from these devices if possible.

--readwrite
Subsequent devices that are added or re-added will have the
'write-mostly' flag cleared.

--cluster-confirm
Confirm the existence of the device. This is issued in response to an
--add request by a node in a cluster. When a node adds a device it
sends a message to all nodes in the cluster to look for a device with
a UUID. This translates to a udev notification with the UUID of the
device to be added and the slot number. The receiving node must
acknowledge this message with --cluster-confirm. Valid arguments are
<slot>:<devicename> in case the device is found or <slot>:missing in
case the device is not found.

--add-journal
Add a journal to an existing array, or recreate a journal for a
RAID-4/5/6 array that lost a journal device. To avoid interrupting
ongoing write operations, --add-journal only works for arrays in
Read-Only state.

--failfast
Subsequent devices that are added or re-added will have the 'failfast'
flag set. This is only valid for RAID1 and RAID10 and means that the
'md' driver will avoid long timeouts on error handling where possible.

--nofailfast
Subsequent devices that are re-added will be re-added without the
'failfast' flag set.

Each of these options requires that the first device listed is the
array to be acted upon, and the remainder are component devices to be
added, removed, marked as faulty, etc. Several different operations
can be specified for different devices, e.g.
     mdadm /dev/md0 --add /dev/sda1 --fail /dev/sdb1 --remove /dev/sdb1
Each operation applies to all devices listed until the next operation.

If an array is using a write-intent bitmap, then devices which have
been removed can be re-added in a way that avoids a full
reconstruction but instead just updates the blocks that have changed
since the device was removed. For arrays with persistent metadata
(superblocks) this is done automatically. For arrays created with
--build, mdadm needs to be told with --re-add that this device was
removed recently.

Devices can only be removed from an array if they are not in active
use, i.e. they must be spares or failed devices. To remove an active
device, it must first be marked as faulty.

For Misc mode:

-Q, --query
Examine a device to see (1) if it is an md device and (2) if it is a
component of an md array. Information about what is discovered is
presented.

-D, --detail
Print details of one or more md devices.

--detail-platform
Print details of the platform's RAID capabilities (firmware / hardware
topology) for a given metadata format. If used without an argument,
mdadm will scan all controllers looking for their capabilities.
Otherwise, mdadm will only look at the controller specified by the
argument in the form of an absolute filepath or a link, e.g.
/sys/devices/pci0000:00/0000:00:1f.2.

-Y, --export
When used with --detail, --detail-platform, --examine, or
--incremental, output will be formatted as key=value pairs for easy
import into the environment.

With --incremental, the value MD_STARTED indicates whether an array
was started (yes) or not, which may include a reason (unsafe, nothing,
no). Also the value MD_FOREIGN indicates if the array is expected on
this host (no), or seems to be from elsewhere (yes).

-E, --examine
Print contents of the metadata stored on the named device(s). Note
the contrast between --examine and --detail. --examine applies to
devices which are components of an array, while --detail applies to a
whole array which is currently active.
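
To make the contrast concrete (names are illustrative): the first
command below summarizes the active array, while the second prints the
superblock stored on one component.

     mdadm --detail /dev/md0
     mdadm --examine /dev/sda1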

--sparc2.2
If an array was created on a SPARC machine with a 2.2 Linux kernel
patched with RAID support, the superblock will have been created
incorrectly, or at least incompatibly with 2.4 and later kernels.
Using the --sparc2.2 flag with --examine will fix the superblock
before displaying it. If this appears to do the right thing, then the
array can be successfully assembled using --assemble
--update=sparc2.2.

-X, --examine-bitmap
Report information about a bitmap file. The argument is either an
external bitmap file or an array component in case of an internal
bitmap. Note that running this on an array device (e.g. /dev/md0)
does not report the bitmap for that array.

--examine-badblocks
List the bad-blocks recorded for the device, if a bad-blocks list has
been configured. Currently only 1.x and IMSM metadata support
bad-blocks lists.

--dump=directory

--restore=directory
Save metadata from listed devices, or restore metadata to listed
devices.

-R, --run
start a partially assembled array. If --assemble did not find enough
devices to fully start the array, it might leave it partially
assembled. If you wish, you can then use --run to start the array in
degraded mode.

-S, --stop
deactivate array, releasing all resources.

-o, --readonly
mark array as readonly.

-w, --readwrite
mark array as readwrite.

--zero-superblock
If the device contains a valid md superblock, the block is overwritten
with zeros. With --force the block where the superblock would be is
overwritten even if it doesn't appear to be valid.

Note: Be careful when calling --zero-superblock with clustered raid.
Make sure the array isn't used or assembled in another cluster node
before executing it.
1324
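       For example, to clear the superblock from a device that is no longer
       a member of any array (device name illustrative):

            mdadm --zero-superblock /dev/sdX1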
1325
1326 --kill-subarray=
1327 If the device is a container and the argument to --kill-subarray
1328 specifies an inactive subarray in the container, then the subar‐
1329 ray is deleted. Deleting all subarrays will leave an 'empty-
1330 container' or spare superblock on the drives. See --zero-su‐
1331 perblock for completely removing a superblock. Note that some
1332 formats depend on the subarray index for generating a UUID, so
1333 this command will fail if it would change the UUID of an active
1334 subarray.
1335
1336
1337 --update-subarray=
1338 If the device is a container and the argument to --update-subar‐
1339 ray specifies a subarray in the container, then attempt to up‐
1340 date the given superblock field in the subarray. See below in
1341 MISC MODE for details.
1342
1343
1344 -t, --test
1345 When used with --detail, the exit status of mdadm is set to re‐
1346 flect the status of the device. See below in MISC MODE for de‐
1347 tails.
1348
1349
1350 -W, --wait
1351 For each md device given, wait for any resync, recovery, or re‐
1352 shape activity to finish before returning. mdadm will return
1353 with success if it actually waited for every device listed, oth‐
1354 erwise it will return failure.
1355
1356
1357 --wait-clean
1358 For each md device given, or each device in /proc/mdstat if
1359 --scan is given, arrange for the array to be marked clean as
1360 soon as possible. mdadm will return with success if the array
1361 uses external metadata and we successfully waited. For native
1362 arrays, this returns immediately as the kernel handles dirty-
1363 clean transitions at shutdown. No action is taken if safe-mode
1364 handling is disabled.
1365
1366
1367 --action=
1368 Set the "sync_action" for all md devices given to one of idle,
1369 frozen, check, repair. Setting to idle will abort any currently
1370 running action though some actions will automatically restart.
1371 Setting to frozen will abort any current action and ensure no
1372 other action starts automatically.
1373
1374 Details of check and repair can be found in md(4) under SCRUB‐
1375 BING AND MISMATCHES.
1376
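       For example, to start a scrub of an array and later abort it (array
       name illustrative):

            mdadm --action=check /dev/md0
            mdadm --action=idle /dev/md0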
1377
1378 For Incremental Assembly mode:
1379 --rebuild-map, -r
1380 Rebuild the map file (/run/mdadm/map) that mdadm uses to help
1381 track which arrays are currently being assembled.
1382
1383
1384 --run, -R
1385 Run any array assembled as soon as a minimal number of devices
1386 is available, rather than waiting until all expected devices are
1387 present.
1388
1389
1390 --scan, -s
1391 Only meaningful with -R, this will scan the map file for arrays
1392 that are being incrementally assembled and will try to start any
1393 that are not already started. If any such array is listed in
1394 mdadm.conf as requiring an external bitmap, that bitmap will be
1395 attached first.
1396
1397
1398 --fail, -f
1399 This allows the hot-plug system to remove devices that have
1400 fully disappeared from the kernel. It will first fail and then
1401 remove the device from any array it belongs to. The device name
1402 given should be a kernel device name such as "sda", not a name
1403 in /dev.
1404
1405
1406 --path=
1407 Only used with --fail. The 'path' given will be recorded so
1408 that if a new device appears at the same location it can be au‐
1409 tomatically added to the same array. This allows the failed de‐
1410 vice to be automatically replaced by a new device without meta‐
1411 data if it appears at the specified path. This option is nor‐
1412 mally only set by a udev script.
1413
1414
1415 For Monitor mode:
1416 -m, --mail
1417 Give a mail address to send alerts to.
1418
1419
1420 -p, --program, --alert
1421 Give a program to be run whenever an event is detected.
1422
1423
1424 -y, --syslog
1425 Cause all events to be reported through 'syslog'. The messages
1426 have facility of 'daemon' and varying priorities.
1427
1428
1429 -d, --delay
1430 Give a delay in seconds. mdadm polls the md arrays and then
1431 waits this many seconds before polling again. The default is 60
1432 seconds. Since 2.6.16, there is no need to reduce this as the
1433 kernel alerts mdadm immediately when there is any change.
1434
1435
1436 -r, --increment
1437 Give a percentage increment. mdadm will generate RebuildNN
1438 events with the given percentage increment.
1439
1440
1441 -f, --daemonise
1442 Tell mdadm to run as a background daemon if it decides to moni‐
1443 tor anything. This causes it to fork and run in the child, and
1444 to disconnect from the terminal. The process id of the child is
1445 written to stdout. This is useful with --scan which will only
1446 continue monitoring if a mail address or alert program is found
1447 in the config file.
1448
1449
1450 -i, --pid-file
1451 When mdadm is running in daemon mode, write the pid of the dae‐
1452 mon process to the specified file, instead of printing it on
1453 standard output.
1454
1455
1456 -1, --oneshot
1457 Check arrays only once. This will generate NewArray events and
1458 more significantly DegradedArray and SparesMissing events. Run‐
1459 ning
1460 mdadm --monitor --scan -1
1461 from a cron script will ensure regular notification of any de‐
1462 graded arrays.
1463
1464
1465 -t, --test
1466 Generate a TestMessage alert for every array found at startup.
1467 This alert gets mailed and passed to the alert program. This
1468 can be used for testing that alert messages do get through suc‐
1469 cessfully.
1470
1471
1472 --no-sharing
1473 This inhibits the functionality for moving spares between ar‐
1474 rays. Only one monitoring process started with --scan but with‐
1475 out this flag is allowed, otherwise the two could interfere with
1476 each other.
1477
1478
1479 ASSEMBLE MODE
1480 Usage: mdadm --assemble md-device options-and-component-devices...
1481
1482 Usage: mdadm --assemble --scan md-devices-and-options...
1483
1484 Usage: mdadm --assemble --scan options...
1485
1486
1487 This usage assembles one or more RAID arrays from pre-existing compo‐
1488 nents. For each array, mdadm needs to know the md device, the identity
1489 of the array, and the number of component devices. These can be found
1490 in a number of ways.
1491
1492 In the first usage example (without the --scan) the first device given
1493 is the md device. In the second usage example, all devices listed are
1494 treated as md devices and assembly is attempted. In the third (where
1495 no devices are listed) all md devices that are listed in the configura‐
1496 tion file are assembled. If no arrays are described by the configura‐
1497 tion file, then any arrays that can be found on unused devices will be
1498 assembled.
1499
1500 If precisely one device is listed, but --scan is not given, then mdadm
1501 acts as though --scan was given and identity information is extracted
1502 from the configuration file.
1503
1504 The identity can be given with the --uuid option, the --name
1505 option, or the --super-minor option; otherwise it will be taken from
1506 the md-device record in the config file, or from the superblock of
1507 the first component-device listed on the command line.
1508
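       For example, to assemble an array from explicitly listed components,
       taking the identity from the superblock of the first one (device
       names illustrative):

            mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
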
1509 Devices can be given on the --assemble command line or in the config
1510 file. Only devices which have an md superblock which contains the
1511 right identity will be considered for any array.
1512
1513 The config file is only used if explicitly named with --config or re‐
1514 quested with (a possibly implicit) --scan. In the latter case, the de‐
1515 fault config file is used. See mdadm.conf(5) for more details.
1516
1517 If --scan is not given, then the config file will only be used to find
1518 the identity of md arrays.
1519
1520 Normally the array will be started after it is assembled. However if
1521 --scan is not given and not all expected drives were listed, then the
1522 array is not started (to guard against usage errors). To insist that
1523 the array be started in this case (as may work for RAID1, 4, 5, 6, or
1524 10), give the --run flag.
1525
1526 If udev is active, mdadm does not create any entries in /dev but leaves
1527 that to udev. It does record information in /run/mdadm/map which will
1528 allow udev to choose the correct name.
1529
1530 If mdadm detects that udev is not configured, it will create the de‐
1531 vices in /dev itself.
1532
1533 In Linux kernels prior to version 2.6.28 there were two distinct types
1534 of md devices that could be created: one that could be partitioned us‐
1535 ing standard partitioning tools and one that could not. Since 2.6.28
1536 that distinction is no longer relevant as both types of devices can be
1537 partitioned. mdadm will normally create the type that originally could
1538 not be partitioned as it has a well-defined major number (9).
1539
1540 Prior to 2.6.28, it is important that mdadm chooses the correct type of
1541 array device to use. This can be controlled with the --auto option.
1542 In particular, a value of "mdp" or "part" or "p" tells mdadm to use a
1543 partitionable device rather than the default.
1544
1545 In the no-udev case, the value given to --auto can be suffixed by a
1546 number. This tells mdadm to create that number of partition devices
1547 rather than the default of 4.
1548
1549 The value given to --auto can also be given in the configuration file
1550 as a word starting auto= on the ARRAY line for the relevant array.
1551
1552
1553 Auto-Assembly
1554 When --assemble is used with --scan and no devices are listed, mdadm
1555 will first attempt to assemble all the arrays listed in the config
1556 file.
1557
1558 If no arrays are listed in the config (other than those marked <ig‐
1559 nore>) it will look through the available devices for possible arrays
1560 and will try to assemble anything that it finds. Arrays which are
1561 tagged as belonging to the given homehost will be assembled and started
1562 normally. Arrays which do not obviously belong to this host are given
1563 names that are expected not to conflict with anything local, and are
1564 started "read-auto" so that nothing is written to any device until the
1565 array is written to, i.e. automatic resync etc. is delayed.
1566
1567 If mdadm finds a consistent set of devices that look like they should
1568 comprise an array, and if the superblock is tagged as belonging to the
1569 given home host, it will automatically choose a device name and try to
1570 assemble the array. If the array uses version-0.90 metadata, then the
1571 minor number as recorded in the superblock is used to create a name in
1572 /dev/md/ so for example /dev/md/3. If the array uses version-1 meta‐
1573 data, then the name from the superblock is used to similarly create a
1574 name in /dev/md/ (the name will have any 'host' prefix stripped first).
1575
1576 This behaviour can be modified by the AUTO line in the mdadm.conf con‐
1577 figuration file. This line can indicate that specific metadata type
1578 should, or should not, be automatically assembled. If an array is
1579 found which is not listed in mdadm.conf and has a metadata format that
1580 is denied by the AUTO line, then it will not be assembled. The AUTO
1581 line can also request that all arrays identified as being for this
1582 homehost should be assembled regardless of their metadata type. See
1583 mdadm.conf(5) for further details.
1584
1585 Note: Auto-assembly cannot be used for assembling and activating some
1586 arrays which are undergoing reshape. In particular as the backup-file
1587 cannot be given, any reshape which requires a backup file to continue
1588 cannot be started by auto-assembly. An array which is growing to more
1589 devices and has passed the critical section can be assembled using
1590 auto-assembly.
1591
1592
1593 BUILD MODE
1594 Usage: mdadm --build md-device --chunk=X --level=Y --raid-devices=Z de‐
1595 vices
1596
1597
1598 This usage is similar to --create. The difference is that it creates
1599 an array without a superblock. With these arrays there is no differ‐
1600 ence between initially creating the array and subsequently assembling
1601 the array, except that hopefully there is useful data there in the sec‐
1602 ond case.
1603
1604 The level may be raid0, linear, raid1, raid10, multipath, or faulty, or
1605 one of their synonyms. All devices must be listed and the array will
1606 be started once complete. It will often be appropriate to use --as‐
1607 sume-clean with levels raid1 or raid10.
1608
1609
1610 CREATE MODE
1611 Usage: mdadm --create md-device --chunk=X --level=Y
1612 --raid-devices=Z devices
1613
1614
1615 This usage will initialise a new md array, associate some devices with
1616 it, and activate the array.
1617
1618 The named device will normally not exist when mdadm --create is run,
1619 but will be created by udev once the array becomes active.
1620
1621 The md-device name is limited to 32 characters. Some metadata
1622 types impose stricter limits (IMSM, for example, allows only 16
1623 characters). A long name may therefore be truncated or rejected,
1624 depending on the metadata's policy.
1625
1626 As devices are added, they are checked to see if they contain RAID su‐
1627 perblocks or filesystems. They are also checked to see if the variance
1628 in device size exceeds 1%.
1629
1630 If any discrepancy is found, the array will not automatically be run,
1631 though the presence of a --run can override this caution.
1632
1633 To create a "degraded" array in which some devices are missing, simply
1634 give the word "missing" in place of a device name. This will cause
1635 mdadm to leave the corresponding slot in the array empty. For a RAID4
1636 or RAID5 array at most one slot can be "missing"; for a RAID6 array at
1637 most two slots. For a RAID1 array, only one real device needs to be
1638 given. All of the others can be "missing".
1639
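       For example, to create a two-device RAID1 with one slot left empty
       (device name illustrative):

            mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing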
1640 When creating a RAID5 array, mdadm will automatically create a degraded
1641 array with an extra spare drive. This is because building the spare
1642 into a degraded array is in general faster than resyncing the parity on
1643 a non-degraded, but not clean, array. This feature can be overridden
1644 with the --force option.
1645
1646 When creating an array with version-1 metadata a name for the array is
1647 required. If this is not given with the --name option, mdadm will
1648 choose a name based on the last component of the name of the device be‐
1649 ing created. So if /dev/md3 is being created, then the name 3 will be
1650 chosen. If /dev/md/home is being created, then the name home will be
1651 used.
1652
1653 When creating a partition based array, using mdadm with version-1.x
1654 metadata, the partition type should be set to 0xDA (non fs-data). This
1655 type of selection allows for greater precision since using any other
1656 type [RAID auto-detect (0xFD) or a GNU/Linux partition (0x83)] might
1657 create problems in the event of array recovery through a live cdrom.
1658
1659 A new array will normally get a randomly assigned 128bit UUID which is
1660 very likely to be unique. If you have a specific need, you can choose
1661 a UUID for the array by giving the --uuid= option. Be warned that cre‐
1662 ating two arrays with the same UUID is a recipe for disaster. Also,
1663 using --uuid= when creating a v0.90 array will silently override any
1664 --homehost= setting.
1665
1666 If the array type supports a write-intent bitmap, and if the devices in
1667 the array exceed 100G in size, an internal write-intent bitmap will au‐
1668 tomatically be added unless some other option is explicitly requested
1669 with the --bitmap option or a different consistency policy is selected
1670 with the --consistency-policy option. In any case, space for a bitmap
1671 will be reserved so that one can be added later with --grow --bit‐
1672 map=internal.
1673
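       For example, to request an internal bitmap explicitly at creation
       time (device names illustrative):

            mdadm --create /dev/md0 --level=5 --raid-devices=3 \
                  --bitmap=internal /dev/sd[bcd]1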
1674 If the metadata type supports it (currently only 1.x and IMSM meta‐
1675 data), space will be allocated to store a bad block list. This allows
1676 a modest number of bad blocks to be recorded, allowing the drive to re‐
1677 main in service while only partially functional.
1678
1679 When creating an array within a CONTAINER mdadm can be given either the
1680 list of devices to use, or simply the name of the container. The for‐
1681 mer case gives control over which devices in the container will be used
1682 for the array. The latter case allows mdadm to automatically choose
1683 which devices to use based on how much spare space is available.
1684
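       For example, to create an IMSM container and then a RAID5 volume
       inside it (device names illustrative):

            mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=4 /dev/sd[b-e]
            mdadm --create /dev/md/vol0 --level=5 --raid-devices=4 /dev/md/imsm0
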
1685 The General Management options that are valid with --create are:
1686
1687 --run insist on running the array even if some devices look like they
1688 might be in use.
1689
1690
1691 --readonly
1692 start the array in readonly mode.
1693
1694
1695 MANAGE MODE
1696 Usage: mdadm device options... devices...
1697
1698 This usage will allow individual devices in an array to be failed, re‐
1699 moved or added. It is possible to perform multiple operations with one
1700 command. For example:
1701 mdadm /dev/md0 -f /dev/hda1 -r /dev/hda1 -a /dev/hda1
1702 will firstly mark /dev/hda1 as faulty in /dev/md0 and will then remove
1703 it from the array and finally add it back in as a spare. However, only
1704 one md array can be affected by a single command.
1705
1706 When a device is added to an active array, mdadm checks to see if it
1707 has metadata on it which suggests that it was recently a member of the
1708 array. If it does, it tries to "re-add" the device. If there have
1709 been no changes since the device was removed, or if the array has a
1710 write-intent bitmap which has recorded whatever changes there were,
1711 then the device will immediately become a full member of the array and
1712 those differences recorded in the bitmap will be resolved.
1713
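       For example, to return a previously removed device to its array,
       letting mdadm attempt a "re-add" (names illustrative):

            mdadm /dev/md0 --add /dev/sdb1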
1714
1715 MISC MODE
1716 Usage: mdadm options ... devices ...
1717
1718 MISC mode includes a number of distinct operations that operate on dis‐
1719 tinct devices. The operations are:
1720
1721 --query
1722 The device is examined to see if it is (1) an active md array,
1723 or (2) a component of an md array. The information discovered
1724 is reported.
1725
1726
1727 --detail
1728 The device should be an active md device. mdadm will display a
1729 detailed description of the array. --brief or --scan will cause
1730 the output to be less detailed and the format to be suitable for
1731 inclusion in mdadm.conf. The exit status of mdadm will normally
1732 be 0 unless mdadm failed to get useful information about the de‐
1733 vice(s); however, if the --test option is given, then the exit
1734 status will be:
1735
1736 0 The array is functioning normally.
1737
1738 1 The array has at least one failed device.
1739
1740 2 The array has multiple failed devices such that it is un‐
1741 usable.
1742
1743 4 There was an error while trying to get information about
1744 the device.
1745
1746
1747 --detail-platform
1748 Print details of the platform's RAID capabilities (firmware /
1749 hardware topology). If the metadata is specified with -e or
1750 --metadata= then the return status will be:
1751
1752 0 metadata successfully enumerated its platform components
1753 on this system
1754
1755 1 metadata is platform independent
1756
1757 2 metadata failed to find its platform components on this
1758 system
1759
1760
1761 --update-subarray=
1762 If the device is a container and the argument to --update-subar‐
1763 ray specifies a subarray in the container, then attempt to up‐
1764 date the given superblock field in the subarray. Similar to up‐
1765 dating an array in "assemble" mode, the field to update is se‐
1766 lected by the -U or --update= option. The supported options are
1767 name, ppl, no-ppl, bitmap and no-bitmap.
1768
1769 The name option updates the subarray name in the metadata. It
1770 may not affect the device node name or the device node symlink
1771 until the subarray is re-assembled. If updating name would
1772 change the UUID of an active subarray this operation is blocked,
1773 and the command will end in an error.
1774
1775 The ppl and no-ppl options enable and disable PPL in the meta‐
1776 data. Currently supported only for IMSM subarrays.
1777
1778 The bitmap and no-bitmap options enable and disable write-intent
1779 bitmap in the metadata. Currently supported only for IMSM subar‐
1780 rays.
1781
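       For example, to enable PPL for the first subarray of an IMSM
       container (container name illustrative):

            mdadm --update-subarray=0 --update=ppl /dev/md/imsm0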
1782
1783 --examine
1784 The device should be a component of an md array. mdadm will
1785 read the md superblock of the device and display the contents.
1786 If --brief or --scan is given, then multiple devices that are
1787 components of the one array are grouped together and reported in
1788 a single entry suitable for inclusion in mdadm.conf.
1789
1790 Having --scan without listing any devices will cause all devices
1791 listed in the config file to be examined.
1792
1793
1794 --dump=directory
1795 If the device contains RAID metadata, a file will be created in
1796 the directory and the metadata will be written to it. The file
1797 will be the same size as the device and will have the metadata
1798 written at the same location as it exists in the device. How‐
1799 ever, the file will be "sparse" so that only those blocks con‐
1800 taining metadata will be allocated. The total space used will be
1801 small.
1802
1803 The filename used in the directory will be the base name of the
1804 device. Further, if any links appear in /dev/disk/by-id which
1805 point to the device, then hard links to the file will be created
1806 in the directory based on these by-id names.
1807
1808 Multiple devices can be listed and their metadata will all be
1809 stored in the one directory.
1810
1811
1812 --restore=directory
1813 This is the reverse of --dump. mdadm will locate a file in the
1814 directory that has a name appropriate for the given device and
1815 will restore metadata from it. Names that match /dev/disk/by-id
1816 names are preferred, however if two of those refer to different
1817 files, mdadm will not choose between them but will abort the op‐
1818 eration.
1819
1820 If a file name is given instead of a directory then mdadm will
1821 restore from that file to a single device, always provided the
1822 size of the file matches that of the device, and the file con‐
1823 tains valid metadata.
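
              For example, to save the metadata of a component device to
              sparse files in a directory and later write it back (paths
              illustrative):

                   mdadm --dump=/root/md-meta /dev/sdb1
                   mdadm --restore=/root/md-meta /dev/sdb1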
1824
1825 --stop The devices should be active md arrays which will be deacti‐
1826 vated, as long as they are not currently in use.
1827
1828
1829 --run This will fully activate a partially assembled md array.
1830
1831
1832 --readonly
1833 This will mark an active array as read-only, providing that it
1834 is not currently being used.
1835
1836
1837 --readwrite
1838 This will change a readonly array back to being read/write.
1839
1840
1841 --scan For all operations except --examine, --scan will cause the oper‐
1842 ation to be applied to all arrays listed in /proc/mdstat. For
1843 --examine, --scan causes all devices listed in the config file
1844 to be examined.
1845
1846
1847 -b, --brief
1848 Be less verbose. This is used with --detail and --examine. Us‐
1849 ing --brief with --verbose gives an intermediate level of ver‐
1850 bosity.
1851
1852
1853 MONITOR MODE
1854 Usage: mdadm --monitor options... devices...
1855
1856
1857 Monitor can work in two modes:
1858
1859 • system-wide mode, following all md devices found in /proc/mdstat;
1860
1861 • following only the MD devices specified on the command line.
1862
1863 --scan selects system-wide mode and causes the monitor to track all
1864 md devices that appear in /proc/mdstat. If it is not given, then at
1865 least one device must be specified.
1866
1867 Monitor usage causes mdadm to periodically poll a number of md arrays
1868 and to report on any events noticed.
1869
1870 In both modes, the monitor will run as long as there is an active
1871 array with redundancy that it has been told to follow (with --scan,
1872 every array is followed).
1873
1874 As well as reporting events, mdadm may move a spare drive from one ar‐
1875 ray to another if they are in the same spare-group or domain and if the
1876 destination array has a failed drive but no spares.
1877
1878 The result of monitoring the arrays is the generation of events. These
1879 events are passed to a separate program (if specified) and may be
1880 mailed to a given E-mail address.
1881
1882 When passing events to a program, the program is run once for each
1883 event, and is given 2 or 3 command-line arguments: the first is the
1884 name of the event (see below), the second is the name of the md device
1885 which is affected, and the third is the name of a related device if
1886 relevant (such as a component device that has failed).
1887
1888 If --scan is given, then a program or an e-mail address must be speci‐
1889 fied on the command line or in the config file. If neither are avail‐
1890 able, then mdadm will not monitor anything. For devices given directly
1891 on the command line without a program or email specified, each event
1892 is reported to stdout.
1893
1894 Note: On systems managed by systemd, mdmonitor (mdmonitor.service)
1895 should be configured. The service is designed to be the primary
1896 solution for array monitoring and is configured to work in
1897 system-wide mode. It is started and stopped automatically according
1898 to the current state and types of MD arrays in the system. The
1899 service may require additional configuration, such as an e-mail
1900 address or a delay; that should be done in mdadm.conf.
1901
1902 The different events are:
1903
1904
1905 DeviceDisappeared
1906 An md array which previously was configured appears to no
1907 longer be configured. (syslog priority: Critical)
1908
1909 If mdadm was told to monitor an array which is RAID0 or Lin‐
1910 ear, then it will report DeviceDisappeared with the extra
1911 information Wrong-Level. This is because RAID0 and Linear
1912 do not support the device-failed, hot-spare and resync oper‐
1913 ations which are monitored.
1914
1915
1916 RebuildStarted
1917 An md array started reconstruction (e.g. recovery, resync,
1918 reshape, check, repair). (syslog priority: Warning)
1919
1920
1921 RebuildNN
1922 Where NN is a two-digit number (e.g. 05, 48). This indicates
1923 that the rebuild has reached that percentage of the total.
1924 The events are generated at a fixed increment from 0. The
1925 increment size may be specified with a command-line option
1926 (the default is 20). (syslog priority: Warning)
1927
1928
1929 RebuildFinished
1930 An md array that was rebuilding isn't any more, either be‐
1931 cause it finished normally or was aborted. (syslog priority:
1932 Warning)
1933
1934
1935 Fail An active component device of an array has been marked as
1936 faulty. (syslog priority: Critical)
1937
1938
1939 FailSpare
1940 A spare component device which was being rebuilt to replace
1941 a faulty device has failed. (syslog priority: Critical)
1942
1943
1944 SpareActive
1945 A spare component device which was being rebuilt to replace
1946 a faulty device has been successfully rebuilt and has been
1947 made active. (syslog priority: Info)
1948
1949
1950 NewArray
1951 A new md array has been detected in the /proc/mdstat file.
1952 (syslog priority: Info)
1953
1954
1955 DegradedArray
1956 A newly noticed array appears to be degraded. This message
1957 is not generated when mdadm notices a drive failure which
1958 causes degradation, but only when mdadm notices that an ar‐
1959 ray is degraded when it first sees the array. (syslog pri‐
1960 ority: Critical)
1961
1962
1963 MoveSpare
1964 A spare drive has been moved from one array in a spare-group
1965 or domain to another to allow a failed drive to be replaced.
1966 (syslog priority: Info)
1967
1968
1969 SparesMissing
1970 If mdadm has been told, via the config file, that an array
1971 should have a certain number of spare devices, and mdadm de‐
1972 tects that it has fewer than this number when it first sees
1973 the array, it will report a SparesMissing message. (syslog
1974 priority: Warning)
1975
1976
1977 TestMessage
1978 An array was found at startup, and the --test flag was
1979 given. (syslog priority: Info)
1980
1981 Only Fail, FailSpare, DegradedArray, SparesMissing and TestMessage
1982 cause Email to be sent. All events cause the program to be run. The
1983 program is run with two or three arguments: the event name, the array
1984 device and possibly a second device.
1985
1986 Each event has an associated array device (e.g. /dev/md1) and possibly
1987 a second device. For Fail, FailSpare, and SpareActive the second de‐
1988 vice is the relevant component device. For MoveSpare the second device
1989 is the array that the spare was moved from.
1990
1991 For mdadm to move spares from one array to another, the different ar‐
1992 rays need to be labeled with the same spare-group or the spares must be
1993 allowed to migrate through matching POLICY domains in the configuration
1994 file. The spare-group name can be any string; it is only necessary
1995 that different spare groups use different names.
1996
1997 When mdadm detects that an array in a spare group has fewer active de‐
1998 vices than necessary for the complete array, and has no spare devices,
1999 it will look for another array in the same spare group that has a full
2000 complement of working drives and a spare. It will then attempt to re‐
2001 move the spare from the second array and add it to the first. If the
2002 removal succeeds but the adding fails, then it is added back to the
2003 original array.
2004
2005 If the spare group for a degraded array is not defined, mdadm will look
2006 at the rules of spare migration specified by POLICY lines in mdadm.conf
2007 and then follow similar steps as above if a matching spare is found.
2008
2009
2010 GROW MODE
2011 The GROW mode is used for changing the size or shape of an active ar‐
2012 ray.
2013
2014 During the kernel 2.6 era the following changes were added:
2015
2016 • change the "size" attribute for RAID1, RAID4, RAID5 and RAID6.
2017
2018 • increase or decrease the "raid-devices" attribute of RAID0, RAID1,
2019 RAID4, RAID5, and RAID6.
2020
2021 • change the chunk-size and layout of RAID0, RAID4, RAID5, RAID6 and
2022 RAID10.
2023
2024 • convert between RAID1 and RAID5, between RAID5 and RAID6, between
2025 RAID0, RAID4, and RAID5, and between RAID0 and RAID10 (in the
2026 near-2 mode).
2027
2028 • add a write-intent bitmap to any array which supports these bit‐
2029 maps, or remove a write-intent bitmap from such an array.
2030
2031 • change the array's consistency policy.
2032
2033 Using GROW on containers is currently supported only for Intel's IMSM
2034 container format. The number of devices in a container can be in‐
2035 creased - which affects all arrays in the container - or an array in a
2036 container can be converted between levels where those levels are sup‐
2037 ported by the container, and the conversion is one of those listed
2038 above.
2039
2040
2041 Notes:
2042
2043 • Intel's native checkpointing doesn't use the --backup-file option
2044 and is transparent to the assembly feature.
2045
2046 • Roaming between Windows(R) and Linux systems for IMSM metadata is
2047 not supported during the grow process.
2048
2049 • When growing a raid0 device, the new component disk size (or exter‐
2050 nal backup size) should be larger than LCM(old, new) * chunk-size *
2051 2, where LCM() is the least common multiple of the old and new
2052 count of component disks, and "* 2" comes from the fact that mdadm
2053 refuses to use more than half of a spare device for backup space.
2054
2055
2056 SIZE CHANGES
2057 Normally when an array is built the "size" is taken from the smallest
2058 of the drives. If all the small drives in an array are, over time,
2059 removed and replaced with larger drives, then you could have an array
2060 of large drives with only a small amount used. In this situation,
2061 changing the "size" with "GROW" mode will allow the extra space to
2062 start being used. If the size is increased in this way, a "resync"
2063 process will start to make sure the new parts of the array are synchro‐
2064 nised.
2065
2066 Note that when an array changes size, any filesystem that may be stored
2067 in the array will not automatically grow or shrink to use or vacate the
2068 space. The filesystem will need to be explicitly told to use the extra
2069 space after growing, or to reduce its size prior to shrinking the ar‐
2070 ray.
2071
2072 Also, the size of an array cannot be changed while it has an active
2073 bitmap. If an array has a bitmap, it must be removed before the size
2074 can be changed. Once the change is complete a new bitmap can be cre‐
2075 ated.
2076
2077
2078 Note: --grow --size is not yet supported for external file bitmap.
2079
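       For example, to remove the bitmap, grow the array to use all
       available device space, and then re-create the bitmap (array name
       illustrative):

            mdadm --grow /dev/md0 --bitmap=none
            mdadm --grow /dev/md0 --size=max
            mdadm --grow /dev/md0 --bitmap=internal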
2080
2081 RAID-DEVICES CHANGES
2082 A RAID1 array can work with any number of devices from 1 upwards
2083 (though 1 is not very useful). There may be times when you want to
2084 increase or decrease the number of active devices. Note that this is
2085 different to hot-add or hot-remove which changes the number of inactive
2086 devices.
2087
2088 When reducing the number of devices in a RAID1 array, the slots which
2089 are to be removed from the array must already be vacant. That is, the
2090 devices which were in those slots must be failed and removed.
2091
2092 When the number of devices is increased, any hot spares that are
2093 present will be activated immediately.
2094
2095 Changing the number of active devices in a RAID5 or RAID6 is much more
2096 effort. Every block in the array will need to be read and written back
2097 to a new location. From 2.6.17, the Linux Kernel is able to increase
2098 the number of devices in a RAID5 safely, including restarting an inter‐
2099 rupted "reshape". From 2.6.31, the Linux Kernel is able to increase or
2100 decrease the number of devices in a RAID5 or RAID6.
2101
2102 From 2.6.35, the Linux Kernel is able to convert a RAID0 into a RAID4
2103 or RAID5. mdadm uses this functionality and the ability to add devices
2104 to a RAID4 to allow devices to be added to a RAID0. When requested to
2105 do this, mdadm will convert the RAID0 to a RAID4, add the necessary
2106 disks and make the reshape happen, and then convert the RAID4 back to
2107 RAID0.
2108
2109 When decreasing the number of devices, the size of the array will also
2110 decrease. If there was data in the array, it could get destroyed and
2111 this is not reversible, so you should firstly shrink the filesystem on
2112 the array to fit within the new size. To help prevent accidents, mdadm
2113 requires that the size of the array be decreased first with mdadm
2114 --grow --array-size. This is a reversible change which simply makes
2115 the end of the array inaccessible. The integrity of any data can then
2116 be checked before the non-reversible reduction in the number of devices
2117 is requested.
2118
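       For example, to shrink an array to three devices after first
       shrinking the filesystem (names and size illustrative):

            mdadm --grow /dev/md0 --array-size=100G
            mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-backup
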
2119 When relocating the first few stripes on a RAID5 or RAID6, it is not
2120 possible to keep the data on disk completely consistent and crash-
2121 proof. To provide the required safety, mdadm disables writes to the
2122 array while this "critical section" is reshaped, and takes a backup of
2123 the data that is in that section. For grows, this backup may be
2124 stored in any spare devices that the array has; it can also be stored
2125 in a separate file specified with the --backup-file option, which is
2126 required for shrinks, RAID level changes and layout
2127 changes. If this option is used, and the system does crash during the
2128 critical period, the same file must be passed to --assemble to restore
2129 the backup and reassemble the array. When shrinking rather than grow‐
2130 ing the array, the reshape is done from the end towards the beginning,
2131 so the "critical section" is at the end of the reshape.
2132
2133
2134 LEVEL CHANGES
2135 Changing the RAID level of any array happens instantaneously. However
2136 in the RAID5 to RAID6 case this requires a non-standard layout of the
2137 RAID6 data, and in the RAID6 to RAID5 case that non-standard layout is
2138 required before the change can be accomplished. So while the level
2139 change is instant, the accompanying layout change can take quite a long
2140 time. A --backup-file is required. If the array is not simultaneously
2141 being grown or shrunk, so that the array size will remain the same -
2142 for example, reshaping a 3-drive RAID5 into a 4-drive RAID6 - the
2143 backup file will be used not just for a "critical section" but through‐
2144 out the reshape operation, as described below under LAYOUT CHANGES.
2145
2146
2147 CHUNK-SIZE AND LAYOUT CHANGES
2148 Changing the chunk-size or layout without also changing the number of
2149 devices at the same time will involve re-writing all blocks in-place.
2150 To ensure against data loss in the case of a crash, a --backup-file
2151 must be provided for these changes. Small sections of the array will
2152 be copied to the backup file while they are being rearranged. This
2153 means that all the data is copied twice, once to the backup and once to
2154 the new layout on the array, so this type of reshape will go very
2155 slowly.
2156
2157 If the reshape is interrupted for any reason, this backup file must be
2158 made available to mdadm --assemble so the array can be reassembled.
2159 Consequently, the file cannot be stored on the device being reshaped.
2160
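       For example, to change the chunk size of an array to 128K with the
       required backup file (names illustrative):

            mdadm --grow /dev/md0 --chunk=128 --backup-file=/root/md0-backup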
2161
2162
2163 BITMAP CHANGES
2164 A write-intent bitmap can be added to, or removed from, an active ar‐
2165 ray. Either internal bitmaps, or bitmaps stored in a separate file,
2166 can be added. Note that if you add a bitmap stored in a file which is
2167 in a filesystem that is on the RAID array being affected, the system
2168 will deadlock. The bitmap must be on a separate filesystem.
2169
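       For example, to add an internal bitmap to an active array, or to
       remove it again (array name illustrative):

            mdadm --grow /dev/md0 --bitmap=internal
            mdadm --grow /dev/md0 --bitmap=none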
2170
2171 CONSISTENCY POLICY CHANGES
2172 The consistency policy of an active array can be changed by using the
2173 --consistency-policy option in Grow mode. Currently this works only for
2174 the ppl and resync policies and allows enabling or disabling the
2175 RAID5 Partial Parity Log (PPL).
2176
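       For example, to enable PPL on a RAID5 array (array name
       illustrative):

            mdadm --grow /dev/md0 --consistency-policy=ppl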
2177
2178 INCREMENTAL MODE
2179 Usage: mdadm --incremental [--run] [--quiet] component-device [op‐
2180 tional-aliases-for-device]
2181
2182 Usage: mdadm --incremental --fail component-device
2183
2184 Usage: mdadm --incremental --rebuild-map
2185
2186 Usage: mdadm --incremental --run --scan
2187
2188
2189 This mode is designed to be used in conjunction with a device discovery
2190 system. As devices are found in a system, they can be passed to mdadm
2191 --incremental to be conditionally added to an appropriate array.
2192
2193 Conversely, it can also be used with the --fail flag to do just the op‐
2194 posite and find whatever array a particular device is part of and re‐
2195 move the device from that array.
2196
2197 If the device passed is a CONTAINER device created by a previous call
2198 to mdadm, then rather than trying to add that device to an array, all
2199 the arrays described by the metadata of the container will be started.
2200
2201 mdadm performs a number of tests to determine if the device is part of
2202 an array, and which array it should be part of. If an appropriate ar‐
2203 ray is found, or can be created, mdadm adds the device to the array and
2204 conditionally starts the array.
2205
2206 Note that mdadm will normally only add devices to an array which were
2207 previously working (active or spare) parts of that array. The support
2208 for automatic inclusion of a new drive as a spare in some array re‐
2209 quires configuration through POLICY in the config file.
2210
2211 The tests that mdadm makes are as follows:
2212
2213 + Is the device permitted by mdadm.conf? That is, is it listed in
2214 a DEVICES line in that file? If DEVICES is absent then the de‐
2215 fault is to allow any device. Similarly if DEVICES contains the
2216 special word partitions then any device is allowed. Otherwise
2217 the device name given to mdadm, or one of the aliases given, or
2218 an alias found in the filesystem, must match one of the names or
2219 patterns in a DEVICES line.
2220
2221 This is the only context where the aliases are used. They are
2222 usually provided by udev rules mentioning $env{DEVLINKS}.
2223
2224
2225 + Does the device have a valid md superblock? If a specific meta‐
2226 data version is requested with --metadata or -e then only that
2227 style of metadata is accepted, otherwise mdadm finds any known
2228 version of metadata. If no md metadata is found, the device may
2229 still be added to an array as a spare if POLICY allows.
2230
2231
2232
2233 mdadm keeps a list of arrays that it has partially assembled in
2234 /run/mdadm/map. If no array exists which matches the metadata on the
2235 new device, mdadm must choose a device name and unit number. It does
2236 this based on any name given in mdadm.conf or any name information
2237 stored in the metadata. If this name suggests a unit number, that num‐
2238 ber will be used, otherwise a free unit number will be chosen. Nor‐
2239 mally mdadm will prefer to create a partitionable array, however if the
2240 CREATE line in mdadm.conf suggests that a non-partitionable array is
2241 preferred, that will be honoured.
2242
2243 If the array is not found in the config file and its metadata does not
2244 identify it as belonging to the "homehost", then mdadm will choose a
2245 name for the array which is certain not to conflict with any array
2246 which does belong to this host. It does this by adding an underscore
2247 and a small number to the name preferred by the metadata.
2248
2249 Once an appropriate array is found or created and the device is added,
2250 mdadm must decide if the array is ready to be started. It will nor‐
2251 mally compare the number of available (non-spare) devices to the number
2252 of devices that the metadata suggests need to be active. If there are
2253 at least that many, the array will be started. This means that if any
2254 devices are missing the array will not be restarted.
2255
2256 As an alternative, --run may be passed to mdadm in which case the array
2257 will be run as soon as there are enough devices present for the data to
2258 be accessible. For a RAID1, that means one device will start the ar‐
2259 ray. For a clean RAID5, the array will be started as soon as all but
2260 one drive is present.
2261
2262 Note that neither of these approaches is really ideal. If it can be
2263 known that all device discovery has completed, then
2264 mdadm -IRs
2265 can be run which will try to start all arrays that are being incremen‐
2266 tally assembled. They are started in "read-auto" mode in which they
2267 are read-only until the first write request. This means that no meta‐
2268 data updates are made and no attempt at resync or recovery happens.
2269 Further devices that are found before the first write can still be
2270 added safely.
2271
2272
2273 ENVIRONMENT
2274 This section describes environment variables that affect how mdadm op‐
2275 erates.
2276
2277
2278 MDADM_NO_MDMON
2279 Setting this value to 1 will prevent mdadm from automatically
2280 launching mdmon. This variable is intended primarily for debug‐
2281 ging mdadm/mdmon.
2282
2283
2284 MDADM_NO_UDEV
2285 Normally, mdadm does not create any device nodes in /dev, but
2286 leaves that task to udev. If udev appears not to be configured,
2287 or if this environment variable is set to '1', then mdadm will
2288 create any devices that are needed.
2289
2290
2291 MDADM_NO_SYSTEMCTL
2292 If mdadm detects that systemd is in use it will normally request
2293 systemd to start various background tasks (particularly mdmon)
2294 rather than forking and running them in the background. This
2295 can be suppressed by setting MDADM_NO_SYSTEMCTL=1.
2296
2297
2298 IMSM_NO_PLATFORM
2299 A key value of IMSM metadata is that it allows interoperability
2300 with boot ROMs on Intel platforms, and with other major operat‐
2301 ing systems. Consequently, mdadm will only allow an IMSM array
2302 to be created or modified if it detects that it is running on an
2303 Intel platform which supports IMSM, and supports the particular
2304 configuration of IMSM that is being requested (some functional‐
2305 ity requires newer OROM support).
2306
2307 These checks can be suppressed by setting IMSM_NO_PLATFORM=1 in
2308 the environment. This can be useful for testing or for disaster
2309 recovery. You should be aware that interoperability may be com‐
2310 promised by setting this value.
2311
2312 These checks can also be suppressed by adding mdadm.imsm.test=1
2313 to the kernel command line. This makes it easy to test IMSM code
2314 in a virtual machine that doesn't have IMSM virtual hardware.
2315
2316
2317 MDADM_GROW_ALLOW_OLD
2318 If an array is stopped while it is performing a reshape and that
2319 reshape was making use of a backup file, then when the array is
2320 re-assembled mdadm will sometimes complain that the backup file
2321 is too old. If this happens and you are certain it is the right
2322 backup file, you can over-ride this check by setting
2323 MDADM_GROW_ALLOW_OLD=1 in the environment.
2324
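              For example (device and file names illustrative):

                   MDADM_GROW_ALLOW_OLD=1 mdadm --assemble /dev/md0 \
                        --backup-file=/root/md0-backup /dev/sdb1 /dev/sdc1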
2325
2326 MDADM_CONF_AUTO
2327 Any string given in this variable is added to the start of the
2328 AUTO line in the config file, or treated as the whole AUTO line
2329 if none is given. It can be used to disable certain metadata
2330 types when mdadm is called from a boot script. For example
2331 export MDADM_CONF_AUTO='-ddf -imsm'
2332 will make sure that mdadm does not automatically assemble any
2333 DDF or IMSM arrays that are found. This can be useful on sys‐
2334 tems configured to manage such arrays with dmraid.
2335
2336
2337
2338 EXAMPLES
2339 mdadm --query /dev/name-of-device
2340 This will find out if a given device is a RAID array, or is part of
2341 one, and will provide brief information about the device.
2342
2343 mdadm --assemble --scan
2344 This will assemble and start all arrays listed in the standard config
2345 file. This command will typically go in a system startup file.
2346
2347 mdadm --stop --scan
2348 This will shut down all arrays that can be shut down (i.e. are not cur‐
2349 rently in use). This will typically go in a system shutdown script.
2350
2351 mdadm --follow --scan --delay=120
2352 If (and only if) there is an Email address or program given in the
2353 standard config file, then monitor the status of all arrays listed in
2354 that file by polling them every 2 minutes.
2355
2356 mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hd[ac]1
2357 Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
2358
2359 echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
2360 mdadm --detail --scan >> mdadm.conf
2361 This will create a prototype config file that describes currently ac‐
2362 tive arrays that are known to be made from partitions of IDE or SCSI
2363 drives. This file should be reviewed before being used as it may con‐
2364 tain unwanted detail.
2365
2366 echo 'DEVICE /dev/hd[a-z] /dev/sd*[a-z]' > mdadm.conf
2367 mdadm --examine --scan --config=mdadm.conf >> mdadm.conf
2368 This will find arrays which could be assembled from existing IDE and
2369 SCSI whole drives (not partitions), and store the information in the
2370 format of a config file. This file is very likely to contain unwanted
2371 detail, particularly the devices= entries. It should be reviewed and
2372 edited before being used as an actual config file.
2373
2374 mdadm --examine --brief --scan --config=partitions
2375 mdadm -Ebsc partitions
2376 Create a list of devices by reading /proc/partitions, scan these for
2377 RAID superblocks, and print out a brief listing of all that were found.
2378
2379 mdadm -Ac partitions -m 0 /dev/md0
2380 Scan all partitions and devices listed in /proc/partitions and assemble
2381 /dev/md0 out of all such devices with a RAID superblock with a minor
2382 number of 0.
2383
2384 mdadm --monitor --scan --daemonise > /run/mdadm/mon.pid
2385 If the config file contains a mail address or alert program, run
2386 mdadm in the background in monitor mode monitoring all md devices.
2387 Also write the pid of the mdadm daemon to /run/mdadm/mon.pid.
2388
2389 mdadm -Iq /dev/somedevice
2390 Try to incorporate newly discovered device into some array as appropri‐
2391 ate.
2392
2393 mdadm --incremental --rebuild-map --run --scan
2394 Rebuild the array map from any current arrays, and then start any that
2395 can be started.
2396
2397 mdadm /dev/md4 --fail detached --remove detached
2398 Any devices which are components of /dev/md4 will be marked as faulty
2399 and then removed from the array.
2400
2401 mdadm --grow /dev/md4 --level=6 --backup-file=/root/backup-md4
2402 The array /dev/md4 which is currently a RAID5 array will be converted
2403 to RAID6. There should normally already be a spare drive attached to
2404 the array as a RAID6 needs one more drive than a matching RAID5.
2405
2406 mdadm --create /dev/md/ddf --metadata=ddf --raid-disks 6 /dev/sd[a-f]
2407 Create a DDF array over 6 devices.
2408
2409 mdadm --create /dev/md/home -n3 -l5 -z 30000000 /dev/md/ddf
2410 Create a RAID5 array over any 3 devices in the given DDF set. Use only
2411 30 gigabytes of each device.
2412
2413 mdadm -A /dev/md/ddf1 /dev/sd[a-f]
2414 Assemble a pre-existing DDF array.
2415
2416 mdadm -I /dev/md/ddf1
2417 Assemble all arrays contained in the ddf array, assigning names as ap‐
2418 propriate.
2419
2420 mdadm --create --help
2421 Provide help about the Create mode.
2422
2423 mdadm --config --help
2424 Provide help about the format of the config file.
2425
2426 mdadm --help
2427 Provide general help.
2428
2429
2430 FILES
2431 /proc/mdstat
2432 If you're using the /proc filesystem, /proc/mdstat lists all active md
2433 devices with information about them. mdadm uses this to find arrays
2434 when --scan is given in Misc mode, and to monitor array reconstruction
2435 in Monitor mode.
2436
2437
2438 /etc/mdadm.conf (or /etc/mdadm/mdadm.conf)
2439 Default config file. See mdadm.conf(5) for more details.
2440
2441
2442 /etc/mdadm.conf.d (or /etc/mdadm/mdadm.conf.d)
2443 Default directory containing configuration files. See mdadm.conf(5)
2444 for more details.
2445
2446
2447 /run/mdadm/map
2448 When --incremental mode is used, this file gets a list of arrays cur‐
2449 rently being created.
2450
2451
2452 DEVICE NAMES
2453 mdadm understands two sorts of names for array devices.
2454
2455 The first is the so-called 'standard' format name, which matches the
2456 names used by the kernel and which appear in /proc/mdstat.
2457
2458 The second sort can be freely chosen, but must reside in /dev/md/.
2459 When giving a device name to mdadm to create or assemble an array, ei‐
2460 ther the full path name, such as /dev/md0 or /dev/md/home, can be
2461 given, or just the suffix of the second sort of name, such as home.
2462
2463 When mdadm chooses device names during auto-assembly or incremental as‐
2464 sembly, it will sometimes add a small sequence number to the end of the
2465 name to avoid conflicts between multiple arrays that have the same
2466 name. If mdadm can reasonably determine that the array really is meant
2467 for this host, either by a hostname in the metadata, or by the presence
2468 of the array in mdadm.conf, then it will leave off the suffix if possi‐
2469 ble. Also if the homehost is specified as <ignore> mdadm will only use
2470 a suffix if a different array of the same name already exists or is
2471 listed in the config file.
2472
2473 The standard names for non-partitioned arrays (the only sort of md ar‐
2474 ray available in 2.4 and earlier) are of the form
2475
2476 /dev/mdNN
2477
2478 where NN is a number. The standard names for partitionable arrays (as
2479 available from 2.6 onwards) are of the form:
2480
2481 /dev/md_dNN
2482
2483 Partition numbers should be indicated by adding "pMM" to these, thus
2484 "/dev/md/d1p2".
2485
2486 From kernel version 2.6.28 the "non-partitioned array" can actually be
2487 partitioned. So the "md_dNN" names are no longer needed, and parti‐
2488 tions such as "/dev/mdNNpXX" are possible.
2489
2490 From kernel version 2.6.29 standard names can be non-numeric following
2491 the form:
2492
2493 /dev/md_XXX
2494
2495 where XXX is any string. These names are supported by mdadm since ver‐
2496 sion 3.3 provided they are enabled in mdadm.conf.
2497
2498
2499 NOTE
2500 mdadm was previously known as mdctl.
2501
2502
2503 SEE ALSO
2504 For further information on mdadm usage, MD and the various levels of
2505 RAID, see:
2506
2507 https://raid.wiki.kernel.org/
2508
2509 (based upon Jakob Østergaard's Software-RAID.HOWTO)
2510
2511 The latest version of mdadm should always be available from
2512
2513 https://www.kernel.org/pub/linux/utils/raid/mdadm/
2514
2515 Related man pages:
2516
2517 mdmon(8), mdadm.conf(5), md(4).
2518
2519
2520
2521v4.2 MDADM(8)