MDADM(8)                    System Manager's Manual                   MDADM(8)

NAME

       mdadm - manage MD devices aka Linux Software RAID

SYNOPSIS

       mdadm [mode] <raiddevice> [options] <component-devices>

DESCRIPTION

       RAID devices are virtual devices created from two or more real block
       devices.  This allows multiple devices (typically disk drives or
       partitions thereof) to be combined into a single device to hold (for
       example) a single filesystem.  Some RAID levels include redundancy
       and so can survive some degree of device failure.

       Linux Software RAID devices are implemented through the md (Multiple
       Devices) device driver.

       Currently, Linux supports LINEAR md devices, RAID0 (striping), RAID1
       (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH, FAULTY, and
       CONTAINER.

       MULTIPATH is not a Software RAID mechanism, but does involve multiple
       devices: each device is a path to one common physical storage device.
       New installations should not use md/multipath as it is not well
       supported and has no ongoing development.  Use the Device Mapper
       based multipath-tools instead.

       FAULTY is also not true RAID, and it only involves one device.  It
       provides a layer over a true device that can be used to inject
       faults.
       CONTAINER is different again.  A CONTAINER is a collection of devices
       that are managed as a set.  This is similar to the set of devices
       connected to a hardware RAID controller.  The set of devices may
       contain a number of different RAID arrays each utilising some (or
       all) of the blocks from a number of the devices in the set.  For
       example, two devices in a 5-device set might form a RAID1 using the
       whole devices.  The remaining three might have a RAID5 over the first
       half of each device, and a RAID0 over the second half.

       With a CONTAINER, there is one set of metadata that describes all of
       the arrays in the container.  So when mdadm creates a CONTAINER
       device, the device just represents the metadata.  Other normal
       arrays (RAID1 etc) can be created inside the container.
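       As a sketch (device names hypothetical), an IMSM container and a
       member array might be created like this:

```shell
# Create a CONTAINER holding the metadata for two whole disks...
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/sda /dev/sdb
# ...then create a normal RAID1 array inside that container.
mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0
```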

MODES

       mdadm has several major modes of operation:

       Assemble
              Assemble the components of a previously created array into an
              active array.  Components can be explicitly given or can be
              searched for.  mdadm checks that the components do form a bona
              fide array, and can, on request, fiddle superblock information
              so as to assemble a faulty array.

       Build  Build an array that doesn't have per-device metadata
              (superblocks).  For these sorts of arrays, mdadm cannot
              differentiate between initial creation and subsequent assembly
              of an array.  It also cannot perform any checks that
              appropriate components have been requested.  Because of this,
              the Build mode should only be used together with a complete
              understanding of what you are doing.

       Create Create a new array with per-device metadata (superblocks).
              Appropriate metadata is written to each device, and then the
              array comprising those devices is activated.  A 'resync'
              process is started to make sure that the array is consistent
              (e.g. both sides of a mirror contain the same data) but the
              content of the device is left otherwise untouched.  The array
              can be used as soon as it has been created.  There is no need
              to wait for the initial resync to finish.
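              For illustration (device names hypothetical), creating a
              two-device mirror and confirming it is immediately usable:

```shell
# Write superblocks to both partitions and activate the mirror; the
# initial resync runs in the background.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# The resync progress is visible here, but the array is already usable.
cat /proc/mdstat
```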
       Follow or Monitor
              Monitor one or more md devices and act on any state changes.
              This is only meaningful for RAID1, 4, 5, 6, 10 or multipath
              arrays, as only these have interesting state.  RAID0 or Linear
              never have missing, spare, or failed drives, so there is
              nothing to monitor.
       Grow   Grow (or shrink) an array, or otherwise reshape it in some
              way.  Currently supported growth options include changing the
              active size of component devices and changing the number of
              active devices in Linear and RAID levels 0/1/4/5/6, changing
              the RAID level between 0, 1, 5, and 6, and between 0 and 10,
              changing the chunk size and layout for RAID 0/4/5/6/10, as
              well as adding or removing a write-intent bitmap and changing
              the array's consistency policy.
       Incremental Assembly
              Add a single device to an appropriate array.  If the addition
              of the device makes the array runnable, the array will be
              started.  This provides a convenient interface to a hot-plug
              system.  As each device is detected, mdadm has a chance to
              include it in some array as appropriate.  Optionally, when the
              --fail flag is passed in we will remove the device from any
              active array instead of adding it.

              If a CONTAINER is passed to mdadm in this mode, then any
              arrays within that container will be assembled and started.
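              A hot-plug handler (for example a udev rule action) might call
              mdadm along these lines; the device name is hypothetical:

```shell
# Offer a newly detected component to whatever array it belongs to;
# the array is started once enough components have appeared.
mdadm --incremental /dev/sdd1
# On unplug, remove the device from any active array instead.
mdadm --incremental --fail /dev/sdd1
```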
       Manage This is for doing things to specific components of an array
              such as adding new spares and removing faulty devices.

       Misc   This is an 'everything else' mode that supports operations on
              active arrays, operations on component devices such as erasing
              old superblocks, and information-gathering operations.

       Auto-detect
              This mode does not act on a specific device or array, but
              rather it requests the Linux Kernel to activate any
              auto-detected arrays.

OPTIONS

       Options for selecting a mode are:

       -A, --assemble
              Assemble a pre-existing array.

       -B, --build
              Build a legacy array without superblocks.

       -C, --create
              Create a new array.

       -F, --follow, --monitor
              Select Monitor mode.

       -G, --grow
              Change the size or shape of an active array.

       -I, --incremental
              Add/remove a single device to/from an appropriate array, and
              possibly start the array.

       --auto-detect
              Request that the kernel starts any auto-detected arrays.  This
              can only work if md is compiled into the kernel — not if it is
              a module.  Arrays can be auto-detected by the kernel if all
              the components are in primary MS-DOS partitions with partition
              type FD, and all use v0.90 metadata.  In-kernel autodetect is
              not recommended for new installations.  Using mdadm to detect
              and assemble arrays — possibly in an initrd — is substantially
              more flexible and should be preferred.

       If a device is given before any options, or if the first option is
       one of --add, --re-add, --add-spare, --fail, --remove, or --replace,
       then the MANAGE mode is assumed.  Anything other than these will
       cause the Misc mode to be assumed.
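       For example (device names hypothetical), giving a device before any
       options selects MANAGE mode implicitly:

```shell
# Equivalent to 'mdadm --manage /dev/md0 --add /dev/sde1'.
mdadm /dev/md0 --add /dev/sde1
# Mark a component faulty, then remove it from the array.
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
```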

       Options that are not mode-specific are:

       -h, --help
              Display a general help message or, after one of the above
              options, a mode-specific help message.

       --help-options
              Display more detailed help about command-line parsing and some
              commonly used options.

       -V, --version
              Print version information for mdadm.

       -v, --verbose
              Be more verbose about what is happening.  This can be used
              twice to be extra-verbose.  The extra verbosity currently only
              affects --detail --scan and --examine --scan.

       -q, --quiet
              Avoid printing purely informative messages.  With this, mdadm
              will be silent unless there is something really important to
              report.

       -f, --force
              Be more forceful about certain operations.  See the various
              modes for the exact meaning of this option in different
              contexts.
       -c, --config=
              Specify the config file or directory.  If not specified, the
              default config file and default conf.d directory will be used.
              See mdadm.conf(5) for more details.

              If the config file given is partitions then nothing will be
              read, but mdadm will act as though the config file contained
              exactly
                  DEVICE partitions containers
              and will read /proc/partitions to find a list of devices to
              scan, and /proc/mdstat to find a list of containers to
              examine.  If the word none is given for the config file, then
              mdadm will act as though the config file were empty.

              If the name given is of a directory, then mdadm will collect
              all the files contained in the directory with a name ending in
              .conf, sort them lexically, and process all of those files as
              config files.
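              As a sketch, a minimal config file might look like the
              following (the UUID is a made-up placeholder; see
              mdadm.conf(5) for the full keyword list):

```
# /etc/mdadm.conf
# Scan all block-device partitions and containers for components.
DEVICE partitions containers
# Identify the array by UUID so device renaming does not matter.
ARRAY /dev/md0 UUID=c3d1bb81:f2a2ef52:04894333:532a878b
# Where Monitor mode sends alerts.
MAILADDR root
```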
       -s, --scan
              Scan config file or /proc/mdstat for missing information.  In
              general, this option gives mdadm permission to get any missing
              information (like component devices, array devices, array
              identities, and alert destination) from the configuration file
              (see previous option); one exception is MISC mode when using
              --detail or --stop, in which case --scan says to get a list of
              array devices from /proc/mdstat.
       -e, --metadata=
              Declare the style of RAID metadata (superblock) to be used.
              The default is 1.2 for --create, and to guess for other
              operations.  The default can be overridden by setting the
              metadata value for the CREATE keyword in mdadm.conf.

              Options are:

              0, 0.90
                     Use the original 0.90 format superblock.  This format
                     limits arrays to 28 component devices and limits
                     component devices of levels 1 and greater to 2
                     terabytes.  It is also possible for there to be
                     confusion about whether the superblock applies to a
                     whole device or just the last partition, if that
                     partition starts on a 64K boundary.

              1, 1.0, 1.1, 1.2 default
                     Use the new version-1 format superblock.  This has
                     fewer restrictions.  It can easily be moved between
                     hosts with different endian-ness, and a recovery
                     operation can be checkpointed and restarted.  The
                     different sub-versions store the superblock at
                     different locations on the device, either at the end
                     (for 1.0), at the start (for 1.1) or 4K from the start
                     (for 1.2).  "1" is equivalent to "1.2" (the commonly
                     preferred 1.x format).  "default" is equivalent to
                     "1.2".

              ddf    Use the "Industry Standard" DDF (Disk Data Format)
                     format defined by SNIA.  When creating a DDF array a
                     CONTAINER will be created, and normal arrays can be
                     created in that container.

              imsm   Use the Intel(R) Matrix Storage Manager metadata
                     format.  This creates a CONTAINER which is managed in a
                     similar manner to DDF, and is supported by an
                     option-rom on some platforms:

                     https://www.intel.com/content/www/us/en/support/products/122484/memory-and-storage/ssd-software/intel-virtual-raid-on-cpu-intel-vroc.html
       --homehost=
              This will override any HOMEHOST setting in the config file and
              provides the identity of the host which should be considered
              the home for any arrays.

              When creating an array, the homehost will be recorded in the
              metadata.  For version-1 superblocks, it will be prefixed to
              the array name.  For version-0.90 superblocks, part of the
              SHA1 hash of the hostname will be stored in the latter half of
              the UUID.

              When reporting information about an array, any array which is
              tagged for the given homehost will be reported as such.

              When using Auto-Assemble, only arrays tagged for the given
              homehost will be allowed to use 'local' names (i.e. not ending
              in '_' followed by a digit string).  See below under
              Auto-Assembly.

              The special name "any" can be used as a wild card.  If an
              array is created with --homehost=any then the name "any" will
              be stored in the array and it can be assembled in the same way
              on any host.  If an array is assembled with this option, then
              the homehost recorded on the array will be ignored.

       --prefer=
              When mdadm needs to print the name for a device it normally
              finds the name in /dev which refers to the device and is the
              shortest.  When a path component is given with --prefer mdadm
              will prefer a longer name if it contains that component.  For
              example --prefer=by-uuid will prefer a name in a subdirectory
              of /dev called by-uuid.

              This functionality is currently only provided by --detail and
              --monitor.
       --home-cluster=
              Specifies the cluster name for the md device.  The md device
              can be assembled only on the cluster which matches the name
              specified.  If this option is not provided, mdadm tries to
              detect the cluster name automatically.

       For create, build, or grow:

       -n, --raid-devices=
              Specify the number of active devices in the array.  This, plus
              the number of spare devices (see below), must equal the number
              of component-devices (including "missing" devices) that are
              listed on the command line for --create.  Setting a value of 1
              is probably a mistake and so requires that --force be
              specified first.  A value of 1 will then be allowed for
              linear, multipath, RAID0 and RAID1.  It is never allowed for
              RAID4, RAID5 or RAID6.
              This number can only be changed using --grow for RAID1, RAID4,
              RAID5 and RAID6 arrays, and only on kernels which provide the
              necessary support.

       -x, --spare-devices=
              Specify the number of spare (eXtra) devices in the initial
              array.  Spares can also be added and removed later.  The
              number of component devices listed on the command line must
              equal the number of RAID devices plus the number of spare
              devices.
       -z, --size=
              Amount (in Kilobytes) of space to use from each drive in RAID
              levels 1/4/5/6/10 and for RAID 0 on external metadata.  This
              must be a multiple of the chunk size, and must leave about
              128Kb of space at the end of the drive for the RAID
              superblock.  If this is not specified (as it normally is not)
              the smallest drive (or partition) sets the size, though if
              there is a variance among the drives of greater than 1%, a
              warning is issued.

              A suffix of 'K', 'M', 'G' or 'T' can be given to indicate
              Kilobytes, Megabytes, Gigabytes or Terabytes respectively.

              Sometimes a replacement drive can be a little smaller than the
              original drives, though this should be minimised by IDEMA
              standards.  Such a replacement drive will be rejected by md.
              To guard against this it can be useful to set the initial size
              slightly smaller than the smaller device, with the aim that it
              will still be larger than any replacement.

              This option can be used with --create for determining the
              initial size of an array.  For external metadata, it can be
              used on a volume, but not on a container itself.  Setting the
              initial size of a RAID 0 array is only valid for external
              metadata.

              This value can be set with --grow for RAID levels 1/4/5/6/10,
              though DDF arrays may not be able to support this.  RAID 0
              array size cannot be changed.  If the array was created with a
              size smaller than the currently active drives, the extra space
              can be accessed using --grow.  The size can be given as max,
              which means to choose the largest size that fits on all
              current drives.

              Before reducing the size of the array (with --grow --size=)
              you should make sure that space isn't needed.  If the device
              holds a filesystem, you would need to resize the filesystem to
              use less space.

              After reducing the array size you should check that the data
              stored in the device is still available.  If the device holds
              a filesystem, then an 'fsck' of the filesystem is a minimum
              requirement.  If there are problems the array can be made
              bigger again with no loss with another --grow --size= command.
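              To illustrate (array name hypothetical), after replacing all
              members with larger drives the extra space can be claimed and
              the filesystem grown:

```shell
# Expand the component size to the largest that fits on every drive.
mdadm --grow /dev/md0 --size=max
# Then grow the filesystem on top (ext4 shown; other filesystems differ).
resize2fs /dev/md0
```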
       -Z, --array-size=
              This is only meaningful with --grow and its effect is not
              persistent: when the array is stopped and restarted the
              default array size will be restored.

              Setting the array-size causes the array to appear smaller to
              programs that access the data.  This is particularly needed
              before reshaping an array so that it will be smaller.  As the
              reshape is not reversible, but setting the size with
              --array-size is, it is required that the array size is reduced
              as appropriate before the number of devices in the array is
              reduced.

              Before reducing the size of the array you should make sure
              that space isn't needed.  If the device holds a filesystem,
              you would need to resize the filesystem to use less space.

              After reducing the array size you should check that the data
              stored in the device is still available.  If the device holds
              a filesystem, then an 'fsck' of the filesystem is a minimum
              requirement.  If there are problems the array can be made
              bigger again with no loss with another --grow --array-size=
              command.

              A suffix of 'K', 'M', 'G' or 'T' can be given to indicate
              Kilobytes, Megabytes, Gigabytes or Terabytes respectively.  A
              value of max restores the apparent size of the array to be
              whatever the real amount of available space is.

              Clustered arrays do not support this parameter yet.
       -c, --chunk=
              Specify chunk size in kilobytes.  The default when creating an
              array is 512KB.  To ensure compatibility with earlier
              versions, the default when building an array with no
              persistent metadata is 64KB.  This is only meaningful for
              RAID0, RAID4, RAID5, RAID6, and RAID10.

              RAID4, RAID5, RAID6, and RAID10 require the chunk size to be a
              power of 2, with minimal chunk size being 4KB.

              A suffix of 'K', 'M', 'G' or 'T' can be given to indicate
              Kilobytes, Megabytes, Gigabytes or Terabytes respectively.

       --rounding=
              Specify the rounding factor for a Linear array.  The size of
              each component will be rounded down to a multiple of this
              size.  This is a synonym for --chunk but highlights the
              different meaning for Linear as compared to other RAID levels.
              The default is 64K if a kernel earlier than 2.6.16 is in use,
              and is 0K (i.e. no rounding) in later kernels.
       -l, --level=
              Set RAID level.  When used with --create, options are: linear,
              raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6,
              6, raid10, 10, multipath, mp, faulty, container.  Obviously
              some of these are synonymous.

              When a CONTAINER metadata type is requested, only the
              container level is permitted, and it does not need to be
              explicitly given.

              When used with --build, only linear, stripe, raid0, 0, raid1,
              multipath, mp, and faulty are valid.

              Can be used with --grow to change the RAID level in some
              cases.  See LEVEL CHANGES below.
       -p, --layout=
              This option configures the fine details of data layout for
              RAID5, RAID6, and RAID10 arrays, and controls the failure
              modes for faulty.  It can also be used for working around a
              kernel bug with RAID0, but generally doesn't need to be used
              explicitly.

              The layout of the RAID5 parity block can be one of
              left-asymmetric, left-symmetric, right-asymmetric,
              right-symmetric, la, ra, ls, rs.  The default is
              left-symmetric.

              It is also possible to cause RAID5 to use a RAID4-like layout
              by choosing parity-first, or parity-last.

              Finally for RAID5 there are DDF-compatible layouts,
              ddf-zero-restart, ddf-N-restart, and ddf-N-continue.

              These same layouts are available for RAID6.  There are also 4
              layouts that will provide an intermediate stage for converting
              between RAID5 and RAID6.  These provide a layout which is
              identical to the corresponding RAID5 layout on the first N-1
              devices, and has the 'Q' syndrome (the second 'parity' block
              used by RAID6) on the last device.  These layouts are:
              left-symmetric-6, right-symmetric-6, left-asymmetric-6,
              right-asymmetric-6, and parity-first-6.

              When setting the failure mode for level faulty, the options
              are: write-transient, wt, read-transient, rt,
              write-persistent, wp, read-persistent, rp, write-all,
              read-fixable, rf, clear, flush, none.

              Each failure mode can be followed by a number, which is used
              as a period between fault generation.  Without a number, the
              fault is generated once on the first relevant request.  With a
              number, the fault will be generated after that many requests,
              and will continue to be generated every time the period
              elapses.

              Multiple failure modes can be current simultaneously by using
              the --grow option to set subsequent failure modes.

              "clear" or "none" will remove any pending or periodic failure
              modes, and "flush" will clear any persistent faults.

              The layout options for RAID10 are one of 'n', 'o' or 'f'
              followed by a small number signifying the number of copies of
              each datablock.  The default is 'n2'.  The supported options
              are:

              'n' signals 'near' copies.  Multiple copies of one data block
              are at similar offsets in different devices.

              'o' signals 'offset' copies.  Rather than the chunks being
              duplicated within a stripe, whole stripes are duplicated but
              are rotated by one device so duplicate blocks are on different
              devices.  Thus subsequent copies of a block are in the next
              drive, and are one chunk further down.

              'f' signals 'far' copies (multiple copies have very different
              offsets).  See md(4) for more detail about 'near', 'offset',
              and 'far'.

              As for the number of copies of each data block, 2 is normal,
              and 3 can be useful.  This number can be at most equal to the
              number of devices in the array.  It does not need to divide
              evenly into that number (e.g. it is perfectly legal to have an
              'n2' layout for an array with an odd number of devices).
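              As a sketch (device names hypothetical), the RAID10 layout and
              copy count are passed as a single word:

```shell
# Three devices, two 'near' copies of every block: an odd device count
# is legal because the copies need not divide evenly into the devices.
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1
```

              'f2' (two 'far' copies) in place of 'n2' would trade write
              locality for faster sequential reads; see md(4).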
              A bug introduced in Linux 3.14 means that RAID0 arrays with
              devices of differing sizes started using a different layout.
              This could lead to data corruption.  Since Linux 5.4 (and
              various stable releases that received backports), the kernel
              will not accept such an array unless a layout is explicitly
              set.  It can be set to 'original' or 'alternate'.  When
              creating a new array, mdadm will select 'original' by default,
              so the layout does not normally need to be set.  An array
              created for either 'original' or 'alternate' will not be
              recognized by an (unpatched) kernel prior to 5.4.  To create a
              RAID0 array with devices of differing sizes that can be used
              on an older kernel, you can set the layout to 'dangerous'.
              This will use whichever layout the running kernel supports, so
              the data on the array may become corrupt when changing kernel
              from pre-3.14 to a later kernel.

              When an array is converted between RAID5 and RAID6 an
              intermediate RAID6 layout is used in which the second parity
              block (Q) is always on the last device.  To convert a RAID5 to
              RAID6 and leave it in this new layout (which does not require
              re-striping) use --layout=preserve.  This will try to avoid
              any restriping.

              The converse of this is --layout=normalise which will change a
              non-standard RAID6 layout into a more standard arrangement.
       --parity=
              same as --layout (thus explaining the p of -p).
       -b, --bitmap=
              Specify a file to store a write-intent bitmap in.  The file
              should not exist unless --force is also given.  The same file
              should be provided when assembling the array.  If the word
              internal is given, then the bitmap is stored with the metadata
              on the array, and so is replicated on all devices.  If the
              word none is given with --grow mode, then any bitmap that is
              present is removed.  If the word clustered is given, the array
              is created for a clustered environment.  One bitmap is created
              for each node as defined by the --nodes parameter, and they
              are stored internally.

              To help catch typing errors, the filename must contain at
              least one slash ('/') if it is a real file (not 'internal' or
              'none').

              Note: external bitmaps are only known to work on ext2 and
              ext3.  Storing bitmap files on other filesystems may result in
              serious problems.

              When creating an array on devices which are 100G or larger,
              mdadm automatically adds an internal bitmap as it will usually
              be beneficial.  This can be suppressed with --bitmap=none or
              by selecting a different consistency policy with
              --consistency-policy.
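              For example (array name hypothetical), a bitmap can be added
              to or removed from an existing array in Grow mode:

```shell
# Store a write-intent bitmap in the array metadata on every device,
# so an unclean shutdown needs only a partial resync.
mdadm --grow /dev/md0 --bitmap=internal
# Remove it again.
mdadm --grow /dev/md0 --bitmap=none
```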
       --bitmap-chunk=
              Set the chunk size of the bitmap.  Each bit corresponds to
              that many Kilobytes of storage.  When using a file-based
              bitmap, the default is to use the smallest size that is at
              least 4 and requires no more than 2^21 chunks.  When using an
              internal bitmap, the chunk size defaults to 64Meg, or larger
              if necessary to fit the bitmap into the available space.

              A suffix of 'K', 'M', 'G' or 'T' can be given to indicate
              Kilobytes, Megabytes, Gigabytes or Terabytes respectively.
       -W, --write-mostly
              subsequent devices listed in a --build, --create, or --add
              command will be flagged as 'write-mostly'.  This is valid for
              RAID1 only and means that the 'md' driver will avoid reading
              from these devices if at all possible.  This can be useful if
              mirroring over a slow link.
       --write-behind=
              Specify that write-behind mode should be enabled (valid for
              RAID1 only).  If an argument is specified, it will set the
              maximum number of outstanding writes allowed.  The default
              value is 256.  A write-intent bitmap is required in order to
              use write-behind mode, and write-behind is only attempted on
              drives marked as write-mostly.
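              Combining write-mostly and write-behind (device names
              hypothetical, including the remote network block device):

```shell
# Mirror a local partition to a slow remote device: reads are served
# from the local half, and up to 256 writes to the remote half may be
# outstanding (write-behind requires the write-intent bitmap).
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
      --write-behind=256 /dev/sdb1 --write-mostly /dev/nbd0
```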
       --failfast
              subsequent devices listed in a --create or --add command will
              be flagged as 'failfast'.  This is valid for RAID1 and RAID10
              only.  IO requests to these devices will be encouraged to fail
              quickly rather than cause long delays due to error handling.
              Also no attempt is made to repair a read error on these
              devices.

              If an array becomes degraded so that the 'failfast' device is
              the only usable device, the 'failfast' flag will then be
              ignored and extended delays will be preferred to complete
              failure.

              The 'failfast' flag is appropriate for storage arrays which
              have a low probability of true failure, but which may
              sometimes cause unacceptable delays due to internal
              maintenance functions.
617       --assume-clean
618              Tell mdadm that the array pre-existed and is known to be  clean.
619              It  can be useful when trying to recover from a major failure as
620              you can be sure that no data will be affected unless  you  actu‐
621              ally write to the array.  It can also be used when creating
622              a RAID1 or RAID10 if you want to avoid the initial resync.
623              This practice, while normally safe, is not recommended; use
624              it only if you really know what you are doing.
625
626              If the devices that will be part of a new array were filled
627              with zeros before creation, the operator knows the array is
628              actually clean.  If that is the case, such as after running
629              badblocks, this argument can be used to tell mdadm what the
630              operator knows.
631
632              When an array is resized to a larger size  with  --grow  --size=
633              the new space is normally resynced in the same way that the
634              whole array is resynced at creation.  From  Linux  version  3.0,
635              --assume-clean  can be used with that command to avoid the auto‐
636              matic resync.
637
638
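              For example, on Linux 3.0 or later, growing the component size
              without the automatic resync of the new space might look like:

```shell
# Sketch: expands each member to its maximum usable size, skipping
# the resync of the newly added space.
mdadm --grow /dev/md0 --size=max --assume-clean
```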
639       --backup-file=
640              This is needed when --grow is used to  increase  the  number  of
641              raid  devices  in a RAID5 or RAID6 if there are no spare devices
642              available, or to shrink, change RAID level or layout.   See  the
643              GROW  MODE section below on RAID-DEVICES CHANGES.  The file must
644              be stored on a separate device, not on the RAID array being  re‐
645              shaped.
646
647
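              For example, growing a RAID5 from 3 to 4 devices might be
              sketched as follows (device and backup-file names are place‐
              holders; the backup file must not live on the array being re‐
              shaped):

```shell
# /dev/sdd1 is a placeholder for the newly added device.
mdadm /dev/md0 --add /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-grow.backup
```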
648       --data-offset=
649              Arrays  with  1.x  metadata can leave a gap between the start of
650              the device and the start of array data.  This gap  can  be  used
651              for  various  metadata.   The  start  of  data  is  known as the
652              data-offset.  Normally an appropriate data  offset  is  computed
653              automatically.   However  it  can be useful to set it explicitly
654              such as when re-creating an array which was  originally  created
655              using  a  different  version of mdadm which computed a different
656              offset.
657
658              Setting the offset explicitly overrides the default.  The value
659              given is in Kilobytes unless a suffix of 'K', 'M', 'G' or 'T' is
660              used to explicitly indicate Kilobytes, Megabytes,  Gigabytes  or
661              Terabytes respectively.
662
663              Since  Linux 3.4, --data-offset can also be used with --grow for
664              some  RAID  levels  (initially  on  RAID10).   This  allows  the
665              data-offset  to be changed as part of the reshape process.  When
666              the data offset is changed, no backup file is  required  as  the
667              difference in offsets is used to provide the same functionality.
668
669              When  the  new offset is earlier than the old offset, the number
670              of devices in the array cannot shrink.  When it is after the old
671              offset, the number of devices in the array cannot increase.
672
673              When  creating an array, --data-offset can be specified as vari‐
674              able.  In that case each member device is expected to have an
675              offset appended to its name, separated by a colon.  This makes
676              it possible to recreate exactly an array which has varying  data
677              offsets (as can happen when different versions of mdadm are used
678              to add different devices).
679
680
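              For example, recreating an array whose members were created
              with different data offsets might be sketched as (offsets in
              Kilobytes; all names are placeholders):

```shell
# Each member carries its own offset after a colon.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --data-offset=variable \
      /dev/sda1:2048 /dev/sdb1:4096
```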
681       --continue
682              This option is complementary to the --freeze-reshape option  for
683              assembly.  It is needed when a --grow operation was
684              interrupted and not restarted automatically because
685              --freeze-reshape was used during array assembly.  It is used
686              together with the -G (--grow) command and the device with a
687              pending reshape to be continued.  All parameters required to
688              continue the reshape will be read from the array metadata.
689              If the initial --grow command required the --backup-file=
690              option, the continuation requires the same backup file as well.
691
692              Any other parameters passed together with the --continue
693              option will be ignored.
694
695
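              For example, a reshape frozen during initrd assembly might
              later be continued with (paths are placeholders):

```shell
# The same backup file given to the original --grow must be repeated here.
mdadm --grow --continue /dev/md0 --backup-file=/root/md0-grow.backup
```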
696       -N, --name=
697              Set a name for the array.  This is currently only effective when
698              creating an array with a version-1 superblock, or an array in  a
699              DDF  container.  The name is a simple textual string that can be
700              used to identify array components when assembling.  If  name  is
701              needed  but  not specified, it is taken from the basename of the
702              device that is being created.  e.g. when  creating  /dev/md/home
703              the name will default to home.
704
705
706       -R, --run
707              Insist  that mdadm run the array, even if some of the components
708              appear to be active in another array  or  filesystem.   Normally
709              mdadm will ask for confirmation before including such components
710              in an array.  This option causes that question to be suppressed.
711
712
713       -f, --force
714              Insist that mdadm accept the geometry and layout specified with‐
715              out  question.  Normally mdadm will not allow the creation of an
716              array with only one device, and will try to create a RAID5 array
717              with  one  missing  drive (as this makes the initial resync work
718              faster).  With --force, mdadm will not try to be so clever.
719
720
721       -o, --readonly
722              Start the array read only rather than read-write as normal.   No
723              writes will be allowed to the array, and no resync, recovery, or
724              reshape will be started. It works with Create, Assemble,  Manage
725              and Misc mode.
726
727
728       -a, --auto{=yes,md,mdp,part,p}{NN}
729              Instruct mdadm how to create the device file if needed, possibly
730              allocating an unused minor number.  "md" causes a non-partition‐
731              able  array  to  be used (though since Linux 2.6.28, these array
732              devices are in fact partitionable).  "mdp", "part" or "p" causes
733              a  partitionable  array  (2.6  and later) to be used.  "yes" re‐
734              quires the named md device to have a 'standard' format, and  the
735              type  and minor number will be determined from this.  With mdadm
736              3.0, device creation is normally left up to udev so this  option
737              is unlikely to be needed.  See DEVICE NAMES below.
738
739              The argument can also come immediately after "-a".  e.g. "-ap".
740
741              If  --auto  is  not  given  on the command line or in the config
742              file, then the default will be --auto=yes.
743
744              If --scan is also given, then any auto= entries  in  the  config
745              file  will  override the --auto instruction given on the command
746              line.
747
748              For partitionable arrays, mdadm will create the device file  for
749              the  whole  array  and  for the first 4 partitions.  A different
750              number of partitions can be specified at the end of this  option
751              (e.g.   --auto=p7).   If  the device name ends with a digit, the
752              partition names add a 'p', and a number, e.g.   /dev/md/home1p3.
753              If  there  is  no  trailing digit, then the partition names just
754              have a number added, e.g.  /dev/md/scratch3.
755
756              If the md device name is in a 'standard' format as described  in
757              DEVICE  NAMES,  then  it will be created, if necessary, with the
758              appropriate device number based on that  name.   If  the  device
759              name  is not in one of these formats, then an unused device num‐
760              ber will be allocated.  The device number will be considered un‐
761              used  if  there is no active array for that number, and there is
762              no entry in /dev for that number and with a  non-standard  name.
763              Names  that  are  not  in  'standard' format are only allowed in
764              "/dev/md/".
765
766              This is meaningful with --create or --build.
767
768
769       -a, --add
770              This option can be used in Grow mode in two cases.
771
772              If the target array is a Linear array, then --add can be used to
773              add one or more devices to the array.  They are simply
774              concatenated onto the end of the array.  Once added, the
775              devices cannot be removed.
776
777              If  the --raid-disks option is being used to increase the number
778              of devices in an array, then --add can be used to add some extra
779              devices  to be included in the array.  In most cases this is not
780              needed as the extra devices can be added as  spares  first,  and
781              then  the  number  of  raid  disks can be changed.  However, for
782              RAID0 it is not possible to add spares.  So to increase the num‐
783              ber of devices in a RAID0, it is necessary to set the new number
784              of devices, and to add the new devices, in the same command.
785
786
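              For example, since a RAID0 cannot hold spares, growing it from
              two to three devices must give the new device count and the
              new device in one command (placeholder names):

```shell
mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdc1
```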
787       --nodes
788              Only works when the array is created for  a  clustered  environ‐
789              ment.  It  specifies  the maximum number of nodes in the cluster
790              that will use this device simultaneously. If not specified, this
791              defaults to 4.
792
793
794       --write-journal
795              Specify journal device for the RAID-4/5/6 array. The journal de‐
796              vice should be an SSD with a reasonable lifetime.
797
798
799       -k, --consistency-policy=
800              Specify how the array maintains consistency in the  case  of  an
801              unexpected  shutdown.  Only relevant for RAID levels with redun‐
802              dancy.  Currently supported options are:
803
804
805              resync Full resync is performed and all redundancy  is  regener‐
806                     ated when the array is started after an unclean shutdown.
807
808
809              bitmap Resync  assisted by a write-intent bitmap. Implicitly se‐
810                     lected when using --bitmap.
811
812
813              journal
814                     For RAID levels 4/5/6, the journal device is used to  log
815                     transactions  and  replay  after an unclean shutdown. Im‐
816                     plicitly selected when using --write-journal.
817
818
819              ppl    For RAID5 only, Partial Parity Log is used to  close  the
820                     write  hole  and  eliminate  resync. PPL is stored in the
821                     metadata region of  RAID  member  drives,  no  additional
822                     journal drive is needed.
823
824
825              Can  be  used with --grow to change the consistency policy of an
826              active array in some cases. See CONSISTENCY POLICY  CHANGES  be‐
827              low.
828
829
830

For assemble:

832       -u, --uuid=
833              uuid  of  array to assemble.  Devices which don't have this uuid
834              are excluded.
835
836
837       -m, --super-minor=
838              Minor number of device that  array  was  created  for.   Devices
839              which  don't have this minor number are excluded.  If you create
840              an array as /dev/md1, then all superblocks will contain the  mi‐
841              nor number 1, even if the array is later assembled as /dev/md2.
842
843              Giving the literal word "dev" for --super-minor will cause mdadm
844              to use the minor number of the md device that  is  being  assem‐
845              bled.   e.g.  when  assembling  /dev/md0, --super-minor=dev will
846              look for super blocks with a minor number of 0.
847
848              --super-minor is only relevant for v0.90  metadata,  and  should
849              not normally be used.  Using --uuid is much safer.
850
851
852       -N, --name=
853              Specify  the  name  of  the array to assemble.  This must be the
854              name that was specified when creating the array.  It must either
855              match  the  name  stored  in  the superblock exactly, or it must
856              match with the current homehost prefixed to  the  start  of  the
857              given name.
858
859
860       -f, --force
861              Assemble  the array even if the metadata on some devices appears
862              to be out-of-date.  If mdadm cannot find enough working  devices
863              to  start the array, but can find some devices that are recorded
864              as having failed, then it will mark those devices as working  so
865              that the array can be started.  This works only for native
866              metadata.  For external metadata it allows starting a dirty,
867              degraded RAID 4, 5, or 6.  An array which requires --force to
868              be started may contain data corruption.  Use it carefully.
869
870
871       -R, --run
872              Attempt to start the array even if fewer drives were given  than
873              were  present  last  time the array was active.  Normally if not
874              all the expected drives are found and --scan is not  used,  then
875              the  array will be assembled but not started.  With --run an at‐
876              tempt will be made to start it anyway.
877
878
879       --no-degraded
880              This is the reverse of --run in that it inhibits the startup
881              of an array unless all expected drives are present.  This is only
882              needed with --scan, and can be used if the physical  connections
883              to devices are not as reliable as you would like.
884
885
886       -a, --auto{=no,yes,md,mdp,part}
887              See this option under Create and Build options.
888
889
890       -b, --bitmap=
891              Specify  the  bitmap file that was given when the array was cre‐
892              ated.  If an array has an internal bitmap, there is no  need  to
893              specify this when assembling the array.
894
895
896       --backup-file=
897              If  --backup-file was used while reshaping an array (e.g. chang‐
898              ing number of devices or chunk size) and the system crashed dur‐
899              ing  the  critical  section, then the same --backup-file must be
900              presented to --assemble to allow possibly corrupted data  to  be
901              restored, and the reshape to be completed.
902
903
904       --invalid-backup
905              If the file needed for the above option is not available for any
906              reason an empty file can be given together with this  option  to
907              indicate that the backup file is invalid.  In this case the data
908              that was being rearranged at the time of the crash could be  ir‐
909              recoverably  lost, but the rest of the array may still be recov‐
910              erable.  This option should only be used as  a  last  resort  if
911              there is no way to recover the backup file.
912
913
914
915       -U, --update=
916              Update the superblock on each device while assembling the array.
917              The argument given to this flag can be  one  of  sparc2.2,  sum‐
918              maries, uuid, name, nodes, homehost, home-cluster, resync, byte‐
919              order, devicesize, no-bitmap, bbl,  no-bbl,  ppl,  no-ppl,  lay‐
920              out-original, layout-alternate, layout-unspecified, metadata, or
921              super-minor.
922
923              The sparc2.2 option will adjust the superblock of an array which
924              was  created on a Sparc machine running a patched 2.2 Linux ker‐
925              nel.  This kernel got the alignment of part  of  the  superblock
926              wrong.   You can use the --examine --sparc2.2 option to mdadm to
927              see what effect this would have.
928
929              The super-minor option will update the preferred minor field  on
930              each superblock to match the minor number of the array being as‐
931              sembled.  This can be useful if --examine  reports  a  different
932              "Preferred  Minor"  to --detail.  In some cases this update will
933              be performed automatically by the kernel driver.  In particular,
934              the  update happens automatically at the first write to an array
935              with redundancy (RAID level 1 or greater) on a  2.6  (or  later)
936              kernel.
937
938              The uuid option will change the uuid of the array.  If a UUID is
939              given with the --uuid option that UUID will be  used  as  a  new
940              UUID  and  will  NOT be used to help identify the devices in the
941              array.  If no --uuid is given, a random UUID is chosen.
942
943              The name option will change the name of the array as  stored  in
944              the  superblock.   This  is  only  supported  for  version-1 su‐
945              perblocks.
946
947              The nodes option will change the nodes of the array as stored in
948              the  bitmap  superblock.  This option only works for a clustered
949              environment.
950
951              The homehost option will change the homehost as recorded in  the
952              superblock.   For version-0 superblocks, this is the same as up‐
953              dating the UUID.  For version-1 superblocks, this  involves  up‐
954              dating the name.
955
956              The home-cluster option will change the cluster name as recorded
957              in the superblock and bitmap. This option only works for a clus‐
958              tered environment.
959
960              The  resync option will cause the array to be marked dirty mean‐
961              ing that any redundancy in the array  (e.g.  parity  for  RAID5,
962              copies  for  RAID1)  may be incorrect.  This will cause the RAID
963              system to perform a "resync" pass to make sure that  all  redun‐
964              dant information is correct.
965
966              The  byteorder option allows arrays to be moved between machines
967              with different byte-order, such as  from  a  big-endian  machine
968              like  a  Sparc  or some MIPS machines, to a little-endian x86_64
969              machine.  When assembling such an array for the first time after
970              a move, giving --update=byteorder will cause mdadm to expect su‐
971              perblocks to have their byteorder  reversed,  and  will  correct
972              that order before assembling the array.  This is only valid with
973              original (Version 0.90) superblocks.
974
975              The summaries option will  correct  the  summaries  in  the  su‐
976              perblock.  That is the counts of total, working, active, failed,
977              and spare devices.
978
979              The devicesize option will rarely be of use.  It applies to ver‐
980              sion  1.1  and  1.2  metadata only (where the metadata is at the
981              start of the device) and is only useful when the  component  de‐
982              vice  has changed size (typically become larger).  The version 1
983              metadata records the amount of the device that can  be  used  to
984              store data, so if a device in a version 1.1 or 1.2 array becomes
985              larger, the metadata will still be visible, but the extra  space
986              will not.  In this case it might be useful to assemble the array
987              with --update=devicesize.  This will cause  mdadm  to  determine
988              the maximum usable amount of space on each device and update the
989              relevant field in the metadata.
990
991              The metadata option only works on v0.90 metadata arrays and will
992              convert  them  to  v1.0  metadata.   The array must not be dirty
993              (i.e. it must not need a sync) and it must not have a  write-in‐
994              tent bitmap.
995
996              The  old  metadata  will  remain on the devices, but will appear
997              older than the new metadata and so will usually be ignored.  The
998              old metadata (or indeed the new metadata) can be removed by giv‐
999              ing the appropriate --metadata= option to --zero-superblock.
1000
1001              The no-bitmap option can be used when an array has  an  internal
1002              bitmap which is corrupt in some way so that assembling the array
1003              normally fails.  It will cause any internal  bitmap  to  be  ig‐
1004              nored.
1005
1006              The bbl option will reserve space in each device for a bad block
1007              list.  This will be 4K in size and positioned near  the  end  of
1008              any free space between the superblock and the data.
1009
1010              The  no-bbl option will cause any reservation of space for a bad
1011              block list to be removed.  If the bad block  list  contains  en‐
1012              tries,  this  will  fail,  as removing the list could cause data
1013              corruption.
1014
1015              The ppl option will enable PPL for a  RAID5  array  and  reserve
1016              space  for  PPL  on each device. There must be enough free space
1017              between the data and superblock and  a  write-intent  bitmap  or
1018              journal must not be used.
1019
1020              The no-ppl option will disable PPL in the superblock.
1021
1022              The  layout-original  and layout-alternate options are for RAID0
1023              arrays with non-uniform device sizes that were in use before
1024              Linux  5.4.  If the array was being used with Linux 3.13 or ear‐
1025              lier, then to assemble the array on a new kernel,  --update=lay‐
1026              out-original  must  be given.  If the array was created and used
1027              with a kernel from Linux 3.14 to Linux 5.3,  then  --update=lay‐
1028              out-alternate  must be given.  This only needs to be given once.
1029              Subsequent assembly of the array will happen normally.  For more
1030              information, see md(4).
1031
1032              The layout-unspecified option reverts the effect of layout-original
1033              or layout-alternate and allows the array to be again used on
1034              a  kernel  prior  to Linux 5.3.  This option should be used with
1035              great caution.
1036
1037
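              For example, assigning a fresh random UUID while assembling
              might be sketched as (device names are placeholders):

```shell
# With no --uuid= given, a random UUID is chosen for the array.
mdadm --assemble /dev/md0 --update=uuid /dev/sda1 /dev/sdb1
```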
1038       --freeze-reshape
1039              This option is intended to be used in  start-up  scripts  during
1040              the  initrd  boot phase.  When the array under reshape is assem‐
1041              bled during the initrd phase, this option stops the reshape  af‐
1042              ter the reshape-critical section has been restored. This happens
1043              before the file  system  pivot  operation  and  avoids  loss  of
1044              filesystem  context.  Losing file system context would cause re‐
1045              shape to be broken.
1046
1047              Reshape can be continued later using the --continue  option  for
1048              the grow command.
1049
1050

For Manage mode:

1052       -t, --test
1053              Unless  a  more  serious  error occurred, mdadm will exit with a
1054              status of 2 if no changes were made to the array  and  0  if  at
1055              least  one change was made.  This can be useful when an indirect
1056              specifier such as missing, detached or faulty  is  used  in  re‐
1057              questing  an operation on the array.  --test will report failure
1058              if these specifiers didn't find any match.
1059
1060
1061       -a, --add
1062              hot-add listed devices.  If a device appears  to  have  recently
1063              been  part  of the array (possibly it failed or was removed) the
1064              device is re-added as described in  the  next  point.   If  that
1065              fails  or  the device was never part of the array, the device is
1066              added as a hot-spare.  If the array is degraded, it will immedi‐
1067              ately start to rebuild data onto that spare.
1068
1069              Note  that this and the following options are only meaningful on
1070              arrays with redundancy.  They don't apply to RAID0 or Linear.
1071
1072
1073       --re-add
1074              re-add a device that was previously removed from an  array.   If
1075              the  metadata  on  the device reports that it is a member of the
1076              array, and the slot that it used is still vacant, then  the  de‐
1077              vice will be added back to the array in the same position.  This
1078              will normally cause the data for that device  to  be  recovered.
1079              However,  based  on  the event count on the device, the recovery
1080              may only require sections that are  flagged  by  a  write-intent
1081              bitmap to be recovered or may not require any recovery at all.
1082
1083              When  used  on  an array that has no metadata (i.e. it was built
1084              with --build) it will be assumed that bitmap-based  recovery  is
1085              enough to make the device fully consistent with the array.
1086
1087              --re-add  can  also be accompanied by --update=devicesize, --up‐
1088              date=bbl, or --update=no-bbl.  See descriptions of these options
1089              when used in Assemble mode for an explanation of their use.
1090
1091              If  the device name given is missing then mdadm will try to find
1092              any device that looks like it should be part of  the  array  but
1093              isn't and will try to re-add all such devices.
1094
1095              If  the device name given is faulty then mdadm will find all de‐
1096              vices in the array that are marked faulty, remove them  and  at‐
1097              tempt to immediately re-add them.  This can be useful if you are
1098              certain that the reason for failure has been resolved.
1099
1100
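              For example, after resolving a transient failure (such as a
              loose cable), all faulty members can be removed and immediately
              re-added in one step:

```shell
mdadm /dev/md0 --re-add faulty
```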
1101       --add-spare
1102              Add a device as a spare.  This is similar to --add  except  that
1103              it does not attempt --re-add first.  The device will be added as
1104              a spare even if it looks like it could be a recent member of the
1105              array.
1106
1107
1108       -r, --remove
1109              remove  listed  devices.   They  must  not be active.  i.e. they
1110              should be failed or spare devices.
1111
1112              As well as the name of a device file (e.g.  /dev/sda1) the words
1113              failed,  detached and names like set-A can be given to --remove.
1114              The first causes all failed devices to be removed.   The  second
1115              causes  any  device  which  is no longer connected to the system
1116              (i.e. an 'open' returns ENXIO) to be removed.  The third will re‐
1117              move a set as described below under --fail.
1118
1119
1120       -f, --fail
1121              Mark  listed devices as faulty.  As well as the name of a device
1122              file, the word detached or a set name like set-A can  be  given.
1123              The former will cause any device that has been detached from the
1124              system to be marked as failed.  It can then be removed.
1125
1126              For RAID10 arrays where the number of copies evenly divides  the
1127              number  of devices, the devices can be conceptually divided into
1128              sets where each set contains a single complete copy of the  data
1129              on  the  array.   Sometimes a RAID10 array will be configured so
1130              that these sets are on separate controllers.  In this case,  all
1131              the devices in one set can be failed by giving a name like set-A
1132              or set-B to --fail.  The appropriate set names are  reported  by
1133              --detail.
1134
1135
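              For example, to fail every device in one copy-set of a suitably
              configured RAID10 (set names as reported by --detail):

```shell
mdadm /dev/md0 --fail set-B
```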
1136       --set-faulty
1137              same as --fail.
1138
1139
1140       --replace
1141              Mark  listed  devices  as  requiring  replacement.  As soon as a
1142              spare is available, it will be  rebuilt  and  will  replace  the
1143              marked  device.   This is similar to marking a device as faulty,
1144              but the device remains in service during the recovery process to
1145              increase  resilience  against  multiple  failures.  When the re‐
1146              placement process finishes, the replaced device will  be  marked
1147              as faulty.
1148
1149
1150       --with This can follow a list of --replace devices.  The devices listed
1151              after --with will preferentially be used to replace the  devices
1152              listed after --replace.  These devices must already be spare de‐
1153              vices in the array.
1154
1155
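              For example, replacing a suspect member with a specific spare
              already in the array might look like (placeholder names):

```shell
# /dev/sdc1 must already be a spare device in the array.
mdadm /dev/md0 --replace /dev/sdb1 --with /dev/sdc1
```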
1156       --write-mostly
1157              Subsequent devices that are added  or  re-added  will  have  the
1158              'write-mostly' flag set.  This is only valid for RAID1 and means
1159              that the 'md' driver will avoid reading from  these  devices  if
1160              possible.
1161
1162       --readwrite
1163              Subsequent  devices  that  are  added  or re-added will have the
1164              'write-mostly' flag cleared.
1165
1166       --cluster-confirm
1167              Confirm the existence of the device. This is issued in  response
1168              to  an  --add request by a node in a cluster. When a node adds a
1169              device it sends a message to all nodes in the  cluster  to  look
1170              for a device with a UUID. This translates to a udev notification
1171              with the UUID of the device to be added and the slot number. The
1172              receiving node must acknowledge this message with --cluster-con‐
1173              firm. Valid arguments are <slot>:<devicename> in case the device
1174              is found or <slot>:missing in case the device is not found.
1175
1176
1177       --add-journal
1178              Add  a  journal  to an existing array, or recreate journal for a
1179              RAID-4/5/6 array that lost a journal device. To avoid interrupt‐
1180              ing ongoing write operations, --add-journal only works for
1181              arrays in Read-Only state.
1182
1183
1184       --failfast
1185              Subsequent devices that are added  or  re-added  will  have  the
1186              'failfast'  flag  set.   This is only valid for RAID1 and RAID10
1187              and means that the 'md' driver will avoid long timeouts on error
1188              handling where possible.
1189
1190       --nofailfast
1191              Subsequent  devices  that  are re-added will be re-added without
1192              the 'failfast' flag set.
1193
1194
1195       Each of these options requires that the first device listed is the  ar‐
1196       ray  to  be  acted  upon, and the remainder are component devices to be
1197       added, removed, marked as faulty, etc.   Several  different  operations
1198       can be specified for different devices, e.g.
1199            mdadm /dev/md0 --add /dev/sda1 --fail /dev/sdb1 --remove /dev/sdb1
1200       Each operation applies to all devices listed until the next operation.
1201
1202       If  an  array  is  using a write-intent bitmap, then devices which have
1203       been removed can be re-added in a way that avoids a full reconstruction
1204       but  instead just updates the blocks that have changed since the device
1205       was removed.  For arrays with persistent metadata (superblocks) this is
1206              done automatically.  For arrays created with --build, mdadm must
1207              be told that the device was recently removed, using --re-add.
1208
1209       Devices can only be removed from an array if they  are  not  in  active
1210       use, i.e. they must be spares or failed devices.  To remove an active
1211       device, it must first be marked as faulty.
1212
1213

For Misc mode:

1215       -Q, --query
1216              Examine a device to see (1) if it is an md device and (2) if  it
1217              is  a  component of an md array.  Information about what is dis‐
1218              covered is presented.
1219
1220
1221       -D, --detail
1222              Print details of one or more md devices.
1223
1224
1225       --detail-platform
1226              Print details of the platform's RAID  capabilities  (firmware  /
1227              hardware  topology) for a given metadata format. If used without
1228              an argument, mdadm will scan all controllers looking  for  their
1229              capabilities.  Otherwise, mdadm will only look at the controller
1230              specified by the argument in the form of an absolute filepath or
1231              a link, e.g.  /sys/devices/pci0000:00/0000:00:1f.2.
1232
1233
1234       -Y, --export
1235              When  used with --detail, --detail-platform, --examine, or --in‐
1236              cremental output will be formatted as key=value pairs  for  easy
1237              import into the environment.
1238
1239              With --incremental, the value MD_STARTED indicates whether an ar‐
1240              ray was started (yes) or not, which may include  a  reason  (un‐
1241              safe,  nothing, no).  Also the value MD_FOREIGN indicates if the
1242              array is expected on this host (no), or seems to be  from  else‐
1243              where (yes).
1244
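       As a sketch, the key=value output can be imported directly into a shell
       environment (array name illustrative):

```shell
# Evaluate the exported details of /dev/md0; keys such as MD_LEVEL
# and MD_UUID then become shell variables.
eval "$(mdadm --detail --export /dev/md0)"
echo "level=$MD_LEVEL uuid=$MD_UUID"
```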
1245
1246       -E, --examine
1247              Print  contents  of  the metadata stored on the named device(s).
1248              Note the contrast between --examine and --detail.  --examine ap‐
1249              plies  to  devices which are components of an array, while --de‐
1250              tail applies to a whole array which is currently active.
1251
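       The contrast in practice (device names illustrative):

```shell
mdadm --examine /dev/sda1    # metadata stored on a component device
mdadm --detail /dev/md0      # state of the assembled, active array
```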
1252       --sparc2.2
1253              If an array was created on a SPARC machine with a 2.2 Linux ker‐
1254              nel  patched  with  RAID  support, the superblock will have been
1255              created incorrectly, or at least incompatibly with 2.4 and later
1256              kernels.   Using the --sparc2.2 flag with --examine will fix the
1257              superblock before displaying it.  If  this  appears  to  do  the
1258              right  thing, then the array can be successfully assembled using
1259              --assemble --update=sparc2.2.
1260
1261
1262       -X, --examine-bitmap
1263              Report information about a bitmap file.  The argument is  either
1264              an  external bitmap file or an array component in case of an in‐
1265              ternal bitmap.  Note that running this on an array device  (e.g.
1266              /dev/md0) does not report the bitmap for that array.
1267
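       For example (paths illustrative):

```shell
# Internal bitmap: point -X at a component device, not at the array.
mdadm --examine-bitmap /dev/sda1
# External bitmap: give the bitmap file itself.
mdadm --examine-bitmap /var/lib/md/md0-bitmap
```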
1268
1269       --examine-badblocks
1270              List  the  bad-blocks  recorded  for the device, if a bad-blocks
1271              list has been configured. Currently only 1.x and  IMSM  metadata
1272              support bad-blocks lists.
1273
1274
1275       --dump=directory
1276
1277       --restore=directory
1278              Save metadata from the listed devices, or restore metadata  to
1279              the listed devices.
1280
1281
1282       -R, --run
1283              start a partially assembled array.  If --assemble did  not  find
1284              enough devices to fully start the array, it might  leave  it
1285              partially assembled.  If you wish, you can  then  use  --run  to
1286              start the array in degraded mode.
1287
1288
1289       -S, --stop
1290              deactivate array, releasing all resources.
1291
1292
1293       -o, --readonly
1294              mark array as readonly.
1295
1296
1297       -w, --readwrite
1298              mark array as readwrite.
1299
1300
1301       --zero-superblock
1302              If the device contains a valid md superblock, the block is over‐
1303              written with zeros.  With --force the block where the superblock
1304              would be is overwritten even if it doesn't appear to be valid.
1305
1306              Note:  Be  careful when calling --zero-superblock with clustered
1307              raid. Make sure the array isn't used  or  assembled  in  another
1308              cluster node before executing it.
1309
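       A typical teardown sequence, as a sketch (device names illustrative):

```shell
# Stop the array, then clear the superblock from each former
# component so the devices are no longer recognised as members.
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1
```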
1310
1311       --kill-subarray=
1312              If the device is a container and the argument to --kill-subarray
1313              specifies an inactive subarray in the container, then the subar‐
1314              ray  is  deleted.   Deleting all subarrays will leave an 'empty-
1315              container' or spare superblock on the  drives.   See  --zero-su‐
1316              perblock  for  completely removing a superblock.  Note that some
1317              formats depend on the subarray index for generating a UUID, so this
1318              command  will fail if it would change the UUID of an active sub‐
1319              array.
1320
1321
1322       --update-subarray=
1323              If the device is a container and the argument to --update-subar‐
1324              ray  specifies  a subarray in the container, then attempt to up‐
1325              date the given superblock field in the subarray.  See  below  in
1326              MISC MODE for details.
1327
1328
1329       -t, --test
1330              When  used with --detail, the exit status of mdadm is set to re‐
1331              flect the status of the device.  See below in MISC MODE for  de‐
1332              tails.
1333
1334
1335       -W, --wait
1336              For  each md device given, wait for any resync, recovery, or re‐
1337              shape activity to finish before returning.   mdadm  will  return
1338              with success if it actually waited for every device listed, oth‐
1339              erwise it will return failure.
1340
1341
1342       --wait-clean
1343              For each md device given, or  each  device  in  /proc/mdstat  if
1344              --scan  is  given,  arrange  for the array to be marked clean as
1345              soon as possible.  mdadm will return with success if  the  array
1346              uses  external  metadata and we successfully waited.  For native
1347              arrays, this returns immediately as the  kernel  handles  dirty-
1348              clean  transitions at shutdown.  No action is taken if safe-mode
1349              handling is disabled.
1350
1351
1352       --action=
1353              Set the "sync_action" for all md devices given to one  of  idle,
1354              frozen, check, repair.  Setting to idle will abort any currently
1355              running action though some actions will  automatically  restart.
1356              Setting  to  frozen  will abort any current action and ensure no
1357              other action starts automatically.
1358
1359              Details of check and repair can be found in md(4)  under  SCRUB‐
1360              BING AND MISMATCHES.
1361
1362

For Incremental Assembly mode:

1364       --rebuild-map, -r
1365              Rebuild  the  map  file (/run/mdadm/map) that mdadm uses to help
1366              track which arrays are currently being assembled.
1367
1368
1369       --run, -R
1370              Run any array assembled as soon as a minimal number  of  devices
1371              is available, rather than waiting until all expected devices are
1372              present.
1373
1374
1375       --scan, -s
1376              Only meaningful with -R, this will scan the map file for  arrays
1377              that are being incrementally assembled and will try to start any
1378              that are not already started.  If any such array  is  listed  in
1379              mdadm.conf  as requiring an external bitmap, that bitmap will be
1380              attached first.
1381
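       For example, a boot script might run the following late in boot to start
       any arrays that incremental assembly has gathered but could  not  fully
       populate:

```shell
# Start any partially assembled arrays recorded in the map file.
mdadm --incremental --run --scan
```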
1382
1383       --fail, -f
1384              This allows the hot-plug system  to  remove  devices  that  have
1385              fully  disappeared from the kernel.  It will first fail and then
1386              remove the device from any array it belongs to.  The device name
1387              given  should  be a kernel device name such as "sda", not a name
1388              in /dev.
1389
1390
1391       --path=
1392              Only used with --fail.  The 'path' given  will  be  recorded  so
1393              that  if a new device appears at the same location it can be au‐
1394              tomatically added to the same array.  This allows the failed de‐
1395              vice to be automatically replaced by a new device without meta‐
1396              data if it appears at the specified path.  This option  is  nor‐
1397              mally only set by a udev script.
1398
1399

For Monitor mode:

1401       -m, --mail
1402              Give a mail address to send alerts to.
1403
1404
1405       -p, --program, --alert
1406              Give a program to be run whenever an event is detected.
1407
1408
1409       -y, --syslog
1410              Cause  all events to be reported through 'syslog'.  The messages
1411              have facility of 'daemon' and varying priorities.
1412
1413
1414       -d, --delay
1415              Give a delay in seconds.  mdadm polls the  md  arrays  and  then
1416              waits this many seconds before polling again.  The default is 60
1417              seconds.  Since 2.6.16, there is no need to reduce this  as  the
1418              kernel alerts mdadm immediately when there is any change.
1419
1420
1421       -r, --increment
1422              Give  a  percentage  increment.   mdadm  will generate RebuildNN
1423              events with the given percentage increment.
1424
1425
1426       -f, --daemonise
1427              Tell mdadm to run as a background daemon if it decides to  moni‐
1428              tor  anything.  This causes it to fork and run in the child, and
1429              to disconnect from the terminal.  The process id of the child is
1430              written  to  stdout.  This is useful with --scan which will only
1431              continue monitoring if a mail address or alert program is  found
1432              in the config file.
1433
1434
1435       -i, --pid-file
1436              When  mdadm is running in daemon mode, write the pid of the dae‐
1437              mon process to the specified file, instead  of  printing  it  on
1438              standard output.
1439
1440
1441       -1, --oneshot
1442              Check  arrays only once.  This will generate NewArray events and
1443              more significantly DegradedArray and SparesMissing events.  Run‐
1444              ning
1445                      mdadm --monitor --scan -1
1446              from  a  cron script will ensure regular notification of any de‐
1447              graded arrays.
1448
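       A sketch of such a cron entry (schedule illustrative):

```shell
# /etc/crontab entry: check all arrays once a day and report any
# DegradedArray or SparesMissing condition found.
0 6 * * * root mdadm --monitor --scan --oneshot
```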
1449
1450       -t, --test
1451              Generate a TestMessage alert for every array found  at  startup.
1452              This  alert  gets  mailed and passed to the alert program.  This
1453              can be used for testing that alert messages do get through suc‐
1454              cessfully.
1455
1456
1457       --no-sharing
1458              This inhibits the functionality for moving spares  between  ar‐
1459              rays.  Only one monitoring process started with --scan but with‐
1460              out this flag is allowed; otherwise the two could interfere with
1461              each other.
1462
1463

ASSEMBLE MODE

1465       Usage: mdadm --assemble md-device options-and-component-devices...
1466
1467       Usage: mdadm --assemble --scan md-devices-and-options...
1468
1469       Usage: mdadm --assemble --scan options...
1470
1471
1472       This usage assembles one or more RAID arrays from  pre-existing  compo‐
1473       nents.  For each array, mdadm needs to know the md device, the identity
1474       of the array, and the number of component devices.  These can be  found
1475       in a number of ways.
1476
1477       In  the first usage example (without the --scan) the first device given
1478       is the md device.  In the second usage example, all devices listed  are
1479       treated  as  md devices and assembly is attempted.  In the third (where
1480       no devices are listed) all md devices that are listed in the configura‐
1481       tion  file are assembled.  If no arrays are described by the configura‐
1482       tion file, then any arrays that can be found on unused devices will  be
1483       assembled.
1484
1485       If  precisely one device is listed, but --scan is not given, then mdadm
1486       acts as though --scan was given and identity information  is  extracted
1487       from the configuration file.
1488
1489       The identity can be given with the --uuid option, the --name  option,
1490       or the --super-minor option; otherwise it will be taken from  the  md-
1491       device record in the config file, or from the superblock of the  first
1492       component-device listed on the command line.
1493
1494       Devices can be given on the --assemble command line or  in  the  config
1495       file.   Only  devices  which  have  an md superblock which contains the
1496       right identity will be considered for any array.
1497
1498       The config file is only used if explicitly named with --config  or  re‐
1499       quested with (a possibly implicit) --scan.  In the latter case, the de‐
1500       fault config file is used.  See mdadm.conf(5) for more details.
1501
1502       If --scan is not given, then the config file will only be used to  find
1503       the identity of md arrays.
1504
1505       Normally  the  array will be started after it is assembled.  However if
1506       --scan is not given and not all expected drives were listed,  then  the
1507       array  is  not started (to guard against usage errors).  To insist that
1508       the array be started in this case (as may work for RAID1, 4, 5,  6,  or
1509       10), give the --run flag.
1510
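       Typical invocations, as a sketch (device names and UUID illustrative):

```shell
# Assemble one array by identity, naming its components explicitly:
mdadm --assemble /dev/md0 --uuid=8e64a35b:1d7211f5:3f2a61a1:91f3c7b2 \
      /dev/sda1 /dev/sdb1 /dev/sdc1
# Or assemble everything described in the config file:
mdadm --assemble --scan
```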
1511       If udev is active, mdadm does not create any entries in /dev but leaves
1512       that to udev.  It does record information in /run/mdadm/map which  will
1513       allow udev to choose the correct name.
1514
1515       If  mdadm  detects  that udev is not configured, it will create the de‐
1516       vices in /dev itself.
1517
1518       In Linux kernels prior to version 2.6.28 there were two distinct  types
1519       of  md devices that could be created: one that could be partitioned us‐
1520       ing standard partitioning tools and one that could not.   Since  2.6.28
1521       that  distinction is no longer relevant as both types of devices can be
1522       partitioned.  mdadm will normally create the type that originally could
1523       not be partitioned as it has a well-defined major number (9).
1524
1525       Prior to 2.6.28, it is important that mdadm chooses the correct type of
1526       array device to use.  This can be controlled with  the  --auto  option.
1527       In  particular,  a value of "mdp" or "part" or "p" tells mdadm to use a
1528       partitionable device rather than the default.
1529
1530       In the no-udev case, the value given to --auto can  be  suffixed  by  a
1531       number.   This  tells  mdadm to create that number of partition devices
1532       rather than the default of 4.
1533
1534       The value given to --auto can also be given in the  configuration  file
1535       as a word starting auto= on the ARRAY line for the relevant array.
1536
1537
1538   Auto-Assembly
1539       When  --assemble  is  used with --scan and no devices are listed, mdadm
1540       will first attempt to assemble all the  arrays  listed  in  the  config
1541       file.
1542
1543       If  no  arrays  are  listed in the config (other than those marked <ig‐
1544       nore>) it will look through the available devices for  possible  arrays
1545       and  will  try  to  assemble  anything that it finds.  Arrays which are
1546       tagged as belonging to the given homehost will be assembled and started
1547       normally.   Arrays which do not obviously belong to this host are given
1548       names that are expected not to conflict with anything  local,  and  are
1549       started  "read-auto" so that nothing is written to any device until the
1550       array is written to; i.e. automatic resync etc. is delayed.
1551
1552       If mdadm finds a consistent set of devices that look like  they  should
1553       comprise  an array, and if the superblock is tagged as belonging to the
1554       given home host, it will automatically choose a device name and try  to
1555       assemble  the array.  If the array uses version-0.90 metadata, then the
1556       minor number as recorded in the superblock is used to create a name  in
1557       /dev/md/  so  for example /dev/md/3.  If the array uses version-1 meta‐
1558       data, then the name from the superblock is used to similarly  create  a
1559       name in /dev/md/ (the name will have any 'host' prefix stripped first).
1560
1561       This  behaviour can be modified by the AUTO line in the mdadm.conf con‐
1562       figuration file.  This line can indicate that  specific  metadata  type
1563       should,  or  should  not,  be  automatically assembled.  If an array is
1564       found which is not listed in mdadm.conf and has a metadata format  that
1565       is  denied  by  the AUTO line, then it will not be assembled.  The AUTO
1566       line can also request that all arrays  identified  as  being  for  this
1567       homehost  should  be  assembled regardless of their metadata type.  See
1568       mdadm.conf(5) for further details.
1569
1570       Note: Auto-assembly cannot be used for assembling and  activating  some
1571       arrays  which are undergoing reshape.  In particular as the backup-file
1572       cannot be given, any reshape which requires a backup file  to  continue
1573       cannot  be started by auto-assembly.  An array which is growing to more
1574       devices and has passed the critical  section  can  be  assembled  using
1575       auto-assembly.
1576
1577

BUILD MODE

1579       Usage: mdadm --build md-device --chunk=X --level=Y --raid-devices=Z de‐
1580                   vices
1581
1582
1583       This usage is similar to --create.  The difference is that  it  creates
1584       an  array  without a superblock.  With these arrays there is no differ‐
1585       ence between initially creating the array and  subsequently  assembling
1586       the array, except that hopefully there is useful data there in the sec‐
1587       ond case.
1588
1589       The level may be raid0, linear, raid1, raid10, multipath, or faulty, or
1590       one  of  their synonyms.  All devices must be listed and the array will
1591       be started once complete.  It will often be appropriate  to  use  --as‐
1592       sume-clean with levels raid1 or raid10.
1593
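       For example (device names illustrative), a superblock-less RAID1  over
       two partitions that already hold identical data:

```shell
mdadm --build /dev/md0 --level=1 --raid-devices=2 --assume-clean \
      /dev/sda1 /dev/sdb1
```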
1594

CREATE MODE

1596       Usage: mdadm --create md-device --chunk=X --level=Y
1597                   --raid-devices=Z devices
1598
1599
1600       This  usage will initialise a new md array, associate some devices with
1601       it, and activate the array.
1602
1603       The named device will normally not exist when mdadm  --create  is  run,
1604       but will be created by udev once the array becomes active.
1605
1606       The md-device name is limited to 32 characters.  Some metadata  types
1607       impose stricter limits (e.g. IMSM, where only 16 characters  are  al‐
1608       lowed).  For that reason a long name may be truncated or rejected, de‐
1609       pending on the metadata policy.
1610
1611       As devices are added, they are checked to see if they contain RAID  su‐
1612       perblocks or filesystems.  They are also checked to see if the variance
1613       in device size exceeds 1%.
1614
1615       If any discrepancy is found, the array will not automatically  be  run,
1616       though the presence of a --run can override this caution.
1617
1618       To  create a "degraded" array in which some devices are missing, simply
1619       give the word "missing" in place of a device  name.   This  will  cause
1620       mdadm  to leave the corresponding slot in the array empty.  For a RAID4
1621       or RAID5 array at most one slot can be "missing"; for a RAID6 array  at
1622       most  two  slots.   For a RAID1 array, only one real device needs to be
1623       given.  All of the others can be "missing".
1624
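       As a sketch (device names illustrative), a degraded RAID1 with one empty
       slot, to be filled later with --add:

```shell
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing
# Later, complete the mirror:
mdadm /dev/md0 --add /dev/sdb1
```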
1625       When creating a RAID5 array, mdadm will automatically create a degraded
1626       array  with  an  extra spare drive.  This is because building the spare
1627       into a degraded array is in general faster than resyncing the parity on
1628       a  non-degraded,  but not clean, array.  This feature can be overridden
1629       with the --force option.
1630
1631       When creating an array with version-1 metadata a name for the array  is
1632       required.   If  this  is  not  given with the --name option, mdadm will
1633       choose a name based on the last component of the name of the device be‐
1634       ing  created.  So if /dev/md3 is being created, then the name 3 will be
1635       chosen.  If /dev/md/home is being created, then the name home  will  be
1636       used.
1637
1638       When creating a partition-based array with version-1.x metadata,  the
1639       partition type should be set to 0xDA (non fs-data).  Using any  other
1640       type [such as RAID auto-detect (0xFD) or a GNU/Linux partition (0x83)]
1641       might create problems in the event of array recovery through  a  live
1642       cdrom.
1643
1644       A  new array will normally get a randomly assigned 128bit UUID which is
1645       very likely to be unique.  If you have a specific need, you can  choose
1646       a UUID for the array by giving the --uuid= option.  Be warned that cre‐
1647       ating two arrays with the same UUID is a recipe  for  disaster.   Also,
1648       using  --uuid=  when  creating a v0.90 array will silently override any
1649       --homehost= setting.
1650
1651       If the array type supports a write-intent bitmap, and if the devices in
1652       the array exceed 100G in size, an internal write-intent bitmap will au‐
1653       tomatically be added unless some other option is  explicitly  requested
1654       with  the --bitmap option or a different consistency policy is selected
1655       with the --consistency-policy option. In any case, space for  a  bitmap
1656       will  be  reserved  so  that  one can be added later with --grow --bit‐
1657       map=internal.
1658
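       The reserved space allows, for example (array name illustrative):

```shell
# Add an internal write-intent bitmap to an existing array, using
# the space reserved at create time.
mdadm --grow /dev/md0 --bitmap=internal
```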
1659       If the metadata type supports it (currently only  1.x  and  IMSM  meta‐
1660       data),  space will be allocated to store a bad block list.  This allows
1661       a modest number of bad blocks to be recorded, allowing the drive to re‐
1662       main in service while only partially functional.
1663
1664       When creating an array within a CONTAINER mdadm can be given either the
1665       list of devices to use, or simply the name of the container.  The  for‐
1666       mer case gives control over which devices in the container will be used
1667       for the array.  The latter case allows mdadm  to  automatically  choose
1668       which devices to use based on how much spare space is available.
1669
1670       The General Management options that are valid with --create are:
1671
1672       --run  insist  on running the array even if some devices look like they
1673              might be in use.
1674
1675
1676       --readonly
1677              start the array in readonly mode.
1678
1679

MANAGE MODE

1681       Usage: mdadm device options... devices...
1682
1683       This usage allows individual devices in an array to be failed, removed,
1684       or added.  It is possible to perform multiple operations  with  one
1685       command.  For example:
1686         mdadm /dev/md0 -f /dev/hda1 -r /dev/hda1 -a /dev/hda1
1687       will firstly mark /dev/hda1 as faulty in /dev/md0 and will then  remove
1688       it from the array and finally add it back in as a spare.  However, only
1689       one md array can be affected by a single command.
1690
1691       When a device is added to an active array, mdadm checks to  see  if  it
1692       has  metadata on it which suggests that it was recently a member of the
1693       array.  If it does, it tries to "re-add" the  device.   If  there  have
1694       been  no  changes  since  the device was removed, or if the array has a
1695       write-intent bitmap which has recorded  whatever  changes  there  were,
1696       then  the device will immediately become a full member of the array and
1697       those differences recorded in the bitmap will be resolved.
1698
1699

MISC MODE

1701       Usage: mdadm options ...  devices ...
1702
1703       MISC mode includes a number of distinct operations that operate on dis‐
1704       tinct devices.  The operations are:
1705
1706       --query
1707              The  device  is examined to see if it is (1) an active md array,
1708              or (2) a component of an md array.  The  information  discovered
1709              is reported.
1710
1711
1712       --detail
1713              The  device should be an active md device.  mdadm will display a
1714              detailed description of the array.  --brief or --scan will cause
1715              the output to be less detailed and the format to be suitable for
1716              inclusion in mdadm.conf.  The exit status of mdadm will normally
1717              be 0 unless mdadm failed to get useful information about the de‐
1718              vice(s); however, if the --test option is given, then  the  exit
1719              status will be:
1720
1721              0      The array is functioning normally.
1722
1723              1      The array has at least one failed device.
1724
1725              2      The array has multiple failed devices such that it is un‐
1726                     usable.
1727
1728              4      There was an error while trying to get information  about
1729                     the device.
1730
1731
1732       --detail-platform
1733              Print  detail  of  the  platform's RAID capabilities (firmware /
1734              hardware topology).  If the metadata is  specified  with  -e  or
1735              --metadata= then the return status will be:
1736
1737              0      metadata  successfully enumerated its platform components
1738                     on this system
1739
1740              1      metadata is platform independent
1741
1742              2      metadata failed to find its platform components  on  this
1743                     system
1744
1745
1746       --update-subarray=
1747              If the device is a container and the argument to --update-subar‐
1748              ray specifies a subarray in the container, then attempt  to  up‐
1749              date the given superblock field in the subarray.  Similar to up‐
1750              dating an array in "assemble" mode, the field to update  is  se‐
1751              lected  by  -U  or  --update=  option. The supported options are
1752              name, ppl, no-ppl, bitmap and no-bitmap.
1753
1754              The name option updates the subarray name in the  metadata;  it
1755              may not affect the device node name or the device node  symlink
1756              until the subarray is  re-assembled.   If  updating  name  would
1757              change the UUID of an active subarray this operation is blocked,
1758              and the command will end in an error.
1759
1760              The ppl and no-ppl options enable and disable PPL in  the  meta‐
1761              data. Currently supported only for IMSM subarrays.
1762
1763              The bitmap and no-bitmap options enable and disable write-intent
1764              bitmap in the metadata. Currently supported only for IMSM subar‐
1765              rays.
1766
1767
1768       --examine
1769              The  device  should  be  a component of an md array.  mdadm will
1770              read the md superblock of the device and display  the  contents.
1771              If  --brief  or  --scan is given, then multiple devices that are
1772              components of the one array are grouped together and reported in
1773              a single entry suitable for inclusion in mdadm.conf.
1774
1775              Having --scan without listing any devices will cause all devices
1776              listed in the config file to be examined.
1777
1778
1779       --dump=directory
1780              If the device contains RAID metadata, a file will be created  in
1781              the  directory and the metadata will be written to it.  The file
1782              will be the same size as the device and will have  the  metadata
1783              written  at  the same location as it exists in the device.  How‐
1784              ever, the file will be "sparse" so that only those  blocks  con‐
1785              taining metadata will be allocated. The total space used will be
1786              small.
1787
1788              The filename used in the directory will be the base name of  the
1789              device.    Further, if any links appear in /dev/disk/by-id which
1790              point to the device, then hard links to the file will be created
1791              in the directory based on these by-id names.
1792
1793              Multiple  devices  can  be listed and their metadata will all be
1794              stored in the one directory.
1795
1796
1797       --restore=directory
1798              This is the reverse of --dump.  mdadm will locate a file in  the
1799              directory  that  has a name appropriate for the given device and
1800              will restore metadata from it.  Names that match /dev/disk/by-id
1801              names  are preferred, however if two of those refer to different
1802              files, mdadm will not choose between them but will abort the op‐
1803              eration.
1804
1805              If  a  file name is given instead of a directory then mdadm will
1806              restore from that file to a single device, always  provided  the
1807              size  of  the file matches that of the device, and the file con‐
1808              tains valid metadata.
1809
1810       --stop The devices should be active md arrays  which  will  be  deacti‐
1811              vated, as long as they are not currently in use.
1812
1813
1814       --run  This will fully activate a partially assembled md array.
1815
1816
1817       --readonly
1818              This  will  mark an active array as read-only, providing that it
1819              is not currently being used.
1820
1821
1822       --readwrite
1823              This will change a readonly array back to being read/write.
1824
1825
1826       --scan For all operations except --examine, --scan will cause the oper‐
1827              ation  to  be applied to all arrays listed in /proc/mdstat.  For
1828              --examine, --scan causes all devices listed in the  config  file
1829              to be examined.
1830
1831
1832       -b, --brief
1833              Be less verbose.  This is used with --detail and --examine.  Us‐
1834              ing --brief with --verbose gives an intermediate level  of  ver‐
1835              bosity.
1836
1837

MONITOR MODE

1839       Usage: mdadm --monitor options... devices...
1840
1841
       Monitor mode can work in two ways:

       •   system-wide mode, following all md devices based on /proc/mdstat;

       •   following only the MD devices specified on the command line.

       --scan selects system-wide mode, causing the monitor to track all md
       devices that appear in /proc/mdstat.  If it is not set, then at least
       one device must be specified.
1851
1852       Monitor  usage  causes mdadm to periodically poll a number of md arrays
1853       and to report on any events noticed.
1854
       In both modes, the monitor will keep working as long as there is an
       active array with redundancy that it has been told to follow (with
       --scan, every array is followed).
1858
1859       As well as reporting events, mdadm may move a spare drive from one  ar‐
1860       ray to another if they are in the same spare-group or domain and if the
1861       destination array has a failed drive but no spares.
1862
1863       The result of monitoring the arrays is the generation of events.  These
1864       events  are  passed  to  a  separate  program (if specified) and may be
1865       mailed to a given E-mail address.
1866
1867       When passing events to a program, the program  is  run  once  for  each
1868       event,  and  is  given  2 or 3 command-line arguments: the first is the
1869       name of the event (see below), the second is the name of the md  device
1870       which  is  affected,  and  the third is the name of a related device if
1871       relevant (such as a component device that has failed).
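       Such a program can be a small shell script.  The sketch below is a
       hypothetical handler; the escalation choices in the case statement are
       the script's own, not mdadm behaviour.

```shell
# Hypothetical alert handler, usable as "mdadm --monitor --program=...".
# mdadm passes 2 or 3 arguments: EVENT MD-DEVICE [COMPONENT-DEVICE].
md_alert() {
    event=$1
    array=$2
    component=${3:-none}

    # Record every event; escalate the ones that indicate real trouble.
    echo "md event: $event on $array (component: $component)"
    case $event in
        Fail|FailSpare|DegradedArray|SparesMissing)
            echo "ALERT: $event on $array ($component)" >&2
            ;;
    esac
}

# As mdadm would invoke it for a failed component device:
md_alert Fail /dev/md0 /dev/sdb1
```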
1872
       If --scan is given, then a program or an e-mail address must be
       specified on the command line or in the config file.  If neither is
       available, then mdadm will not monitor anything.  For devices given
       directly on the command line, without a program or email specified,
       each event is reported to stdout.
1878
       Note: On systems managed by systemd, mdmonitor.service should be
       configured.  The service is designed to be the primary solution for
       array monitoring and is configured to work in system-wide mode.  It is
       started and stopped automatically according to the current state and
       types of MD arrays in the system.  The service may require additional
       configuration, such as an e-mail address or delay; that should be done
       in mdadm.conf.
1886
1887       The different events are:
1888
1889
1890           DeviceDisappeared
1891                  An md array which previously was configured  appears  to  no
1892                  longer be configured. (syslog priority: Critical)
1893
1894                  If mdadm was told to monitor an array which is RAID0 or Lin‐
1895                  ear, then it will report DeviceDisappeared  with  the  extra
1896                  information  Wrong-Level.   This is because RAID0 and Linear
1897                  do not support the device-failed, hot-spare and resync oper‐
1898                  ations which are monitored.
1899
1900
1901           RebuildStarted
1902                  An  md  array started reconstruction (e.g. recovery, resync,
1903                  reshape, check, repair). (syslog priority: Warning)
1904
1905
1906           RebuildNN
                  Where NN is a two-digit number (e.g. 05, 48).  This indicates
1908                  that  the  rebuild has reached that percentage of the total.
1909                  The events are generated at a fixed increment  from  0.  The
1910                  increment  size  may be specified with a command-line option
1911                  (the default is 20). (syslog priority: Warning)
1912
1913
1914           RebuildFinished
                  An md array that was rebuilding is no longer doing so,
                  either because it finished normally or was aborted.
                  (syslog priority: Warning)
1918
1919
1920           Fail   An active component device of an array has  been  marked  as
1921                  faulty. (syslog priority: Critical)
1922
1923
1924           FailSpare
1925                  A  spare component device which was being rebuilt to replace
1926                  a faulty device has failed. (syslog priority: Critical)
1927
1928
1929           SpareActive
1930                  A spare component device which was being rebuilt to  replace
1931                  a  faulty  device has been successfully rebuilt and has been
1932                  made active.  (syslog priority: Info)
1933
1934
1935           NewArray
1936                  A new md array has been detected in the  /proc/mdstat  file.
1937                  (syslog priority: Info)
1938
1939
1940           DegradedArray
1941                  A  newly noticed array appears to be degraded.  This message
1942                  is not generated when mdadm notices a  drive  failure  which
1943                  causes  degradation, but only when mdadm notices that an ar‐
1944                  ray is degraded when it first sees the array.  (syslog  pri‐
1945                  ority: Critical)
1946
1947
1948           MoveSpare
1949                  A spare drive has been moved from one array in a spare-group
1950                  or domain to another to allow a failed drive to be replaced.
1951                  (syslog priority: Info)
1952
1953
1954           SparesMissing
1955                  If  mdadm  has been told, via the config file, that an array
1956                  should have a certain number of spare devices, and mdadm de‐
1957                  tects  that it has fewer than this number when it first sees
1958                  the array, it will report a SparesMissing message.   (syslog
1959                  priority: Warning)
1960
1961
1962           TestMessage
1963                  An  array  was  found  at  startup,  and the --test flag was
1964                  given.  (syslog priority: Info)
1965
1966       Only Fail,  FailSpare,  DegradedArray,  SparesMissing  and  TestMessage
1967       cause  Email  to be sent.  All events cause the program to be run.  The
1968       program is run with two or three arguments: the event name,  the  array
1969       device and possibly a second device.
1970
1971       Each event has an associated array device (e.g.  /dev/md1) and possibly
1972       a second device.  For Fail, FailSpare, and SpareActive the  second  de‐
1973       vice is the relevant component device.  For MoveSpare the second device
1974       is the array that the spare was moved from.
1975
1976       For mdadm to move spares from one array to another, the  different  ar‐
1977       rays need to be labeled with the same spare-group or the spares must be
1978       allowed to migrate through matching POLICY domains in the configuration
1979       file.   The  spare-group  name  can be any string; it is only necessary
1980       that different spare groups use different names.
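       In mdadm.conf, the labeling can look like the following sketch (the
       device names, UUID placeholders and group name are examples):

```
# Two arrays sharing one pool of spares: mdadm --monitor may move a spare
# from either array to replace a failed drive in the other.
ARRAY /dev/md0 UUID=<uuid-of-md0> spare-group=pool1
ARRAY /dev/md1 UUID=<uuid-of-md1> spare-group=pool1
```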
1981
1982       When mdadm detects that an array in a spare group has fewer active  de‐
1983       vices  than necessary for the complete array, and has no spare devices,
1984       it will look for another array in the same spare group that has a  full
1985       complement  of working drives and a spare.  It will then attempt to re‐
1986       move the spare from the second array and add it to the first.   If  the
1987       removal  succeeds  but  the  adding fails, then it is added back to the
1988       original array.
1989
1990       If the spare group for a degraded array is not defined, mdadm will look
1991       at the rules of spare migration specified by POLICY lines in mdadm.conf
1992       and then follow similar steps as above if a matching spare is found.
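       A POLICY-based alternative might be sketched as follows (the domain
       name and path pattern are illustrative assumptions; see mdadm.conf(5)
       for the exact keywords and matching rules):

```
# Devices reached through this controller path form one domain; any of
# them may be used as a spare for a degraded array in the same domain.
POLICY domain=domain0 path=pci-0000:00:1f.2-* action=spare
```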
1993
1994

GROW MODE

1996       The GROW mode is used for changing the size or shape of an  active  ar‐
1997       ray.
1998
1999       During the kernel 2.6 era the following changes were added:
2000
2001       •   change the "size" attribute for RAID1, RAID4, RAID5 and RAID6.
2002
2003       •   increase  or decrease the "raid-devices" attribute of RAID0, RAID1,
2004           RAID4, RAID5, and RAID6.
2005
2006       •   change the chunk-size and layout of RAID0, RAID4, RAID5, RAID6  and
2007           RAID10.
2008
2009       •   convert  between  RAID1 and RAID5, between RAID5 and RAID6, between
2010           RAID0, RAID4, and RAID5, and  between  RAID0  and  RAID10  (in  the
2011           near-2 mode).
2012
2013       •   add  a  write-intent  bitmap to any array which supports these bit‐
2014           maps, or remove a write-intent bitmap from such an array.
2015
2016       •   change the array's consistency policy.
2017
2018       Using GROW on containers is currently supported only for  Intel's  IMSM
2019       container  format.   The  number  of  devices in a container can be in‐
2020       creased - which affects all arrays in the container - or an array in  a
       container can be converted between levels where those levels are
       supported by the container, and the conversion is one of those listed
       above.
2024
2025
2026       Notes:
2027
       •   Intel's native checkpointing doesn't use the --backup-file option
           and is transparent to the assembly feature.

       •   Roaming between Windows(R) and Linux systems for IMSM metadata is
           not supported during the grow process.
2033
2034       •   When growing a raid0 device, the new component disk size (or exter‐
2035           nal backup size) should be larger than LCM(old, new) * chunk-size *
2036           2,  where  LCM()  is  the  least common multiple of the old and new
2037           count of component disks, and "* 2" comes from the fact that  mdadm
2038           refuses to use more than half of a spare device for backup space.
2039
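       The minimum size in the last note can be computed directly.  A sketch
       in shell (the chunk size and disk counts are example values):

```shell
# Least common multiple via the GCD (Euclid's algorithm).
gcd() { a=$1; b=$2; while [ "$b" -ne 0 ]; do t=$((a % b)); a=$b; b=$t; done; echo "$a"; }
lcm() { echo $(( $1 * $2 / $(gcd "$1" "$2") )); }

# Minimum component (or external backup) size when growing a raid0:
#   LCM(old_disks, new_disks) * chunk_size * 2
min_backup_size() {
    old=$1; new=$2; chunk=$3   # chunk size in bytes
    echo $(( $(lcm "$old" "$new") * chunk * 2 ))
}

# Growing a 2-disk raid0 to 3 disks with a 512 KiB chunk:
# LCM(2,3) = 6, so 6 * 524288 * 2 = 6291456 bytes (6 MiB).
min_backup_size 2 3 524288
```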
2040
2041   SIZE CHANGES
2042       Normally  when  an array is built the "size" is taken from the smallest
       of the drives.  If all the small drives in an array are, over time,
2044       removed  and  replaced with larger drives, then you could have an array
2045       of large drives with only a small  amount  used.   In  this  situation,
2046       changing  the  "size"  with  "GROW"  mode will allow the extra space to
2047       start being used.  If the size is increased in  this  way,  a  "resync"
2048       process will start to make sure the new parts of the array are synchro‐
2049       nised.
2050
2051       Note that when an array changes size, any filesystem that may be stored
2052       in the array will not automatically grow or shrink to use or vacate the
2053       space.  The filesystem will need to be explicitly told to use the extra
2054       space  after  growing, or to reduce its size prior to shrinking the ar‐
2055       ray.
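       For example, after enlarging an array the filesystem grow step is a
       separate command.  The sketch below only prints the commands it would
       run (the device name is an example; drop the echoes and run as root to
       apply them):

```shell
# Grow the array to use all available component space, then grow the
# filesystem to match.  Commands are printed, not executed.
grow_and_resize() {
    md=$1
    echo mdadm --grow "$md" --size=max
    echo resize2fs "$md"    # ext2/3/4; use xfs_growfs for XFS
}

grow_and_resize /dev/md0
```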
2056
2057       Also, the size of an array cannot be changed while  it  has  an  active
2058       bitmap.   If  an array has a bitmap, it must be removed before the size
2059       can be changed. Once the change is complete a new bitmap  can  be  cre‐
2060       ated.
2061
2062
2063       Note: --grow --size is not yet supported for external file bitmap.
2064
2065
2066   RAID-DEVICES CHANGES
2067       A  RAID1  array  can  work  with  any  number of devices from 1 upwards
       (though 1 is not very useful).  There may be times when you want to
       increase or decrease the number of active devices.  Note that this is
       different from hot-add or hot-remove, which changes the number of
       inactive devices.
2072
2073       When  reducing  the number of devices in a RAID1 array, the slots which
2074       are to be removed from the array must already be vacant.  That is,  the
2075       devices which were in those slots must be failed and removed.
2076
2077       When  the  number  of  devices  is  increased,  any hot spares that are
2078       present will be activated immediately.
2079
2080       Changing the number of active devices in a RAID5 or RAID6 is much  more
2081       effort.  Every block in the array will need to be read and written back
2082       to a new location.  From 2.6.17, the Linux Kernel is able  to  increase
2083       the number of devices in a RAID5 safely, including restarting an inter‐
2084       rupted "reshape".  From 2.6.31, the Linux Kernel is able to increase or
2085       decrease the number of devices in a RAID5 or RAID6.
2086
       From 2.6.35, the Linux Kernel is able to convert a RAID0 into a RAID4
2088       or RAID5.  mdadm uses this functionality and the ability to add devices
2089       to  a RAID4 to allow devices to be added to a RAID0.  When requested to
2090       do this, mdadm will convert the RAID0 to a  RAID4,  add  the  necessary
2091       disks  and  make the reshape happen, and then convert the RAID4 back to
2092       RAID0.
2093
       When decreasing the number of devices, the size of the array will also
       decrease.  If there was data in the array, it could get destroyed and
       this is not reversible, so you should first shrink the filesystem on
       the array to fit within the new size.  To help prevent accidents, mdadm
       requires that the size of the array be decreased first with mdadm
       --grow --array-size.  This is a reversible change which simply makes
       the end of the array inaccessible.  The integrity of any data can then
       be checked before the non-reversible reduction in the number of devices
       is requested.
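       The safe ordering can be sketched as a command sequence (all names and
       sizes are examples; the commands are printed rather than executed):

```shell
# Shrink a 4-device RAID5 to 3 devices.  Commands are printed, not
# executed; sizes are in KiB and purely illustrative.
shrink_raid5() {
    md=$1; new_size_k=$2
    echo resize2fs "$md" "${new_size_k}K"                # 1. shrink the filesystem first
    echo mdadm --grow "$md" --array-size="$new_size_k"   # 2. reversible size reduction
    echo fsck -n "$md"                                   # 3. check data integrity
    echo mdadm --grow "$md" --raid-devices=3 \
         --backup-file=/root/md0-backup                  # 4. non-reversible reshape
}

shrink_raid5 /dev/md0 100000000
```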
2103
2104       When relocating the first few stripes on a RAID5 or RAID6,  it  is  not
2105       possible  to  keep  the  data  on disk completely consistent and crash-
2106       proof.  To provide the required safety, mdadm disables  writes  to  the
2107       array  while this "critical section" is reshaped, and takes a backup of
2108       the data that is in that section.  For grows, this backup may be stored
2109       in  any spare devices that the array has, however it can also be stored
2110       in a separate file specified with the --backup-file option, and is  re‐
2111       quired  to  be  specified  for  shrinks,  RAID level changes and layout
2112       changes.  If this option is used, and the system does crash during  the
2113       critical  period, the same file must be passed to --assemble to restore
2114       the backup and reassemble the array.  When shrinking rather than  grow‐
2115       ing  the array, the reshape is done from the end towards the beginning,
2116       so the "critical section" is at the end of the reshape.
2117
2118
2119   LEVEL CHANGES
2120       Changing the RAID level of any array happens instantaneously.   However
2121       in  the  RAID5 to RAID6 case this requires a non-standard layout of the
2122       RAID6 data, and in the RAID6 to RAID5 case that non-standard layout  is
2123       required  before  the  change  can be accomplished.  So while the level
2124       change is instant, the accompanying layout change can take quite a long
2125       time.  A --backup-file is required.  If the array is not simultaneously
2126       being grown or shrunk, so that the array size will remain  the  same  -
2127       for  example,  reshaping  a  3-drive  RAID5  into a 4-drive RAID6 - the
2128       backup file will be used not just for a "critical section" but through‐
2129       out the reshape operation, as described below under LAYOUT CHANGES.
2130
2131
2132   CHUNK-SIZE AND LAYOUT CHANGES
       Changing the chunk-size or layout without also changing the number of
       devices at the same time will involve re-writing all blocks in-place.
2135       To  ensure  against  data  loss in the case of a crash, a --backup-file
2136       must be provided for these changes.  Small sections of the  array  will
2137       be  copied  to  the  backup file while they are being rearranged.  This
2138       means that all the data is copied twice, once to the backup and once to
2139       the  new  layout  on  the  array,  so this type of reshape will go very
2140       slowly.
2141
2142       If the reshape is interrupted for any reason, this backup file must  be
2143       made  available  to  mdadm  --assemble so the array can be reassembled.
2144       Consequently, the file cannot be stored on the device being reshaped.
2145
2146
2147
2148   BITMAP CHANGES
2149       A write-intent bitmap can be added to, or removed from, an  active  ar‐
2150       ray.   Either  internal  bitmaps, or bitmaps stored in a separate file,
2151       can be added.  Note that if you add a bitmap stored in a file which  is
2152       in  a  filesystem  that is on the RAID array being affected, the system
2153       will deadlock.  The bitmap must be on a separate filesystem.
2154
2155
2156   CONSISTENCY POLICY CHANGES
2157       The consistency policy of an active array can be changed by  using  the
2158       --consistency-policy option in Grow mode. Currently this works only for
       the ppl and resync policies and allows enabling or disabling the RAID5
       Partial Parity Log (PPL).
2161
2162

INCREMENTAL MODE

2164       Usage:  mdadm  --incremental  [--run]  [--quiet]  component-device [op‐
2165                   tional-aliases-for-device]
2166
2167       Usage: mdadm --incremental --fail component-device
2168
2169       Usage: mdadm --incremental --rebuild-map
2170
2171       Usage: mdadm --incremental --run --scan
2172
2173
2174       This mode is designed to be used in conjunction with a device discovery
2175       system.   As devices are found in a system, they can be passed to mdadm
2176       --incremental to be conditionally added to an appropriate array.
2177
2178       Conversely, it can also be used with the --fail flag to do just the op‐
2179       posite  and  find whatever array a particular device is part of and re‐
2180       move the device from that array.
2181
2182       If the device passed is a CONTAINER device created by a  previous  call
2183       to  mdadm,  then rather than trying to add that device to an array, all
2184       the arrays described by the metadata of the container will be started.
2185
2186       mdadm performs a number of tests to determine if the device is part  of
2187       an  array, and which array it should be part of.  If an appropriate ar‐
2188       ray is found, or can be created, mdadm adds the device to the array and
2189       conditionally starts the array.
2190
2191       Note  that  mdadm will normally only add devices to an array which were
       previously working (active or spare) parts of that array.  The support
       for automatic inclusion of a new drive as a spare in some array
       requires configuration through POLICY in the config file.
2195
       The tests that mdadm makes are as follows:
2197
2198       +      Is the device permitted by mdadm.conf?  That is, is it listed in
              a DEVICES line in that file.  If DEVICES is absent then the
              default is to allow any device.  Similarly if DEVICES contains the
2201              special  word  partitions then any device is allowed.  Otherwise
2202              the device name given to mdadm, or one of the aliases given,  or
2203              an alias found in the filesystem, must match one of the names or
2204              patterns in a DEVICES line.
2205
              This is the only context where the aliases are used.  They are
              usually provided by udev rules mentioning $env{DEVLINKS}.
2208
2209
2210       +      Does the device have a valid md superblock?  If a specific meta‐
2211              data version is requested with --metadata or -e then  only  that
2212              style  of  metadata is accepted, otherwise mdadm finds any known
              version of metadata.  If no md metadata is found, the device
              may still be added to an array as a spare if POLICY allows.
2215
2216
2217
2218       mdadm  keeps  a  list  of  arrays  that  it  has partially assembled in
2219       /run/mdadm/map.  If no array exists which matches the metadata  on  the
2220       new  device,  mdadm must choose a device name and unit number.  It does
2221       this based on any name given in  mdadm.conf  or  any  name  information
2222       stored in the metadata.  If this name suggests a unit number, that num‐
2223       ber will be used, otherwise a free unit number will  be  chosen.   Nor‐
2224       mally mdadm will prefer to create a partitionable array, however if the
2225       CREATE line in mdadm.conf suggests that a  non-partitionable  array  is
2226       preferred, that will be honoured.
2227
2228       If  the array is not found in the config file and its metadata does not
2229       identify it as belonging to the "homehost", then mdadm  will  choose  a
2230       name  for  the  array  which  is certain not to conflict with any array
       which does belong to this host.  It does this by adding an underscore
2232       and a small number to the name preferred by the metadata.
2233
2234       Once  an appropriate array is found or created and the device is added,
2235       mdadm must decide if the array is ready to be started.   It  will  nor‐
2236       mally compare the number of available (non-spare) devices to the number
2237       of devices that the metadata suggests need to be active.  If there  are
2238       at  least that many, the array will be started.  This means that if any
2239       devices are missing the array will not be restarted.
2240
2241       As an alternative, --run may be passed to mdadm in which case the array
2242       will be run as soon as there are enough devices present for the data to
2243       be accessible.  For a RAID1, that means one device will start  the  ar‐
2244       ray.   For  a clean RAID5, the array will be started as soon as all but
2245       one drive is present.
2246
2247       Note that neither of these approaches is really ideal.  If  it  can  be
2248       known that all device discovery has completed, then
2249          mdadm -IRs
2250       can  be run which will try to start all arrays that are being incremen‐
2251       tally assembled.  They are started in "read-auto" mode  in  which  they
2252       are  read-only until the first write request.  This means that no meta‐
2253       data updates are made and no attempt at  resync  or  recovery  happens.
2254       Further  devices  that  are  found  before the first write can still be
2255       added safely.
2256
2257

ENVIRONMENT

2259       This section describes environment variables that affect how mdadm  op‐
2260       erates.
2261
2262
2263       MDADM_NO_MDMON
2264              Setting  this  value  to 1 will prevent mdadm from automatically
2265              launching mdmon.  This variable is intended primarily for debug‐
2266              ging mdadm/mdmon.
2267
2268
2269       MDADM_NO_UDEV
2270              Normally,  mdadm  does  not create any device nodes in /dev, but
2271              leaves that task to udev.  If udev appears not to be configured,
              or if this environment variable is set to '1', then mdadm will
              create any devices that are needed.
2274
2275
2276       MDADM_NO_SYSTEMCTL
2277              If mdadm detects that systemd is in use it will normally request
2278              systemd  to  start various background tasks (particularly mdmon)
2279              rather than forking and running them in  the  background.   This
2280              can be suppressed by setting MDADM_NO_SYSTEMCTL=1.
2281
2282
2283       IMSM_NO_PLATFORM
2284              A  key value of IMSM metadata is that it allows interoperability
2285              with boot ROMs on Intel platforms, and with other major  operat‐
2286              ing  systems.  Consequently, mdadm will only allow an IMSM array
              to be created or modified if it detects that it is running on an
2288              Intel  platform which supports IMSM, and supports the particular
2289              configuration of IMSM that is being requested (some  functional‐
2290              ity requires newer OROM support).
2291
2292              These  checks can be suppressed by setting IMSM_NO_PLATFORM=1 in
2293              the environment.  This can be useful for testing or for disaster
2294              recovery.  You should be aware that interoperability may be com‐
2295              promised by setting this value.
2296
2297
2298       MDADM_GROW_ALLOW_OLD
2299              If an array is stopped while it is performing a reshape and that
2300              reshape  was making use of a backup file, then when the array is
2301              re-assembled mdadm will sometimes complain that the backup  file
2302              is too old.  If this happens and you are certain it is the right
2303              backup  file,  you  can  over-ride   this   check   by   setting
2304              MDADM_GROW_ALLOW_OLD=1 in the environment.
2305
2306
2307       MDADM_CONF_AUTO
2308              Any  string  given in this variable is added to the start of the
2309              AUTO line in the config file, or treated as the whole AUTO  line
2310              if  none  is  given.  It can be used to disable certain metadata
2311              types when mdadm is called from a boot script.  For example
2312                  export MDADM_CONF_AUTO='-ddf -imsm'
2313              will make sure that mdadm does not  automatically  assemble  any
2314              DDF  or  IMSM arrays that are found.  This can be useful on sys‐
2315              tems configured to manage such arrays with dmraid.
2316
2317
2318

EXAMPLES

2320         mdadm --query /dev/name-of-device
2321       This will find out if a given device is a RAID array,  or  is  part  of
2322       one, and will provide brief information about the device.
2323
2324         mdadm --assemble --scan
2325       This  will  assemble and start all arrays listed in the standard config
2326       file.  This command will typically go in a system startup file.
2327
2328         mdadm --stop --scan
2329       This will shut down all arrays that can be shut down (i.e. are not cur‐
2330       rently in use).  This will typically go in a system shutdown script.
2331
2332         mdadm --follow --scan --delay=120
2333       If  (and  only  if)  there  is an Email address or program given in the
2334       standard config file, then monitor the status of all arrays  listed  in
       that file by polling them every 2 minutes.
2336
2337         mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hd[ac]1
2338       Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.
2339
2340         echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
2341         mdadm --detail --scan >> mdadm.conf
2342       This  will  create a prototype config file that describes currently ac‐
2343       tive arrays that are known to be made from partitions of  IDE  or  SCSI
2344       drives.   This file should be reviewed before being used as it may con‐
2345       tain unwanted detail.
2346
2347         echo 'DEVICE /dev/hd[a-z] /dev/sd*[a-z]' > mdadm.conf
2348         mdadm --examine --scan --config=mdadm.conf >> mdadm.conf
2349       This will find arrays which could be assembled from  existing  IDE  and
2350       SCSI  whole  drives  (not partitions), and store the information in the
2351       format of a config file.  This file is very likely to contain  unwanted
2352       detail,  particularly  the devices= entries.  It should be reviewed and
2353       edited before being used as an actual config file.
2354
2355         mdadm --examine --brief --scan --config=partitions
2356         mdadm -Ebsc partitions
2357       Create a list of devices by reading /proc/partitions,  scan  these  for
       RAID superblocks, and print out a brief listing of all that were found.
2359
2360         mdadm -Ac partitions -m 0 /dev/md0
2361       Scan all partitions and devices listed in /proc/partitions and assemble
2362       /dev/md0 out of all such devices with a RAID superblock  with  a  minor
2363       number of 0.
2364
2365         mdadm --monitor --scan --daemonise > /run/mdadm/mon.pid
       If the config file contains a mail address or alert program, run mdadm
       in the background in monitor mode monitoring all md devices.  Also
       write the pid of the mdadm daemon to /run/mdadm/mon.pid.
2369
2370         mdadm -Iq /dev/somedevice
2371       Try to incorporate newly discovered device into some array as appropri‐
2372       ate.
2373
2374         mdadm --incremental --rebuild-map --run --scan
2375       Rebuild the array map from any current arrays, and then start any  that
2376       can be started.
2377
2378         mdadm /dev/md4 --fail detached --remove detached
2379       Any  devices  which are components of /dev/md4 will be marked as faulty
       and then removed from the array.
2381
2382         mdadm --grow /dev/md4 --level=6 --backup-file=/root/backup-md4
2383       The array /dev/md4 which is currently a RAID5 array will  be  converted
2384       to  RAID6.   There should normally already be a spare drive attached to
2385       the array as a RAID6 needs one more drive than a matching RAID5.
2386
2387         mdadm --create /dev/md/ddf --metadata=ddf --raid-disks 6 /dev/sd[a-f]
2388       Create a DDF array over 6 devices.
2389
2390         mdadm --create /dev/md/home -n3 -l5 -z 30000000 /dev/md/ddf
2391       Create a RAID5 array over any 3 devices in the given DDF set.  Use only
2392       30 gigabytes of each device.
2393
2394         mdadm -A /dev/md/ddf1 /dev/sd[a-f]
       Assemble a pre-existing ddf array.
2396
2397         mdadm -I /dev/md/ddf1
2398       Assemble  all arrays contained in the ddf array, assigning names as ap‐
2399       propriate.
2400
2401         mdadm --create --help
2402       Provide help about the Create mode.
2403
2404         mdadm --config --help
2405       Provide help about the format of the config file.
2406
2407         mdadm --help
2408       Provide general help.
2409
2410

FILES

2412   /proc/mdstat
2413       If you're using the /proc filesystem, /proc/mdstat lists all active  md
2414       devices  with  information  about them.  mdadm uses this to find arrays
2415       when --scan is given in Misc mode, and to monitor array  reconstruction
       in Monitor mode.


   /etc/mdadm.conf (or /etc/mdadm/mdadm.conf)
       Default config file.  See mdadm.conf(5) for more details.


   /etc/mdadm.conf.d (or /etc/mdadm/mdadm.conf.d)
       Default directory containing configuration files.  See mdadm.conf(5)
       for more details.


   /run/mdadm/map
       When --incremental mode is used, this file gets a list of arrays
       currently being created.


DEVICE NAMES

       mdadm understands two sorts of names for array devices.

       The first is the so-called 'standard' format name, which matches the
       names used by the kernel and which appear in /proc/mdstat.

       The second sort can be freely chosen, but must reside in /dev/md/.
       When giving a device name to mdadm to create or assemble an array,
       either a full path name such as /dev/md0 or /dev/md/home can be
       given, or just the suffix of the second sort of name, such as home.

       When mdadm chooses device names during auto-assembly or incremental
       assembly, it will sometimes add a small sequence number to the end of
       the name to avoid conflicts between multiple arrays that have the
       same name.  If mdadm can reasonably determine that the array really
       is meant for this host, either by a hostname in the metadata, or by
       the presence of the array in mdadm.conf, then it will leave off the
       suffix if possible.  Also, if the homehost is specified as <ignore>,
       mdadm will only use a suffix if a different array of the same name
       already exists or is listed in the config file.

       The standard names for non-partitioned arrays (the only sort of md
       array available in 2.4 and earlier) are of the form

              /dev/mdNN

       where NN is a number.  The standard names for partitionable arrays
       (as available from 2.6 onwards) are of the form:

              /dev/md_dNN

       Partition numbers should be indicated by adding "pMM" to these, thus
       "/dev/md_d1p2".

       From kernel version 2.6.28 the "non-partitioned array" can actually
       be partitioned.  So the "md_dNN" names are no longer needed, and
       partitions such as "/dev/mdNNpXX" are possible.

       From kernel version 2.6.29 standard names can be non-numeric
       following the form:

              /dev/md_XXX

       where XXX is any string.  These names are supported by mdadm since
       version 3.3 provided they are enabled in mdadm.conf.


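The naming forms described above can be told apart mechanically.  A minimal shell sketch (the classify helper and its category labels are illustrative assumptions, not anything mdadm provides):

```shell
# Classify an md device path according to the naming forms described above.
# Illustrative only; the category names are made up for this example.
classify() {
    case "$1" in
        /dev/md_d[0-9]*) echo standard-partitionable ;;  # /dev/md_dNN
        /dev/md_*)       echo standard-nonnumeric ;;     # /dev/md_XXX (2.6.29+)
        /dev/md[0-9]*)   echo standard ;;                # /dev/mdNN
        /dev/md/*)       echo named ;;                   # freely chosen, under /dev/md/
        *)               echo unknown ;;
    esac
}
classify /dev/md0      # standard
classify /dev/md_d1p2  # standard-partitionable
classify /dev/md/home  # named
```

Note the case patterns are ordered from most to least specific, so /dev/md_dNN is matched before the general /dev/md_* form.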

NOTE

       mdadm was previously known as mdctl.



SEE ALSO

       For further information on mdadm usage, MD and the various levels of
       RAID, see:

              https://raid.wiki.kernel.org/

       (based upon Jakob Østergaard's Software-RAID.HOWTO)

       The latest version of mdadm should always be available from

              https://www.kernel.org/pub/linux/utils/raid/mdadm/

       Related man pages:

       mdmon(8), mdadm.conf(5), md(4).


v4.2                                                                  MDADM(8)