MD(4)                      Kernel Interfaces Manual                      MD(4)


NAME
       md - Multiple Device driver aka Linux Software RAID

SYNOPSIS
       /dev/mdn
       /dev/md/n
       /dev/md/name
DESCRIPTION
       The md driver provides virtual devices that are created from one or
       more independent underlying devices.  This array of devices often
       contains redundancy and the devices are often disk drives, hence the
       acronym RAID which stands for a Redundant Array of Independent Disks.

       md supports RAID levels 1 (mirroring), 4 (striped array with parity
       device), 5 (striped array with distributed parity information), 6
       (striped array with distributed dual redundancy information), and 10
       (striped and mirrored).  If some number of underlying devices fails
       while using one of these levels, the array will continue to function;
       this number is one for RAID levels 4 and 5, two for RAID level 6, all
       but one (N-1) for RAID level 1, and dependent on the configuration for
       level 10.

       md also supports a number of pseudo RAID (non-redundant)
       configurations including RAID0 (striped array), LINEAR (catenated
       array), MULTIPATH (a set of different interfaces to the same device),
       and FAULTY (a layer over a single device into which errors can be
       injected).

   MD METADATA
       Each device in an array may have some metadata stored in the device.
       This metadata is sometimes called a superblock.  The metadata records
       information about the structure and state of the array, which allows
       the array to be reliably re-assembled after a shutdown.

       From Linux kernel version 2.6.10, md provides support for two
       different formats of metadata, and other formats can be added.  Prior
       to this release, only one format was supported.

       The common format, known as version 0.90, has a superblock that is 4K
       long and is written into a 64K aligned block that starts at least 64K
       and less than 128K from the end of the device (i.e. to get the address
       of the superblock, round the size of the device down to a multiple of
       64K and then subtract 64K).  The available size of each device is the
       amount of space before the superblock, so between 64K and 128K is lost
       when a device is incorporated into an MD array.  This superblock
       stores multi-byte fields in a processor-dependent manner, so arrays
       cannot easily be moved between computers with different processors.
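
       As a rough Python sketch of the placement rule described above
       (illustrative only, not mdadm's actual code, and working in bytes
       rather than sectors):

           def v090_superblock_offset(device_size_bytes):
               # Round the device size down to a multiple of 64K, then
               # subtract 64K; the 4K superblock starts at this offset.
               MD_RESERVED = 64 * 1024
               rounded = (device_size_bytes // MD_RESERVED) * MD_RESERVED
               return rounded - MD_RESERVED

           # For a 1000000000-byte device the superblock starts at byte
           # 999882752; the space from there to the end of the device is
           # unavailable to the array.
           print(v090_superblock_offset(1000000000))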

       The new format, known as version 1, has a superblock that is normally
       1K long, but can be longer.  It is normally stored between 8K and 12K
       from the end of the device, on a 4K boundary, though variations can be
       stored at the start of the device (version 1.1) or 4K from the start
       of the device (version 1.2).  This metadata format stores multibyte
       data in a processor-independent format and supports up to hundreds of
       component devices (version 0.90 only supports 28).

       The metadata contains, among other things:

       LEVEL  The manner in which the devices are arranged into the array
              (linear, raid0, raid1, raid4, raid5, raid10, multipath).

       UUID   a 128 bit Universally Unique Identifier that identifies the
              array that contains this device.

       When a version 0.90 array is being reshaped (e.g. adding extra devices
       to a RAID5), the version number is temporarily set to 0.91.  This
       ensures that if the reshape process is stopped in the middle (e.g. by
       a system crash) and the machine boots into an older kernel that does
       not support reshaping, then the array will not be assembled (which
       would cause data corruption) but will be left untouched until a kernel
       that can complete the reshape process is used.

   ARRAYS WITHOUT METADATA
       While it is usually best to create arrays with superblocks so that
       they can be assembled reliably, there are some circumstances when an
       array without superblocks is preferred.  These include:

       LEGACY ARRAYS
              Early versions of the md driver only supported Linear and Raid0
              configurations and did not use a superblock (which is less
              critical with these configurations).  While such arrays should
              be rebuilt with superblocks if possible, md continues to
              support them.

       FAULTY Being a largely transparent layer over a different device, the
              FAULTY personality doesn't gain anything from having a
              superblock.

       MULTIPATH
              It is often possible to detect devices which are different
              paths to the same storage directly rather than having a
              distinctive superblock written to the device and searched for
              on all paths.  In this case, a MULTIPATH array with no
              superblock makes sense.

       RAID1  In some configurations it might be desired to create a raid1
              configuration that does not use a superblock, and to maintain
              the state of the array elsewhere.  While not encouraged for
              general use, it does have special-purpose uses and is
              supported.

   ARRAYS WITH EXTERNAL METADATA
       From release 2.6.28, the md driver supports arrays with externally
       managed metadata.  That is, the metadata is not managed by the kernel
       but rather by a user-space program which is external to the kernel.
       This allows support for a variety of metadata formats without
       cluttering the kernel with lots of details.

       md is able to communicate with the user-space program through various
       sysfs attributes so that it can make appropriate changes to the
       metadata - for example to mark a device as faulty.  When necessary, md
       will wait for the program to acknowledge the event by writing to a
       sysfs attribute.  The manual page for mdmon(8) contains more detail
       about this interaction.

   CONTAINERS
       Many metadata formats use a single block of metadata to describe a
       number of different arrays which all use the same set of devices.  In
       this case it is helpful for the kernel to know about the full set of
       devices as a whole.  This set is known to md as a container.  A
       container is an md array with externally managed metadata and with
       device offset and size so that it just covers the metadata part of the
       devices.  The remainder of each device is available to be incorporated
       into various arrays.

   LINEAR
       A linear array simply catenates the available space on each drive to
       form one large virtual drive.

       One advantage of this arrangement over the more common RAID0
       arrangement is that the array may be reconfigured at a later time with
       an extra drive, so the array is made bigger without disturbing the
       data that is on the array.  This can even be done on a live array.

       If a chunksize is given with a LINEAR array, the usable space on each
       device is rounded down to a multiple of this chunksize.

   RAID0
       A RAID0 array (which has zero redundancy) is also known as a striped
       array.  A RAID0 array is configured at creation with a Chunk Size
       which must be a power of two (prior to Linux 2.6.31), and at least 4
       kibibytes.

       The RAID0 driver assigns the first chunk of the array to the first
       device, the second chunk to the second device, and so on until all
       drives have been assigned one chunk.  This collection of chunks forms
       a stripe.  Further chunks are gathered into stripes in the same way,
       and are assigned to the remaining space in the drives.

       If devices in the array are not all the same size, then once the
       smallest device has been exhausted, the RAID0 driver starts collecting
       chunks into smaller stripes that only span the drives which still have
       remaining space.
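
       The following Python sketch illustrates the mapping described above
       for the simple case where all member devices are the same size.  The
       function name and the byte-based arithmetic are illustrative only;
       the kernel works in sectors and handles the unequal-size zones
       separately.

           def raid0_map(array_offset, chunk_size, n_devices):
               # Which chunk the offset falls in, and where inside it.
               chunk_number = array_offset // chunk_size
               within_chunk = array_offset % chunk_size
               # Chunks are handed out round-robin across the devices;
               # each full round of n_devices chunks is one stripe.
               device_index = chunk_number % n_devices
               stripe_number = chunk_number // n_devices
               return device_index, stripe_number * chunk_size + within_chunk

           # With 64 KiB chunks on 3 equal devices, array byte 200000 lands
           # in chunk 3, i.e. on device 0 within the second stripe.
           print(raid0_map(200000, 64 * 1024, 3))     # (0, 68928)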

   RAID1
       A RAID1 array is also known as a mirrored set (though mirrors tend to
       provide reflected images, which RAID1 does not) or a plex.

       Once initialised, each device in a RAID1 array contains exactly the
       same data.  Changes are written to all devices in parallel.  Data is
       read from any one device.  The driver attempts to distribute read
       requests across all devices to maximise performance.

       All devices in a RAID1 array should be the same size.  If they are
       not, then only the amount of space available on the smallest device is
       used (any extra space on other devices is wasted).

       Note that the read balancing done by the driver does not make the
       RAID1 performance profile the same as for RAID0; a single stream of
       sequential input will not be accelerated (e.g. a single dd), but
       multiple sequential streams or a random workload will use more than
       one spindle.  In theory, having an N-disk RAID1 will allow N
       sequential threads to read from all disks.

       Individual devices in a RAID1 can be marked as "write-mostly".  These
       drives are excluded from the normal read balancing and will only be
       read from when there is no other option.  This can be useful for
       devices connected over a slow link.

   RAID4
       A RAID4 array is like a RAID0 array with an extra device for storing
       parity.  This device is the last of the active devices in the array.
       Unlike RAID0, RAID4 also requires that all stripes span all drives, so
       extra space on devices that are larger than the smallest is wasted.

       When any block in a RAID4 array is modified, the parity block for that
       stripe (i.e. the block in the parity device at the same device offset
       as the stripe) is also modified so that the parity block always
       contains the "parity" for the whole stripe.  That is, its content is
       equivalent to the result of performing an exclusive-or operation
       between all the data blocks in the stripe.

       This allows the array to continue to function if one device fails.
       The data that was on that device can be calculated as needed from the
       parity block and the other data blocks.
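
       The parity relationship can be illustrated with a small Python sketch
       (byte-wise XOR on toy two-byte blocks; not the kernel's
       implementation):

           def xor_blocks(blocks):
               # The parity block is the byte-wise exclusive-or of the
               # blocks given.
               parity = bytearray(len(blocks[0]))
               for block in blocks:
                   for i, byte in enumerate(block):
                       parity[i] ^= byte
               return bytes(parity)

           data = [b'\x11\x22', b'\x33\x44', b'\x55\x66']   # three data blocks
           parity = xor_blocks(data)

           # XOR-ing the parity with the surviving blocks reproduces a
           # lost block.
           assert xor_blocks([parity, data[0], data[2]]) == data[1]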

   RAID5
       RAID5 is very similar to RAID4.  The difference is that the parity
       blocks for each stripe, instead of being on a single device, are
       distributed across all devices.  This allows more parallelism when
       writing, as two different block updates will quite possibly affect
       parity blocks on different devices so there is less contention.

       This also allows more parallelism when reading, as read requests are
       distributed over all the devices in the array instead of all but one.

   RAID6
       RAID6 is similar to RAID5, but can handle the loss of any two devices
       without data loss.  Accordingly, it requires N+2 drives to store N
       drives worth of data.

       The performance for RAID6 is slightly lower than, but comparable to,
       RAID5 in normal mode and single disk failure mode.  It is very slow in
       dual disk failure mode, however.

   RAID10
       RAID10 provides a combination of RAID1 and RAID0, and is sometimes
       known as RAID1+0.  Every datablock is duplicated some number of times,
       and the resulting collection of datablocks are distributed over
       multiple drives.

       When configuring a RAID10 array, it is necessary to specify the number
       of replicas of each data block that are required (this will normally
       be 2) and whether the replicas should be 'near', 'offset' or 'far'.
       (Note that the 'offset' layout is only available from 2.6.18.)

       When 'near' replicas are chosen, the multiple copies of a given chunk
       are laid out consecutively across the stripes of the array, so the two
       copies of a datablock will likely be at the same offset on two
       adjacent devices.

       When 'far' replicas are chosen, the multiple copies of a given chunk
       are laid out quite distant from each other.  The first copy of all
       data blocks will be striped across the early part of all drives in
       RAID0 fashion, and then the next copy of all blocks will be striped
       across a later section of all drives, always ensuring that all copies
       of any given block are on different drives.

       The 'far' arrangement can give sequential read performance equal to
       that of a RAID0 array, but at the cost of reduced write performance.

       When 'offset' replicas are chosen, the multiple copies of a given
       chunk are laid out on consecutive drives and at consecutive offsets.
       Effectively each stripe is duplicated and the copies are offset by one
       device.  This should give similar read characteristics to 'far' if a
       suitably large chunk size is used, but without as much seeking for
       writes.

       It should be noted that the number of devices in a RAID10 array need
       not be a multiple of the number of replicas of each data block;
       however, there must be at least as many devices as replicas.

       If, for example, an array is created with 5 devices and 2 replicas,
       then space equivalent to 2.5 of the devices will be available, and
       every block will be stored on two different devices.
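
       The capacity rule can be expressed as a one-line calculation (an
       illustrative Python sketch assuming equal-sized devices):

           def raid10_capacity(n_devices, device_size, replicas):
               # Usable space is the total raw space divided by the number
               # of replicas of each data block.
               return n_devices * device_size / replicas

           print(raid10_capacity(5, 1.0, 2))   # 2.5 devices' worth of space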

       Finally, it is possible to have an array with both 'near' and 'far'
       copies.  If an array is configured with 2 near copies and 2 far
       copies, then there will be a total of 4 copies of each block, each on
       a different drive.  This is an artifact of the implementation and is
       unlikely to be of real value.

   MULTIPATH
       MULTIPATH is not really a RAID at all as there is only one real device
       in a MULTIPATH md array.  However there are multiple access points
       (paths) to this device, and one of these paths might fail, so there
       are some similarities.

       A MULTIPATH array is composed of a number of logically different
       devices, often fibre channel interfaces, that all refer to the same
       real device.  If one of these interfaces fails (e.g. due to cable
       problems), the multipath driver will attempt to redirect requests to
       another interface.

       The MULTIPATH driver is not receiving any ongoing development and
       should be considered a legacy driver.  The device-mapper based
       multipath drivers should be preferred for new installations.

   FAULTY
       The FAULTY md module is provided for testing purposes.  A faulty array
       has exactly one component device and is normally assembled without a
       superblock, so the md array created provides direct access to all of
       the data in the component device.

       The FAULTY module may be requested to simulate faults to allow testing
       of other md levels or of filesystems.  Faults can be chosen to trigger
       on read requests or write requests, and can be transient (a subsequent
       read/write at the address will probably succeed) or persistent
       (subsequent read/write of the same address will fail).  Further, read
       faults can be "fixable" meaning that they persist until a write
       request at the same address.

       Fault types can be requested with a period.  In this case, the fault
       will recur repeatedly after the given number of requests of the
       relevant type.  For example if persistent read faults have a period of
       100, then every 100th read request would generate a fault, and the
       faulty sector would be recorded so that subsequent reads on that
       sector would also fail.

       There is a limit to the number of faulty sectors that are remembered.
       Faults generated after this limit is exhausted are treated as
       transient.

       The list of faulty sectors can be flushed, and the active list of
       failure modes can be cleared.

   UNCLEAN SHUTDOWN
       When changes are made to a RAID1, RAID4, RAID5, RAID6, or RAID10 array
       there is a possibility of inconsistency for short periods of time as
       each update requires at least two blocks to be written to different
       devices, and these writes probably won't happen at exactly the same
       time.  Thus if a system with one of these arrays is shut down in the
       middle of a write operation (e.g. due to power failure), the array may
       not be consistent.

       To handle this situation, the md driver marks an array as "dirty"
       before writing any data to it, and marks it as "clean" when the array
       is being disabled, e.g. at shutdown.  If the md driver finds an array
       to be dirty at startup, it proceeds to correct any possible
       inconsistency.  For RAID1, this involves copying the contents of the
       first drive onto all other drives.  For RAID4, RAID5 and RAID6 this
       involves recalculating the parity for each stripe and making sure that
       the parity block has the correct data.  For RAID10 it involves copying
       one of the replicas of each block onto all the others.  This process,
       known as "resynchronising" or "resync", is performed in the
       background.  The array can still be used, though possibly with reduced
       performance.

       If a RAID4, RAID5 or RAID6 array is degraded (missing at least one
       drive, two for RAID6) when it is restarted after an unclean shutdown,
       it cannot recalculate parity, and so it is possible that data might be
       undetectably corrupted.  The 2.4 md driver does not alert the operator
       to this condition.  The 2.6 md driver will fail to start an array in
       this condition without manual intervention, though this behaviour can
       be overridden by a kernel parameter.

   RECOVERY
       If the md driver detects a write error on a device in a RAID1, RAID4,
       RAID5, RAID6, or RAID10 array, it immediately disables that device
       (marking it as faulty) and continues operation on the remaining
       devices.  If there are spare drives, the driver will start recreating
       on one of the spare drives the data which was on that failed drive,
       either by copying a working drive in a RAID1 configuration, or by
       doing calculations with the parity block on RAID4, RAID5 or RAID6, or
       by finding and copying originals for RAID10.

       In kernels prior to about 2.6.15, a read error would cause the same
       effect as a write error.  In later kernels, a read error will instead
       cause md to attempt a recovery by overwriting the bad block, i.e. it
       will find the correct data from elsewhere, write it over the block
       that failed, and then try to read it back again.  If either the write
       or the re-read fails, md will treat the error the same way that a
       write error is treated, and will fail the whole device.

       While this recovery process is happening, the md driver will monitor
       accesses to the array and will slow down the rate of recovery if other
       activity is happening, so that normal access to the array will not be
       unduly affected.  When no other activity is happening, the recovery
       process proceeds at full speed.  The actual speed targets for the two
       different situations can be controlled by the speed_limit_min and
       speed_limit_max control files mentioned below.

   SCRUBBING AND MISMATCHES
       As storage devices can develop bad blocks at any time it is valuable
       to regularly read all blocks on all devices in an array so as to catch
       such bad blocks early.  This process is called scrubbing.

       md arrays can be scrubbed by writing either check or repair to the
       file md/sync_action in the sysfs directory for the device.
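
       For example, assuming the array is /dev/md0 (so its sysfs directory
       is /sys/block/md0/md) and the caller has sufficient privileges, a
       scrub could be triggered and its result read roughly as in this
       Python sketch:

           md_dir = "/sys/block/md0/md"       # sysfs directory for /dev/md0

           with open(md_dir + "/sync_action", "w") as f:
               f.write("check")           # or "repair" to correct mismatches

           # ... once the scrub has finished ...
           with open(md_dir + "/mismatch_cnt") as f:
               print("mismatched sectors:", f.read().strip())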

       Requesting a scrub will cause md to read every block on every device
       in the array, and check that the data is consistent.  For RAID1 and
       RAID10, this means checking that the copies are identical.  For RAID4,
       RAID5, RAID6 this means checking that the parity block is (or blocks
       are) correct.

       If a read error is detected during this process, the normal read-error
       handling causes correct data to be found from other devices and to be
       written back to the faulty device.  In many cases this will
       effectively fix the bad block.

       If all blocks read successfully but are found to not be consistent,
       then this is regarded as a mismatch.

       If check was used, then no action is taken to handle the mismatch; it
       is simply recorded.  If repair was used, then a mismatch will be
       repaired in the same way that resync repairs arrays.  For RAID5/RAID6
       new parity blocks are written.  For RAID1/RAID10, all but one block
       are overwritten with the content of that one block.

       A count of mismatches is recorded in the sysfs file md/mismatch_cnt.
       This is set to zero when a scrub starts and is incremented whenever a
       sector is found that is a mismatch.  md normally works in units much
       larger than a single sector and when it finds a mismatch, it does not
       determine exactly how many actual sectors were affected but simply
       adds the number of sectors in the IO unit that was used.  So a value
       of 128 could simply mean that a single 64KB check found an error (128
       x 512 bytes = 64KB).
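
       Since the count is in 512-byte sectors, converting it to bytes is
       simple arithmetic:

           SECTOR_SIZE = 512
           mismatch_cnt = 128                 # value read from md/mismatch_cnt
           print(mismatch_cnt * SECTOR_SIZE)  # 65536 bytes, i.e. one 64KB unit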

       If an array is created by mdadm with --assume-clean then a subsequent
       check could be expected to find some mismatches.

       On a truly clean RAID5 or RAID6 array, any mismatches should indicate
       a hardware problem at some level - software issues should never cause
       such a mismatch.

       However on RAID1 and RAID10 it is possible for software issues to
       cause a mismatch to be reported.  This does not necessarily mean that
       the data on the array is corrupted.  It could simply be that the
       system does not care what is stored on that part of the array - it is
       unused space.

       The most likely cause for an unexpected mismatch on RAID1 or RAID10
       occurs if a swap partition or swap file is stored on the array.

       When the swap subsystem wants to write a page of memory out, it flags
       the page as 'clean' in the memory manager and requests the swap device
       to write it out.  It is quite possible that the memory will be changed
       while the write-out is happening.  In that case the 'clean' flag will
       be found to be clear when the write completes and so the swap
       subsystem will simply forget that the swapout had been attempted, and
       will possibly choose a different page to write out.

       If the swap device was on RAID1 (or RAID10), then the data is sent
       from memory to a device twice (or more depending on the number of
       devices in the array).  Thus it is possible that the memory gets
       changed between the times it is sent, so different data can be written
       to the different devices in the array.  This will be detected by check
       as a mismatch.  However it does not reflect any corruption as the
       block where this mismatch occurs is being treated by the swap system
       as being empty, and the data will never be read from that block.

       It is conceivable for a similar situation to occur on non-swap files,
       though it is less likely.

       Thus the mismatch_cnt value cannot be interpreted very reliably on
       RAID1 or RAID10, especially when the device is used for swap.

   BITMAP WRITE-INTENT LOGGING
       From Linux 2.6.13, md supports a bitmap based write-intent log.  If
       configured, the bitmap is used to record which blocks of the array may
       be out of sync.  Before any write request is honoured, md will make
       sure that the corresponding bit in the log is set.  After a period of
       time with no writes to an area of the array, the corresponding bit
       will be cleared.

       This bitmap is used for two optimisations.

       Firstly, after an unclean shutdown, the resync process will consult
       the bitmap and only resync those blocks that correspond to bits in the
       bitmap that are set.  This can dramatically reduce resync time.

       Secondly, when a drive fails and is removed from the array, md stops
       clearing bits in the intent log.  If that same drive is re-added to
       the array, md will notice and will only recover the sections of the
       drive that are covered by bits in the intent log that are set.  This
       can allow a device to be temporarily removed and reinserted without
       causing an enormous recovery cost.

       The intent log can be stored in a file on a separate device, or it can
       be stored near the superblocks of an array which has superblocks.

       It is possible to add an intent log to an active array, or remove an
       intent log if one is present.

       In 2.6.13, intent bitmaps are only supported with RAID1.  Other levels
       with redundancy are supported from 2.6.15.

   WRITE-BEHIND
       From Linux 2.6.14, md supports WRITE-BEHIND on RAID1 arrays.

       This allows certain devices in the array to be flagged as
       write-mostly.  MD will only read from such devices if there is no
       other option.

       If a write-intent bitmap is also provided, write requests to
       write-mostly devices will be treated as write-behind requests and md
       will not wait for those writes to complete before reporting the write
       as complete to the filesystem.

       This allows for a RAID1 with WRITE-BEHIND to be used to mirror data
       over a slow link to a remote computer (providing the link isn't too
       slow).  The extra latency of the remote link will not slow down normal
       operations, but the remote system will still have a reasonably
       up-to-date copy of all data.

   RESTRIPING
       Restriping, also known as Reshaping, is the process of re-arranging
       the data stored in each stripe into a new layout.  This might involve
       changing the number of devices in the array (so the stripes are
       wider), changing the chunk size (so stripes are deeper or shallower),
       or changing the arrangement of data and parity (possibly changing the
       raid level, e.g. 1 to 5 or 5 to 6).

       As of Linux 2.6.17, md can reshape a raid5 array to have more devices.
       Other possibilities may follow in future kernels.

       During any restripe operation there is a 'critical section' during
       which live data is being overwritten on disk.  For the operation of
       increasing the number of drives in a raid5, this critical section
       covers the first few stripes (the number being the product of the old
       and new number of devices).  After this critical section is passed,
       data is only written to areas of the array which no longer hold live
       data; the live data has already been relocated away.

       md is not able to ensure data preservation if there is a crash (e.g.
       power failure) during the critical section.  If md is asked to start
       an array which failed during a critical section of restriping, it will
       fail to start the array.

       To deal with this possibility, a user-space program must

       · Disable writes to that section of the array (using the sysfs
         interface),

       · take a copy of the data somewhere (i.e. make a backup),

       · allow the process to continue and invalidate the backup and restore
         write access once the critical section is passed, and

       · provide for restoring the critical data before restarting the array
         after a system crash.

       mdadm versions from 2.4 do this for growing a RAID5 array.

       For operations that do not change the size of the array, like simply
       increasing chunk size, or converting RAID5 to RAID6 with one extra
       device, the entire process is the critical section.  In this case, the
       restripe will need to progress in stages, as a section is suspended,
       backed up, restriped, and released; this is not yet implemented.

   SYSFS INTERFACE
       Each block device appears as a directory in sysfs (which is usually
       mounted at /sys).  For MD devices, this directory will contain a
       subdirectory called md which contains various files for providing
       access to information about the array.

       This interface is documented more fully in the file
       Documentation/md.txt which is distributed with the kernel sources.
       That file should be consulted for full documentation.  The following
       are just a selection of attribute files that are available.

       md/sync_speed_min
              This value, if set, overrides the system-wide setting in
              /proc/sys/dev/raid/speed_limit_min for this array only.
              Writing the value "system" to this file will cause the
              system-wide setting to have effect.

       md/sync_speed_max
              This is the partner of md/sync_speed_min and overrides
              /proc/sys/dev/raid/speed_limit_max described below.
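
              A minimal Python sketch of overriding and then reverting the
              per-array minimum, assuming the array is /dev/md0 and the
              caller has sufficient privileges:

                  md_dir = "/sys/block/md0/md"

                  with open(md_dir + "/sync_speed_min", "w") as f:
                      f.write("50000")    # per-device goal of 50000 KiB/s

                  with open(md_dir + "/sync_speed_min", "w") as f:
                      f.write("system")   # revert to the system-wide setting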

       md/sync_action
              This can be used to monitor and control the resync/recovery
              process of MD.  In particular, writing "check" here will cause
              the array to read all data blocks and check that they are
              consistent (e.g. parity is correct, or all mirror replicas are
              the same).  Any discrepancies found are NOT corrected.

              A count of problems found will be stored in md/mismatch_cnt.

              Alternately, "repair" can be written which will cause the same
              check to be performed, but any errors will be corrected.

              Finally, "idle" can be written to stop the check/repair
              process.

       md/stripe_cache_size
              This is only available on RAID5 and RAID6.  It records the size
              (in pages per device) of the stripe cache which is used for
              synchronising all write operations to the array and all read
              operations if the array is degraded.  The default is 256.
              Valid values are 17 to 32768.  Increasing this number can
              increase performance in some situations, at some cost in system
              memory.  Note, setting this value too high can result in an
              "out of memory" condition for the system.

              memory_consumed = system_page_size * nr_disks *
              stripe_cache_size
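
              For example, with the system page size obtained at run time
              (an illustrative Python sketch of the formula above):

                  import resource

                  def stripe_cache_memory(nr_disks, stripe_cache_size,
                                          page_size=resource.getpagesize()):
                      # memory_consumed = page_size * nr_disks * cache size
                      return page_size * nr_disks * stripe_cache_size

                  # 256 stripes on a 4-disk RAID5 with 4K pages: 4 MiB.
                  print(stripe_cache_memory(4, 256, 4096))   # 4194304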

       md/preread_bypass_threshold
              This is only available on RAID5 and RAID6.  This variable sets
              the number of times MD will service a full-stripe-write before
              servicing a stripe that requires some "prereading".  For
              fairness this defaults to 1.  Valid values are 0 to
              stripe_cache_size.  Setting this to 0 maximizes
              sequential-write throughput at the cost of fairness to threads
              doing small or random writes.

   KERNEL PARAMETERS
       The md driver recognises several different kernel parameters.

       raid=noautodetect
              This will disable the normal detection of md arrays that
              happens at boot time.  If a drive is partitioned with MS-DOS
              style partitions, then if any of the 4 main partitions has a
              partition type of 0xFD, then that partition will normally be
              inspected to see if it is part of an MD array, and if any full
              arrays are found, they are started.  This kernel parameter
              disables this behaviour.

       raid=partitionable

       raid=part
              These are available in 2.6 and later kernels only.  They
              indicate that autodetected MD arrays should be created as
              partitionable arrays, with a different major device number to
              the original non-partitionable md arrays.  The device number is
              listed as mdp in /proc/devices.

       md_mod.start_ro=1

       /sys/module/md_mod/parameters/start_ro
              This tells md to start all arrays in read-only mode.  This is a
              soft read-only that will automatically switch to read-write on
              the first write request.  However until that write request,
              nothing is written to any device by md, and in particular, no
              resync or recovery operation is started.
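
              A minimal Python sketch of inspecting and enabling this
              setting at run time through the parameter file named above
              (requires sufficient privileges):

                  param = "/sys/module/md_mod/parameters/start_ro"

                  with open(param) as f:
                      print("start_ro is currently:", f.read().strip())

                  with open(param, "w") as f:
                      f.write("1")   # newly started arrays begin read-only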

       md_mod.start_dirty_degraded=1

       /sys/module/md_mod/parameters/start_dirty_degraded
              As mentioned above, md will not normally start a RAID4, RAID5,
              or RAID6 that is both dirty and degraded as this situation can
              imply hidden data loss.  This can be awkward if the root
              filesystem is affected.  Using this module parameter allows
              such arrays to be started at boot time.  It should be
              understood that there is a real (though small) risk of data
              corruption in this situation.

       md=n,dev,dev,...

       md=dn,dev,dev,...
              This tells the md driver to assemble /dev/mdn from the listed
              devices.  It is only necessary to start the device holding the
              root filesystem this way.  Other arrays are best started once
              the system is booted.

              In 2.6 kernels, the d immediately after the = indicates that a
              partitionable device (e.g. /dev/md/d0) should be created rather
              than the original non-partitionable device.

       md=n,l,c,i,dev...
              This tells the md driver to assemble a legacy RAID0 or LINEAR
              array without a superblock.  n gives the md device number, l
              gives the level, 0 for RAID0 or -1 for LINEAR, c gives the
              chunk size as a base-2 logarithm offset by twelve, so 0 means
              4K, 1 means 8K.  i is ignored (legacy support).
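
              The chunk size encoding can be decoded as follows
              (illustrative sketch):

                  def legacy_chunk_size(c):
                      # base-2 log offset by twelve: 0 -> 4K, 1 -> 8K, ...
                      return 1 << (c + 12)

                  print(legacy_chunk_size(0))   # 4096
                  print(legacy_chunk_size(1))   # 8192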

FILES
       /proc/mdstat
              Contains information about the status of currently running
              arrays.
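
              A minimal Python sketch of listing the active arrays reported
              there:

                  with open("/proc/mdstat") as f:
                      for line in f:
                          # array lines look like "md0 : active raid1 ..."
                          if line.startswith("md"):
                              print(line.rstrip())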

       /proc/sys/dev/raid/speed_limit_min
              A readable and writable file that reflects the current "goal"
              rebuild speed for times when non-rebuild activity is current on
              an array.  The speed is in Kibibytes per second, and is a
              per-device rate, not a per-array rate (which means that an
              array with more disks will shuffle more data for a given
              speed).  The default is 1000.

       /proc/sys/dev/raid/speed_limit_max
              A readable and writable file that reflects the current "goal"
              rebuild speed for times when no non-rebuild activity is current
              on an array.  The default is 200,000.

SEE ALSO
       mdadm(8), mkraid(8).

                                                                         MD(4)