LVMRAID(7)                                                          LVMRAID(7)
2
3
NAME
6 lvmraid — LVM RAID
7
DESCRIPTION
10 lvm(8) RAID is a way to create a Logical Volume (LV) that uses multiple
11 physical devices to improve performance or tolerate device failures.
12 In LVM, the physical devices are Physical Volumes (PVs) in a single
13 Volume Group (VG).
14
15 How LV data blocks are placed onto PVs is determined by the RAID level.
16 RAID levels are commonly referred to as 'raid' followed by a number,
17 e.g. raid1, raid5 or raid6. Selecting a RAID level involves making
18 tradeoffs among: physical device requirements, fault tolerance, and
19 performance. A description of the RAID levels can be found at
20 www.snia.org/sites/default/files/SNIA_DDF_Technical_Position_v2.0.pdf
21
22 LVM RAID uses both Device Mapper (DM) and Multiple Device (MD) drivers
23 from the Linux kernel. DM is used to create and manage visible LVM
24 devices, and MD is used to place data on physical devices.
25
26 LVM creates hidden LVs (dm devices) layered between the visible LV and
27 physical devices. LVs in the middle layers are called sub LVs. For
28 LVM raid, a sub LV pair to store data and metadata (raid superblock and
29 write intent bitmap) is created per raid image/leg (see lvs command
30 examples below).
31
USAGE
34 To create a RAID LV, use lvcreate and specify an LV type. The LV type
35 corresponds to a RAID level. The basic RAID levels that can be used
36 are: raid0, raid1, raid4, raid5, raid6, raid10.
37
38 lvcreate --type RaidLevel [OPTIONS] --name Name --size Size VG [PVs]
39
40 To display the LV type of an existing LV, run:
41
42 lvs -o name,segtype LV
43
44 (The LV type is also referred to as "segment type" or "segtype".)
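
For example, assuming a VG named vg with enough free PVs (the names here
are placeholders), a two-image raid1 LV could be created and its segment
type displayed with:

       # lvcreate --type raid1 --mirrors 1 --name lv --size 100g vg
       # lvs -o name,segtype vg/lv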
45
46 LVs can be created with the following types:
47
48
49 raid0
50
51
52 Also called striping, raid0 spreads LV data across multiple devices in
53 units of stripe size. This is used to increase performance. LV data
54 will be lost if any of the devices fail.
55
56 lvcreate --type raid0 [--stripes Number --stripesize Size] VG [PVs]
57
58
59 --stripes specifies the number of devices to spread the LV across.
60
61
62 --stripesize specifies the size of each stripe in kilobytes. This is
63 the amount of data that is written to one device before moving
64 to the next.
65
66 PVs specifies the devices to use. If not specified, lvm will choose
67 Number devices, one for each stripe based on the number of PVs avail‐
68 able or supplied.
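
As an illustration (the VG, LV and device names below are placeholders),
a two-device raid0 LV with a 64 KiB stripe size could be created with:

       # lvcreate --type raid0 --stripes 2 --stripesize 64k \
               --name lv0 --size 100g vg /dev/sda /dev/sdb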
69
70
71 raid1
72
73
74 Also called mirroring, raid1 uses multiple devices to duplicate LV
75 data. The LV data remains available if all but one of the devices
76 fail. The minimum number of devices (i.e. sub LV pairs) required is 2.
77
78 lvcreate --type raid1 [--mirrors Number] VG [PVs]
79
80
81 --mirrors specifies the number of mirror images in addition to the
82 original LV image, e.g. --mirrors 1 means there are two images
83 of the data, the original and one mirror image.
84
85 PVs specifies the devices to use. If not specified, lvm will choose
86 Number devices, one for each image.
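
For example (VG, LV and device names are placeholders), a three-image
raid1 LV placed on three specific PVs could be created with:

       # lvcreate --type raid1 --mirrors 2 --name lv1 --size 100g \
               vg /dev/sda /dev/sdb /dev/sdc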
87
88
89 raid4
90
91
92 raid4 is a form of striping that uses an extra, first device dedicated
93 to storing parity blocks. The LV data remains available if one device
94 fails. The parity is used to recalculate data that is lost from a sin‐
95 gle device. The minimum number of devices required is 3.
96
97 lvcreate --type raid4 [--stripes Number --stripesize Size] VG [PVs]
98
99
100 --stripes specifies the number of devices to use for LV data. This
101 does not include the extra device lvm adds for storing parity
102 blocks. A raid4 LV with Number stripes requires Number+1
103 devices. Number must be 2 or more.
104
105
106 --stripesize specifies the size of each stripe in kilobytes. This is
107 the amount of data that is written to one device before moving
108 to the next.
109
110 PVs specifies the devices to use. If not specified, lvm will choose
111 Number+1 separate devices.
112
113 raid4 is called non-rotating parity because the parity blocks are
114 always stored on the same device.
115
116
117 raid5
118
119
120 raid5 is a form of striping that uses an extra device for storing par‐
121 ity blocks. LV data and parity blocks are stored on each device, typi‐
122 cally in a rotating pattern for performance reasons. The LV data
123 remains available if one device fails. The parity is used to recalcu‐
124 late data that is lost from a single device. The minimum number of
125 devices required is 3 (unless converting from 2 legged raid1 to reshape
126 to more stripes; see reshaping).
127
128 lvcreate --type raid5 [--stripes Number --stripesize Size] VG [PVs]
129
130
131 --stripes specifies the number of devices to use for LV data. This
132 does not include the extra device lvm adds for storing parity
133 blocks. A raid5 LV with Number stripes requires Number+1
134 devices. Number must be 2 or more.
135
136
137 --stripesize specifies the size of each stripe in kilobytes. This is
138 the amount of data that is written to one device before moving
139 to the next.
140
141 PVs specifies the devices to use. If not specified, lvm will choose
142 Number+1 separate devices.
143
144 raid5 is called rotating parity because the parity blocks are placed on
145 different devices in a round-robin sequence. There are variations of
146 raid5 with different algorithms for placing the parity blocks. The
147 default variant is raid5_ls (raid5 left symmetric, which is a rotating
148 parity 0 with data restart.) See RAID5 variants below.
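
As a sketch (VG and LV names are placeholders, and at least four free
PVs are assumed), a raid5 LV with three data stripes could be created
with:

       # lvcreate --type raid5 --stripes 3 --stripesize 64k \
               --name lv5 --size 100g vg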
149
150
151 raid6
152
153
154 raid6 is a form of striping like raid5, but uses two extra devices for
155 parity blocks. LV data and parity blocks are stored on each device,
typically in a rotating pattern for performance reasons.  The LV data
157 remains available if up to two devices fail. The parity is used to
158 recalculate data that is lost from one or two devices. The minimum
159 number of devices required is 5.
160
161 lvcreate --type raid6 [--stripes Number --stripesize Size] VG [PVs]
162
163
164 --stripes specifies the number of devices to use for LV data. This
165 does not include the extra two devices lvm adds for storing par‐
166 ity blocks. A raid6 LV with Number stripes requires Number+2
167 devices. Number must be 3 or more.
168
169
170 --stripesize specifies the size of each stripe in kilobytes. This is
171 the amount of data that is written to one device before moving
172 to the next.
173
174 PVs specifies the devices to use. If not specified, lvm will choose
175 Number+2 separate devices.
176
177 Like raid5, there are variations of raid6 with different algorithms for
178 placing the parity blocks. The default variant is raid6_zr (raid6 zero
179 restart, aka left symmetric, which is a rotating parity 0 with data
180 restart.) See RAID6 variants below.
181
182
183 raid10
184
185
186 raid10 is a combination of raid1 and raid0, striping data across mir‐
187 rored devices. LV data remains available if one or more devices
remain in each mirror set.  The minimum number of devices required is
189 4.
190
191 lvcreate --type raid10
192 [--mirrors NumberMirrors]
193 [--stripes NumberStripes --stripesize Size]
194 VG [PVs]
195
196
197 --mirrors specifies the number of mirror images within each stripe.
198 e.g. --mirrors 1 means there are two images of the data, the
199 original and one mirror image.
200
201
202 --stripes specifies the total number of devices to use in all raid1
203 images (not the number of raid1 devices to spread the LV across,
204 even though that is the effective result). The number of
205 devices in each raid1 mirror will be NumberStripes/(NumberMir‐
206 rors+1), e.g. mirrors 1 and stripes 4 will stripe data across
two raid1 mirrors, where each mirror consists of 2 devices.
208
209
210 --stripesize specifies the size of each stripe in kilobytes. This is
211 the amount of data that is written to one device before moving
212 to the next.
213
214 PVs specifies the devices to use. If not specified, lvm will choose
215 the necessary devices. Devices are used to create mirrors in the order
216 listed, e.g. for mirrors 1, stripes 2, listing PV1 PV2 PV3 PV4 results
217 in mirrors PV1/PV2 and PV3/PV4.
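
For example, with placeholder names, a raid10 LV striping across two
2-device raid1 mirrors (paired as described above) could be created
with:

       # lvcreate --type raid10 --mirrors 1 --stripes 2 --name lv10 \
               --size 100g vg /dev/sda /dev/sdb /dev/sdc /dev/sdd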
218
RAID10 is not mirroring on top of stripes; that arrangement would be
RAID01, which is less tolerant of device failures.
221
222
Synchronization
225 Synchronization is the process that makes all the devices in a RAID LV
226 consistent with each other.
227
228 In a RAID1 LV, all mirror images should have the same data. When a new
229 mirror image is added, or a mirror image is missing data, then images
230 need to be synchronized. Data blocks are copied from an existing image
231 to a new or outdated image to make them match.
232
233 In a RAID 4/5/6 LV, parity blocks and data blocks should match based on
234 the parity calculation. When the devices in a RAID LV change, the data
235 and parity blocks can become inconsistent and need to be synchronized.
236 Correct blocks are read, parity is calculated, and recalculated blocks
237 are written.
238
239 The RAID implementation keeps track of which parts of a RAID LV are
240 synchronized. When a RAID LV is first created and activated the first
241 synchronization is called initialization. A pointer stored in the raid
242 metadata keeps track of the initialization process thus allowing it to
be restarted after a deactivation of the RaidLV or a crash.  Any write
to the RaidLV dirties the respective region of the write intent bitmap,
which allows for fast recovery of the regions after a crash.  Without
246 this, the entire LV would need to be synchronized every time it was
247 activated.
248
249 Automatic synchronization happens when a RAID LV is activated, but it
250 is usually partial because the bitmaps reduce the areas that are
251 checked. A full sync becomes necessary when devices in the RAID LV are
252 replaced.
253
254 The synchronization status of a RAID LV is reported by the following
255 command, where "Cpy%Sync" = "100%" means sync is complete:
256
257 lvs -a -o name,sync_percent
258
259
260
261 Scrubbing
262 Scrubbing is a full scan of the RAID LV requested by a user. Scrubbing
263 can find problems that are missed by partial synchronization.
264
265 Scrubbing assumes that RAID metadata and bitmaps may be inaccurate, so
266 it verifies all RAID metadata, LV data, and parity blocks. Scrubbing
267 can find inconsistencies caused by hardware errors or degradation.
268 These kinds of problems may be undetected by automatic synchronization
269 which excludes areas outside of the RAID write-intent bitmap.
270
271 The command to scrub a RAID LV can operate in two different modes:
272
273 lvchange --syncaction check|repair LV
274
275
276 check Check mode is read-only and only detects inconsistent areas in
the RAID LV; it does not correct them.
278
279
280 repair Repair mode checks and writes corrected blocks to synchronize
281 any inconsistent areas.
282
283
284 Scrubbing can consume a lot of bandwidth and slow down application I/O
285 on the RAID LV. To control the I/O rate used for scrubbing, use:
286
287
288 --maxrecoveryrate Size[k|UNIT]
289 Sets the maximum recovery rate for a RAID LV. Size is specified
290 as an amount per second for each device in the array. If no
291 suffix is given, then KiB/sec/device is used. Setting the
292 recovery rate to 0 means it will be unbounded.
293
294
295 --minrecoveryrate Size[k|UNIT]
296 Sets the minimum recovery rate for a RAID LV. Size is specified
297 as an amount per second for each device in the array. If no
298 suffix is given, then KiB/sec/device is used. Setting the
299 recovery rate to 0 means it will be unbounded.
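
For example, to cap scrubbing at roughly 128 MiB/sec per device and
then start a read-only check (the LV name vg/lv is a placeholder):

       # lvchange --maxrecoveryrate 128M vg/lv
       # lvchange --syncaction check vg/lv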
300
301
302 To display the current scrubbing in progress on an LV, including the
303 syncaction mode and percent complete, run:
304
305 lvs -a -o name,raid_sync_action,sync_percent
306
307 After scrubbing is complete, to display the number of inconsistent
308 blocks found, run:
309
310 lvs -o name,raid_mismatch_count
311
312 Also, if mismatches were found, the lvs attr field will display the
313 letter "m" (mismatch) in the 9th position, e.g.
314
315 # lvs -o name,vgname,segtype,attr vg/lv
316 LV VG Type Attr
317 lv vg raid1 Rwi-a-r-m-
318
319
320
321 Scrubbing Limitations
322 The check mode can only report the number of inconsistent blocks, it
323 cannot report which blocks are inconsistent. This makes it impossible
324 to know which device has errors, or if the errors affect file system
325 data, metadata or nothing at all.
326
327 The repair mode can make the RAID LV data consistent, but it does not
328 know which data is correct. The result may be consistent but incorrect
329 data. When two different blocks of data must be made consistent, it
330 chooses the block from the device that would be used during RAID
initialization.  However, if the PV holding corrupt data is known,
332 lvchange --rebuild can be used in place of scrubbing to reconstruct the
333 data on the bad device.
334
335 Future developments might include:
336
337 Allowing a user to choose the correct version of data during repair.
338
339 Using a majority of devices to determine the correct version of data to
340 use in a 3-way RAID1 or RAID6 LV.
341
342 Using a checksumming device to pin-point when and where an error
343 occurs, allowing it to be rewritten.
344
345
SubLVs
348 An LV is often a combination of other hidden LVs called SubLVs. The
349 SubLVs either use physical devices, or are built from other SubLVs
350 themselves. SubLVs hold LV data blocks, RAID parity blocks, and RAID
351 metadata. SubLVs are generally hidden, so the lvs -a option is
352 required to display them:
353
354 lvs -a -o name,segtype,devices
355
356 SubLV names begin with the visible LV name, and have an automatic suf‐
fix indicating their role:
358
359
360 · SubLVs holding LV data or parity blocks have the suffix _rimage_#.
361 These SubLVs are sometimes referred to as DataLVs.
362
363
364 · SubLVs holding RAID metadata have the suffix _rmeta_#. RAID meta‐
365 data includes superblock information, RAID type, bitmap, and device
366 health information. These SubLVs are sometimes referred to as Met‐
367 aLVs.
368
369
370 SubLVs are an internal implementation detail of LVM. The way they are
371 used, constructed and named may change.
372
373 The following examples show the SubLV arrangement for each of the basic
374 RAID LV types, using the fewest number of devices allowed for each.
375
376
377 Examples
378 raid0
379 Each rimage SubLV holds a portion of LV data. No parity is used. No
380 RAID metadata is used.
381
382 # lvcreate --type raid0 --stripes 2 --name lvr0 ...
383
384 # lvs -a -o name,segtype,devices
385 lvr0 raid0 lvr0_rimage_0(0),lvr0_rimage_1(0)
386 [lvr0_rimage_0] linear /dev/sda(...)
387 [lvr0_rimage_1] linear /dev/sdb(...)
388
389 raid1
390 Each rimage SubLV holds a complete copy of LV data. No parity is used.
391 Each rmeta SubLV holds RAID metadata.
392
393 # lvcreate --type raid1 --mirrors 1 --name lvr1 ...
394
395 # lvs -a -o name,segtype,devices
396 lvr1 raid1 lvr1_rimage_0(0),lvr1_rimage_1(0)
397 [lvr1_rimage_0] linear /dev/sda(...)
398 [lvr1_rimage_1] linear /dev/sdb(...)
399 [lvr1_rmeta_0] linear /dev/sda(...)
400 [lvr1_rmeta_1] linear /dev/sdb(...)
401
402 raid4
403 At least three rimage SubLVs each hold a portion of LV data and one
404 rimage SubLV holds parity. Each rmeta SubLV holds RAID metadata.
405
406 # lvcreate --type raid4 --stripes 2 --name lvr4 ...
407
408 # lvs -a -o name,segtype,devices
409 lvr4 raid4 lvr4_rimage_0(0),\
410 lvr4_rimage_1(0),\
411 lvr4_rimage_2(0)
412 [lvr4_rimage_0] linear /dev/sda(...)
413 [lvr4_rimage_1] linear /dev/sdb(...)
414 [lvr4_rimage_2] linear /dev/sdc(...)
415 [lvr4_rmeta_0] linear /dev/sda(...)
416 [lvr4_rmeta_1] linear /dev/sdb(...)
417 [lvr4_rmeta_2] linear /dev/sdc(...)
418
419 raid5
At least three rimage SubLVs each typically hold a portion of LV data
and parity (see the section on raid5).  Each rmeta SubLV holds RAID
metadata.
422
423 # lvcreate --type raid5 --stripes 2 --name lvr5 ...
424
425 # lvs -a -o name,segtype,devices
426 lvr5 raid5 lvr5_rimage_0(0),\
427 lvr5_rimage_1(0),\
428 lvr5_rimage_2(0)
429 [lvr5_rimage_0] linear /dev/sda(...)
430 [lvr5_rimage_1] linear /dev/sdb(...)
431 [lvr5_rimage_2] linear /dev/sdc(...)
432 [lvr5_rmeta_0] linear /dev/sda(...)
433 [lvr5_rmeta_1] linear /dev/sdb(...)
434 [lvr5_rmeta_2] linear /dev/sdc(...)
435
436 raid6
At least five rimage SubLVs each typically hold a portion of LV data
and parity (see the section on raid6).  Each rmeta SubLV holds RAID
metadata.
440
441 # lvcreate --type raid6 --stripes 3 --name lvr6
442
443 # lvs -a -o name,segtype,devices
444 lvr6 raid6 lvr6_rimage_0(0),\
445 lvr6_rimage_1(0),\
446 lvr6_rimage_2(0),\
447 lvr6_rimage_3(0),\
448 lvr6_rimage_4(0),\
449 lvr6_rimage_5(0)
450 [lvr6_rimage_0] linear /dev/sda(...)
451 [lvr6_rimage_1] linear /dev/sdb(...)
452 [lvr6_rimage_2] linear /dev/sdc(...)
453 [lvr6_rimage_3] linear /dev/sdd(...)
454 [lvr6_rimage_4] linear /dev/sde(...)
455 [lvr6_rimage_5] linear /dev/sdf(...)
456 [lvr6_rmeta_0] linear /dev/sda(...)
457 [lvr6_rmeta_1] linear /dev/sdb(...)
458 [lvr6_rmeta_2] linear /dev/sdc(...)
459 [lvr6_rmeta_3] linear /dev/sdd(...)
460 [lvr6_rmeta_4] linear /dev/sde(...)
461 [lvr6_rmeta_5] linear /dev/sdf(...)
462
463 raid10
464 At least four rimage SubLVs each hold a portion of LV data. No parity
465 is used. Each rmeta SubLV holds RAID metadata.
466
467 # lvcreate --type raid10 --stripes 2 --mirrors 1 --name lvr10
468
469 # lvs -a -o name,segtype,devices
470 lvr10 raid10 lvr10_rimage_0(0),\
471 lvr10_rimage_1(0),\
472 lvr10_rimage_2(0),\
473 lvr10_rimage_3(0)
474 [lvr10_rimage_0] linear /dev/sda(...)
475 [lvr10_rimage_1] linear /dev/sdb(...)
476 [lvr10_rimage_2] linear /dev/sdc(...)
477 [lvr10_rimage_3] linear /dev/sdd(...)
478 [lvr10_rmeta_0] linear /dev/sda(...)
479 [lvr10_rmeta_1] linear /dev/sdb(...)
480 [lvr10_rmeta_2] linear /dev/sdc(...)
481 [lvr10_rmeta_3] linear /dev/sdd(...)
482
483
Device Failure
486 Physical devices in a RAID LV can fail or be lost for multiple reasons.
A device could be permanently failed, or only temporarily disconnected.
The purpose of RAID LVs (levels 1 and higher) is to con‐
489 tinue operating in a degraded mode, without losing LV data, even after
490 a device fails. The number of devices that can fail without the loss
491 of LV data depends on the RAID level:
492
493
494 · RAID0 (striped) LVs cannot tolerate losing any devices. LV data
495 will be lost if any devices fail.
496
497
498 · RAID1 LVs can tolerate losing all but one device without LV data
499 loss.
500
501
502 · RAID4 and RAID5 LVs can tolerate losing one device without LV data
503 loss.
504
505
506 · RAID6 LVs can tolerate losing two devices without LV data loss.
507
508
509 · RAID10 is variable, and depends on which devices are lost. It
stripes across multiple mirror groups with raid1 layout, so it can
511 tolerate losing all but one device in each of these groups without
512 LV data loss.
513
514
515 If a RAID LV is missing devices, or has other device-related problems,
516 lvs reports this in the health_status (and attr) fields:
517
518 lvs -o name,lv_health_status
519
520 partial
521 Devices are missing from the LV. This is also indicated by the letter
522 "p" (partial) in the 9th position of the lvs attr field.
523
524 refresh needed
525 A device was temporarily missing but has returned. The LV needs to be
526 refreshed to use the device again (which will usually require partial
527 synchronization). This is also indicated by the letter "r" (refresh
528 needed) in the 9th position of the lvs attr field. See Refreshing an
529 LV. This could also indicate a problem with the device, in which case
it should be replaced; see Replacing Devices.
531
532 mismatches exist
533 See Scrubbing.
534
535 Most commands will also print a warning if a device is missing, e.g.
536 WARNING: Device for PV uItL3Z-wBME-DQy0-... not found or rejected ...
537
538 This warning will go away if the device returns or is removed from the
539 VG (see vgreduce --removemissing).
540
541
542
543 Activating an LV with missing devices
544 A RAID LV that is missing devices may be activated or not, depending on
545 the "activation mode" used in lvchange:
546
547 lvchange -ay --activationmode complete|degraded|partial LV
548
549 complete
550 The LV is only activated if all devices are present.
551
552 degraded
553 The LV is activated with missing devices if the RAID level can tolerate
554 the number of missing devices without LV data loss.
555
556 partial
557 The LV is always activated, even if portions of the LV data are missing
558 because of the missing device(s). This should only be used to perform
559 extreme recovery or repair operations.
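
For example, to activate a RAID LV in degraded mode (the LV name is a
placeholder):

       # lvchange -ay --activationmode degraded vg/lv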
560
561 lvm.conf(5) activation/activation_mode
562 controls the activation mode when not specified by the command.
563
564 The default value is printed by:
565 lvmconfig --type default activation/activation_mode
566
567
568 Replacing Devices
569 Devices in a RAID LV can be replaced by other devices in the VG. When
570 replacing devices that are no longer visible on the system, use lvcon‐
571 vert --repair. When replacing devices that are still visible, use
572 lvconvert --replace. The repair command will attempt to restore the
573 same number of data LVs that were previously in the LV. The replace
574 option can be repeated to replace multiple PVs. Replacement devices
575 can be optionally listed with either option.
576
577 lvconvert --repair LV [NewPVs]
578
579 lvconvert --replace OldPV LV [NewPV]
580
581 lvconvert --replace OldPV1 --replace OldPV2 LV [NewPVs]
582
583 New devices require synchronization with existing devices, see Synchro‐
584 nization.
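
For example, using placeholder device and LV names, a missing device
could be repaired onto a new PV, or a still visible device moved to a
new PV, with:

       # lvconvert --repair vg/lv /dev/sdf
       # lvconvert --replace /dev/sdb vg/lv /dev/sdf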
585
586
587 Refreshing an LV
588 Refreshing a RAID LV clears any transient device failures (device was
589 temporarily disconnected) and returns the LV to its fully redundant
590 mode. Restoring a device will usually require at least partial syn‐
591 chronization (see Synchronization). Failure to clear a transient fail‐
592 ure results in the RAID LV operating in degraded mode until it is reac‐
593 tivated. Use the lvchange command to refresh an LV:
594
595 lvchange --refresh LV
596
597 # lvs -o name,vgname,segtype,attr,size vg
598 LV VG Type Attr LSize
599 lv vg raid1 Rwi-a-r-r- 100.00g
600
601 # lvchange --refresh vg/lv
602
603 # lvs -o name,vgname,segtype,attr,size vg
604 LV VG Type Attr LSize
605 lv vg raid1 Rwi-a-r--- 100.00g
606
607
608 Automatic repair
609 If a device in a RAID LV fails, device-mapper in the kernel notifies
610 the dmeventd(8) monitoring process (see Monitoring). dmeventd can be
611 configured to automatically respond using:
612
613 lvm.conf(5) activation/raid_fault_policy
614
615 Possible settings are:
616
617 warn
618 A warning is added to the system log indicating that a device has
619 failed in the RAID LV. It is left to the user to repair the LV, e.g.
620 replace failed devices.
621
622 allocate
623 dmeventd automatically attempts to repair the LV using spare devices in
624 the VG. Note that even a transient failure is treated as a permanent
625 failure under this setting. A new device is allocated and full syn‐
626 chronization is started.
627
628 The specific command run by dmeventd to warn or repair is:
629 lvconvert --repair --use-policies LV
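
A minimal lvm.conf(5) fragment enabling automatic repair might look
like the following (the default policy is normally "warn"):

       activation {
               raid_fault_policy = "allocate"
       }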
630
631
632
633 Corrupted Data
634 Data on a device can be corrupted due to hardware errors without the
635 device ever being disconnected or there being any fault in the soft‐
636 ware. This should be rare, and can be detected (see Scrubbing).
637
638
639
640 Rebuild specific PVs
641 If specific PVs in a RAID LV are known to have corrupt data, the data
642 on those PVs can be reconstructed with:
643
644 lvchange --rebuild PV LV
645
646 The rebuild option can be repeated with different PVs to replace the
647 data on multiple PVs.
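
For example, if /dev/sdb (a placeholder name) is known to hold corrupt
data belonging to vg/lv:

       # lvchange --rebuild /dev/sdb vg/lv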
648
649
Monitoring
652 When a RAID LV is activated the dmeventd(8) process is started to moni‐
653 tor the health of the LV. Various events detected in the kernel can
654 cause a notification to be sent from device-mapper to the monitoring
655 process, including device failures and synchronization completion (e.g.
656 for initialization or scrubbing).
657
658 The LVM configuration file contains options that affect how the moni‐
659 toring process will respond to failure events (e.g. raid_fault_policy).
660 It is possible to turn on and off monitoring with lvchange, but it is
661 not recommended to turn this off unless you have a thorough knowledge
662 of the consequences.
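
For example, monitoring of an LV (the name is a placeholder) can be
turned off and back on with:

       # lvchange --monitor n vg/lv
       # lvchange --monitor y vg/lv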
663
664
Configuration Options
667 There are a number of options in the LVM configuration file that affect
668 the behavior of RAID LVs. The tunable options are listed below. A
669 detailed description of each can be found in the LVM configuration file
670 itself.
671 mirror_segtype_default
672 raid10_segtype_default
673 raid_region_size
674 raid_fault_policy
675 activation_mode
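
The default value of any of these options can be displayed with
lvmconfig, e.g.:

       lvmconfig --type default activation/raid_fault_policy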
676
677
RAID1 Tuning
680 A RAID1 LV can be tuned so that certain devices are avoided for reading
681 while all devices are still written to.
682
683 lvchange --[raid]writemostly PV[:y|n|t] LV
684
685 The specified device will be marked as "write mostly", which means that
686 reading from this device will be avoided, and other devices will be
687 preferred for reading (unless no other devices are available.) This
688 minimizes the I/O to the specified device.
689
690 If the PV name has no suffix, the write mostly attribute is set. If
691 the PV name has the suffix :n, the write mostly attribute is cleared,
692 and the suffix :t toggles the current setting.
693
694 The write mostly option can be repeated on the command line to change
695 multiple devices at once.
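
For example, with placeholder device and LV names, the attribute can
be set on one device and cleared again later:

       # lvchange --writemostly /dev/sdb vg/lv
       # lvchange --writemostly /dev/sdb:n vg/lv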
696
697 To report the current write mostly setting, the lvs attr field will
698 show the letter "w" in the 9th position when write mostly is set:
699
700 lvs -a -o name,attr
701
702 When a device is marked write mostly, the maximum number of outstanding
703 writes to that device can be configured. Once the maximum is reached,
704 further writes become synchronous. When synchronous, a write to the LV
705 will not complete until writes to all the mirror images are complete.
706
707 lvchange --[raid]writebehind Number LV
708
709 To report the current write behind setting, run:
710
711 lvs -o name,raid_write_behind
712
713 When write behind is not configured, or set to 0, all LV writes are
714 synchronous.
715
716
RAID Takeover
719 RAID takeover is converting a RAID LV from one RAID level to another,
720 e.g. raid5 to raid6. Changing the RAID level is usually done to
721 increase or decrease resilience to device failures or to restripe LVs.
722 This is done using lvconvert and specifying the new RAID level as the
723 LV type:
724
725 lvconvert --type RaidLevel LV [PVs]
726
727 The most common and recommended RAID takeover conversions are:
728
729
730 linear to raid1
731 Linear is a single image of LV data, and converting it to raid1
732 adds a mirror image which is a direct copy of the original lin‐
733 ear image.
734
735
736 striped/raid0 to raid4/5/6
737 Adding parity devices to a striped volume results in raid4/5/6.
738
739
740 Unnatural conversions that are not recommended include converting
741 between striped and non-striped types. This is because file systems
742 often optimize I/O patterns based on device striping values. If those
743 values change, it can decrease performance.
744
745 Converting to a higher RAID level requires allocating new SubLVs to
746 hold RAID metadata, and new SubLVs to hold parity blocks for LV data.
747 Converting to a lower RAID level removes the SubLVs that are no longer
748 needed.
749
750 Conversion often requires full synchronization of the RAID LV (see Syn‐
751 chronization). Converting to RAID1 requires copying all LV data blocks
752 to N new images on new devices. Converting to a parity RAID level
753 requires reading all LV data blocks, calculating parity, and writing
754 the new parity blocks. Synchronization can take a long time depending
on the throughput of the devices used and the size of the RaidLV.  It
756 can degrade performance (rate controls also apply to conversion; see
757 --minrecoveryrate and --maxrecoveryrate.)
758
759 Warning: though it is possible to create striped LVs with up to 128
760 stripes, a maximum of 64 stripes can be converted to raid0, 63 to
761 raid4/5 and 62 to raid6 because of the added parity SubLVs. A striped
762 LV with a maximum of 32 stripes can be converted to raid10.
763
764
765 The following takeover conversions are currently possible:
766
767 · between striped and raid0.
768
769 · between linear and raid1.
770
771 · between mirror and raid1.
772
773 · between raid1 with two images and raid4/5.
774
775 · between striped/raid0 and raid4.
776
777 · between striped/raid0 and raid5.
778
779 · between striped/raid0 and raid6.
780
781 · between raid4 and raid5.
782
783 · between raid4/raid5 and raid6.
784
785 · between striped/raid0 and raid10.
786
787 · between striped and raid4.
788
789
790 Indirect conversions
791 Converting from one raid level to another may require multiple steps,
792 converting first to intermediate raid levels.
793
794 linear to raid6
795
796 To convert an LV from linear to raid6:
797 1. convert to raid1 with two images
798 2. convert to raid5 (internally raid5_ls) with two images
799 3. convert to raid5 with three or more stripes (reshape)
800 4. convert to raid6 (internally raid6_ls_6)
801 5. convert to raid6 (internally raid6_zr, reshape)
802
803 The commands to perform the steps above are:
804 1. lvconvert --type raid1 --mirrors 1 LV
805 2. lvconvert --type raid5 LV
806 3. lvconvert --stripes 3 LV
807 4. lvconvert --type raid6 LV
808 5. lvconvert --type raid6 LV
809
810 The final conversion from raid6_ls_6 to raid6_zr is done to avoid the
811 potential write/recovery performance reduction in raid6_ls_6 because of
812 the dedicated parity device. raid6_zr rotates data and parity blocks
813 to avoid this.
814
815 linear to striped
816
817 To convert an LV from linear to striped:
818 1. convert to raid1 with two images
819 2. convert to raid5_n
820 3. convert to raid5_n with five 128k stripes (reshape)
821 4. convert raid5_n to striped
822
823 The commands to perform the steps above are:
824 1. lvconvert --type raid1 --mirrors 1 LV
825 2. lvconvert --type raid5_n LV
826 3. lvconvert --stripes 5 --stripesize 128k LV
827 4. lvconvert --type striped LV
828
829 The raid5_n type in step 2 is used because it has dedicated parity Sub‐
LVs at the end, and can be converted to striped directly.  The number
of data stripes is increased in step 3, which grows the LV size by a
factor of five.  After con‐
833 version, this extra space can be reduced (or used to grow the file sys‐
834 tem using the LV).
835
836 Reversing these steps will convert a striped LV to linear.
837
838 raid6 to striped
839
840 To convert an LV from raid6_nr to striped:
841 1. convert to raid6_n_6
842 2. convert to striped
843
844 The commands to perform the steps above are:
845 1. lvconvert --type raid6_n_6 LV
846 2. lvconvert --type striped LV
847
848
849
850 Examples
851 Converting an LV from linear to raid1.
852
853 # lvs -a -o name,segtype,size vg
854 LV Type LSize
855 lv linear 300.00g
856
857 # lvconvert --type raid1 --mirrors 1 vg/lv
858
859 # lvs -a -o name,segtype,size vg
860 LV Type LSize
861 lv raid1 300.00g
862 [lv_rimage_0] linear 300.00g
863 [lv_rimage_1] linear 300.00g
864 [lv_rmeta_0] linear 3.00m
865 [lv_rmeta_1] linear 3.00m
866
867 Converting an LV from mirror to raid1.
868
869 # lvs -a -o name,segtype,size vg
870 LV Type LSize
871 lv mirror 100.00g
872 [lv_mimage_0] linear 100.00g
873 [lv_mimage_1] linear 100.00g
874 [lv_mlog] linear 3.00m
875
876 # lvconvert --type raid1 vg/lv
877
878 # lvs -a -o name,segtype,size vg
879 LV Type LSize
880 lv raid1 100.00g
881 [lv_rimage_0] linear 100.00g
882 [lv_rimage_1] linear 100.00g
883 [lv_rmeta_0] linear 3.00m
884 [lv_rmeta_1] linear 3.00m
885
886 Converting an LV from linear to raid1 (with 3 images).
887
888 # lvconvert --type raid1 --mirrors 2 vg/lv
889
890 Converting an LV from striped (with 4 stripes) to raid6_n_6.
891
892 # lvcreate --stripes 4 -L64M -n lv vg
893
894 # lvconvert --type raid6 vg/lv
895
896 # lvs -a -o lv_name,segtype,sync_percent,data_copies
897 LV Type Cpy%Sync #Cpy
898 lv raid6_n_6 100.00 3
899 [lv_rimage_0] linear
900 [lv_rimage_1] linear
901 [lv_rimage_2] linear
902 [lv_rimage_3] linear
903 [lv_rimage_4] linear
904 [lv_rimage_5] linear
905 [lv_rmeta_0] linear
906 [lv_rmeta_1] linear
907 [lv_rmeta_2] linear
908 [lv_rmeta_3] linear
909 [lv_rmeta_4] linear
910 [lv_rmeta_5] linear
911
This conversion begins by allocating MetaLVs (rmeta_#) for each of the
913 existing stripe devices. It then creates 2 additional MetaLV/DataLV
914 pairs (rmeta_#/rimage_#) for dedicated raid6 parity.
915
916 If rotating data/parity is required, such as with raid6_nr, it must be
917 done by reshaping (see below).
918
919
RAID Reshaping
922 RAID reshaping is changing attributes of a RAID LV while keeping the
923 same RAID level. This includes changing RAID layout, stripe size, or
924 number of stripes.
925
926 When changing the RAID layout or stripe size, no new SubLVs (MetaLVs or
927 DataLVs) need to be allocated, but DataLVs are extended by a small
928 amount (typically 1 extent). The extra space allows blocks in a stripe
929 to be updated safely, and not be corrupted in case of a crash. If a
930 crash occurs, reshaping can just be restarted.
931
932 (If blocks in a stripe were updated in place, a crash could leave them
933 partially updated and corrupted. Instead, an existing stripe is qui‐
934 esced, read, changed in layout, and the new stripe written to free
935 space. Once that is done, the new stripe is unquiesced and used.)
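
As a sketch (the LV name is a placeholder, and the LV is assumed to use
a reshape-capable raid type), a reshape that only changes the stripe
size, keeping the RAID level and number of stripes, could look like:

       # lvconvert --stripesize 128k vg/lv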
936
937
938 Examples
939 (Command output shown in examples may change.)
940
941 Converting raid6_n_6 to raid6_nr with rotating data/parity.
942
943 This conversion naturally follows a previous conversion from
944 striped/raid0 to raid6_n_6 (shown above). It completes the transition
945 to a more traditional RAID6.
946
947 # lvs -o lv_name,segtype,sync_percent,data_copies
948 LV Type Cpy%Sync #Cpy
949 lv raid6_n_6 100.00 3
950 [lv_rimage_0] linear
951 [lv_rimage_1] linear
952 [lv_rimage_2] linear
953 [lv_rimage_3] linear
954 [lv_rimage_4] linear
955 [lv_rimage_5] linear
956 [lv_rmeta_0] linear
957 [lv_rmeta_1] linear
958 [lv_rmeta_2] linear
959 [lv_rmeta_3] linear
960 [lv_rmeta_4] linear
961 [lv_rmeta_5] linear
962
963 # lvconvert --type raid6_nr vg/lv
964
965 # lvs -a -o lv_name,segtype,sync_percent,data_copies
966 LV Type Cpy%Sync #Cpy
967 lv raid6_nr 100.00 3
968 [lv_rimage_0] linear
969 [lv_rimage_0] linear
970 [lv_rimage_1] linear
971 [lv_rimage_1] linear
972 [lv_rimage_2] linear
973 [lv_rimage_2] linear
974 [lv_rimage_3] linear
975 [lv_rimage_3] linear
976 [lv_rimage_4] linear
977 [lv_rimage_5] linear
978 [lv_rmeta_0] linear
979 [lv_rmeta_1] linear
980 [lv_rmeta_2] linear
981 [lv_rmeta_3] linear
982 [lv_rmeta_4] linear
983 [lv_rmeta_5] linear
984
The DataLVs are larger (an additional segment in each), which provides
986 space for out-of-place reshaping. The result is:
987
988 # lvs -a -o lv_name,segtype,seg_pe_ranges,dataoffset
989 LV Type PE Ranges DOff
990 lv raid6_nr lv_rimage_0:0-32 \
991 lv_rimage_1:0-32 \
992 lv_rimage_2:0-32 \
993 lv_rimage_3:0-32
994 [lv_rimage_0] linear /dev/sda:0-31 2048
995 [lv_rimage_0] linear /dev/sda:33-33
996 [lv_rimage_1] linear /dev/sdaa:0-31 2048
997 [lv_rimage_1] linear /dev/sdaa:33-33
998 [lv_rimage_2] linear /dev/sdab:1-33 2048
999 [lv_rimage_3] linear /dev/sdac:1-33 2048
1000 [lv_rmeta_0] linear /dev/sda:32-32
1001 [lv_rmeta_1] linear /dev/sdaa:32-32
1002 [lv_rmeta_2] linear /dev/sdab:0-0
1003 [lv_rmeta_3] linear /dev/sdac:0-0
1004
1005 All segments with PE ranges '33-33' provide the out-of-place reshape
1006 space. The dataoffset column shows that the data was moved from ini‐
1007 tial offset 0 to 2048 sectors on each component DataLV.
1008
1009 For performance reasons the raid6_nr RaidLV can be restriped. Convert
1010 it from 3-way striped to 5-way-striped.
1011
1012 # lvconvert --stripes 5 vg/lv
1013 Using default stripesize 64.00 KiB.
1014 WARNING: Adding stripes to active logical volume vg/lv will \
1015 grow it from 99 to 165 extents!
1016 Run "lvresize -l99 vg/lv" to shrink it or use the additional \
1017 capacity.
1018 Logical volume vg/lv successfully converted.
1019
1020 # lvs vg/lv
1021 LV VG Attr LSize Cpy%Sync
1022 lv vg rwi-a-r-s- 652.00m 52.94
1023
1024 # lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
1025 LV Attr Type PE Ranges DOff
1026 lv rwi-a-r--- raid6_nr lv_rimage_0:0-33 \
1027 lv_rimage_1:0-33 \
1028 lv_rimage_2:0-33 ... \
1029 lv_rimage_5:0-33 \
1030 lv_rimage_6:0-33 0
1031 [lv_rimage_0] iwi-aor--- linear /dev/sda:0-32 0
1032 [lv_rimage_0] iwi-aor--- linear /dev/sda:34-34
1033 [lv_rimage_1] iwi-aor--- linear /dev/sdaa:0-32 0
1034 [lv_rimage_1] iwi-aor--- linear /dev/sdaa:34-34
1035 [lv_rimage_2] iwi-aor--- linear /dev/sdab:0-32 0
1036 [lv_rimage_2] iwi-aor--- linear /dev/sdab:34-34
1037 [lv_rimage_3] iwi-aor--- linear /dev/sdac:1-34 0
1038 [lv_rimage_4] iwi-aor--- linear /dev/sdad:1-34 0
1039 [lv_rimage_5] iwi-aor--- linear /dev/sdae:1-34 0
1040 [lv_rimage_6] iwi-aor--- linear /dev/sdaf:1-34 0
1041 [lv_rmeta_0] ewi-aor--- linear /dev/sda:33-33
1042 [lv_rmeta_1] ewi-aor--- linear /dev/sdaa:33-33
1043 [lv_rmeta_2] ewi-aor--- linear /dev/sdab:33-33
1044 [lv_rmeta_3] ewi-aor--- linear /dev/sdac:0-0
1045 [lv_rmeta_4] ewi-aor--- linear /dev/sdad:0-0
1046 [lv_rmeta_5] ewi-aor--- linear /dev/sdae:0-0
1047 [lv_rmeta_6] ewi-aor--- linear /dev/sdaf:0-0
1048
Stripes can also be removed from raid5 and raid6.  Convert the 5-way
striped raid6_nr LV to 4-way striped.  The --force option needs to be
1051 used, because removing stripes (i.e. image SubLVs) from a RaidLV will
1052 shrink its size.
1053
1054 # lvconvert --stripes 4 vg/lv
1055 Using default stripesize 64.00 KiB.
1056 WARNING: Removing stripes from active logical volume vg/lv will \
1057 shrink it from 660.00 MiB to 528.00 MiB!
1058 THIS MAY DESTROY (PARTS OF) YOUR DATA!
1059 If that leaves the logical volume larger than 206 extents due \
1060 to stripe rounding,
1061 you may want to grow the content afterwards (filesystem etc.)
1062 WARNING: to remove freed stripes after the conversion has finished,\
1063 you have to run "lvconvert --stripes 4 vg/lv"
1064 Logical volume vg/lv successfully converted.
1065
1066 # lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
1067 LV Attr Type PE Ranges DOff
1068 lv rwi-a-r-s- raid6_nr lv_rimage_0:0-33 \
1069 lv_rimage_1:0-33 \
1070 lv_rimage_2:0-33 ... \
1071 lv_rimage_5:0-33 \
1072 lv_rimage_6:0-33 0
1073 [lv_rimage_0] Iwi-aor--- linear /dev/sda:0-32 0
1074 [lv_rimage_0] Iwi-aor--- linear /dev/sda:34-34
1075 [lv_rimage_1] Iwi-aor--- linear /dev/sdaa:0-32 0
1076 [lv_rimage_1] Iwi-aor--- linear /dev/sdaa:34-34
1077 [lv_rimage_2] Iwi-aor--- linear /dev/sdab:0-32 0
1078 [lv_rimage_2] Iwi-aor--- linear /dev/sdab:34-34
1079 [lv_rimage_3] Iwi-aor--- linear /dev/sdac:1-34 0
1080 [lv_rimage_4] Iwi-aor--- linear /dev/sdad:1-34 0
1081 [lv_rimage_5] Iwi-aor--- linear /dev/sdae:1-34 0
1082 [lv_rimage_6] Iwi-aor-R- linear /dev/sdaf:1-34 0
1083 [lv_rmeta_0] ewi-aor--- linear /dev/sda:33-33
1084 [lv_rmeta_1] ewi-aor--- linear /dev/sdaa:33-33
1085 [lv_rmeta_2] ewi-aor--- linear /dev/sdab:33-33
1086 [lv_rmeta_3] ewi-aor--- linear /dev/sdac:0-0
1087 [lv_rmeta_4] ewi-aor--- linear /dev/sdad:0-0
1088 [lv_rmeta_5] ewi-aor--- linear /dev/sdae:0-0
1089 [lv_rmeta_6] ewi-aor-R- linear /dev/sdaf:0-0
1090
1091 The 's' in column 9 of the attribute field shows the RaidLV is still
1092 reshaping. The 'R' in the same column of the attribute field shows the
freed image SubLVs, which will need to be removed once the reshaping
has finished.
1095
1096 # lvs -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
1097 LV Attr Type PE Ranges DOff
1098 lv rwi-a-r-R- raid6_nr lv_rimage_0:0-33 \
1099 lv_rimage_1:0-33 \
1100 lv_rimage_2:0-33 ... \
1101 lv_rimage_5:0-33 \
1102 lv_rimage_6:0-33 8192
1103
Now that the reshape is finished, the 'R' attribute on the RaidLV shows
1105 images can be removed.
1114
1115 This is achieved by repeating the command ("lvconvert --stripes 4
1116 vg/lv" would be sufficient).
1117
1118 # lvconvert --stripes 4 vg/lv
1119 Using default stripesize 64.00 KiB.
1120 Logical volume vg/lv successfully converted.
1121
1122 # lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
1123 LV Attr Type PE Ranges DOff
1124 lv rwi-a-r--- raid6_nr lv_rimage_0:0-33 \
1125 lv_rimage_1:0-33 \
1126 lv_rimage_2:0-33 ... \
1127 lv_rimage_5:0-33 8192
1128 [lv_rimage_0] iwi-aor--- linear /dev/sda:0-32 8192
1129 [lv_rimage_0] iwi-aor--- linear /dev/sda:34-34
1130 [lv_rimage_1] iwi-aor--- linear /dev/sdaa:0-32 8192
1131 [lv_rimage_1] iwi-aor--- linear /dev/sdaa:34-34
1132 [lv_rimage_2] iwi-aor--- linear /dev/sdab:0-32 8192
1133 [lv_rimage_2] iwi-aor--- linear /dev/sdab:34-34
1134 [lv_rimage_3] iwi-aor--- linear /dev/sdac:1-34 8192
1135 [lv_rimage_4] iwi-aor--- linear /dev/sdad:1-34 8192
1136 [lv_rimage_5] iwi-aor--- linear /dev/sdae:1-34 8192
1137 [lv_rmeta_0] ewi-aor--- linear /dev/sda:33-33
1138 [lv_rmeta_1] ewi-aor--- linear /dev/sdaa:33-33
1139 [lv_rmeta_2] ewi-aor--- linear /dev/sdab:33-33
1140 [lv_rmeta_3] ewi-aor--- linear /dev/sdac:0-0
1141 [lv_rmeta_4] ewi-aor--- linear /dev/sdad:0-0
1142 [lv_rmeta_5] ewi-aor--- linear /dev/sdae:0-0
1143
1144 # lvs -a -o lv_name,attr,segtype,reshapelen vg
1145 LV Attr Type RSize
1146 lv rwi-a-r--- raid6_nr 24.00m
1147 [lv_rimage_0] iwi-aor--- linear 4.00m
1148 [lv_rimage_0] iwi-aor--- linear
1149 [lv_rimage_1] iwi-aor--- linear 4.00m
1150 [lv_rimage_1] iwi-aor--- linear
1151 [lv_rimage_2] iwi-aor--- linear 4.00m
1152 [lv_rimage_2] iwi-aor--- linear
1153 [lv_rimage_3] iwi-aor--- linear 4.00m
1154 [lv_rimage_4] iwi-aor--- linear 4.00m
1155 [lv_rimage_5] iwi-aor--- linear 4.00m
1156 [lv_rmeta_0] ewi-aor--- linear
1157 [lv_rmeta_1] ewi-aor--- linear
1158 [lv_rmeta_2] ewi-aor--- linear
1159 [lv_rmeta_3] ewi-aor--- linear
1160 [lv_rmeta_4] ewi-aor--- linear
1161 [lv_rmeta_5] ewi-aor--- linear
1162
1163 Future developments might include automatic removal of the freed
1164 images.
1165
To remove the reshape space, any lvconvert command that does not
change the layout can be used:
1168
1169 # lvconvert --stripes 4 vg/lv
1170 Using default stripesize 64.00 KiB.
1171 No change in RAID LV vg/lv layout, freeing reshape space.
1172 Logical volume vg/lv successfully converted.
1173
1174 # lvs -a -o lv_name,attr,segtype,reshapelen vg
1175 LV Attr Type RSize
1176 lv rwi-a-r--- raid6_nr 0
1177 [lv_rimage_0] iwi-aor--- linear 0
1178 [lv_rimage_0] iwi-aor--- linear
1179 [lv_rimage_1] iwi-aor--- linear 0
1180 [lv_rimage_1] iwi-aor--- linear
1181 [lv_rimage_2] iwi-aor--- linear 0
1182 [lv_rimage_2] iwi-aor--- linear
1183 [lv_rimage_3] iwi-aor--- linear 0
1184 [lv_rimage_4] iwi-aor--- linear 0
1185 [lv_rimage_5] iwi-aor--- linear 0
1186 [lv_rmeta_0] ewi-aor--- linear
1187 [lv_rmeta_1] ewi-aor--- linear
1188 [lv_rmeta_2] ewi-aor--- linear
1189 [lv_rmeta_3] ewi-aor--- linear
1190 [lv_rmeta_4] ewi-aor--- linear
1191 [lv_rmeta_5] ewi-aor--- linear
1192
1193 In case the RaidLV should be converted to striped:
1194
1195 # lvconvert --type striped vg/lv
1196 Unable to convert LV vg/lv from raid6_nr to striped.
1197 Converting vg/lv from raid6_nr is directly possible to the \
1198 following layouts:
1199 raid6_nc
1200 raid6_zr
1201 raid6_la_6
1202 raid6_ls_6
1203 raid6_ra_6
1204 raid6_rs_6
1205 raid6_n_6
1206
A direct conversion isn't possible, so the command lists the possible
ones.  raid6_n_6 is suitable for conversion to striped, so convert
1209 to it first (this is a reshape changing the raid6 layout from raid6_nr
1210 to raid6_n_6).
1211
# lvconvert --type raid6_n_6 vg/lv
1213 Using default stripesize 64.00 KiB.
1214 Converting raid6_nr LV vg/lv to raid6_n_6.
1215 Are you sure you want to convert raid6_nr LV vg/lv? [y/n]: y
1216 Logical volume vg/lv successfully converted.
1217
1218 Wait for the reshape to finish.
1219
1220 # lvconvert --type striped vg/lv
1221 Logical volume vg/lv successfully converted.
1222
1223 # lvs -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
1224 LV Attr Type PE Ranges DOff
1225 lv -wi-a----- striped /dev/sda:2-32 \
1226 /dev/sdaa:2-32 \
1227 /dev/sdab:2-32 \
1228 /dev/sdac:3-33
1229 lv -wi-a----- striped /dev/sda:34-35 \
1230 /dev/sdaa:34-35 \
1231 /dev/sdab:34-35 \
1232 /dev/sdac:34-35
1233
From striped we can convert to raid10.
1235
1236 # lvconvert --type raid10 vg/lv
1237 Using default stripesize 64.00 KiB.
1238 Logical volume vg/lv successfully converted.
1239
1240 # lvs -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
1241 LV Attr Type PE Ranges DOff
1242 lv rwi-a-r--- raid10 lv_rimage_0:0-32 \
1243 lv_rimage_4:0-32 \
1244 lv_rimage_1:0-32 ... \
1245 lv_rimage_3:0-32 \
1246 lv_rimage_7:0-32 0
1247
1248 # lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
1249 WARNING: Cannot find matching striped segment for vg/lv_rimage_3.
1250 LV Attr Type PE Ranges DOff
1251 lv rwi-a-r--- raid10 lv_rimage_0:0-32 \
1252 lv_rimage_4:0-32 \
1253 lv_rimage_1:0-32 ... \
1254 lv_rimage_3:0-32 \
1255 lv_rimage_7:0-32 0
1256 [lv_rimage_0] iwi-aor--- linear /dev/sda:2-32 0
1257 [lv_rimage_0] iwi-aor--- linear /dev/sda:34-35
1258 [lv_rimage_1] iwi-aor--- linear /dev/sdaa:2-32 0
1259 [lv_rimage_1] iwi-aor--- linear /dev/sdaa:34-35
1260 [lv_rimage_2] iwi-aor--- linear /dev/sdab:2-32 0
1261 [lv_rimage_2] iwi-aor--- linear /dev/sdab:34-35
1262 [lv_rimage_3] iwi-XXr--- linear /dev/sdac:3-35 0
1263 [lv_rimage_4] iwi-aor--- linear /dev/sdad:1-33 0
1264 [lv_rimage_5] iwi-aor--- linear /dev/sdae:1-33 0
1265 [lv_rimage_6] iwi-aor--- linear /dev/sdaf:1-33 0
1266 [lv_rimage_7] iwi-aor--- linear /dev/sdag:1-33 0
1267 [lv_rmeta_0] ewi-aor--- linear /dev/sda:0-0
1268 [lv_rmeta_1] ewi-aor--- linear /dev/sdaa:0-0
1269 [lv_rmeta_2] ewi-aor--- linear /dev/sdab:0-0
1270 [lv_rmeta_3] ewi-aor--- linear /dev/sdac:0-0
1271 [lv_rmeta_4] ewi-aor--- linear /dev/sdad:0-0
1272 [lv_rmeta_5] ewi-aor--- linear /dev/sdae:0-0
1273 [lv_rmeta_6] ewi-aor--- linear /dev/sdaf:0-0
1274 [lv_rmeta_7] ewi-aor--- linear /dev/sdag:0-0
1275
raid10 allows stripes to be added but not removed.
1277
1278
1279 A more elaborate example to convert from linear to striped with interim
1280 conversions to raid1 then raid5 followed by restripe (4 steps).
1281
1282 We start with the linear LV.
1283
1284 # lvs -a -o name,size,segtype,syncpercent,datastripes,\
1285 stripesize,reshapelenle,devices vg
1286 LV LSize Type Cpy%Sync #DStr Stripe RSize Devices
1287 lv 128.00m linear 1 0 /dev/sda(0)
1288
1289 Then convert it to a 2-way raid1.
1290
1291 # lvconvert --mirrors 1 vg/lv
1292 Logical volume vg/lv successfully converted.
1293
1294 # lvs -a -o name,size,segtype,datastripes,\
1295 stripesize,reshapelenle,devices vg
1296 LV LSize Type #DStr Stripe RSize Devices
1297 lv 128.00m raid1 2 0 lv_rimage_0(0),\
1298 lv_rimage_1(0)
1299 [lv_rimage_0] 128.00m linear 1 0 /dev/sda(0)
1300 [lv_rimage_1] 128.00m linear 1 0 /dev/sdhx(1)
1301 [lv_rmeta_0] 4.00m linear 1 0 /dev/sda(32)
1302 [lv_rmeta_1] 4.00m linear 1 0 /dev/sdhx(0)
1303
1304 Once the raid1 LV is fully synchronized we convert it to raid5_n (only
1305 2-way raid1 LVs can be converted to raid5). We select raid5_n here
1306 because it has dedicated parity SubLVs at the end and can be converted
1307 to striped directly without any additional conversion.
1308
1309 # lvconvert --type raid5_n vg/lv
1310 Using default stripesize 64.00 KiB.
1311 Logical volume vg/lv successfully converted.
1312
1313 # lvs -a -o name,size,segtype,syncpercent,datastripes,\
1314 stripesize,reshapelenle,devices vg
1315 LV LSize Type #DStr Stripe RSize Devices
1316 lv 128.00m raid5_n 1 64.00k 0 lv_rimage_0(0),\
1317 lv_rimage_1(0)
1318 [lv_rimage_0] 128.00m linear 1 0 0 /dev/sda(0)
1319 [lv_rimage_1] 128.00m linear 1 0 0 /dev/sdhx(1)
1320 [lv_rmeta_0] 4.00m linear 1 0 /dev/sda(32)
1321 [lv_rmeta_1] 4.00m linear 1 0 /dev/sdhx(0)
1322
1323 Now we'll change the number of data stripes from 1 to 5 and request
1324 128K stripe size in one command. This will grow the size of the LV by
a factor of 5 (we add 4 data stripes to the one given).  That additional
1326 space can be used by e.g. growing any contained filesystem or the LV
1327 can be reduced in size after the reshaping conversion has finished.
1328
1329 # lvconvert --stripesize 128k --stripes 5 vg/lv
1330 Converting stripesize 64.00 KiB of raid5_n LV vg/lv to 128.00 KiB.
1331 WARNING: Adding stripes to active logical volume vg/lv will grow \
1332 it from 32 to 160 extents!
1333 Run "lvresize -l32 vg/lv" to shrink it or use the additional capacity.
1334 Logical volume vg/lv successfully converted.
1335
1336 # lvs -a -o name,size,segtype,datastripes,\
1337 stripesize,reshapelenle,devices
1338 LV LSize Type #DStr Stripe RSize Devices
1339 lv 640.00m raid5_n 5 128.00k 6 lv_rimage_0(0),\
1340 lv_rimage_1(0),\
1341 lv_rimage_2(0),\
1342 lv_rimage_3(0),\
1343 lv_rimage_4(0),\
1344 lv_rimage_5(0)
1345 [lv_rimage_0] 132.00m linear 1 0 1 /dev/sda(33)
1346 [lv_rimage_0] 132.00m linear 1 0 /dev/sda(0)
1347 [lv_rimage_1] 132.00m linear 1 0 1 /dev/sdhx(33)
1348 [lv_rimage_1] 132.00m linear 1 0 /dev/sdhx(1)
1349 [lv_rimage_2] 132.00m linear 1 0 1 /dev/sdhw(33)
1350 [lv_rimage_2] 132.00m linear 1 0 /dev/sdhw(1)
1351 [lv_rimage_3] 132.00m linear 1 0 1 /dev/sdhv(33)
1352 [lv_rimage_3] 132.00m linear 1 0 /dev/sdhv(1)
1353 [lv_rimage_4] 132.00m linear 1 0 1 /dev/sdhu(33)
1354 [lv_rimage_4] 132.00m linear 1 0 /dev/sdhu(1)
1355 [lv_rimage_5] 132.00m linear 1 0 1 /dev/sdht(33)
1356 [lv_rimage_5] 132.00m linear 1 0 /dev/sdht(1)
1357 [lv_rmeta_0] 4.00m linear 1 0 /dev/sda(32)
1358 [lv_rmeta_1] 4.00m linear 1 0 /dev/sdhx(0)
1359 [lv_rmeta_2] 4.00m linear 1 0 /dev/sdhw(0)
1360 [lv_rmeta_3] 4.00m linear 1 0 /dev/sdhv(0)
1361 [lv_rmeta_4] 4.00m linear 1 0 /dev/sdhu(0)
1362 [lv_rmeta_5] 4.00m linear 1 0 /dev/sdht(0)
1363
Once the conversion has finished we can convert to striped.
1365
1366 # lvconvert --type striped vg/lv
1367 Logical volume vg/lv successfully converted.
1368
1369 # lvs -a -o name,size,segtype,datastripes,\
1370 stripesize,reshapelenle,devices vg
1371 LV LSize Type #DStr Stripe RSize Devices
1372 lv 640.00m striped 5 128.00k /dev/sda(33),\
1373 /dev/sdhx(33),\
1374 /dev/sdhw(33),\
1375 /dev/sdhv(33),\
1376 /dev/sdhu(33)
1377 lv 640.00m striped 5 128.00k /dev/sda(0),\
1378 /dev/sdhx(1),\
1379 /dev/sdhw(1),\
1380 /dev/sdhv(1),\
1381 /dev/sdhu(1)
1382
1383 Reversing these steps will convert a given striped LV to linear.
1384
Keep in mind that removing stripes reduces the capacity of the RaidLV,
and that changing the RaidLV layout can influence its performance.
1388
1389 "lvconvert --stripes 1 vg/lv" for converting to 1 stripe will inform
1390 upfront about the reduced size to allow for resizing the content or
1391 growing the RaidLV before actually converting to 1 stripe. The --force
1392 option is needed to allow stripe removing conversions to prevent data
1393 loss.
1394
1395 Of course any interim step can be the intended last one (e.g. striped->
1396 raid1).
1397
RAID5 Variants
1400 raid5_ls
1401 · RAID5 left symmetric
1402 · Rotating parity N with data restart
1403
1404 raid5_la
· RAID5 left asymmetric
1406 · Rotating parity N with data continuation
1407
1408 raid5_rs
1409 · RAID5 right symmetric
1410 · Rotating parity 0 with data restart
1411
1412 raid5_ra
1413 · RAID5 right asymmetric
1414 · Rotating parity 0 with data continuation
1415
1416 raid5_n
1417 · RAID5 parity n
1418 · Dedicated parity device n used for striped/raid0 conversions
1419 · Used for RAID Takeover
1420
RAID6 Variants
1423 raid6
1424 · RAID6 zero restart (aka left symmetric)
1425 · Rotating parity 0 with data restart
1426 · Same as raid6_zr
1427
1428 raid6_zr
1429 · RAID6 zero restart (aka left symmetric)
1430 · Rotating parity 0 with data restart
1431
1432 raid6_nr
1433 · RAID6 N restart (aka right symmetric)
1434 · Rotating parity N with data restart
1435
1436 raid6_nc
1437 · RAID6 N continue
1438 · Rotating parity N with data continuation
1439
1440 raid6_n_6
1441 · RAID6 last parity devices
1442 · Fixed dedicated last devices (P-Syndrome N-1 and Q-Syndrome N)
1443 with striped data used for striped/raid0 conversions
1444 · Used for RAID Takeover
1445
1446 raid6_{ls,rs,la,ra}_6
1447 · RAID6 last parity device
1448 · Dedicated last parity device used for conversions from/to
1449 raid5_{ls,rs,la,ra}
1450
1451 raid6_ls_6
1452 · RAID6 N continue
1453 · Same as raid5_ls for N-1 devices with fixed Q-Syndrome N
1454 · Used for RAID Takeover
1455
1456 raid6_la_6
1457 · RAID6 N continue
1458 · Same as raid5_la for N-1 devices with fixed Q-Syndrome N
· Used for RAID Takeover
1460
1461 raid6_rs_6
1462 · RAID6 N continue
1463 · Same as raid5_rs for N-1 devices with fixed Q-Syndrome N
1464 · Used for RAID Takeover
1465
1466 raid6_ra_6
1467 · RAID6 N continue
· Same as raid5_ra for N-1 devices with fixed Q-Syndrome N
1469 · Used for RAID Takeover
1470
1471
1472
History
1475 The 2.6.38-rc1 version of the Linux kernel introduced a device-mapper
1476 target to interface with the software RAID (MD) personalities. This
1477 provided device-mapper with RAID 4/5/6 capabilities and a larger devel‐
1478 opment community. Later, support for RAID1, RAID10, and RAID1E (RAID
10 variants) was added.  Support for these new kernel RAID targets was
1480 added to LVM version 2.02.87. The capabilities of the LVM raid1 type
1481 have surpassed the old mirror type. raid1 is now recommended instead
1482 of mirror. raid1 became the default for mirroring in LVM version
1483 2.02.100.
1484
1485
1486
1487
Red Hat, Inc          LVM TOOLS 2.03.09(2) (2020-03-26)          LVMRAID(7)