LVMRAID(7)                                                          LVMRAID(7)
2
3
4

NAME

6       lvmraid — LVM RAID
7

DESCRIPTION

9       lvm(8) RAID is a way to create a Logical Volume (LV) that uses multiple
10       physical devices to improve performance or  tolerate  device  failures.
11       In  LVM,  the  physical  devices are Physical Volumes (PVs) in a single
12       Volume Group (VG).
13
14       How LV data blocks are placed onto PVs is determined by the RAID level.
15       RAID  levels  are  commonly referred to as 'raid' followed by a number,
16       e.g.  raid1, raid5 or raid6.  Selecting a RAID  level  involves  making
17       tradeoffs  among:  physical  device  requirements, fault tolerance, and
18       performance.  A description of the RAID levels can be found at
19       www.snia.org/sites/default/files/SNIA_DDF_Technical_Position_v2.0.pdf
20
21       LVM RAID uses both Device Mapper (DM) and Multiple Device (MD)  drivers
22       from the Linux kernel.  DM is used to create and manage visible LVM de‐
23       vices, and MD is used to place data on physical devices.
24
25       LVM creates hidden LVs (dm devices) layered between the visible LV  and
26       physical  devices.   LVs  in the middle layers are called sub LVs.  For
27       LVM raid, a sub LV pair to store data and metadata (raid superblock and
28       write intent bitmap) is created per raid image/leg (see lvs command ex‐
29       amples below).
30

USAGE

32       To create a RAID LV, use lvcreate and specify an LV type.  The LV  type
33       corresponds  to  a  RAID level.  The basic RAID levels that can be used
34       are: raid0, raid1, raid4, raid5, raid6, raid10.
35
36       lvcreate --type RaidLevel [OPTIONS] --name Name --size Size VG [PVs]
37
38       To display the LV type of an existing LV, run:
39
40       lvs -o name,segtype LV
41
42       (The LV type is also referred to as "segment type" or "segtype".)
43
44       LVs can be created with the following types:
45
46   raid0
47       Also called striping, raid0 spreads LV data across multiple devices  in
48       units  of  stripe size.  This is used to increase performance.  LV data
49       will be lost if any of the devices fail.
50
51       lvcreate --type raid0 [--stripes Number --stripesize Size] VG [PVs]
52
53       --stripes Number
54              specifies the Number of devices to spread the LV across.
55
56       --stripesize Size
57              specifies the Size of each stripe in  kilobytes.   This  is  the
58              amount  of  data  that is written to one device before moving to
59              the next.
60
61       PVs specifies the devices to use.  If not specified,  lvm  will  choose
62       Number  devices,  one for each stripe based on the number of PVs avail‐
63       able or supplied.
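
       For example (an illustrative sketch; the VG name vg, the PV names
       /dev/sda and /dev/sdb, and the sizes are assumptions), a 10 GiB LV
       striped across two devices in 64 KiB stripes:

       # lvcreate --type raid0 --stripes 2 --stripesize 64k \
              --name lvr0 --size 10g vg /dev/sda /dev/sdb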
64
65   raid1
66       Also called mirroring, raid1 uses  multiple  devices  to  duplicate  LV
67       data.   The  LV  data  remains  available if all but one of the devices
68       fail.  The minimum number of devices (i.e. sub LV pairs) required is 2.
69
70       lvcreate --type raid1 [--mirrors Number] VG [PVs]
71
72       --mirrors Number
73              specifies the Number of mirror images in addition to the  origi‐
74              nal LV image, e.g. --mirrors 1 means there are two images of the
75              data, the original and one mirror image.
76
77       PVs specifies the devices to use.  If not specified,  lvm  will  choose
78       Number devices, one for each image.
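
       For example (names and size are illustrative), a raid1 LV with one
       mirror image in addition to the original, i.e. two copies of the
       data on two devices:

       # lvcreate --type raid1 --mirrors 1 --name lvr1 --size 10g vg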
79
80   raid4
81       raid4  is a form of striping that uses an extra, first device dedicated
82       to storing parity blocks.  The LV data remains available if one  device
83       fails.  The parity is used to recalculate data that is lost from a sin‐
84       gle device.  The minimum number of devices required is 3.
85
86       lvcreate --type raid4 [--stripes Number --stripesize Size] VG [PVs]
87
88       --stripes Number
89              specifies the Number of devices to use for LV data.   This  does
90              not include the extra device lvm adds for storing parity blocks.
91              A raid4 LV with Number stripes requires Number+1 devices.   Num‐
92              ber must be 2 or more.
93
94       --stripesize Size
95              specifies  the  Size  of  each stripe in kilobytes.  This is the
96              amount of data that is written to one device  before  moving  to
97              the next.
98
99       PVs  specifies  the  devices to use.  If not specified, lvm will choose
100       Number+1 separate devices.
101
102       raid4 is called non-rotating parity because the parity blocks  are  al‐
103       ways stored on the same device.
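
       As an illustration of the Number+1 device count (the VG name and
       size are assumptions), a raid4 LV with 3 stripes allocates 4
       devices, one of them dedicated to parity:

       # lvcreate --type raid4 --stripes 3 --stripesize 64k \
              --name lvr4 --size 30g vg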
104
105   raid5
106       raid5  is a form of striping that uses an extra device for storing par‐
107       ity blocks.  LV data and parity blocks are stored on each device, typi‐
108       cally  in  a rotating pattern for performance reasons.  The LV data re‐
109       mains available if one device fails.  The parity is used to recalculate
110       data  that is lost from a single device.  The minimum number of devices
111       required is 3 (unless converting from a two-legged raid1 and then
112       reshaping to more stripes; see RAID RESHAPING below).
113
114       lvcreate --type raid5 [--stripes Number --stripesize Size] VG [PVs]
115
116       --stripes Number
117              specifies  the  Number of devices to use for LV data.  This does
118              not include the extra device lvm adds for storing parity blocks.
119              A  raid5 LV with Number stripes requires Number+1 devices.  Num‐
120              ber must be 2 or more.
121
122       --stripesize Size
123              specifies the Size of each stripe in  kilobytes.   This  is  the
124              amount  of  data  that is written to one device before moving to
125              the next.
126
127       PVs specifies the devices to use.  If not specified,  lvm  will  choose
128       Number+1 separate devices.
129
130       raid5 is called rotating parity because the parity blocks are placed on
131       different devices in a round-robin sequence.  There are  variations  of
132       raid5 with different algorithms for placing the parity blocks.  The de‐
133       fault variant is raid5_ls (raid5 left symmetric, which  is  a  rotating
134       parity 0 with data restart.)  See RAID5 VARIANTS below.
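
       For example (names and sizes are illustrative), a raid5 LV with 3
       stripes uses 4 devices; a specific parity layout can also be
       requested by giving the variant name as the type:

       # lvcreate --type raid5 --stripes 3 --name lvr5 --size 30g vg
       # lvcreate --type raid5_ls --stripes 3 --name lvr5ls --size 30g vg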
135
136   raid6
137       raid6  is a form of striping like raid5, but uses two extra devices for
138       parity blocks.  LV data and parity blocks are stored  on  each  device,
139       typically  in  a rotating pattern for performance reasons.  The LV data
140       remains available if up to two devices fail.  The parity is used to re‐
141       calculate  data that is lost from one or two devices.  The minimum num‐
142       ber of devices required is 5.
143
144       lvcreate --type raid6 [--stripes Number --stripesize Size] VG [PVs]
145
146       --stripes Number
147              specifies the Number of devices to use for LV data.   This  does
148              not  include  the  extra two devices lvm adds for storing parity
149              blocks.  A raid6 LV with Number stripes  requires  Number+2  de‐
150              vices.  Number must be 3 or more.
151
152       --stripesize Size
153              specifies  the  Size  of  each stripe in kilobytes.  This is the
154              amount of data that is written to one device  before  moving  to
155              the next.
156
157       PVs  specifies  the  devices to use.  If not specified, lvm will choose
158       Number+2 separate devices.
159
160       Like raid5, there are variations of raid6 with different algorithms for
161       placing the parity blocks.  The default variant is raid6_zr (raid6 zero
162       restart, aka left symmetric, which is a rotating  parity  0  with  data
163       restart.)  See RAID6 VARIANTS below.
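
       For example (names and size are illustrative), a raid6 LV with the
       minimum of 3 stripes allocates 5 devices, two of which hold parity
       in each stripe row:

       # lvcreate --type raid6 --stripes 3 --stripesize 128k \
              --name lvr6 --size 30g vg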
164
165   raid10
166       raid10  is  a combination of raid1 and raid0, striping data across mir‐
167       rored devices.  LV data remains available if one or  more  devices  re‐
168       mains in each mirror set.  The minimum number of devices required is 4.
169
170       lvcreate --type raid10
171              [--mirrors NumberMirrors]
172              [--stripes NumberStripes --stripesize Size]
173              VG [PVs]
174
175       --mirrors NumberMirrors
176              specifies  the number of mirror images within each stripe.  e.g.
177              --mirrors 1 means there are two images of the data, the original
178              and one mirror image.
179
180       --stripes NumberStripes
181              specifies the total number of devices to use in all raid1 images
182              (not the number of raid1 devices to spread the LV  across,  even
183              though  that is the effective result).  The number of devices in
184              each raid1 mirror will be NumberStripes/(NumberMirrors+1),  e.g.
185              mirrors  1  and stripes 4 will stripe data across two raid1 mir‐
186       rors, where each mirror is made of two devices.
187
188       --stripesize Size
189              specifies the Size of each stripe in  kilobytes.   This  is  the
190              amount  of  data  that is written to one device before moving to
191              the next.
192
193       PVs specifies the devices to use.  If not specified,  lvm  will  choose
194       the necessary devices.  Devices are used to create mirrors in the order
195       listed, e.g. for mirrors 1, stripes 2, listing PV1 PV2 PV3 PV4  results
196       in mirrors PV1/PV2 and PV3/PV4.
197
198       RAID10 is not mirroring on top of stripes (that would be RAID01, which
199       is less tolerant of device failures).
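
       For example (PV names are assumptions), mirrors 1 and stripes 2
       create two mirror pairs, and listing four PVs controls the pairing
       as described above:

       # lvcreate --type raid10 --mirrors 1 --stripes 2 --stripesize 64k \
              --name lvr10 --size 20g vg /dev/sda /dev/sdb /dev/sdc /dev/sdd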
200
201   Configuration Options
202       There are a number of options in the LVM configuration file that affect
203       the behavior of RAID LVs.  The tunable options are listed below.  A de‐
204       tailed description of each can be found in the LVM  configuration  file
205       itself.
206              mirror_segtype_default
207              raid10_segtype_default
208              raid_region_size
209              raid_fault_policy
210              activation_mode
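
       The default or effective value of these options can be inspected
       with lvmconfig(8), e.g. (a sketch; the option paths follow the
       sections used in lvm.conf(5)):

       # lvmconfig --type default activation/raid_fault_policy
       # lvmconfig --type full activation/raid_region_size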
211
212   Monitoring
213       When a RAID LV is activated the dmeventd(8) process is started to moni‐
214       tor the health of the LV.  Various events detected in  the  kernel  can
215       cause  a  notification  to be sent from device-mapper to the monitoring
216       process, including device failures and synchronization completion (e.g.
217       for initialization or scrubbing).
218
219       The  LVM  configuration file contains options that affect how the moni‐
220       toring process will respond to failure events (e.g. raid_fault_policy).
221       It  is  possible to turn on and off monitoring with lvchange, but it is
222       not recommended to turn this off unless you have a  thorough  knowledge
223       of the consequences.
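
       For example (a sketch assuming the seg_monitor report field and the
       --monitor option of current releases), monitoring can be reported
       and toggled per LV with:

       # lvs -o name,seg_monitor vg/lv
       # lvchange --monitor n vg/lv
       # lvchange --monitor y vg/lv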
224
225   Synchronization
226       Synchronization  is the process that makes all the devices in a RAID LV
227       consistent with each other.
228
229       In a RAID1 LV, all mirror images should have the same data.  When a new
230       mirror  image  is added, or a mirror image is missing data, then images
231       need to be synchronized.  Data blocks are copied from an existing image
232       to a new or outdated image to make them match.
233
234       In a RAID 4/5/6 LV, parity blocks and data blocks should match based on
235       the parity calculation.  When the devices in a RAID LV change, the data
236       and  parity blocks can become inconsistent and need to be synchronized.
237       Correct blocks are read, parity is calculated, and recalculated  blocks
238       are written.
239
240       The  RAID  implementation  keeps  track of which parts of a RAID LV are
241       synchronized.  When a RAID LV is first created and activated the  first
242       synchronization is called initialization.  A pointer stored in the raid
243       metadata keeps track of the initialization process thus allowing it  to
244       be restarted after a deactivation of the RaidLV or a crash.  Any write
245       to the RaidLV dirties the respective region of the write intent bitmap,
246       which allows for fast recovery of the regions after a crash.  Without
247       this, the entire LV would need to be synchronized every time it was ac‐
248       tivated.
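
       The region size that sets the granularity of the write intent
       bitmap can be reported per LV, e.g. (assuming the region_size
       report field of current lvs versions):

       lvs -a -o name,region_size,sync_percent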
249
250       Automatic  synchronization  happens when a RAID LV is activated, but it
251       is usually partial because  the  bitmaps  reduce  the  areas  that  are
252       checked.  A full sync becomes necessary when devices in the RAID LV are
253       replaced.
254
255       The synchronization status of a RAID LV is reported  by  the  following
256       command, where "Cpy%Sync" = "100%" means sync is complete:
257
258       lvs -a -o name,sync_percent
259
260   Scrubbing
261       Scrubbing is a full scan of the RAID LV requested by a user.  Scrubbing
262       can find problems that are missed by partial synchronization.
263
264       Scrubbing assumes that RAID metadata and bitmaps may be inaccurate,  so
265       it  verifies  all RAID metadata, LV data, and parity blocks.  Scrubbing
266       can find inconsistencies caused  by  hardware  errors  or  degradation.
267       These  kinds of problems may be undetected by automatic synchronization
268       which excludes areas outside of the RAID write-intent bitmap.
269
270       The command to scrub a RAID LV can operate in two different modes:
271
272       lvchange --syncaction check|repair LV
273
274       check  Check mode is read-only and only detects inconsistent  areas  in
275              the RAID LV; it does not correct them.
276
277       repair Repair  mode  checks  and writes corrected blocks to synchronize
278              any inconsistent areas.
279
280       Scrubbing can consume a lot of bandwidth and slow down application  I/O
281       on the RAID LV.  To control the I/O rate used for scrubbing, use:
282
283       --maxrecoveryrate Size[k|UNIT]
284              Sets the maximum recovery rate for a RAID LV.  Size is specified
285              as an amount per second for each device in  the  array.   If  no
286              suffix  is  given, then KiB/sec/device is used.  Setting the re‐
287              covery rate to 0 means it will be unbounded.
288
289       --minrecoveryrate Size[k|UNIT]
290              Sets the minimum recovery rate for a RAID LV.  Size is specified
291              as  an  amount  per  second for each device in the array.  If no
292              suffix is given, then KiB/sec/device is used.  Setting  the  re‐
293              covery rate to 0 means it will be unbounded.
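
       For example (the rate value is illustrative), a check could be
       started with its per-device recovery rate capped at 128 MiB per
       second:

       # lvchange --maxrecoveryrate 128M vg/lv
       # lvchange --syncaction check vg/lv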
294
295       To  display  the  current scrubbing in progress on an LV, including the
296       syncaction mode and percent complete, run:
297
298       lvs -a -o name,raid_sync_action,sync_percent
299
300       After scrubbing is complete, to  display  the  number  of  inconsistent
301       blocks found, run:
302
303       lvs -o name,raid_mismatch_count
304
305       Also,  if  mismatches  were  found, the lvs attr field will display the
306       letter "m" (mismatch) in the 9th position, e.g.
307
308       # lvs -o name,vgname,segtype,attr vg/lv
309         LV VG   Type  Attr
310         lv vg   raid1 Rwi-a-r-m-
311
312   Scrubbing Limitations
313       The check mode can only report the number of  inconsistent  blocks,  it
314       cannot  report which blocks are inconsistent.  This makes it impossible
315       to know which device has errors, or if the errors  affect  file  system
316       data, metadata or nothing at all.
317
318       The  repair  mode can make the RAID LV data consistent, but it does not
319       know which data is correct.  The result may be consistent but incorrect
320       data.   When  two  different blocks of data must be made consistent, it
321       chooses the block from the device that would be used during  RAID  ini‐
322       tialization.   However,  if  the  PV  holding  corrupt  data  is known,
323       lvchange --rebuild can be used in place of scrubbing to reconstruct the
324       data on the bad device.
325
326       Future developments might include:
327
328       Allowing a user to choose the correct version of data during repair.
329
330       Using a majority of devices to determine the correct version of data to
331       use in a 3-way RAID1 or RAID6 LV.
332
333       Using a checksumming device to pin-point when and where  an  error  oc‐
334       curs, allowing it to be rewritten.
335
336   SubLVs
337       An  LV  is  often a combination of other hidden LVs called SubLVs.  The
338       SubLVs either use physical devices, or  are  built  from  other  SubLVs
339       themselves.   SubLVs  hold LV data blocks, RAID parity blocks, and RAID
340       metadata.  SubLVs are generally hidden, so the lvs  -a  option  is  re‐
341       quired to display them:
342
343       lvs -a -o name,segtype,devices
344
345       SubLV  names begin with the visible LV name, and have an automatic suf‐
346       fix indicating their role:
347
348            • SubLVs holding LV data or parity blocks have  the  suffix  _rim‐
349              age_#.
350              These SubLVs are sometimes referred to as DataLVs.
351
352            • SubLVs  holding  RAID  metadata  have the suffix _rmeta_#.  RAID
353              metadata includes superblock information, RAID type, bitmap, and
354              device health information.
355              These SubLVs are sometimes referred to as MetaLVs.
356
357       SubLVs  are an internal implementation detail of LVM.  The way they are
358       used, constructed and named may change.
359
360       The following examples show the SubLV arrangement for each of the basic
361       RAID LV types, using the fewest number of devices allowed for each.
362
363       Examples
364
365       raid0
366       Each  rimage  SubLV holds a portion of LV data.  No parity is used.  No
367       RAID metadata is used.
368
369       # lvcreate --type raid0 --stripes 2 --name lvr0 ...
370
371       # lvs -a -o name,segtype,devices
372         lvr0            raid0  lvr0_rimage_0(0),lvr0_rimage_1(0)
373         [lvr0_rimage_0] linear /dev/sda(...)
374         [lvr0_rimage_1] linear /dev/sdb(...)
375
376       raid1
377       Each rimage SubLV holds a complete copy of LV data.  No parity is used.
378       Each rmeta SubLV holds RAID metadata.
379
380       # lvcreate --type raid1 --mirrors 1 --name lvr1 ...
381
382       # lvs -a -o name,segtype,devices
383         lvr1            raid1  lvr1_rimage_0(0),lvr1_rimage_1(0)
384         [lvr1_rimage_0] linear /dev/sda(...)
385         [lvr1_rimage_1] linear /dev/sdb(...)
386         [lvr1_rmeta_0]  linear /dev/sda(...)
387         [lvr1_rmeta_1]  linear /dev/sdb(...)
388
389       raid4
390       At least two rimage SubLVs each hold a portion of LV data, and one more
391       rimage SubLV holds parity.  Each rmeta SubLV holds RAID metadata.
392
393       # lvcreate --type raid4 --stripes 2 --name lvr4 ...
394
395       # lvs -a -o name,segtype,devices
396         lvr4            raid4  lvr4_rimage_0(0),\
397                                lvr4_rimage_1(0),\
398                                lvr4_rimage_2(0)
399         [lvr4_rimage_0] linear /dev/sda(...)
400         [lvr4_rimage_1] linear /dev/sdb(...)
401         [lvr4_rimage_2] linear /dev/sdc(...)
402         [lvr4_rmeta_0]  linear /dev/sda(...)
403         [lvr4_rmeta_1]  linear /dev/sdb(...)
404         [lvr4_rmeta_2]  linear /dev/sdc(...)
405
406       raid5
407       At least three rimage SubLVs each typically hold a portion of  LV  data
408       and parity (see section on raid5).  Each rmeta SubLV holds RAID metadata.
409
410       # lvcreate --type raid5 --stripes 2 --name lvr5 ...
411
412       # lvs -a -o name,segtype,devices
413         lvr5            raid5  lvr5_rimage_0(0),\
414                                lvr5_rimage_1(0),\
415                                lvr5_rimage_2(0)
416         [lvr5_rimage_0] linear /dev/sda(...)
417         [lvr5_rimage_1] linear /dev/sdb(...)
418         [lvr5_rimage_2] linear /dev/sdc(...)
419         [lvr5_rmeta_0]  linear /dev/sda(...)
420         [lvr5_rmeta_1]  linear /dev/sdb(...)
421         [lvr5_rmeta_2]  linear /dev/sdc(...)
422
423       raid6
424       At  least  five  rimage SubLVs each typically hold a portion of LV data
425       and parity (see section on raid6).  Each rmeta SubLV holds RAID meta-
426       data.
427
428       # lvcreate --type raid6 --stripes 3 --name lvr6
429
430       # lvs -a -o name,segtype,devices
431         lvr6            raid6  lvr6_rimage_0(0),\
432                                lvr6_rimage_1(0),\
433                                lvr6_rimage_2(0),\
434                                lvr6_rimage_3(0),\
435                                lvr6_rimage_4(0),\
436                                lvr6_rimage_5(0)
437         [lvr6_rimage_0] linear /dev/sda(...)
438         [lvr6_rimage_1] linear /dev/sdb(...)
439         [lvr6_rimage_2] linear /dev/sdc(...)
440         [lvr6_rimage_3] linear /dev/sdd(...)
441         [lvr6_rimage_4] linear /dev/sde(...)
442         [lvr6_rimage_5] linear /dev/sdf(...)
443         [lvr6_rmeta_0]  linear /dev/sda(...)
444         [lvr6_rmeta_1]  linear /dev/sdb(...)
445         [lvr6_rmeta_2]  linear /dev/sdc(...)
446         [lvr6_rmeta_3]  linear /dev/sdd(...)
447         [lvr6_rmeta_4]  linear /dev/sde(...)
448         [lvr6_rmeta_5]  linear /dev/sdf(...)
449
450       raid10
451       At  least four rimage SubLVs each hold a portion of LV data.  No parity
452       is used.  Each rmeta SubLV holds RAID metadata.
453
454       # lvcreate --type raid10 --stripes 2 --mirrors 1 --name lvr10
455
456       # lvs -a -o name,segtype,devices
457         lvr10            raid10 lvr10_rimage_0(0),\
458                                 lvr10_rimage_1(0),\
459                                 lvr10_rimage_2(0),\
460                                 lvr10_rimage_3(0)
461         [lvr10_rimage_0] linear /dev/sda(...)
462         [lvr10_rimage_1] linear /dev/sdb(...)
463         [lvr10_rimage_2] linear /dev/sdc(...)
464         [lvr10_rimage_3] linear /dev/sdd(...)
465         [lvr10_rmeta_0]  linear /dev/sda(...)
466         [lvr10_rmeta_1]  linear /dev/sdb(...)
467         [lvr10_rmeta_2]  linear /dev/sdc(...)
468         [lvr10_rmeta_3]  linear /dev/sdd(...)
469

DEVICE FAILURE

471       Physical devices in a RAID LV can fail or be lost for multiple reasons.
472       A device could be disconnected, permanently failed, or temporarily dis‐
473       connected.  The purpose of RAID LVs (levels 1 and higher)  is  to  con‐
474       tinue  operating in a degraded mode, without losing LV data, even after
475       a device fails.  The number of devices that can fail without  the  loss
476       of LV data depends on the RAID level:
477            • RAID0 (striped) LVs cannot tolerate losing any devices.  LV data
478              will be lost if any devices fail.
479            • RAID1 LVs can tolerate losing all but one device without LV data
480              loss.
481            • RAID4  and  RAID5  LVs can tolerate losing one device without LV
482              data loss.
483            • RAID6 LVs can tolerate losing two devices without LV data loss.
484            • RAID10 is variable, and depends on which devices are  lost.   It
485              stripes across multiple mirror groups with raid1 layout, so it
486              can tolerate losing all but one device in each of  these  groups
487              without LV data loss.
488
489       If  a RAID LV is missing devices, or has other device-related problems,
490       lvs reports this in the health_status (and attr) fields:
491
492       lvs -o name,lv_health_status
493
494       partial
495              Devices are missing from the LV.  This is also indicated by  the
496              letter "p" (partial) in the 9th position of the lvs attr field.
497
498       refresh needed
499              A device was temporarily missing but has returned.  The LV needs
500              to be refreshed to use the device again (which will usually  re‐
501              quire  partial  synchronization).  This is also indicated by the
502              letter "r" (refresh needed) in the 9th position of the lvs  attr
503              field.   See Refreshing an LV.  This could also indicate a prob‐
504       lem with the device, in which case it should be replaced; see
505              Replacing Devices.
506
507       mismatches exist
508              See Scrubbing.
509
510       Most commands will also print a warning if a device is missing, e.g.
511       WARNING: Device for PV uItL3Z-wBME-DQy0-... not found or rejected ...
512
513       This  warning will go away if the device returns or is removed from the
514       VG (see vgreduce --removemissing).
515
516   Activating an LV with missing devices
517       A RAID LV that is missing devices may be activated or not, depending on
518       the "activation mode" used in lvchange:
519
520       lvchange -ay --activationmode complete|degraded|partial LV
521
522       complete
523              The LV is only activated if all devices are present.
524
525       degraded
526              The  LV  is activated with missing devices if the RAID level can
527              tolerate the number of missing devices without LV data loss.
528
529       partial
530              The LV is always activated, even if portions of the LV data  are
531              missing  because  of the missing device(s).  This should only be
532              used to perform extreme recovery or repair operations.
533
534       Default activation mode when not specified by the command:
535       lvm.conf(5) activation/activation_mode
536
537       The default value is printed by:
538       # lvmconfig --type default activation/activation_mode
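
       For example (a sketch), to activate an LV that is missing a device
       but still has enough redundancy for its RAID level:

       # lvchange -ay --activationmode degraded vg/lv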
539
540   Replacing Devices
541       Devices in a RAID LV can be replaced by other devices in the VG.   When
542       replacing  devices that are no longer visible on the system, use lvcon‐
543       vert --repair.  When replacing devices that are still visible, use  lv‐
544       convert --replace.  The repair command will attempt to restore the same
545       number of data LVs that were previously in the LV.  The replace  option
546       can  be  repeated  to replace multiple PVs.  Replacement devices can be
547       optionally listed with either option.
548
549       lvconvert --repair LV [NewPVs]
550
551       lvconvert --replace OldPV LV [NewPV]
552
553       lvconvert --replace OldPV1 --replace OldPV2 LV [NewPVs]
554
555       New devices require synchronization with existing devices.
556       See Synchronization.
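
       For example (the PV names are hypothetical), replacing a device
       that has disappeared from the system, and replacing a still
       visible device, each with a listed replacement PV:

       # lvconvert --repair vg/lv /dev/sdf

       # lvconvert --replace /dev/sdb vg/lv /dev/sdf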
557
558   Refreshing an LV
559       Refreshing a RAID LV clears any transient device failures  (device  was
560       temporarily  disconnected)  and  returns  the LV to its fully redundant
561       mode.  Restoring a device will usually require at  least  partial  syn‐
562       chronization (see Synchronization).  Failure to clear a transient fail‐
563       ure results in the RAID LV operating in degraded mode until it is reac‐
564       tivated.  Use the lvchange command to refresh an LV:
565
566       lvchange --refresh LV
567
568       # lvs -o name,vgname,segtype,attr,size vg
569         LV VG   Type  Attr       LSize
570         lv vg   raid1 Rwi-a-r-r- 100.00g
571
572       # lvchange --refresh vg/lv
573
574       # lvs -o name,vgname,segtype,attr,size vg
575         LV VG   Type  Attr       LSize
576         lv vg   raid1 Rwi-a-r--- 100.00g
577
578   Automatic repair
579       If  a  device  in a RAID LV fails, device-mapper in the kernel notifies
580       the dmeventd(8) monitoring process (see Monitoring).  dmeventd  can  be
581       configured to automatically respond using:
582       lvm.conf(5) activation/raid_fault_policy
583
584       Possible settings are:
585
586       warn   A  warning  is  added to the system log indicating that a device
587              has failed in the RAID LV.  It is left to the user to repair the
588              LV, e.g.  replace failed devices.
589
590       allocate
591              dmeventd automatically attempts to repair the LV using spare de‐
592              vices in the VG.  Note that even a transient failure is  treated
593              as  a permanent failure under this setting.  A new device is al‐
594              located and full synchronization is started.
595
596       The specific command run by dmeventd(8) to warn or repair is:
597       lvconvert --repair --use-policies LV
598
599   Corrupted Data
600       Data on a device can be corrupted due to hardware  errors  without  the
601       device  ever  being  disconnected or there being any fault in the soft‐
602       ware.  This should be rare, and can be detected (see Scrubbing).
603
604   Rebuild specific PVs
605       If specific PVs in a RAID LV are known to have corrupt data,  the  data
606       on those PVs can be reconstructed with:
607
608       lvchange --rebuild PV LV
609
610       The  rebuild  option  can be repeated with different PVs to replace the
611       data on multiple PVs.
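
       For example (the PV name is hypothetical):

       # lvchange --rebuild /dev/sdb vg/lv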
612

DATA INTEGRITY

614       The device mapper integrity target can be used in combination with RAID
615       levels 1,4,5,6,10 to detect and correct data corruption in RAID images.
616       A dm-integrity layer is placed above each RAID image, and an extra  sub
617       LV is created to hold integrity metadata (data checksums) for each RAID
618       image.  When data is read from an image, integrity checksums  are  used
619       to  detect corruption. If detected, dm-raid reads the data from another
620       (good) image to return to the caller.  dm-raid will also  automatically
621       write the good data back to the image with bad data to correct the cor‐
622       ruption.
623
624       When creating a RAID LV with integrity, or adding integrity,  space  is
625       required  for  integrity  metadata.  Every 500MB of LV data requires an
626       additional 4MB to be allocated for integrity metadata,  for  each  RAID
627       image.
628
629       Create a RAID LV with integrity:
630       lvcreate --type raidN --raidintegrity y
631
632       Add integrity to an existing RAID LV:
633       lvconvert --raidintegrity y LV
634
635       Remove integrity from a RAID LV:
636       lvconvert --raidintegrity n LV
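
       For example (names and size are illustrative), creating a raid1 LV
       with integrity, then removing and re-adding it:

       # lvcreate --type raid1 --mirrors 1 --raidintegrity y \
              --name lv --size 100g vg
       # lvconvert --raidintegrity n vg/lv
       # lvconvert --raidintegrity y vg/lv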
637
638   Integrity options
639       --raidintegritymode journal|bitmap
640              Use  a  journal (default) or bitmap for keeping integrity check‐
641              sums consistent in case of a crash. The bitmap areas are  recal‐
642              culated after a crash, so corruption in those areas would not be
643              detected. A journal does not have  this  problem.   The  journal
644              mode  doubles writes to storage, but can improve performance for
645              scattered writes packed into a  single  journal  write.   bitmap
646              mode  can in theory achieve full write throughput of the device,
647              but would not benefit from the potential scattered  write  opti‐
648              mization.
649
650       --raidintegrityblocksize 512|1024|2048|4096
651              The  block size to use for dm-integrity on raid images.  The in‐
652              tegrity block size should usually match the device logical block
653              size,  or  the  file  system sector/block sizes.  It may be less
654              than the file system sector/block size, but not  less  than  the
655              device  logical  block  size.  Possible values: 512, 1024, 2048,
656              4096.
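
              To choose a matching value, the device logical and physical
              block sizes can be checked with blockdev(8), e.g. (the
              device name is an assumption):

              # blockdev --getss /dev/sda
              # blockdev --getpbsz /dev/sda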
657
658   Integrity initialization
659       When integrity is added to an LV, the kernel needs  to  initialize  the
660       integrity metadata (checksums) for all blocks in the LV.  The data cor‐
661       ruption checking performed by dm-integrity will only operate  on  areas
662       of the LV that are already initialized.  The progress of integrity ini‐
663       tialization is reported by the "syncpercent" LV  reporting  field  (and
664       under the Cpy%Sync lvs column.)
665
666   Integrity limitations
667       To  work  around  some  limitations, it is possible to remove integrity
668       from the LV, make the change, then  add  integrity  again.   (Integrity
669       metadata would need to be initialized when added again.)
670
671       LVM  must be able to allocate the integrity metadata sub LV on a single
672       PV that is already in use by the associated RAID image. This can poten‐
673       tially  cause  a problem during lvextend if the original PV holding the
674       image and integrity metadata is full.  To work around this  limitation,
675       remove integrity, extend the LV, and add integrity again.
676
677       Additional RAID images can be added to raid1 LVs, but not to other raid
678       levels.
679
680       A raid1 LV with integrity cannot be converted to linear (remove  integ‐
681       rity to do this.)
682
683       RAID  LVs  with  integrity  cannot yet be used as sub LVs with other LV
684       types.
685
686       The following are not yet permitted on RAID LVs with  integrity:  lvre‐
687       duce, pvmove, lvconvert --splitmirrors, lvchange --syncaction, lvchange
688       --rebuild.
689

RAID1 TUNING

691       A RAID1 LV can be tuned so that certain devices are avoided for reading
692       while all devices are still written to.
693
694       lvchange --[raid]writemostly PV[:y|n|t] LV
695
696       The specified device will be marked as "write mostly", which means that
697       reading from this device will be avoided, and  other  devices  will  be
698       preferred  for  reading  (unless no other devices are available.)  This
699       minimizes the I/O to the specified device.
700
701       If the PV name has no suffix, the write mostly attribute  is  set.   If
702       the  PV  name has the suffix :n, the write mostly attribute is cleared,
703       and the suffix :t toggles the current setting.
704
705       The write mostly option can be repeated on the command line  to  change
706       multiple devices at once.
707
708       To  report  the  current  write mostly setting, the lvs attr field will
709       show the letter "w" in the 9th position when write mostly is set:
710
711       lvs -a -o name,attr
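
       For example (the PV name is hypothetical), marking one device write
       mostly and later clearing the setting again:

       # lvchange --writemostly /dev/sdb vg/lv
       # lvchange --writemostly /dev/sdb:n vg/lv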
712
713       When a device is marked write mostly, the maximum number of outstanding
714       writes  to that device can be configured.  Once the maximum is reached,
715       further writes become synchronous.  When synchronous, a write to the LV
716       will not complete until writes to all the mirror images are complete.
717
718       lvchange --[raid]writebehind Number LV
719
720       To report the current write behind setting, run:
721
722       lvs -o name,raid_write_behind
723
724       When  write  behind  is  not configured, or set to 0, all LV writes are
725       synchronous.
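
       For example (the value is illustrative), to allow up to 256
       outstanding writes to write mostly devices before writes to the LV
       become synchronous:

       # lvchange --writebehind 256 vg/lv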
726

RAID TAKEOVER

728       RAID takeover is converting a RAID LV from one RAID level  to  another,
729       e.g.   raid5  to raid6.  Changing the RAID level is usually done to in‐
730       crease or decrease resilience to device failures or  to  restripe  LVs.
731       This  is  done using lvconvert and specifying the new RAID level as the
732       LV type:
733
734       lvconvert --type RaidLevel LV [PVs]
735
736       The most common and recommended RAID takeover conversions are:
737
738       linear to raid1
739              Linear is a single image of LV data, and converting it to  raid1
740              adds  a mirror image which is a direct copy of the original lin‐
741              ear image.
742
743       striped/raid0 to raid4/5/6
744              Adding parity devices to a striped volume results in raid4/5/6.
745
746       Unnatural conversions that are not recommended include  converting  be‐
747       tween  striped and non-striped types.  This is because file systems of‐
748       ten optimize I/O patterns based on device striping  values.   If  those
749       values change, it can decrease performance.
750
751       Converting  to  a  higher  RAID level requires allocating new SubLVs to
752       hold RAID metadata, and new SubLVs to hold parity blocks for  LV  data.
753       Converting  to a lower RAID level removes the SubLVs that are no longer
754       needed.
755
756       Conversion often requires full synchronization of the RAID LV (see Syn‐
757       chronization).  Converting to RAID1 requires copying all LV data blocks
758       to N new images on new devices.  Converting to a parity RAID level  re‐
759       quires  reading all LV data blocks, calculating parity, and writing the
760       new parity blocks.  Synchronization can take a long time  depending  on
761       the throughput of the devices used and the size of the RaidLV.  It can
762       degrade performance. Rate controls also apply to conversion; see --min‐
763       recoveryrate and --maxrecoveryrate.
764
765       Warning:  though  it  is possible to create striped LVs  with up to 128
766       stripes, a maximum of 64 stripes can  be  converted  to  raid0,  63  to
767       raid4/5  and 62 to raid6 because of the added parity SubLVs.  A striped
768       LV with a maximum of 32 stripes can be converted to raid10.
769
770       The following takeover conversions are currently possible:
771            • between striped and raid0.
772            • between linear and raid1.
773            • between mirror and raid1.
774            • between raid1 with two images and raid4/5.
775            • between striped/raid0 and raid4.
776            • between striped/raid0 and raid5.
777            • between striped/raid0 and raid6.
778            • between raid4 and raid5.
779            • between raid4/raid5 and raid6.
780            • between striped/raid0 and raid10.
781            • between striped and raid4.
782
783   Indirect conversions
784       Converting from one raid level to another may require  multiple  steps,
785       converting first to intermediate raid levels.
786
787       linear to raid6
788
789       To convert an LV from linear to raid6:
790       1. convert to raid1 with two images
791       2. convert to raid5 (internally raid5_ls) with two images
792       3. convert to raid5 with three or more stripes (reshape)
793       4. convert to raid6 (internally raid6_ls_6)
794       5. convert to raid6 (internally raid6_zr, reshape)
795
796       The commands to perform the steps above are:
797       1. lvconvert --type raid1 --mirrors 1 LV
798       2. lvconvert --type raid5 LV
799       3. lvconvert --stripes 3 LV
800       4. lvconvert --type raid6 LV
801       5. lvconvert --type raid6 LV
802
803       The  final  conversion from raid6_ls_6 to raid6_zr is done to avoid the
804       potential write/recovery performance reduction in raid6_ls_6 because of
805       the  dedicated  parity device.  raid6_zr rotates data and parity blocks
806       to avoid this.
807
808       linear to striped
809
810       To convert an LV from linear to striped:
811       1. convert to raid1 with two images
812       2. convert to raid5_n
813       3. convert to raid5_n with five 128k stripes (reshape)
814       4. convert raid5_n to striped
815
816       The commands to perform the steps above are:
817       1. lvconvert --type raid1 --mirrors 1 LV
818       2. lvconvert --type raid5_n LV
819       3. lvconvert --stripes 5 --stripesize 128k LV
820       4. lvconvert --type striped LV
821
822       The raid5_n type in step 2 is used because it has dedicated parity Sub‐
823       LVs at the end, and can be converted to striped directly.  The number
824       of stripes is increased in step 3 to add extra space for the conversion
825       process.   This step grows the LV size by a factor of five.  After con‐
826       version, this extra space can be reduced (or used to grow the file sys‐
827       tem using the LV).
828
829       Reversing these steps will convert a striped LV to linear.
830
831       raid6 to striped
832
833       To convert an LV from raid6_nr to striped:
834       1. convert to raid6_n_6
835       2. convert to striped
836
837       The commands to perform the steps above are:
838       1. lvconvert --type raid6_n_6 LV
839       2. lvconvert --type striped LV
840
841       Examples
842
843       Converting an LV from linear to raid1.
844
845       # lvs -a -o name,segtype,size vg
846         LV   Type   LSize
847         lv   linear 300.00g
848
849       # lvconvert --type raid1 --mirrors 1 vg/lv
850
851       # lvs -a -o name,segtype,size vg
852         LV            Type   LSize
853         lv            raid1  300.00g
854         [lv_rimage_0] linear 300.00g
855         [lv_rimage_1] linear 300.00g
856         [lv_rmeta_0]  linear   3.00m
857         [lv_rmeta_1]  linear   3.00m
858
859       Converting an LV from mirror to raid1.
860
861       # lvs -a -o name,segtype,size vg
862         LV            Type   LSize
863         lv            mirror 100.00g
864         [lv_mimage_0] linear 100.00g
865         [lv_mimage_1] linear 100.00g
866         [lv_mlog]     linear   3.00m
867
868       # lvconvert --type raid1 vg/lv
869
870       # lvs -a -o name,segtype,size vg
871         LV            Type   LSize
872         lv            raid1  100.00g
873         [lv_rimage_0] linear 100.00g
874         [lv_rimage_1] linear 100.00g
875         [lv_rmeta_0]  linear   3.00m
876         [lv_rmeta_1]  linear   3.00m
877
878       Converting an LV from linear to raid1 (with 3 images).
879
880       # lvconvert --type raid1 --mirrors 2 vg/lv
881
882       Converting an LV from striped (with 4 stripes) to raid6_n_6.
883
884       # lvcreate --stripes 4 -L64M -n lv vg
885
886       # lvconvert --type raid6 vg/lv
887
888       # lvs -a -o lv_name,segtype,sync_percent,data_copies
889         LV            Type      Cpy%Sync #Cpy
890         lv            raid6_n_6 100.00      3
891         [lv_rimage_0] linear
892         [lv_rimage_1] linear
893         [lv_rimage_2] linear
894         [lv_rimage_3] linear
895         [lv_rimage_4] linear
896         [lv_rimage_5] linear
897         [lv_rmeta_0]  linear
898         [lv_rmeta_1]  linear
899         [lv_rmeta_2]  linear
900         [lv_rmeta_3]  linear
901         [lv_rmeta_4]  linear
902         [lv_rmeta_5]  linear
903
904       This conversion begins by allocating MetaLVs (rmeta_#) for each of the ex‐
905       isting stripe devices.  It  then  creates  2  additional  MetaLV/DataLV
906       pairs (rmeta_#/rimage_#) for dedicated raid6 parity.
907
908       If  rotating data/parity is required, such as with raid6_nr, it must be
909       done by reshaping (see below).
910

RAID RESHAPING

912       RAID reshaping is changing attributes of a RAID LV  while  keeping  the
913       same  RAID  level.  This includes changing RAID layout, stripe size, or
914       number of stripes.
915
916       When changing the RAID layout or stripe size, no new SubLVs (MetaLVs or
917       DataLVs)  need  to  be  allocated,  but DataLVs are extended by a small
918       amount (typically 1 extent).  The extra space allows blocks in a stripe
919       to  be  updated  safely, and not be corrupted in case of a crash.  If a
920       crash occurs, reshaping can just be restarted.
921
922       (If blocks in a stripe were updated in place, a crash could leave  them
923       partially  updated  and corrupted.  Instead, an existing stripe is qui‐
924       esced, read, changed in layout, and the  new  stripe  written  to  free
925       space.  Once that is done, the new stripe is unquiesced and used.)
926
927       Examples
928       (Command output shown in examples may change.)
929
930       Converting raid6_n_6 to raid6_nr with rotating data/parity.
931
932       This   conversion   naturally   follows   a  previous  conversion  from
933       striped/raid0 to raid6_n_6 (shown above).  It completes the  transition
934       to a more traditional RAID6.
935
936       # lvs -o lv_name,segtype,sync_percent,data_copies
937         LV            Type      Cpy%Sync #Cpy
938         lv            raid6_n_6 100.00      3
939         [lv_rimage_0] linear
940         [lv_rimage_1] linear
941         [lv_rimage_2] linear
942         [lv_rimage_3] linear
943         [lv_rimage_4] linear
944         [lv_rimage_5] linear
945         [lv_rmeta_0]  linear
946         [lv_rmeta_1]  linear
947         [lv_rmeta_2]  linear
948         [lv_rmeta_3]  linear
949         [lv_rmeta_4]  linear
950         [lv_rmeta_5]  linear
951
952       # lvconvert --type raid6_nr vg/lv
953
954       # lvs -a -o lv_name,segtype,sync_percent,data_copies
955         LV            Type     Cpy%Sync #Cpy
956         lv            raid6_nr 100.00      3
957         [lv_rimage_0] linear
958         [lv_rimage_0] linear
959         [lv_rimage_1] linear
960         [lv_rimage_1] linear
961         [lv_rimage_2] linear
962         [lv_rimage_2] linear
963         [lv_rimage_3] linear
964         [lv_rimage_3] linear
965         [lv_rimage_4] linear
966         [lv_rimage_5] linear
967         [lv_rmeta_0]  linear
968         [lv_rmeta_1]  linear
969         [lv_rmeta_2]  linear
970         [lv_rmeta_3]  linear
971         [lv_rmeta_4]  linear
972         [lv_rmeta_5]  linear
973
974       The DataLVs are larger (an additional segment in each), which provides
975       space for out-of-place reshaping.  The result is:
976
977       # lvs -a -o lv_name,segtype,seg_pe_ranges,dataoffset
978         LV            Type     PE Ranges          DOff
979         lv            raid6_nr lv_rimage_0:0-32 \
980                                lv_rimage_1:0-32 \
981                                lv_rimage_2:0-32 \
982                                lv_rimage_3:0-32
983         [lv_rimage_0] linear   /dev/sda:0-31      2048
984         [lv_rimage_0] linear   /dev/sda:33-33
985         [lv_rimage_1] linear   /dev/sdaa:0-31     2048
986         [lv_rimage_1] linear   /dev/sdaa:33-33
987         [lv_rimage_2] linear   /dev/sdab:1-33     2048
988         [lv_rimage_3] linear   /dev/sdac:1-33     2048
989         [lv_rmeta_0]  linear   /dev/sda:32-32
990         [lv_rmeta_1]  linear   /dev/sdaa:32-32
991         [lv_rmeta_2]  linear   /dev/sdab:0-0
992         [lv_rmeta_3]  linear   /dev/sdac:0-0
993
994       All segments with PE ranges '33-33' provide  the  out-of-place  reshape
995       space.   The  dataoffset column shows that the data was moved from ini‐
996       tial offset 0 to 2048 sectors on each component DataLV.
997
998       For performance reasons the raid6_nr RaidLV can be restriped.   Convert
999       it from 3-way striped to 5-way-striped.
1000
1001       # lvconvert --stripes 5 vg/lv
1002         Using default stripesize 64.00 KiB.
1003         WARNING: Adding stripes to active logical volume vg/lv will \
1004         grow it from 99 to 165 extents!
1005         Run "lvresize -l99 vg/lv" to shrink it or use the additional \
1006         capacity.
1007         Logical volume vg/lv successfully converted.
1008
1009       # lvs vg/lv
1010         LV   VG     Attr       LSize   Cpy%Sync
1011         lv   vg     rwi-a-r-s- 652.00m 52.94
1012
1013       # lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
1014         LV            Attr       Type     PE Ranges          DOff
1015         lv            rwi-a-r--- raid6_nr lv_rimage_0:0-33 \
1016                                           lv_rimage_1:0-33 \
1017                                           lv_rimage_2:0-33 ... \
1018                                           lv_rimage_5:0-33 \
1019                                           lv_rimage_6:0-33   0
1020         [lv_rimage_0] iwi-aor--- linear   /dev/sda:0-32      0
1021         [lv_rimage_0] iwi-aor--- linear   /dev/sda:34-34
1022         [lv_rimage_1] iwi-aor--- linear   /dev/sdaa:0-32     0
1023         [lv_rimage_1] iwi-aor--- linear   /dev/sdaa:34-34
1024         [lv_rimage_2] iwi-aor--- linear   /dev/sdab:0-32     0
1025         [lv_rimage_2] iwi-aor--- linear   /dev/sdab:34-34
1026         [lv_rimage_3] iwi-aor--- linear   /dev/sdac:1-34     0
1027         [lv_rimage_4] iwi-aor--- linear   /dev/sdad:1-34     0
1028         [lv_rimage_5] iwi-aor--- linear   /dev/sdae:1-34     0
1029         [lv_rimage_6] iwi-aor--- linear   /dev/sdaf:1-34     0
1030         [lv_rmeta_0]  ewi-aor--- linear   /dev/sda:33-33
1031         [lv_rmeta_1]  ewi-aor--- linear   /dev/sdaa:33-33
1032         [lv_rmeta_2]  ewi-aor--- linear   /dev/sdab:33-33
1033         [lv_rmeta_3]  ewi-aor--- linear   /dev/sdac:0-0
1034         [lv_rmeta_4]  ewi-aor--- linear   /dev/sdad:0-0
1035         [lv_rmeta_5]  ewi-aor--- linear   /dev/sdae:0-0
1036         [lv_rmeta_6]  ewi-aor--- linear   /dev/sdaf:0-0
1037
1038       Stripes can also be removed from raid5 and raid6.  Convert the 5-way
1039       striped raid6_nr LV to 4-way-striped.  The force  option  needs  to  be
1040       used,  because  removing stripes (i.e. image SubLVs) from a RaidLV will
1041       shrink its size.
1042
1043       # lvconvert --stripes 4 vg/lv
1044         Using default stripesize 64.00 KiB.
1045         WARNING: Removing stripes from active logical volume vg/lv will \
1046         shrink it from 660.00 MiB to 528.00 MiB!
1047         THIS MAY DESTROY (PARTS OF) YOUR DATA!
1048         If that leaves the logical volume larger than 206 extents due \
1049         to stripe rounding,
1050         you may want to grow the content afterwards (filesystem etc.)
1051         WARNING: to remove freed stripes after the conversion has finished,\
1052         you have to run "lvconvert --stripes 4 vg/lv"
1053         Logical volume vg/lv successfully converted.
1054
1055       # lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
1056         LV            Attr       Type     PE Ranges          DOff
1057         lv            rwi-a-r-s- raid6_nr lv_rimage_0:0-33 \
1058                                           lv_rimage_1:0-33 \
1059                                           lv_rimage_2:0-33 ... \
1060                                           lv_rimage_5:0-33 \
1061                                           lv_rimage_6:0-33   0
1062         [lv_rimage_0] Iwi-aor--- linear   /dev/sda:0-32      0
1063         [lv_rimage_0] Iwi-aor--- linear   /dev/sda:34-34
1064         [lv_rimage_1] Iwi-aor--- linear   /dev/sdaa:0-32     0
1065         [lv_rimage_1] Iwi-aor--- linear   /dev/sdaa:34-34
1066         [lv_rimage_2] Iwi-aor--- linear   /dev/sdab:0-32     0
1067         [lv_rimage_2] Iwi-aor--- linear   /dev/sdab:34-34
1068         [lv_rimage_3] Iwi-aor--- linear   /dev/sdac:1-34     0
1069         [lv_rimage_4] Iwi-aor--- linear   /dev/sdad:1-34     0
1070         [lv_rimage_5] Iwi-aor--- linear   /dev/sdae:1-34     0
1071         [lv_rimage_6] Iwi-aor-R- linear   /dev/sdaf:1-34     0
1072         [lv_rmeta_0]  ewi-aor--- linear   /dev/sda:33-33
1073         [lv_rmeta_1]  ewi-aor--- linear   /dev/sdaa:33-33
1074         [lv_rmeta_2]  ewi-aor--- linear   /dev/sdab:33-33
1075         [lv_rmeta_3]  ewi-aor--- linear   /dev/sdac:0-0
1076         [lv_rmeta_4]  ewi-aor--- linear   /dev/sdad:0-0
1077         [lv_rmeta_5]  ewi-aor--- linear   /dev/sdae:0-0
1078         [lv_rmeta_6]  ewi-aor-R- linear   /dev/sdaf:0-0
1079
1080       The 's' in column 9 of the attribute field shows the  RaidLV  is  still
1081       reshaping.  The 'R' in the same column of the attribute field shows the
1082       freed image SubLVs, which will need removing once the reshaping has
1083       finished.
1084
1085       # lvs -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
1086         LV   Attr       Type     PE Ranges          DOff
1087         lv   rwi-a-r-R- raid6_nr lv_rimage_0:0-33 \
1088                                  lv_rimage_1:0-33 \
1089                                  lv_rimage_2:0-33 ... \
1090                                  lv_rimage_5:0-33 \
1091                                  lv_rimage_6:0-33   8192
1092
1093       Now that the reshape is finished, the 'R' attribute on the RaidLV shows
1094       images can be removed.
1103
1104       This is achieved by  repeating  the  command  ("lvconvert  --stripes  4
1105       vg/lv" would be sufficient).
1106
1107       # lvconvert --stripes 4 vg/lv
1108         Using default stripesize 64.00 KiB.
1109         Logical volume vg/lv successfully converted.
1110
1111       # lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
1112         LV            Attr       Type     PE Ranges          DOff
1113         lv            rwi-a-r--- raid6_nr lv_rimage_0:0-33 \
1114                                           lv_rimage_1:0-33 \
1115                                           lv_rimage_2:0-33 ... \
1116                                           lv_rimage_5:0-33   8192
1117         [lv_rimage_0] iwi-aor--- linear   /dev/sda:0-32      8192
1118         [lv_rimage_0] iwi-aor--- linear   /dev/sda:34-34
1119         [lv_rimage_1] iwi-aor--- linear   /dev/sdaa:0-32     8192
1120         [lv_rimage_1] iwi-aor--- linear   /dev/sdaa:34-34
1121         [lv_rimage_2] iwi-aor--- linear   /dev/sdab:0-32     8192
1122         [lv_rimage_2] iwi-aor--- linear   /dev/sdab:34-34
1123         [lv_rimage_3] iwi-aor--- linear   /dev/sdac:1-34     8192
1124         [lv_rimage_4] iwi-aor--- linear   /dev/sdad:1-34     8192
1125         [lv_rimage_5] iwi-aor--- linear   /dev/sdae:1-34     8192
1126         [lv_rmeta_0]  ewi-aor--- linear   /dev/sda:33-33
1127         [lv_rmeta_1]  ewi-aor--- linear   /dev/sdaa:33-33
1128         [lv_rmeta_2]  ewi-aor--- linear   /dev/sdab:33-33
1129         [lv_rmeta_3]  ewi-aor--- linear   /dev/sdac:0-0
1130         [lv_rmeta_4]  ewi-aor--- linear   /dev/sdad:0-0
1131         [lv_rmeta_5]  ewi-aor--- linear   /dev/sdae:0-0
1132
1133       # lvs -a -o lv_name,attr,segtype,reshapelen vg
1134         LV            Attr       Type     RSize
1135         lv            rwi-a-r--- raid6_nr 24.00m
1136         [lv_rimage_0] iwi-aor--- linear    4.00m
1137         [lv_rimage_0] iwi-aor--- linear
1138         [lv_rimage_1] iwi-aor--- linear    4.00m
1139         [lv_rimage_1] iwi-aor--- linear
1140         [lv_rimage_2] iwi-aor--- linear    4.00m
1141         [lv_rimage_2] iwi-aor--- linear
1142         [lv_rimage_3] iwi-aor--- linear    4.00m
1143         [lv_rimage_4] iwi-aor--- linear    4.00m
1144         [lv_rimage_5] iwi-aor--- linear    4.00m
1145         [lv_rmeta_0]  ewi-aor--- linear
1146         [lv_rmeta_1]  ewi-aor--- linear
1147         [lv_rmeta_2]  ewi-aor--- linear
1148         [lv_rmeta_3]  ewi-aor--- linear
1149         [lv_rmeta_4]  ewi-aor--- linear
1150         [lv_rmeta_5]  ewi-aor--- linear
1151
1152       Future  developments  might  include automatic removal of the freed im‐
1153       ages.
1154
1155       If the reshape space is to be removed, any lvconvert command that does
1156       not change the layout can be used:
1157
1158       # lvconvert --stripes 4 vg/lv
1159         Using default stripesize 64.00 KiB.
1160         No change in RAID LV vg/lv layout, freeing reshape space.
1161         Logical volume vg/lv successfully converted.
1162
1163       # lvs -a -o lv_name,attr,segtype,reshapelen vg
1164         LV            Attr       Type     RSize
1165         lv            rwi-a-r--- raid6_nr    0
1166         [lv_rimage_0] iwi-aor--- linear      0
1167         [lv_rimage_0] iwi-aor--- linear
1168         [lv_rimage_1] iwi-aor--- linear      0
1169         [lv_rimage_1] iwi-aor--- linear
1170         [lv_rimage_2] iwi-aor--- linear      0
1171         [lv_rimage_2] iwi-aor--- linear
1172         [lv_rimage_3] iwi-aor--- linear      0
1173         [lv_rimage_4] iwi-aor--- linear      0
1174         [lv_rimage_5] iwi-aor--- linear      0
1175         [lv_rmeta_0]  ewi-aor--- linear
1176         [lv_rmeta_1]  ewi-aor--- linear
1177         [lv_rmeta_2]  ewi-aor--- linear
1178         [lv_rmeta_3]  ewi-aor--- linear
1179         [lv_rmeta_4]  ewi-aor--- linear
1180         [lv_rmeta_5]  ewi-aor--- linear
1181
1182       If the RaidLV should then be converted to striped:

       # lvconvert --type striped vg/lv
         Unable to convert LV vg/lv from raid6_nr to striped.
         Converting vg/lv from raid6_nr is directly possible to the \
         following layouts:
           raid6_nc
           raid6_zr
           raid6_la_6
           raid6_ls_6
           raid6_ra_6
           raid6_rs_6
           raid6_n_6

       A direct conversion isn't possible, so the command lists the layouts
       that are.  raid6_n_6 is suitable for converting to striped, so convert
       to it first (this is a reshape changing the raid6 layout from raid6_nr
       to raid6_n_6).

       # lvconvert --type raid6_n_6 vg/lv
         Using default stripesize 64.00 KiB.
         Converting raid6_nr LV vg/lv to raid6_n_6.
       Are you sure you want to convert raid6_nr LV vg/lv? [y/n]: y
         Logical volume vg/lv successfully converted.

       Wait for the reshape to finish.
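
       One way to watch the progress is to poll the synchronization
       percentage (a sketch; during a reshape the Cpy%Sync field reports how
       far it has advanced):

       # lvs -a -o name,segtype,syncpercent vg/lv

       Repeat (or use watch(1)) until the reported value reaches 100.00.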

       # lvconvert --type striped vg/lv
         Logical volume vg/lv successfully converted.

       # lvs -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
         LV   Attr       Type    PE Ranges  DOff
         lv   -wi-a----- striped /dev/sda:2-32 \
                                 /dev/sdaa:2-32 \
                                 /dev/sdab:2-32 \
                                 /dev/sdac:3-33
         lv   -wi-a----- striped /dev/sda:34-35 \
                                 /dev/sdaa:34-35 \
                                 /dev/sdab:34-35 \
                                 /dev/sdac:34-35

       From striped we can convert to raid10.

       # lvconvert --type raid10 vg/lv
         Using default stripesize 64.00 KiB.
         Logical volume vg/lv successfully converted.

       # lvs -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
         LV   Attr       Type   PE Ranges          DOff
         lv   rwi-a-r--- raid10 lv_rimage_0:0-32 \
                                lv_rimage_4:0-32 \
                                lv_rimage_1:0-32 ... \
                                lv_rimage_3:0-32 \
                                lv_rimage_7:0-32   0

       # lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
         WARNING: Cannot find matching striped segment for vg/lv_rimage_3.
         LV            Attr       Type   PE Ranges          DOff
         lv            rwi-a-r--- raid10 lv_rimage_0:0-32 \
                                         lv_rimage_4:0-32 \
                                         lv_rimage_1:0-32 ... \
                                         lv_rimage_3:0-32 \
                                         lv_rimage_7:0-32   0
         [lv_rimage_0] iwi-aor--- linear /dev/sda:2-32      0
         [lv_rimage_0] iwi-aor--- linear /dev/sda:34-35
         [lv_rimage_1] iwi-aor--- linear /dev/sdaa:2-32     0
         [lv_rimage_1] iwi-aor--- linear /dev/sdaa:34-35
         [lv_rimage_2] iwi-aor--- linear /dev/sdab:2-32     0
         [lv_rimage_2] iwi-aor--- linear /dev/sdab:34-35
         [lv_rimage_3] iwi-XXr--- linear /dev/sdac:3-35     0
         [lv_rimage_4] iwi-aor--- linear /dev/sdad:1-33     0
         [lv_rimage_5] iwi-aor--- linear /dev/sdae:1-33     0
         [lv_rimage_6] iwi-aor--- linear /dev/sdaf:1-33     0
         [lv_rimage_7] iwi-aor--- linear /dev/sdag:1-33     0
         [lv_rmeta_0]  ewi-aor--- linear /dev/sda:0-0
         [lv_rmeta_1]  ewi-aor--- linear /dev/sdaa:0-0
         [lv_rmeta_2]  ewi-aor--- linear /dev/sdab:0-0
         [lv_rmeta_3]  ewi-aor--- linear /dev/sdac:0-0
         [lv_rmeta_4]  ewi-aor--- linear /dev/sdad:0-0
         [lv_rmeta_5]  ewi-aor--- linear /dev/sdae:0-0
         [lv_rmeta_6]  ewi-aor--- linear /dev/sdaf:0-0
         [lv_rmeta_7]  ewi-aor--- linear /dev/sdag:0-0

       raid10 allows adding stripes but not removing them.
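
       For example, growing the raid10 LV above from 4 to 5 data stripes
       could look like the following sketch (the stripe count is
       illustrative; the new image pair is allocated as in the other
       conversions):

       # lvconvert --stripes 5 vg/lv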

       A more elaborate example converts from linear to striped with interim
       conversions to raid1, then raid5, followed by a restripe (4 steps).

       We start with the linear LV.

       # lvs -a -o name,size,segtype,syncpercent,datastripes,\
                   stripesize,reshapelenle,devices vg
         LV   LSize   Type   Cpy%Sync #DStr Stripe RSize Devices
         lv   128.00m linear              1     0        /dev/sda(0)

       Then convert it to a 2-way raid1.

       # lvconvert --mirrors 1 vg/lv
         Logical volume vg/lv successfully converted.

       # lvs -a -o name,size,segtype,datastripes,\
                   stripesize,reshapelenle,devices vg
         LV            LSize   Type   #DStr Stripe RSize Devices
         lv            128.00m raid1      2     0        lv_rimage_0(0),\
                                                         lv_rimage_1(0)
         [lv_rimage_0] 128.00m linear     1     0        /dev/sda(0)
         [lv_rimage_1] 128.00m linear     1     0        /dev/sdhx(1)
         [lv_rmeta_0]    4.00m linear     1     0        /dev/sda(32)
         [lv_rmeta_1]    4.00m linear     1     0        /dev/sdhx(0)

       Once the raid1 LV is fully synchronized, we convert it to raid5_n
       (only 2-way raid1 LVs can be converted to raid5).  We select raid5_n
       here because it has dedicated parity SubLVs at the end and can be
       converted to striped directly without any additional conversion.

       # lvconvert --type raid5_n vg/lv
         Using default stripesize 64.00 KiB.
         Logical volume vg/lv successfully converted.

       # lvs -a -o name,size,segtype,syncpercent,datastripes,\
                   stripesize,reshapelenle,devices vg
         LV            LSize   Type    #DStr Stripe RSize Devices
         lv            128.00m raid5_n     1 64.00k     0 lv_rimage_0(0),\
                                                          lv_rimage_1(0)
         [lv_rimage_0] 128.00m linear      1     0      0 /dev/sda(0)
         [lv_rimage_1] 128.00m linear      1     0      0 /dev/sdhx(1)
         [lv_rmeta_0]    4.00m linear      1     0        /dev/sda(32)
         [lv_rmeta_1]    4.00m linear      1     0        /dev/sdhx(0)

       Now we'll change the number of data stripes from 1 to 5 and request a
       128K stripe size in one command.  This will grow the size of the LV by
       a factor of 5 (we add 4 data stripes to the one given).  That
       additional space can be used, e.g. by growing any contained
       filesystem, or the LV can be reduced in size after the reshaping
       conversion has finished.

       # lvconvert --stripesize 128k --stripes 5 vg/lv
         Converting stripesize 64.00 KiB of raid5_n LV vg/lv to 128.00 KiB.
         WARNING: Adding stripes to active logical volume vg/lv will grow \
         it from 32 to 160 extents!
         Run "lvresize -l32 vg/lv" to shrink it or use the additional capacity.
         Logical volume vg/lv successfully converted.

       # lvs -a -o name,size,segtype,datastripes,\
                   stripesize,reshapelenle,devices vg
         LV            LSize   Type    #DStr Stripe  RSize Devices
         lv            640.00m raid5_n     5 128.00k     6 lv_rimage_0(0),\
                                                           lv_rimage_1(0),\
                                                           lv_rimage_2(0),\
                                                           lv_rimage_3(0),\
                                                           lv_rimage_4(0),\
                                                           lv_rimage_5(0)
         [lv_rimage_0] 132.00m linear      1      0      1 /dev/sda(33)
         [lv_rimage_0] 132.00m linear      1      0        /dev/sda(0)
         [lv_rimage_1] 132.00m linear      1      0      1 /dev/sdhx(33)
         [lv_rimage_1] 132.00m linear      1      0        /dev/sdhx(1)
         [lv_rimage_2] 132.00m linear      1      0      1 /dev/sdhw(33)
         [lv_rimage_2] 132.00m linear      1      0        /dev/sdhw(1)
         [lv_rimage_3] 132.00m linear      1      0      1 /dev/sdhv(33)
         [lv_rimage_3] 132.00m linear      1      0        /dev/sdhv(1)
         [lv_rimage_4] 132.00m linear      1      0      1 /dev/sdhu(33)
         [lv_rimage_4] 132.00m linear      1      0        /dev/sdhu(1)
         [lv_rimage_5] 132.00m linear      1      0      1 /dev/sdht(33)
         [lv_rimage_5] 132.00m linear      1      0        /dev/sdht(1)
         [lv_rmeta_0]    4.00m linear      1      0        /dev/sda(32)
         [lv_rmeta_1]    4.00m linear      1      0        /dev/sdhx(0)
         [lv_rmeta_2]    4.00m linear      1      0        /dev/sdhw(0)
         [lv_rmeta_3]    4.00m linear      1      0        /dev/sdhv(0)
         [lv_rmeta_4]    4.00m linear      1      0        /dev/sdhu(0)
         [lv_rmeta_5]    4.00m linear      1      0        /dev/sdht(0)

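       As the warning above notes, once the reshape has finished the
       additional capacity can either be used, e.g. by growing a contained
       filesystem, or the LV can be shrunk back.  A sketch of the two
       alternatives, assuming an ext4 filesystem on the LV (grow the
       filesystem, or shrink the LV back to its original 32 extents):

       # resize2fs /dev/vg/lv

       # lvresize -l32 vg/lv
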
       Once the conversion has finished we can convert to striped.

       # lvconvert --type striped vg/lv
         Logical volume vg/lv successfully converted.

       # lvs -a -o name,size,segtype,datastripes,\
                   stripesize,reshapelenle,devices vg
         LV   LSize   Type    #DStr Stripe  RSize Devices
         lv   640.00m striped     5 128.00k       /dev/sda(33),\
                                                  /dev/sdhx(33),\
                                                  /dev/sdhw(33),\
                                                  /dev/sdhv(33),\
                                                  /dev/sdhu(33)
         lv   640.00m striped     5 128.00k       /dev/sda(0),\
                                                  /dev/sdhx(1),\
                                                  /dev/sdhw(1),\
                                                  /dev/sdhv(1),\
                                                  /dev/sdhu(1)

       Reversing these steps will convert a given striped LV to linear.

       Keep in mind that removing stripes shrinks the capacity of the RaidLV
       and that changing the RaidLV layout will influence its performance.

       "lvconvert --stripes 1 vg/lv" (converting to 1 stripe) will inform
       upfront about the reduced size, allowing the content to be resized or
       the RaidLV to be grown before the conversion takes place.  The --force
       option is required for stripe-removing conversions, to prevent
       accidental data loss.
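
       A sketch of such a reversal of the example above (striped with 5 data
       stripes back to linear; each step may have to wait for the preceding
       synchronization or reshape to finish, and a layout-unchanging
       lvconvert can be used to free reshape space as shown earlier):

       # lvconvert --type raid5_n vg/lv

       # lvconvert --force --stripes 1 vg/lv

       # lvconvert --type raid1 vg/lv

       # lvconvert --mirrors 0 vg/lv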

       Of course any interim step can be the intended last one (e.g. striped →
       raid1).


RAID5 VARIANTS

       raid5_ls
            • RAID5 left symmetric
            • Rotating parity N with data restart

       raid5_la
            • RAID5 left asymmetric
            • Rotating parity N with data continuation

       raid5_rs
            • RAID5 right symmetric
            • Rotating parity 0 with data restart

       raid5_ra
            • RAID5 right asymmetric
            • Rotating parity 0 with data continuation

       raid5_n
            • RAID5 parity n
            • Dedicated parity device n used for striped/raid0 conversions
            • Used for RAID Takeover

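       The variant names above are segment types that can be requested
       directly with --type, for example (a sketch; LV name, size and stripe
       count are placeholders):

       # lvcreate --type raid5_ra --stripes 3 --size 1g --name lv vg

       # lvconvert --type raid5_n vg/lv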

RAID6 VARIANTS

       raid6
            • RAID6 zero restart (aka left symmetric)
            • Rotating parity 0 with data restart
            • Same as raid6_zr

       raid6_zr
            • RAID6 zero restart (aka left symmetric)
            • Rotating parity 0 with data restart

       raid6_nr
            • RAID6 N restart (aka right symmetric)
            • Rotating parity N with data restart

       raid6_nc
            • RAID6 N continue
            • Rotating parity N with data continuation

       raid6_n_6
            • RAID6 last parity devices
            • Fixed dedicated last devices (P-Syndrome N-1 and Q-Syndrome N)
              with striped data used for striped/raid0 conversions
            • Used for RAID Takeover

       raid6_{ls,rs,la,ra}_6
            • RAID6 last parity device
            • Dedicated last parity device used for conversions from/to
              raid5_{ls,rs,la,ra}

       raid6_ls_6
            • RAID6 N continue
            • Same as raid5_ls for N-1 devices with fixed Q-Syndrome N
            • Used for RAID Takeover

       raid6_la_6
            • RAID6 N continue
            • Same as raid5_la for N-1 devices with fixed Q-Syndrome N
            • Used for RAID Takeover

       raid6_rs_6
            • RAID6 N continue
            • Same as raid5_rs for N-1 devices with fixed Q-Syndrome N
            • Used for RAID Takeover

       raid6_ra_6
            • RAID6 N continue
            • Same as raid5_ra for N-1 devices with fixed Q-Syndrome N
            • Used for RAID Takeover

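       The raid6 variant names are likewise segment types usable with --type.
       For example, a sketch of taking over a raid5_ls LV to its matching
       raid6 variant (if a direct conversion is not possible, lvconvert lists
       the possible layouts, as shown earlier):

       # lvconvert --type raid6_ls_6 vg/lv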

HISTORY

       The 2.6.38-rc1 version of the Linux kernel introduced a device-mapper
       target to interface with the software RAID (MD) personalities.  This
       provided device-mapper with RAID 4/5/6 capabilities and a larger
       development community.  Later, support for RAID1, RAID10, and RAID1E
       (RAID 10 variants) was added.  Support for these new kernel RAID
       targets was added to LVM version 2.02.87.  The capabilities of the LVM
       raid1 type have surpassed those of the old mirror type, and raid1 is
       now recommended instead of mirror.  raid1 became the default for
       mirroring in LVM version 2.02.100.


SEE ALSO

       lvm(8), lvm.conf(5), lvcreate(8), lvconvert(8), lvchange(8),
       lvextend(8), dmeventd(8)



Red Hat, Inc           LVM TOOLS 2.03.22(2) (2023-08-02)            LVMRAID(7)