LVCREATE(8)                 System Manager's Manual                LVCREATE(8)

NAME

       lvcreate - create a logical volume in an existing volume group

SYNOPSIS

       lvcreate [-a|--activate [a][e|l|s]{y|n}] [--addtag Tag] [--alloc
       AllocationPolicy] [-A|--autobackup {y|n}] [-H|--cache] [--cachemode
       {passthrough|writeback|writethrough}] [--cachepolicy policy]
       [--cachepool CachePoolLogicalVolume] [--cachesettings key=value]
       [-c|--chunksize ChunkSize] [--commandprofile ProfileName]
       [-C|--contiguous {y|n}] [-d|--debug] [--discards
       {ignore|nopassdown|passdown}] [--errorwhenfull {y|n}] [{-l|--extents
       LogicalExtentsNumber[%{FREE|PVS|VG}] | -L|--size LogicalVolumeSize}
       [-i|--stripes Stripes [-I|--stripesize StripeSize]]] [-h|-?|--help]
       [-K|--ignoreactivationskip] [--ignoremonitoring] [--minor minor
       [-j|--major major]] [--metadataprofile ProfileName] [-m|--mirrors
       Mirrors [--corelog|--mirrorlog {disk|core|mirrored}] [--nosync]
       [-R|--regionsize MirrorLogRegionSize]] [--monitor {y|n}] [-n|--name
       LogicalVolume] [--noudevsync] [-p|--permission {r|rw}]
       [-M|--persistent {y|n}] [--poolmetadatasize MetadataVolumeSize]
       [--poolmetadataspare {y|n}] [--[raid]maxrecoveryrate Rate]
       [--[raid]minrecoveryrate Rate] [-r|--readahead
       {ReadAheadSectors|auto|none}] [-k|--setactivationskip {y|n}]
       [-s|--snapshot] [-V|--virtualsize VirtualSize] [-t|--test]
       [-T|--thin] [--thinpool ThinPoolLogicalVolume] [--type SegmentType]
       [-v|--verbose] [-W|--wipesignatures {y|n}] [-Z|--zero {y|n}]
       [VolumeGroup[/{ExternalOrigin|Origin|Pool}LogicalVolumeName
       [PhysicalVolumePath[:PE[-PE]]...]]

       lvcreate [-l|--extents LogicalExtentsNumber[%{FREE|ORIGIN|PVS|VG}] |
       -L|--size LogicalVolumeSize] [-c|--chunksize ChunkSize]
       [--commandprofile ProfileName] [--noudevsync] [--ignoremonitoring]
       [--metadataprofile ProfileName] [--monitor {y|n}] [-n|--name
       SnapshotLogicalVolume] -s|--snapshot|-H|--cache
       {[VolumeGroup/]OriginalLogicalVolume [-V|--virtualsize VirtualSize]}

DESCRIPTION

       lvcreate creates a new logical volume in a volume group (see
       vgcreate(8), vgchange(8)) by allocating logical extents from the free
       physical extent pool of that volume group.  If there are not enough
       free physical extents then the volume group can be extended (see
       vgextend(8)) with other physical volumes or by reducing existing
       logical volumes of this volume group in size (see lvreduce(8)).  If
       you specify one or more PhysicalVolumes, allocation of physical
       extents will be restricted to these volumes.

       The second form supports the creation of snapshot logical volumes
       which keep the contents of the original logical volume for backup
       purposes.

OPTIONS

       See lvm(8) for common options.

       -a|--activate [a][l|e|s]{y|n}
              Controls the availability of the Logical Volumes for immediate
              use after the command finishes running.  By default, new
              Logical Volumes are activated (-ay).  If it is technically
              possible, -an will leave the new Logical Volume inactive.
              However, for example, snapshots of an active origin can only
              be created in the active state, so -an cannot be used with
              --type snapshot.  This does not apply to thin volume
              snapshots, which are by default created with a flag to skip
              their activation (-ky).  Normally the --zero n argument has to
              be supplied too, because zeroing (the default behaviour) also
              requires activation.  If the autoactivation option is used
              (-aay), the logical volume is activated only if it matches an
              item in the activation/auto_activation_volume_list set in
              lvm.conf(5).  For autoactivated logical volumes, --zero n and
              --wipesignatures n are always assumed and cannot be
              overridden.  If clustered locking is enabled, -aey will
              activate exclusively on one node and -a{a|l}y will activate
              only on the local node.

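For example, to create a volume but leave it inactive (a sketch assuming an existing volume group named vg00):

```shell
# Create a 1GiB linear LV without activating it.  Zeroing is disabled
# (-Zn) because zeroing the start of the LV would require activation.
lvcreate -an -Zn -L 1G -n lvol_inactive vg00
```
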
       -H|--cache
              Creates a cache or cache pool logical volume.  Specifying the
              optional argument --extents or --size will cause the creation
              of the cache logical volume.  When the Volume group name is
              specified together with an existing logical volume name which
              is NOT a cache pool name, such a volume is treated as the
              cache origin volume and a cache pool is created.  In this case
              --extents or --size is used to specify the size of the cache
              pool volume.  See lvmcache(7) for more information about
              caching support.  Note that the cache segment type requires a
              dm-cache kernel module version 1.3.0 or greater.

       --cachemode {passthrough|writeback|writethrough}
              Specifying a cache mode determines when the writes to a cache
              LV are considered complete.  When writeback is specified, a
              write is considered complete as soon as it is stored in the
              cache pool LV.  If writethrough is specified, a write is
              considered complete only when it has been stored in both the
              cache pool LV and on the origin LV.  While writethrough may be
              slower for writes, it is more resilient if something should
              happen to a device associated with the cache pool LV.

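As an illustration, the cache mode can be set when the cache pool is created (vg00 and fastpool are assumed names):

```shell
# Create a 1GiB cache pool that completes writes only after they reach
# both the cache pool LV and the origin LV (writethrough).
lvcreate --type cache-pool --cachemode writethrough -L 1G -n fastpool vg00
```
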
       --cachepolicy policy
              Only applicable to cached LVs; see also lvmcache(7).  Sets the
              cache policy.  mq is the basic policy name.  smq is a more
              advanced version available in newer kernels.

       --cachepool CachePoolLogicalVolume{Name|Path}
              Specifies the name of the cache pool volume.  Alternatively,
              the pool name can be appended to the Volume group name
              argument.

       --cachesettings key=value
              Only applicable to cached LVs; see also lvmcache(7).  Sets the
              cache tunable settings.  In most use-cases, the default values
              should be adequate.  The special string value default switches
              a setting back to its default kernel value and removes it from
              the list of settings stored in lvm2 metadata.

       -c|--chunksize ChunkSize[b|B|s|S|k|K|m|M|g|G]
              Gives the size of chunk for snapshot, cache pool and thin pool
              logical volumes.  The default unit is kilobytes.
              For snapshots the value must be a power of 2 between 4KiB and
              512KiB and the default value is 4KiB.
              For cache pools the value must be a multiple of 32KiB between
              32KiB and 1GiB.  The default is 64KiB.
              For thin pools the value must be a multiple of 64KiB between
              64KiB and 1GiB.  If the pool metadata size is not specified,
              the default value starts with 64KiB and grows up to fit the
              pool metadata size within 128MiB.  See the lvm.conf(5) setting
              allocation/thin_pool_chunk_size_policy to select a different
              calculation policy.  Thin pool target version <1.4 requires
              this value to be a power of 2.  For target version <1.5,
              discard is not supported for non power of 2 values.

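For instance, a thin pool can be created with an explicit chunk size; vg00/pool0 is an assumed name, and the value must be a multiple of 64KiB:

```shell
# 10GiB thin pool with a 256KiB chunk size (the default unit of -c is
# kilobytes).
lvcreate -T -L 10G -c 256 vg00/pool0
```
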
       -C|--contiguous {y|n}
              Sets or resets the contiguous allocation policy for logical
              volumes.  The default is no contiguous allocation, based on a
              next-free principle.

       --corelog
              This is a shortcut for the option --mirrorlog core.

       --discards {ignore|nopassdown|passdown}
              Sets the discards behavior for a thin pool.  Default is
              passdown.

       --errorwhenfull {y|n}
              Configures thin pool behaviour when data space is exhausted.
              Default is no: the device will queue I/O operations until the
              target timeout (see the dm-thin-pool kernel module option
              no_space_timeout) expires, so a correctly configured system
              has time to, for example, extend the size of the thin pool
              data device.  When set to yes, an I/O operation is immediately
              errored.

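A sketch of a thin pool that errors I/O immediately instead of queueing when its data space is exhausted (vg00/pool0 is an assumed name):

```shell
lvcreate -T -L 10G --errorwhenfull y vg00/pool0
```
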
       -K|--ignoreactivationskip
              Ignore the flag to skip Logical Volumes during activation.
              Use the --setactivationskip option to set or reset the
              activation skipping flag persistently for a logical volume.

       --ignoremonitoring
              Make no attempt to interact with dmeventd unless --monitor is
              specified.

       -l|--extents LogicalExtentsNumber[%{VG|PVS|FREE|ORIGIN}]
              Gives the number of logical extents to allocate for the new
              logical volume.  The total number of physical extents
              allocated will be greater than this, for example, if the
              volume is mirrored.  The number can also be expressed as a
              percentage of the total space in the Volume Group with the
              suffix %VG, as a percentage of the remaining free space in the
              Volume Group with the suffix %FREE, as a percentage of the
              remaining free space for the specified PhysicalVolume(s) with
              the suffix %PVS, or (for a snapshot) as a percentage of the
              total space in the Origin Logical Volume with the suffix
              %ORIGIN (i.e. 100%ORIGIN provides space for the whole origin).
              When expressed as a percentage, the number is treated as an
              approximate upper limit for the number of physical extents to
              be allocated (including extents used by any mirrors, for
              example).

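For example, the percentage suffixes allow sizing a volume without computing extent counts by hand (vg00 is an assumed volume group name):

```shell
# Use half of the remaining free space in the volume group.
lvcreate -l 50%FREE -n lvol_half vg00

# Use approximately 10% of the total volume group size.
lvcreate -l 10%VG -n lvol_tenth vg00
```
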
       -j|--major major
              Sets the major number.  Major numbers are not supported with
              pool volumes.  This option is supported only on older systems
              (kernel version 2.4) and is ignored on modern Linux systems
              where major numbers are dynamically assigned.

       --metadataprofile ProfileName
              Uses and attaches the ProfileName configuration profile to the
              logical volume metadata.  The next time the logical volume is
              processed, the profile is automatically applied.  If the
              volume group has another profile attached, the logical volume
              profile is preferred.  See lvm.conf(5) for more information
              about metadata profiles.

       --minor minor
              Sets the minor number.  Minor numbers are not supported with
              pool volumes.

       -m|--mirrors mirrors
              Creates a mirrored logical volume with mirrors copies.  For
              example, specifying -m 1 would result in a mirror with two
              sides; that is, a linear volume plus one copy.

              Specifying the optional argument --nosync will cause the
              creation of the mirror to skip the initial resynchronization.
              Any data written afterwards will be mirrored, but the original
              contents will not be copied.  This is useful for skipping a
              potentially long and resource intensive initial sync of an
              empty device.

              There are two implementations of mirroring which can be used
              and correspond to the "raid1" and "mirror" segment types.  The
              default is "raid1".  See the --type option for more
              information if you would like to use the legacy "mirror"
              segment type.  See the lvm.conf(5) settings
              global/mirror_segtype_default and
              global/raid10_segtype_default to configure the default mirror
              segment type.  The options --mirrorlog and --corelog apply to
              the legacy "mirror" segment type only.

       --mirrorlog {disk|core|mirrored}
              Specifies the type of log to be used for logical volumes
              utilizing the legacy "mirror" segment type.
              The default is disk, which is persistent and requires a small
              amount of storage space, usually on a separate device from the
              data being mirrored.
              Using core means the mirror is regenerated by copying the data
              from the first device each time the logical volume is
              activated, for example after every reboot.
              Using mirrored will create a persistent log that is itself
              mirrored.

       --monitor {y|n}
              Starts or avoids monitoring a mirrored, snapshot or thin pool
              logical volume with dmeventd, if it is installed.  If a device
              used by a monitored mirror reports an I/O error, the failure
              is handled according to activation/mirror_image_fault_policy
              and activation/mirror_log_fault_policy set in lvm.conf(5).

       -n|--name LogicalVolume{Name|Path}
              Sets the name for the new logical volume.
              Without this option a default name of "lvol#" will be
              generated, where # is the LVM internal number of the logical
              volume.

       --nosync
              Causes the creation of the mirror to skip the initial
              resynchronization.

       --noudevsync
              Disables udev synchronisation.  The process will not wait for
              notification from udev.  It will continue irrespective of any
              possible udev processing in the background.  You should only
              use this if udev is not running or has rules that ignore the
              devices LVM2 creates.

       -p|--permission {r|rw}
              Sets access permissions to read only (r) or read and write
              (rw).  Default is read and write.

       -M|--persistent {y|n}
              Set to y to make the specified minor number persistent.  Pool
              volumes cannot have persistent major and minor numbers.
              Defaults to yes only when a major or minor number is
              specified.  Otherwise it is no.

       --poolmetadatasize MetadataVolumeSize[b|B|s|S|k|K|m|M|g|G]
              Sets the size of the pool's metadata logical volume.
              Supported values are in the range between 2MiB and 16GiB for a
              thin pool, and up to 16GiB for a cache pool.  The minimum
              value is computed from the pool's data size.  The default
              value for a thin pool is (Pool_LV_size / Pool_LV_chunk_size *
              64b).  The default unit is megabytes.

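For example, the metadata volume of a thin pool can be sized explicitly rather than computed from the data size (vg00/pool0 is an assumed name):

```shell
# 100GiB thin pool with an explicit 1GiB metadata LV.
lvcreate -T -L 100G --poolmetadatasize 1G vg00/pool0
```
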
       --poolmetadataspare {y|n}
              Controls creation and maintenance of the pool metadata spare
              logical volume that will be used for automated pool recovery.
              Only one such volume is maintained within a volume group, with
              the size of the biggest pool metadata volume.  Default is yes.

       --[raid]maxrecoveryrate Rate[b|B|s|S|k|K|m|M|g|G]
              Sets the maximum recovery rate for a RAID logical volume.
              Rate is specified as an amount per second for each device in
              the array.  If no suffix is given, then KiB/sec/device is
              assumed.  Setting the recovery rate to 0 means it will be
              unbounded.

       --[raid]minrecoveryrate Rate[b|B|s|S|k|K|m|M|g|G]
              Sets the minimum recovery rate for a RAID logical volume.
              Rate is specified as an amount per second for each device in
              the array.  If no suffix is given, then KiB/sec/device is
              assumed.  Setting the recovery rate to 0 means it will be
              unbounded.

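As an illustration, the recovery bandwidth of a RAID volume can be capped at creation time (vg00 is an assumed volume group name; the rate applies per device):

```shell
# Limit resynchronization to 10MiB/sec for each device in the array.
lvcreate --type raid1 -m 1 -L 10G --maxrecoveryrate 10M -n my_raid vg00
```
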
       -r|--readahead {ReadAheadSectors|auto|none}
              Sets the read ahead sector count of this logical volume.  For
              volume groups with metadata in lvm1 format, this must be a
              value between 2 and 120.  The default value is auto, which
              allows the kernel to choose a suitable value automatically.
              none is equivalent to specifying zero.

       -R|--regionsize MirrorLogRegionSize[b|B|s|S|k|K|m|M|g|G]
              A mirror is divided into regions of this size (in MiB), and
              the mirror log uses this granularity to track which regions
              are in sync.

       -k|--setactivationskip {y|n}
              Controls whether Logical Volumes are persistently flagged to
              be skipped during activation.  By default, thin snapshot
              volumes are flagged for activation skip.  See the lvm.conf(5)
              setting activation/auto_set_activation_skip for how to change
              this default behaviour.  To activate such volumes, the extra
              --ignoreactivationskip option must be used.  The flag is not
              applied during deactivation.  Use the lvchange
              --setactivationskip command to change the skip flag for
              existing volumes.  To see whether the flag is attached, use
              the lvs command; the state of the flag is reported within the
              lv_attr bits.

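A short sketch of working with the activation skip flag (vg00/lvol1 is an assumed name):

```shell
# Create a volume flagged to be skipped during normal activation.
lvcreate -ky -L 1G -n lvol1 vg00

# Activating it later requires overriding the skip flag.
lvchange -ay -K vg00/lvol1

# The flag state is reported within the lv_attr bits.
lvs -o name,lv_attr vg00/lvol1
```
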
       -L|--size LogicalVolumeSize[b|B|s|S|k|K|m|M|g|G|t|T|p|P|e|E]
              Gives the size to allocate for the new logical volume.  A size
              suffix of B for bytes, S for sectors of 512 bytes, K for
              kilobytes, M for megabytes, G for gigabytes, T for terabytes,
              P for petabytes or E for exabytes is optional.
              The default unit is megabytes.

       -s|--snapshot OriginalLogicalVolume{Name|Path}
              Creates a snapshot logical volume (or snapshot) for an
              existing, so-called original logical volume (or origin).
              Snapshots provide a 'frozen image' of the contents of the
              origin while the origin can still be updated.  They enable
              consistent backups and online recovery of removed/overwritten
              data/files.
              A thin snapshot is created when the origin is a thin volume
              and the size IS NOT specified.  A thin snapshot shares the
              same blocks within the thin pool volume.  A non-thin snapshot
              with a specified size does not need the same amount of storage
              the origin has.  In a typical scenario, 15-20% might be
              enough.  In case the snapshot runs out of storage, use
              lvextend(8) to grow it.  Shrinking a snapshot is supported by
              lvreduce(8) as well.  Run lvs(8) on the snapshot in order to
              check how much data is allocated to it.  Note: a small amount
              of the space you allocate to the snapshot is used to track the
              locations of the chunks of data, so you should allocate
              slightly more space than you actually need and monitor
              (--monitor) the rate at which the snapshot data is growing so
              you can avoid running out of space.  If --thinpool is
              specified, a thin volume is created that will use the given
              original logical volume as an external origin that serves
              unprovisioned blocks.  Only read-only volumes can be used as
              external origins.  To make the volume an external origin, lvm
              expects the volume to be inactive.  An external origin volume
              can be used/shared by many thin volumes, even from different
              thin pools.  See lvconvert(8) for online conversion to thin
              volumes with an external origin.

       -i|--stripes Stripes
              Gives the number of stripes.  This is equal to the number of
              physical volumes to scatter the logical volume over.  When
              creating a RAID 4/5/6 logical volume, the extra devices which
              are necessary for parity are internally accounted for.
              Specifying -i 3 would use 3 devices for striped logical
              volumes, 4 devices for RAID 4/5, and 5 devices for RAID 6.
              Alternatively, if the -i argument is omitted, RAID 4/5/6 will
              stripe across all PVs in the volume group or all of the PVs
              specified.

       -I|--stripesize StripeSize
              Gives the number of kilobytes for the granularity of the
              stripes.
              StripeSize must be 2^n (n = 2 to 9) for metadata in LVM1
              format.  For metadata in LVM2 format, the stripe size may be a
              larger power of 2 but must not exceed the physical extent
              size.

       -T|--thin
              Creates a thin pool or a thin logical volume, or both.
              Specifying the optional argument --size or --extents will
              cause the creation of the thin pool logical volume.
              Specifying the optional argument --virtualsize will cause the
              creation of the thin logical volume from the given thin pool
              volume.  Specifying both arguments will cause the creation of
              both the thin pool and the thin volume using this pool.  See
              lvmthin(7) for more information about thin provisioning
              support.  Thin provisioning requires the device mapper kernel
              driver from kernel 3.2 or greater.

       --thinpool ThinPoolLogicalVolume{Name|Path}
              Specifies the name of the thin pool volume.  Alternatively,
              the pool name can be appended to the Volume group name
              argument.

       --type SegmentType
              Creates a logical volume with the specified segment type.
              Supported types are: cache, cache-pool, error, linear, mirror,
              raid1, raid4, raid5_la, raid5_ls (= raid5), raid5_ra,
              raid5_rs, raid6_nc, raid6_nr, raid6_zr (= raid6), raid10,
              snapshot, striped, thin, thin-pool or zero.  A segment type
              may have a command-line switch alias that will enable its use.
              When the type is not explicitly specified, an implicit type is
              selected from the combination of options:
              -H|--cache|--cachepool (cache or cache-pool),
              -T|--thin|--thinpool (thin or thin-pool), -m|--mirrors (raid1
              or mirror), -s|--snapshot|-V|--virtualsize (snapshot or thin),
              -i|--stripes (striped).  The default segment type is linear.

       -V|--virtualsize VirtualSize[b|B|s|S|k|K|m|M|g|G|t|T|p|P|e|E]
              Creates a thinly provisioned device or a sparse device of the
              given size (in MiB by default).  See the lvm.conf(5) setting
              global/sparse_segtype_default to configure the default sparse
              segment type.  See lvmthin(7) for more information about thin
              provisioning support.  Anything written to a sparse snapshot
              will be returned when reading from it.  Reading from other
              areas of the device will return blocks of zeros.  A virtual
              snapshot (sparse snapshot) is implemented by creating a hidden
              virtual device of the requested size using the zero target.  A
              suffix of _vorigin is used for this device.  Note: using
              sparse snapshots is not efficient for larger device sizes
              (GiB); thin provisioning should be used in this case.

       -W|--wipesignatures {y|n}
              Controls wiping of detected signatures on a newly created
              Logical Volume.  If this option is not specified, then by
              default signature wiping is done each time zeroing
              (-Z|--zero) is done.  This default behaviour can be controlled
              by the allocation/wipe_signatures_when_zeroing_new_lvs setting
              found in lvm.conf(5).
              If blkid wiping is used (the allocation/use_blkid_wiping
              setting in lvm.conf(5)) and LVM2 is compiled with blkid wiping
              support, then the blkid(8) library is used to detect the
              signatures (use the blkid -k command to list the signatures
              that are recognized).  Otherwise, native LVM2 code is used to
              detect signatures (MD RAID, swap and LUKS signatures are
              detected only in this case).
              The logical volume is not wiped if the read-only flag is set.

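For example, signature wiping can be forced or suppressed explicitly (vg00 is an assumed volume group name; --yes answers the wiping prompts):

```shell
# Wipe any detected signatures without prompting.
lvcreate -Wy --yes -L 1G -n lvol_clean vg00

# Keep any existing signatures untouched.
lvcreate -Wn -L 1G -n lvol_raw vg00
```
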
       -Z|--zero {y|n}
              Controls zeroing of the first 4KiB of data in the new logical
              volume.  Default is yes.  Snapshot COW volumes are always
              zeroed.  The logical volume is not zeroed if the read-only
              flag is set.
              Warning: trying to mount an unzeroed logical volume can cause
              the system to hang.


Examples

       Creates a striped logical volume with 3 stripes, a stripe size of
       8KiB and a size of 100MiB in the volume group named vg00.  The
       logical volume name will be chosen by lvcreate:

       lvcreate -i 3 -I 8 -L 100M vg00

       Creates a mirror logical volume with 2 sides and a usable size of
       500 MiB.  This operation would require 3 devices (or the option
       --alloc anywhere) - two for the mirror devices and one for the disk
       log:

       lvcreate -m1 -L 500M vg00

       Creates a mirror logical volume with 2 sides and a usable size of
       500 MiB.  This operation would require 2 devices - the log is
       "in-memory":

       lvcreate -m1 --mirrorlog core -L 500M vg00

       Creates a snapshot logical volume named "vg00/snap" which has access
       to the contents of the original logical volume named "vg00/lvol1" at
       snapshot logical volume creation time.  If the original logical
       volume contains a file system, you can mount the snapshot logical
       volume on an arbitrary directory in order to access the contents of
       the filesystem to run a backup while the original filesystem
       continues to get updated:

       lvcreate --size 100m --snapshot --name snap /dev/vg00/lvol1

       Creates a snapshot logical volume named "vg00/snap" with enough
       space for overwriting 20% of the original logical volume named
       "vg00/lvol1":

       lvcreate -s -l 20%ORIGIN --name snap vg00/lvol1

       Creates a sparse device named /dev/vg1/sparse of size 1TiB with
       space for just under 100MiB of actual data on it:

       lvcreate --virtualsize 1T --size 100M --snapshot --name sparse vg1

       Creates a linear logical volume "vg00/lvol1" using physical extents
       /dev/sda:0-7 and /dev/sdb:0-7 for allocation of extents:

       lvcreate -L 64M -n lvol1 vg00 /dev/sda:0-7 /dev/sdb:0-7

       Creates a 5GiB RAID5 logical volume "vg00/my_lv", with 3 stripes
       (plus a parity drive for a total of 4 devices) and a stripe size of
       64KiB:

       lvcreate --type raid5 -L 5G -i 3 -I 64 -n my_lv vg00

       Creates a RAID5 logical volume "vg00/my_lv", using all of the free
       space in the VG and spanning all the PVs in the VG:

       lvcreate --type raid5 -l 100%FREE -n my_lv vg00

       Creates a 5GiB RAID10 logical volume "vg00/my_lv", with 2 stripes on
       2 2-way mirrors.  Note that the -i and -m arguments behave
       differently.  The -i specifies the number of stripes.  The -m
       specifies the number of additional copies:

       lvcreate --type raid10 -L 5G -i 2 -m 1 -n my_lv vg00

       Creates a 100MiB pool logical volume for thin provisioning, built
       with 2 stripes of 64KiB and a chunk size of 256KiB, together with a
       1TiB thin provisioned logical volume "vg00/thin_lv":

       lvcreate -i 2 -I 64 -c 256 -L100M -T vg00/pool -V 1T --name thin_lv

       Creates a thin snapshot volume "thinsnap" of the thin volume
       "thinvol" that will share the same blocks within the thin pool.
       Note: the size MUST NOT be specified, otherwise a non-thin snapshot
       is created instead:

       lvcreate -s vg00/thinvol --name thinsnap

       Creates a thin snapshot volume of the read-only inactive volume
       "origin", which then becomes the thin external origin for the thin
       snapshot volume in vg00 that will use an existing thin pool
       "vg00/pool":

       lvcreate -s --thinpool vg00/pool origin

       Creates a cache pool LV that can later be used to cache one logical
       volume:

       lvcreate --type cache-pool -L 1G -n my_lv_cachepool vg /dev/fast1

       If there is an existing cache pool LV, creates the large slow device
       (i.e. the origin LV) and links it to the supplied cache pool LV,
       creating a cache LV:

       lvcreate --cache -L 100G -n my_lv vg/my_lv_cachepool /dev/slow1

       If there is an existing logical volume, creates the small and fast
       cache pool LV and links it to the supplied existing logical volume
       (i.e. the origin LV), creating a cache LV:

       lvcreate --type cache -L 1G -n my_lv_cachepool vg/my_lv /dev/fast1


SEE ALSO

       lvm(8), lvm.conf(5), lvmcache(7), lvmthin(7), lvconvert(8),
       lvchange(8), lvextend(8), lvreduce(8), lvremove(8), lvrename(8),
       lvs(8), lvscan(8), vgcreate(8), blkid(8)


Sistina Software UK  LVM TOOLS 2.02.143(2)-RHEL6 (2016-12-13)      LVCREATE(8)