LVMVDO(7)                                                            LVMVDO(7)

NAME
       lvmvdo — Support for Virtual Data Optimizer in LVM

DESCRIPTION
       VDO is software that provides inline block-level deduplication,
       compression, and thin provisioning capabilities for primary storage.

       Deduplication is a technique for reducing the consumption of storage
       resources by eliminating multiple copies of duplicate blocks.
       Compression takes the individual unique blocks and shrinks them.
       These reduced blocks are then efficiently packed together into
       physical blocks. Thin provisioning manages the mapping from logical
       blocks presented by VDO to where the data has actually been
       physically stored, and also eliminates any blocks of all zeroes.

       With deduplication, instead of writing the same data more than once,
       VDO detects and records each duplicate block as a reference to the
       original block. VDO maintains a mapping from Logical Block Addresses
       (LBA) (used by the storage layer above VDO) to physical block
       addresses (used by the storage layer under VDO). After deduplication,
       multiple logical block addresses may be mapped to the same physical
       block address; these are called shared blocks and are
       reference-counted by the software.

       With compression, VDO compresses multiple blocks (or shared blocks)
       with the fast LZ4 algorithm and bins them together where possible so
       that multiple compressed blocks fit within a 4 KB block on the
       underlying storage. The mapping from an LBA is then to a physical
       block address and an index within it for the desired compressed data.
       All compressed blocks are individually reference counted for
       correctness.

       Block sharing and block compression are invisible to applications
       using the storage, which read and write blocks as they would if VDO
       were not present. When a shared block is overwritten, a new physical
       block is allocated for storing the new block data to ensure that
       other logical block addresses that are mapped to the shared physical
       block are not modified.

       To use VDO with lvm(8), you must install the standard VDO user-space
       tools vdoformat(8) and the currently non-standard kernel VDO module
       "kvdo".

       The "kvdo" module implements fine-grained storage virtualization,
       thin provisioning, block sharing, and compression. The "uds" module
       provides memory-efficient duplicate identification. The user-space
       tools include vdostats(8) for extracting statistics from VDO volumes.

VDO TERMS
       VDODataLV
              VDO data LV
              A large hidden LV with the _vdata suffix. It is created in a
              VG and is used by the VDO kernel target to store all data and
              metadata blocks.

       VDOPoolLV
              VDO pool LV
              A pool for the virtual VDOLV, which matches the size of the
              VDODataLV in use.
              Only a single VDOLV is currently supported.

       VDOLV
              VDO LV
              Created from VDOPoolLV.
              Appears blank after creation.

VDO USAGE
       The primary methods for using VDO with lvm2:

   1. Create a VDOPoolLV and a VDOLV
       Create a VDOPoolLV that will hold VDO data, and a virtual-size VDOLV
       that the user can use. If you do not specify the virtual size, then
       the VDOLV is created with the maximum size that always fits into the
       data volume even if no deduplication or compression can happen (i.e.
       it can hold the incompressible content of /dev/urandom). If you do
       not specify the name of the VDOPoolLV, it is taken from the sequence
       vpool0, vpool1, ...

       Note: The performance of TRIM/Discard operations is slow for large
       volumes of VDO type. Please avoid sending discard requests unless
       necessary, because it might take a considerable amount of time to
       finish the discard operation.

       lvcreate --type vdo -n VDOLV -L DataSize -V LargeVirtualSize VG/VDOPoolLV
       lvcreate --vdo -L DataSize VG

       Example
       # lvcreate --type vdo -n vdo0 -L 10G -V 100G vg/vdopool0
       # mkfs.ext4 -E nodiscard /dev/vg/vdo0

   2. Convert an existing LV into a VDOPoolLV
       Convert an existing LV into a VDOPoolLV, which is a volume that can
       hold data and metadata. You will be prompted to confirm such a
       conversion because it IRREVERSIBLY DESTROYS the content of the
       volume, which is immediately formatted by vdoformat(8) as a VDO pool
       data volume. You can specify the virtual size of the VDOLV associated
       with this VDOPoolLV. If you do not specify the virtual size, it will
       be set to the maximum size that can hold 100% incompressible data.

       lvconvert --type vdo-pool -n VDOLV -V VirtualSize VG/VDOPoolLV
       lvconvert --vdopool VG/VDOPoolLV

       Example
       # lvconvert --type vdo-pool -n vdo0 -V10G vg/ExistingLV

   3. Change the compression and deduplication of a VDOPoolLV
       Disable or enable compression and deduplication for a VDOPoolLV (the
       volume that maintains all VDO LV(s) associated with it).

       lvchange --compression y|n --deduplication y|n VG/VDOPoolLV

       Example
       # lvchange --compression n  vg/vdopool0
       # lvchange --deduplication y vg/vdopool1

   4. Change the default settings used for creating a VDOPoolLV
       VDO allows you to set a large variety of options. Many of these
       settings can be specified in lvm.conf or profile settings. You can
       prepare a number of different profiles in the /etc/lvm/profile
       directory and just specify the profile file name. Check the output of
       lvmconfig --type default --withcomments for a detailed description of
       all individual VDO settings.

       Example
       # cat <<EOF > /etc/lvm/profile/vdo_create.profile
       allocation {
              vdo_use_compression=1
              vdo_use_deduplication=1
              vdo_use_metadata_hints=1
              vdo_minimum_io_size=4096
              vdo_block_map_cache_size_mb=128
              vdo_block_map_period=16380
              vdo_use_sparse_index=0
              vdo_index_memory_size_mb=256
              vdo_slab_size_mb=2048
              vdo_ack_threads=1
              vdo_bio_threads=1
              vdo_bio_rotation=64
              vdo_cpu_threads=2
              vdo_hash_zone_threads=1
              vdo_logical_threads=1
              vdo_physical_threads=1
              vdo_write_policy="auto"
              vdo_max_discard=1
       }
       EOF

       # lvcreate --vdo -L10G --metadataprofile vdo_create vg/vdopool0
       # lvcreate --vdo -L10G --config 'allocation/vdo_cpu_threads=4' vg/vdopool1

   5. Set or change VDO settings with option --vdosettings
       Use the form 'option=value' or 'option1=value option2=value', or
       repeat --vdosettings for each option being set. Options are listed in
       the Example section above; for the full description, see lvm.conf(5).
       Options can omit the 'vdo_' and 'vdo_use_' prefixes and all their
       underscores, so e.g. vdo_use_metadata_hints=1 and metadatahints=1 are
       equivalent. To change an option for an already existing VDOPoolLV,
       use the lvchange(8) command. However, not all options can be changed.
       Only the compression and deduplication options can also be changed
       for an active VDO LV. The lowest priority options are those specified
       in the configuration file, then those given with --vdosettings; the
       explicit options --compression and --deduplication have the highest
       priority.

       Example
       # lvcreate --vdo -L10G --vdosettings 'ack_threads=1 hash_zone_threads=2' vg/vdopool0
       # lvchange --vdosettings 'bio_threads=2 deduplication=1' vg/vdopool0

   6. Checking the usage of VDOPoolLV
       To quickly check how much data on a VDOPoolLV is already consumed,
       use lvs(8). The Data% field reports both how much of the virtual data
       of the VDOLV is occupied and how much space is already consumed by
       all the data and metadata blocks in the VDOPoolLV. For a detailed
       description, use the vdostats(8) command.

       Note: vdostats(8) currently understands only /dev/mapper device names.

       Example
       # lvcreate --type vdo -L10G -V20G -n vdo0 vg/vdopool0
       # mkfs.ext4 -E nodiscard /dev/vg/vdo0
       # lvs -a vg

         LV               VG Attr       LSize  Pool     Origin Data%
         vdo0             vg vwi-a-v--- 20.00g vdopool0        0.01
         vdopool0         vg dwi-ao---- 10.00g                 30.16
         [vdopool0_vdata] vg Dwi-ao---- 10.00g

       # vdostats --all /dev/mapper/vg-vdopool0-vpool
       /dev/mapper/vg-vdopool0-vpool :
         version                             : 30
         release version                     : 133524
         data blocks used                    : 79
         ...

   7. Extending the VDOPoolLV size
       You can add more space to hold VDO data and metadata by extending the
       VDODataLV using the commands lvresize(8) and lvextend(8). The
       extension needs to add at least one new VDO slab. You can configure
       the slab size with the allocation/vdo_slab_size_mb setting.

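       As a rough sketch of the slab-granularity rule above: only whole new
       slabs become usable, so an extension smaller than one slab adds no
       usable space. The slab size and extension amount below are
       illustrative assumptions, not output of any LVM command:

```shell
# Illustrative only: how much of a hypothetical extension becomes usable,
# assuming space is consumed in whole-slab units.
slab_mb=2048          # allocation/vdo_slab_size_mb default (assumed)
extend_mb=3000        # hypothetical lvextend amount in MiB
new_slabs=$(( extend_mb / slab_mb ))
usable_mb=$(( new_slabs * slab_mb ))
echo "$new_slabs new slab(s), $usable_mb MiB usable"   # 1 new slab(s), 2048 MiB usable
```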
       You can also enable automatic size extension of a monitored VDOPoolLV
       with the activation/vdo_pool_autoextend_percent and
       activation/vdo_pool_autoextend_threshold settings.

       Note: You cannot reduce the size of a VDOPoolLV.

       lvextend -L+AddingSize VG/VDOPoolLV

       Example
       # lvextend -L+50G vg/vdopool0
       # lvresize -L300G vg/vdopool1

   8. Extending or reducing the VDOLV size
       You can extend or reduce a virtual VDO LV as a standard LV with the
       lvresize(8), lvextend(8), and lvreduce(8) commands.

       Note: The reduction needs to process TRIM for the reduced disk area
       to unmap used data blocks from the VDOPoolLV, which might take a long
       time.

       lvextend -L+AddingSize VG/VDOLV
       lvreduce -L-ReducingSize VG/VDOLV

       Example
       # lvextend -L+50G vg/vdo0
       # lvreduce -L-50G vg/vdo1
       # lvresize -L200G vg/vdo2

   9. Component activation of a VDODataLV
       You can activate a VDODataLV separately as a component LV for
       examination purposes. The activation of the VDODataLV activates the
       data LV in read-only mode, and the data LV cannot be modified. If the
       VDODataLV is active as a component, any upper LV using this volume
       CANNOT be activated. You have to deactivate the VDODataLV first to
       continue to use the VDOPoolLV.

       Example
       # lvchange -ay vg/vpool0_vdata
       # lvchange -an vg/vpool0_vdata

VDO TOPICS
   1. Stacking VDO
       You can convert or stack a VDOPoolLV with these currently supported
       volume types: linear, stripe, raid, and cache with cachepool.

   2. Using multiple volumes with the same VDOPoolLV
       You can convert an existing VDO LV into a thin volume. After this
       conversion, you can create a thin snapshot, or you can add more thin
       volumes with the thin pool named after the original LV name,
       LV_tpool0.

       Example
       # lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
       # lvconvert --type thin vg/vdo1
       # lvcreate -V20 vg/vdo1_tpool0

   3. VDOPoolLV on top of raid
       Using a raid type LV for a VDODataLV.

       Example
       # lvcreate --type raid1 -L 5G -n vdopool vg
       # lvconvert --type vdo-pool -V 10G vg/vdopool

   4. Caching a VDOPoolLV
       Caching a VDOPoolLV (the VDODataLV volume name is also accepted)
       provides a mechanism to accelerate reads and writes of already
       compressed and deduplicated data blocks together with VDO metadata.

       Example
       # lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
       # lvcreate --type cache-pool -L 1G -n cachepool vg
       # lvconvert --cache --cachepool vg/cachepool vg/vdopool
       # lvconvert --uncache vg/vdopool

   5. Caching a VDOLV
       A VDO LV cache allows you to 'cache' the device for better
       performance before it hits the processing of the VDO Pool LV layer.

       Example
       # lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
       # lvcreate --type cache-pool -L 1G -n cachepool vg
       # lvconvert --cache --cachepool vg/cachepool vg/vdo1
       # lvconvert --uncache vg/vdo1

   6. Usage of Discard/TRIM with a VDOLV
       You can discard data on a VDO LV and reduce used blocks on a
       VDOPoolLV. However, the current performance of discard operations is
       still not optimal and takes a considerable amount of time and CPU.
       Unless you really need it, you should avoid using discard.

       When a block device is going to be rewritten, its blocks will be
       automatically reused for new data. Discard is useful in situations
       when the user knows that a given portion of a VDO LV is not going to
       be used and the discarded space can be used for block provisioning in
       other regions of the VDO LV. For the same reason, you should avoid
       using mkfs with discard for a freshly created VDO LV, since the
       device is already expected to be empty; skipping discard saves the
       considerable time this operation would otherwise take.

   7. Memory usage
       The VDO target requires 38 MiB of RAM plus several variable amounts:

       • 1.15 MiB of RAM for each 1 MiB of configured block map cache size.
         The block map cache requires a minimum of 150 MiB of RAM.

       • 1.6 MiB of RAM for each 1 TiB of logical space.

       • 268 MiB of RAM for each 1 TiB of physical storage managed by the
         volume.

       UDS requires a minimum of 250 MiB of RAM, which is also the default
       amount that deduplication uses.

       The memory required for the UDS index is determined by the index type
       and the required size of the deduplication window, and is controlled
       by the allocation/vdo_use_sparse_index setting.

       When UDS sparse indexing is enabled, it relies on the temporal
       locality of data, attempts to retain only the most relevant index
       entries in memory, and can maintain a deduplication window ten times
       larger than the dense index while using the same amount of memory.

       Although the sparse index provides the greatest coverage, the dense
       index provides more deduplication advice. For most workloads, given
       the same amount of memory, the difference in deduplication rates
       between dense and sparse indexes is negligible.

       A dense index with 1 GiB of RAM maintains a 1 TiB deduplication
       window, while a sparse index with 1 GiB of RAM maintains a 10 TiB
       deduplication window. In general, 1 GiB is sufficient for 4 TiB of
       physical space with a dense index and 40 TiB with a sparse index.

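       The per-component figures above can be combined into a rough RAM
       estimate. The sketch below assumes a hypothetical pool with 10 TiB of
       physical storage, 10 TiB of logical space, the minimum 150 MiB block
       map cache, and the default 250 MiB UDS allocation; it uses integer
       MiB arithmetic, so the 1.15 and 1.6 factors are scaled by 100 and 10:

```shell
# Illustrative estimate only; values follow the figures listed above.
base_mb=38                                  # fixed VDO target overhead
cache_mb=150                                # configured block map cache
logical_tib=10
physical_tib=10
cache_ram=$(( cache_mb * 115 / 100 ))       # 1.15 MiB per MiB of cache -> 172
logical_ram=$(( logical_tib * 16 / 10 ))    # 1.6 MiB per TiB logical   -> 16
physical_ram=$(( physical_tib * 268 ))      # 268 MiB per TiB physical  -> 2680
uds_mb=250                                  # UDS minimum/default
total_mb=$(( base_mb + cache_ram + logical_ram + physical_ram + uds_mb ))
echo "$total_mb MiB"                        # 3156 MiB
```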
   8. Storage space requirements
       You can configure a VDOPoolLV to use up to 256 TiB of physical
       storage. Only a certain part of the physical storage is usable to
       store data. This section provides the calculations to determine the
       usable size of a VDO-managed volume.

       The VDO target requires storage for two types of VDO metadata and for
       the UDS index:

       • The first type of VDO metadata uses approximately 1 MiB for each
         4 GiB of physical storage plus an additional 1 MiB per slab.

       • The second type of VDO metadata consumes approximately 1.25 MiB for
         each 1 GiB of logical storage, rounded up to the nearest slab.

       • The amount of storage required for the UDS index depends on the
         type of index and the amount of RAM allocated to the index. For
         each 1 GiB of RAM, a dense UDS index uses 17 GiB of storage and a
         sparse UDS index will use 170 GiB of storage.

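       As a worked sketch of these rules for a hypothetical 4 TiB physical
       volume, assuming a fully provisioned 4 TiB logical size, the default
       2048 MiB slab size, and a dense UDS index backed by 1 GiB of RAM (all
       figures are approximations, as stated above):

```shell
# Illustrative only; integer MiB/GiB arithmetic on the approximate figures.
physical_gib=4096
logical_gib=4096                             # assumed fully provisioned
slab_mb=2048                                 # assumed default slab size
slabs=$(( physical_gib * 1024 / slab_mb ))   # 2048 slabs
meta1_mb=$(( physical_gib / 4 + slabs ))     # 1 MiB per 4 GiB + 1 MiB/slab -> 3072
meta2_mb=$(( logical_gib * 125 / 100 ))      # 1.25 MiB per GiB logical     -> 5120
uds_mb=$(( 17 * 1024 ))                      # dense index with 1 GiB RAM
usable_gib=$(( physical_gib - (meta1_mb + meta2_mb + uds_mb) / 1024 ))
echo "$usable_gib GiB usable"                # 4071 GiB usable
```

       So roughly 25 GiB of the 4 TiB is consumed by metadata and the UDS
       index in this scenario, leaving about 4071 GiB for data.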

SEE ALSO
       lvm(8), lvm.conf(5), lvmconfig(8), lvcreate(8), lvconvert(8),
       lvchange(8), lvextend(8), lvreduce(8), lvresize(8), lvremove(8),
       lvs(8),

       vdo(8), vdoformat(8), vdostats(8),

       mkfs(8)

Red Hat, Inc           LVM TOOLS 2.03.22(2) (2023-08-02)             LVMVDO(7)