LVMVDO(7)                                                            LVMVDO(7)

NAME

       lvmvdo — Support for Virtual Data Optimizer in LVM

DESCRIPTION

       VDO is software that provides inline block-level deduplication,
       compression, and thin provisioning capabilities for primary storage.

       Deduplication is a technique for reducing the consumption of storage
       resources by eliminating multiple copies of duplicate blocks.
       Compression takes the individual unique blocks and shrinks them.
       These reduced blocks are then efficiently packed together into
       physical blocks. Thin provisioning manages the mapping from logical
       blocks presented by VDO to where the data has actually been
       physically stored, and also eliminates any blocks of all zeroes.

       With deduplication, instead of writing the same data more than once,
       VDO detects and records each duplicate block as a reference to the
       original block. VDO maintains a mapping from Logical Block Addresses
       (LBA) (used by the storage layer above VDO) to physical block
       addresses (used by the storage layer under VDO). After
       deduplication, multiple logical block addresses may be mapped to the
       same physical block address; these are called shared blocks and are
       reference-counted by the software.

       With compression, VDO compresses multiple blocks (or shared blocks)
       with the fast LZ4 algorithm, and bins them together where possible
       so that multiple compressed blocks fit within a 4 KB block on the
       underlying storage. The mapping from an LBA is then to a physical
       block address and an index within it for the desired compressed
       data. All compressed blocks are individually reference counted for
       correctness.

       Block sharing and block compression are invisible to applications
       using the storage, which read and write blocks as they would if VDO
       were not present. When a shared block is overwritten, a new physical
       block is allocated for storing the new block data to ensure that
       other logical block addresses that are mapped to the shared physical
       block are not modified.

       To use VDO with lvm(8), you must install the standard VDO user-space
       tools vdoformat(8) and the currently non-standard kernel VDO module
       "kvdo".

       The "kvdo" module implements fine-grained storage virtualization,
       thin provisioning, block sharing, and compression. The "uds" module
       provides memory-efficient duplicate identification. The user-space
       tools include vdostats(8) for extracting statistics from VDO
       volumes.
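
       A quick way to verify that the required kernel modules are present
       before creating VDO volumes (an illustrative check; module
       packaging and names may vary by distribution):

       # modprobe kvdo
       # lsmod | grep -E 'kvdo|uds'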

VDO TERMS

       VDODataLV
              VDO data LV
              A large hidden LV with the _vdata suffix. It is created in a
              VG and used by the VDO kernel target to store all data and
              metadata blocks.

       VDOPoolLV
              VDO pool LV
              A pool for virtual VDOLV(s), which are the size of the used
              VDODataLV.
              Only a single VDOLV is currently supported.

       VDOLV
              VDO LV
              Created from a VDOPoolLV.
              Appears blank after creation.

VDO USAGE

       The primary methods for using VDO with lvm2:

   1. Create a VDOPoolLV and a VDOLV
       Create a VDOPoolLV that will hold VDO data, and a virtual size VDOLV
       that the user can use. If you do not specify the virtual size, then
       the VDOLV is created with the maximum size that always fits into the
       data volume even if no deduplication or compression can happen
       (i.e. it can hold the incompressible content of /dev/urandom). If
       you do not specify the name of the VDOPoolLV, it is taken from the
       sequence vpool0, vpool1 ...

       Note: The performance of TRIM/Discard operations is slow for large
       volumes of VDO type. Please try to avoid sending discard requests
       unless necessary, because it might take a considerable amount of
       time to finish the discard operation.

       lvcreate --type vdo -n VDOLV -L DataSize -V LargeVirtualSize VG/VDOPoolLV
       lvcreate --vdo -L DataSize VG

       Example
       # lvcreate --type vdo -n vdo0 -L 10G -V 100G vg/vdopool0
       # mkfs.ext4 -E nodiscard /dev/vg/vdo0

   2. Convert an existing LV into VDOPoolLV
       Convert an existing LV into a VDOPoolLV, which is a volume that can
       hold data and metadata. You will be prompted to confirm the
       conversion because it IRREVERSIBLY DESTROYS the content of the
       volume, which is immediately formatted by vdoformat(8) as a VDO pool
       data volume. You can specify the virtual size of the VDOLV
       associated with this VDOPoolLV. If you do not specify the virtual
       size, it is set to the maximum size that can hold 100%
       incompressible data.

       lvconvert --type vdo-pool -n VDOLV -V VirtualSize VG/VDOPoolLV
       lvconvert --vdopool VG/VDOPoolLV

       Example
       # lvconvert --type vdo-pool -n vdo0 -V10G vg/ExistingLV

   3. Change the default settings used for creating a VDOPoolLV
       VDO allows you to set a large variety of options. Many of these
       settings can be specified in lvm.conf or profile settings. You can
       prepare a number of different profiles in the /etc/lvm/profile
       directory and just specify the profile file name. Check the output
       of lvmconfig --type full for a detailed description of all
       individual VDO settings.

       Example
       # cat <<EOF > /etc/lvm/profile/vdo_create.profile
       allocation {
            vdo_use_compression=1
            vdo_use_deduplication=1
            vdo_use_metadata_hints=1
            vdo_minimum_io_size=4096
            vdo_block_map_cache_size_mb=128
            vdo_block_map_period=16380
            vdo_check_point_frequency=0
            vdo_use_sparse_index=0
            vdo_index_memory_size_mb=256
            vdo_slab_size_mb=2048
            vdo_ack_threads=1
            vdo_bio_threads=1
            vdo_bio_rotation=64
            vdo_cpu_threads=2
            vdo_hash_zone_threads=1
            vdo_logical_threads=1
            vdo_physical_threads=1
            vdo_write_policy="auto"
            vdo_max_discard=1
       }
       EOF

       # lvcreate --vdo -L10G --metadataprofile vdo_create vg/vdopool0
       # lvcreate --vdo -L10G --config 'allocation/vdo_cpu_threads=4' vg/vdopool1

   4. Change the compression and deduplication of a VDOPoolLV
       Disable or enable the compression and deduplication for a VDOPoolLV
       (the volume that maintains all VDO LV(s) associated with it).

       lvchange --compression [y|n] --deduplication [y|n] VG/VDOPoolLV

       Example
       # lvchange --compression n vg/vdopool0
       # lvchange --deduplication y vg/vdopool1

   5. Checking the usage of VDOPoolLV
       To quickly check how much data on a VDOPoolLV is already consumed,
       use lvs(8). For a VDOLV, the Data% field reports how much of the
       virtual data space is occupied; for a VDOPoolLV, it reports how much
       space is already consumed by all the data and metadata blocks. For a
       detailed description, use the vdostats(8) command.

       Note: vdostats(8) currently understands only /dev/mapper device names.

       Example
       # lvcreate --type vdo -L10G -V20G -n vdo0 vg/vdopool0
       # mkfs.ext4 -E nodiscard /dev/vg/vdo0
       # lvs -a vg

         LV               VG Attr       LSize  Pool     Origin Data%
         vdo0             vg vwi-a-v--- 20.00g vdopool0        0.01
         vdopool0         vg dwi-ao---- 10.00g                 30.16
         [vdopool0_vdata] vg Dwi-ao---- 10.00g

       # vdostats --all /dev/mapper/vg-vdopool0-vpool
       /dev/mapper/vg-vdopool0-vpool :
         version                             : 30
         release version                     : 133524
         data blocks used                    : 79
         ...

   6. Extending the VDOPoolLV size
       You can add more space to hold VDO data and metadata by extending
       the VDODataLV using the commands lvresize(8) and lvextend(8). The
       extension needs to add at least one new VDO slab. You can configure
       the slab size with the allocation/vdo_slab_size_mb setting.

       You can also enable automatic size extension of a monitored
       VDOPoolLV with the activation/vdo_pool_autoextend_percent and
       activation/vdo_pool_autoextend_threshold settings.

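       For example, to let a monitored VDOPoolLV grow by 20% whenever its
       usage crosses 70%, you could use an illustrative profile like the
       one below (this assumes these activation settings are accepted from
       a profile, in the same way as the allocation settings shown above):

       # cat <<EOF > /etc/lvm/profile/vdo_extend.profile
       activation {
            vdo_pool_autoextend_threshold = 70
            vdo_pool_autoextend_percent = 20
       }
       EOF
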
       Note: You cannot reduce the size of a VDOPoolLV.

       Note: You cannot change the size of a cached VDOPoolLV.

       lvextend -L+AddingSize VG/VDOPoolLV

       Example
       # lvextend -L+50G vg/vdopool0
       # lvresize -L300G vg/vdopool1

   7. Extending or reducing the VDOLV size
       You can extend or reduce a virtual VDO LV as a standard LV with the
       lvresize(8), lvextend(8), and lvreduce(8) commands.

       Note: The reduction needs to process TRIM for the reduced disk area
       to unmap used data blocks from the VDOPoolLV, which might take a
       long time.

       lvextend -L+AddingSize VG/VDOLV
       lvreduce -L-ReducingSize VG/VDOLV

       Example
       # lvextend -L+50G vg/vdo0
       # lvreduce -L-50G vg/vdo1
       # lvresize -L200G vg/vdo2

   8. Component activation of a VDODataLV
       You can activate a VDODataLV separately as a component LV for
       examination purposes. The VDODataLV is activated in read-only mode,
       and its data cannot be modified. While the VDODataLV is active as a
       component, any upper LV using this volume CANNOT be activated. You
       have to deactivate the VDODataLV first to continue to use the
       VDOPoolLV.

       Example
       # lvchange -ay vg/vpool0_vdata
       # lvchange -an vg/vpool0_vdata

VDO TOPICS

   1. Stacking VDO
       You can convert or stack a VDOPoolLV with these currently supported
       volume types: linear, stripe, raid, and cache with cachepool.

   2. VDOPoolLV on top of raid
       Using a raid type LV for a VDODataLV.

       Example
       # lvcreate --type raid1 -L 5G -n vdopool vg
       # lvconvert --type vdo-pool -V 10G vg/vdopool

   3. Caching a VDODataLV or a VDOPoolLV
       Caching a VDODataLV (a VDOPoolLV is also accepted) provides a
       mechanism to accelerate reads and writes of already compressed and
       deduplicated data blocks together with the VDO metadata.

       A cached VDO data LV currently cannot be resized, and the
       threshold-based automatic resize will not work.

       Example
       # lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
       # lvcreate --type cache-pool -L 1G -n cachepool vg
       # lvconvert --cache --cachepool vg/cachepool vg/vdopool
       # lvconvert --uncache vg/vdopool

   4. Caching a VDOLV
       A VDO LV cache allows you to cache a device for better performance
       before writes hit the processing of the VDO Pool LV layer.

       Example
       # lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
       # lvcreate --type cache-pool -L 1G -n cachepool vg
       # lvconvert --cache --cachepool vg/cachepool vg/vdo1
       # lvconvert --uncache vg/vdo1

   5. Usage of Discard/TRIM with a VDOLV
       You can discard data on a VDO LV and reduce used blocks on a
       VDOPoolLV. However, the current performance of discard operations
       is still not optimal and takes a considerable amount of time and
       CPU. Unless you really need it, you should avoid using discard.

       When a block device is going to be rewritten, its blocks will be
       automatically reused for new data. Discard is useful in situations
       when the user knows that a given portion of a VDO LV is not going
       to be used and the discarded space can be used for block
       provisioning in other regions of the VDO LV. For the same reason,
       you should avoid using mkfs with discard for a freshly created VDO
       LV, to save the considerable time this operation would otherwise
       take, as the device is already expected to be empty.

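       If you do need to discard unused space, one way is to trim only the
       free blocks of a mounted filesystem with fstrim(8) rather than
       discarding the whole device (an illustrative sketch; the mount
       point /mnt/vdo is assumed):

       # mount /dev/vg/vdo0 /mnt/vdo
       # fstrim -v /mnt/vdo
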
   6. Memory usage
       The VDO target requires 370 MiB of RAM plus an additional 268 MiB
       for each 1 TiB of physical storage managed by the volume.

       UDS requires a minimum of 250 MiB of RAM, which is also the default
       amount that deduplication uses.

       The memory required for the UDS index is determined by the index
       type and the required size of the deduplication window, and is
       controlled by the allocation/vdo_use_sparse_index setting.

       When UDS sparse indexing is enabled, it relies on the temporal
       locality of data and attempts to retain only the most relevant
       index entries in memory; it can maintain a deduplication window
       that is ten times larger than with a dense index while using the
       same amount of memory.

       Although the sparse index provides the greatest coverage, the dense
       index provides more deduplication advice. For most workloads, given
       the same amount of memory, the difference in deduplication rates
       between dense and sparse indexes is negligible.

       A dense index with 1 GiB of RAM maintains a 1 TiB deduplication
       window, while a sparse index with 1 GiB of RAM maintains a 10 TiB
       deduplication window. In general, 1 GiB is sufficient for 4 TiB of
       physical space with a dense index and 40 TiB with a sparse index.

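       As a worked example using the figures above, a VDOPoolLV managing
       10 TiB of physical storage needs about
       370 MiB + 10 x 268 MiB = 3050 MiB (roughly 3 GiB) of RAM for the
       VDO target, plus the memory configured for the UDS index
       (allocation/vdo_index_memory_size_mb).
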
   7. Storage space requirements
       You can configure a VDOPoolLV to use up to 256 TiB of physical
       storage. Only a certain part of the physical storage is usable to
       store data. This section provides the calculations to determine the
       usable size of a VDO-managed volume.

       The VDO target requires storage for two types of VDO metadata and
       for the UDS index (a worked example follows the list):

       •      The first type of VDO metadata uses approximately 1 MiB for
              each 4 GiB of physical storage plus an additional 1 MiB per
              slab.

       •      The second type of VDO metadata consumes approximately
              1.25 MiB for each 1 GiB of logical storage, rounded up to
              the nearest slab.

       •      The amount of storage required for the UDS index depends on
              the type of index and the amount of RAM allocated to the
              index. For each 1 GiB of RAM, a dense UDS index uses 17 GiB
              of storage and a sparse UDS index uses 170 GiB of storage.
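
       As a worked example (illustrative, assuming the default 2 GiB slab
       size from vdo_slab_size_mb and a dense UDS index with 1 GiB of
       RAM): a 4 TiB VDOPoolLV has 2048 slabs, so the first type of
       metadata needs about 4096/4 MiB + 2048 x 1 MiB = 3 GiB; a 10 TiB
       logical size adds about 10240 x 1.25 MiB = 12.5 GiB of the second
       type; and the dense index itself uses 17 GiB of storage.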

SEE ALSO

       lvm(8), lvm.conf(5), lvmconfig(8), lvcreate(8), lvconvert(8),
       lvchange(8), lvextend(8), lvreduce(8), lvresize(8), lvremove(8),
       lvs(8), vdo(8), vdoformat(8), vdostats(8), mkfs(8)

Red Hat, Inc           LVM TOOLS 2.03.11(2) (2021-01-08)             LVMVDO(7)