LVMVDO(7)                                                            LVMVDO(7)

NAME
lvmvdo — Support for Virtual Data Optimizer in LVM

DESCRIPTION
VDO is software that provides inline block-level deduplication, compression,
and thin provisioning capabilities for primary storage.

Deduplication is a technique for reducing the consumption of storage
resources by eliminating multiple copies of duplicate blocks. Compression
takes the individual unique blocks and shrinks them. These reduced blocks
are then efficiently packed together into physical blocks. Thin
provisioning manages the mapping from logical blocks presented by VDO to
where the data has actually been physically stored, and also eliminates any
blocks of all zeroes.

With deduplication, instead of writing the same data more than once, VDO
detects and records each duplicate block as a reference to the original
block. VDO maintains a mapping from Logical Block Addresses (LBA) (used by
the storage layer above VDO) to physical block addresses (used by the
storage layer under VDO). After deduplication, multiple logical block
addresses may be mapped to the same physical block address; these are
called shared blocks and are reference-counted by the software.

With compression, VDO compresses multiple blocks (or shared blocks) with
the fast LZ4 algorithm, and bins them together where possible so that
multiple compressed blocks fit within a 4 KiB block on the underlying
storage. The mapping from an LBA is to a physical block address and an
index within it for the desired compressed data. All compressed blocks are
individually reference counted for correctness.

Block sharing and block compression are invisible to applications using the
storage, which read and write blocks as they would if VDO were not present.
When a shared block is overwritten, a new physical block is allocated for
storing the new block data to ensure that other logical block addresses
that are mapped to the shared physical block are not modified.

To use VDO with lvm(8), you must install the standard VDO user-space tools
vdoformat(8) and the currently non-standard kernel VDO module "kvdo".

The "kvdo" module implements fine-grained storage virtualization, thin
provisioning, block sharing, and compression. The "uds" module provides
memory-efficient duplicate identification. The user-space tools include
vdostats(8) for extracting statistics from VDO volumes.
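
For illustration, one way to verify that these components are available
(assuming the standard packaging of the VDO tools and the kvdo module):

# vdoformat --version
# modinfo -F version kvdo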

VDO TERMS
VDODataLV
VDO data LV
A large hidden LV with the _vdata suffix. It is created in a VG used by
the VDO kernel target to store all data and metadata blocks.

VDOPoolLV
VDO pool LV
A pool for virtual VDOLV(s), with a size equal to that of the used
VDODataLV.
Only a single VDOLV is currently supported.

VDOLV
VDO LV
Created from a VDOPoolLV.
Appears blank after creation.

VDO USAGE
The primary methods for using VDO with lvm2:

1. Create a VDOPoolLV and a VDOLV
Create a VDOPoolLV that will hold VDO data, and a virtual size VDOLV that
the user can use. If you do not specify the virtual size, the VDOLV is
created with the maximum size that always fits into the data volume even if
no deduplication or compression can happen (i.e., it can hold the
incompressible content of /dev/urandom). If you do not specify the name of
the VDOPoolLV, it is taken from the sequence vpool0, vpool1, ...

Note: The performance of TRIM/Discard operations is slow for large volumes
of VDO type. Please avoid sending discard requests unless necessary,
because finishing the discard operation might take a considerable amount
of time.

lvcreate --type vdo -n VDOLV -L DataSize -V LargeVirtualSize VG/VDOPoolLV
lvcreate --vdo -L DataSize VG

Example
# lvcreate --type vdo -n vdo0 -L 10G -V 100G vg/vdopool0
# mkfs.ext4 -E nodiscard /dev/vg/vdo0

2. Convert an existing LV into VDOPoolLV
Convert an already created or existing LV into a VDOPoolLV, which is a
volume that can hold data and metadata. You will be prompted to confirm
such a conversion because it IRREVERSIBLY DESTROYS the content of the
volume, which is immediately formatted by vdoformat(8) as a VDO pool data
volume. You can specify the virtual size of the VDOLV associated with this
VDOPoolLV. If you do not specify the virtual size, it will be set to the
maximum size that can keep 100% incompressible data there.

lvconvert --type vdo-pool -n VDOLV -V VirtualSize VG/VDOPoolLV
lvconvert --vdopool VG/VDOPoolLV

Example
# lvconvert --type vdo-pool -n vdo0 -V10G vg/ExistingLV

3. Change the compression and deduplication of a VDOPoolLV
Disable or enable the compression and deduplication for a VDOPoolLV (the
volume that maintains all VDO LV(s) associated with it).

lvchange --compression y|n --deduplication y|n VG/VDOPoolLV

Example
# lvchange --compression n vg/vdopool0
# lvchange --deduplication y vg/vdopool1

4. Change the default settings used for creating a VDOPoolLV
VDO allows you to set a large variety of options. Many of these settings
can be specified in lvm.conf or profile settings. You can prepare a number
of different profiles in the /etc/lvm/profile directory and just specify
the profile file name. Check the output of lvmconfig --type default
--withcomments for a detailed description of all individual VDO settings.

Example
# cat <<EOF > /etc/lvm/profile/vdo_create.profile
allocation {
        vdo_use_compression=1
        vdo_use_deduplication=1
        vdo_use_metadata_hints=1
        vdo_minimum_io_size=4096
        vdo_block_map_cache_size_mb=128
        vdo_block_map_period=16380
        vdo_check_point_frequency=0
        vdo_use_sparse_index=0
        vdo_index_memory_size_mb=256
        vdo_slab_size_mb=2048
        vdo_ack_threads=1
        vdo_bio_threads=1
        vdo_bio_rotation=64
        vdo_cpu_threads=2
        vdo_hash_zone_threads=1
        vdo_logical_threads=1
        vdo_physical_threads=1
        vdo_write_policy="auto"
        vdo_max_discard=1
}
EOF

# lvcreate --vdo -L10G --metadataprofile vdo_create vg/vdopool0
# lvcreate --vdo -L10G --config 'allocation/vdo_cpu_threads=4' vg/vdopool1

5. Set or change VDO settings with option --vdosettings
Use the form 'option=value' or 'option1=value option2=value', or repeat
--vdosettings for each option being set. Options are listed in the Example
section above; for the full description, see lvm.conf(5). Option names may
omit the 'vdo_' and 'vdo_use_' prefixes and all underscores, so e.g.
vdo_use_metadata_hints=1 and metadatahints=1 are equivalent.
To change an option of an already existing VDOPoolLV, use the lvchange(8)
command; however, not all options can be changed. Only the compression and
deduplication options can also be changed for an active VDO LV. Options
specified in the configuration file have the lowest priority, followed by
--vdosettings; the explicit options --compression and --deduplication have
the highest priority.

Example

# lvcreate --vdo -L10G --vdosettings 'ack_threads=1 hash_zone_threads=2' vg/vdopool0
# lvchange --vdosettings 'bio_threads=2 deduplication=1' vg/vdopool0

6. Checking the usage of VDOPoolLV
To quickly check how much data on a VDOPoolLV is already consumed, use
lvs(8). For the VDOLV, the Data% field reports how much of the virtual
data space is occupied; for the VDOPoolLV, it reports how much space is
already consumed by all the data and metadata blocks. For a detailed
description, use the vdostats(8) command.

Note: vdostats(8) currently understands only /dev/mapper device names.

Example
# lvcreate --type vdo -L10G -V20G -n vdo0 vg/vdopool0
# mkfs.ext4 -E nodiscard /dev/vg/vdo0
# lvs -a vg

LV               VG Attr       LSize  Pool     Origin Data%
vdo0             vg vwi-a-v--- 20.00g vdopool0         0.01
vdopool0         vg dwi-ao---- 10.00g                 30.16
[vdopool0_vdata] vg Dwi-ao---- 10.00g

# vdostats --all /dev/mapper/vg-vdopool0-vpool
/dev/mapper/vg-vdopool0-vpool :
version                       : 30
release version               : 133524
data blocks used              : 79
...

7. Extending the VDOPoolLV size
You can add more space to hold VDO data and metadata by extending the
VDODataLV using the commands lvresize(8) and lvextend(8). The extension
needs to add at least one new VDO slab. You can configure the slab size
with the allocation/vdo_slab_size_mb setting.

You can also enable automatic size extension of a monitored VDOPoolLV with
the activation/vdo_pool_autoextend_percent and
activation/vdo_pool_autoextend_threshold settings, as sketched below.
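
For illustration, a minimal sketch of enabling autoextension through a
metadata profile; the profile name vdo_autoextend and the threshold/percent
values are examples only, not defaults:

# cat <<EOF > /etc/lvm/profile/vdo_autoextend.profile
activation {
        vdo_pool_autoextend_threshold = 70
        vdo_pool_autoextend_percent = 20
}
EOF

# lvchange --metadataprofile vdo_autoextend --monitor y vg/vdopool0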

Note: You cannot reduce the size of a VDOPoolLV.

lvextend -L+AddingSize VG/VDOPoolLV

Example
# lvextend -L+50G vg/vdopool0
# lvresize -L300G vg/vdopool1

8. Extending or reducing the VDOLV size
You can extend or reduce a virtual VDO LV as a standard LV with the
lvresize(8), lvextend(8), and lvreduce(8) commands.

Note: The reduction needs to process TRIM for the reduced disk area to
unmap used data blocks from the VDOPoolLV, which might take a long time.

lvextend -L+AddingSize VG/VDOLV
lvreduce -L-ReducingSize VG/VDOLV

Example
# lvextend -L+50G vg/vdo0
# lvreduce -L-50G vg/vdo1
# lvresize -L200G vg/vdo2

9. Component activation of a VDODataLV
You can activate a VDODataLV separately as a component LV for examination
purposes. This activates the data LV in read-only mode, and the data LV
cannot be modified. If the VDODataLV is active as a component, any upper
LV using this volume CANNOT be activated. You have to deactivate the
VDODataLV first to continue using the VDOPoolLV.

Example
# lvchange -ay vg/vpool0_vdata
# lvchange -an vg/vpool0_vdata

VDO TOPICS
1. Stacking VDO
You can convert or stack a VDOPoolLV with these currently supported volume
types: linear, stripe, raid, and cache with cachepool.

2. VDOPoolLV on top of raid
Using a raid type LV for a VDODataLV.

Example
# lvcreate --type raid1 -L 5G -n vdopool vg
# lvconvert --type vdo-pool -V 10G vg/vdopool

3. Caching a VDOPoolLV
Caching a VDOPoolLV (the VDODataLV volume name is also accepted) provides
a mechanism to accelerate reads and writes of already compressed and
deduplicated data blocks together with the VDO metadata.

Example
# lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
# lvcreate --type cache-pool -L 1G -n cachepool vg
# lvconvert --cache --cachepool vg/cachepool vg/vdopool
# lvconvert --uncache vg/vdopool

4. Caching a VDOLV
A VDO LV cache allows you to cache a device for better performance before
the data reaches the processing layer of the VDO Pool LV.

Example
# lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
# lvcreate --type cache-pool -L 1G -n cachepool vg
# lvconvert --cache --cachepool vg/cachepool vg/vdo1
# lvconvert --uncache vg/vdo1

5. Usage of Discard/TRIM with a VDOLV
You can discard data on a VDO LV and reduce used blocks on a VDOPoolLV.
However, the current performance of discard operations is still not
optimal and takes a considerable amount of time and CPU. Unless you really
need it, you should avoid using discard.

When a block device is going to be rewritten, its blocks will be
automatically reused for new data. Discard is useful in situations where
the user knows that a given portion of a VDO LV is not going to be used
and the discarded space can be used for block provisioning in other
regions of the VDO LV. For the same reason, you should avoid using mkfs
with discard on a freshly created VDO LV: the device is already expected
to be empty, so the discard pass would only waste a lot of time.
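
For illustration, discards can be issued with the generic tools fstrim(8)
on a mounted filesystem or blkdiscard(8) on an unused LV; the mount point
and LV names here are examples only:

# fstrim /mnt/vdo0
# blkdiscard /dev/vg/vdo1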

6. Memory usage
The VDO target requires 38 MiB of RAM plus several variable amounts:

• 1.15 MiB of RAM for each 1 MiB of configured block map cache size. The
block map cache requires a minimum of 150 MiB of RAM.

• 1.6 MiB of RAM for each 1 TiB of logical space.

• 268 MiB of RAM for each 1 TiB of physical storage managed by the volume.

UDS requires a minimum of 250 MiB of RAM, which is also the default amount
that deduplication uses.

The memory required for the UDS index is determined by the index type and
the required size of the deduplication window, and is controlled by the
allocation/vdo_use_sparse_index setting.

When UDS sparse indexing is enabled, the index relies on the temporal
locality of data and attempts to retain only the most relevant index
entries in memory; it can then maintain a deduplication window that is ten
times larger than with a dense index while using the same amount of memory.
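
For illustration, a sketch of enabling sparse indexing with a larger index
memory size at creation time, using the abbreviated --vdosettings names
described above (the sizes shown are examples only):

# lvcreate --vdo -L1T --vdosettings 'sparseindex=1 indexmemorysizemb=1024' vg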

Although the sparse index provides the greatest coverage, the dense index
provides more deduplication advice. For most workloads, given the same
amount of memory, the difference in deduplication rates between dense and
sparse indexes is negligible.

A dense index with 1 GiB of RAM maintains a 1 TiB deduplication window,
while a sparse index with 1 GiB of RAM maintains a 10 TiB deduplication
window. In general, 1 GiB is sufficient for 4 TiB of physical space with a
dense index and 40 TiB with a sparse index.
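
As an illustrative estimate only, applying the figures above: a VDOPoolLV
with 4 TiB of physical storage, 10 TiB of logical size, a 150 MiB block
map cache, and a dense UDS index given 1 GiB of RAM needs approximately

38 MiB + 1.15 × 150 MiB + 10 × 1.6 MiB + 4 × 268 MiB + 1024 MiB ≈ 2.3 GiB

of RAM in total.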

7. Storage space requirements
You can configure a VDOPoolLV to use up to 256 TiB of physical storage.
Only a certain part of the physical storage is usable to store data. This
section provides the calculations to determine the usable size of a
VDO-managed volume.

The VDO target requires storage for two types of VDO metadata and for the
UDS index:

• The first type of VDO metadata uses approximately 1 MiB for each 4 GiB
of physical storage plus an additional 1 MiB per slab.

• The second type of VDO metadata consumes approximately 1.25 MiB for each
1 GiB of logical storage, rounded up to the nearest slab.

• The amount of storage required for the UDS index depends on the type of
index and the amount of RAM allocated to the index. For each 1 GiB of RAM,
a dense UDS index uses 17 GiB of storage and a sparse UDS index will use
170 GiB of storage.
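
As an illustrative calculation only, applying the figures above: for a
VDOPoolLV with 4 TiB (4096 GiB) of physical storage, 10 TiB (10240 GiB) of
logical size, 2 GiB slabs (2048 slabs), and a dense UDS index given 1 GiB
of RAM:

4096/4 × 1 MiB + 2048 × 1 MiB =  3 GiB     (first type of metadata)
10240 × 1.25 MiB              = 12.5 GiB   (second type of metadata)
1 × 17 GiB                    = 17 GiB     (dense UDS index)

which reserves roughly 32.5 GiB of the physical storage for metadata and
the index.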

SEE ALSO
lvm(8), lvm.conf(5), lvmconfig(8), lvcreate(8), lvconvert(8), lvchange(8),
lvextend(8), lvreduce(8), lvresize(8), lvremove(8), lvs(8),

vdo(8), vdoformat(8), vdostats(8),

mkfs(8)


Red Hat, Inc         LVM TOOLS 2.03.18(2)-git (2022-11-10)         LVMVDO(7)