LVMLOCKD(8)                                                        LVMLOCKD(8)


NAME
lvmlockd — LVM locking daemon


DESCRIPTION
LVM commands use lvmlockd to coordinate access to shared storage.
When LVM is used on devices shared by multiple hosts, locks will:

• coordinate reading and writing of LVM metadata
• validate caching of LVM metadata
• prevent conflicting activation of logical volumes

lvmlockd uses an external lock manager to perform basic locking.
Lock manager (lock type) options are:

• sanlock: places locks on disk within LVM storage.
• dlm: uses network communication and a cluster manager.


OPTIONS
lvmlockd [options]

For default settings, see lvmlockd -h.

--help | -h
Show this help information.

--version | -V
Show version of lvmlockd.

--test | -T
Test mode, do not call lock manager.

--foreground | -f
Don't fork.

--daemon-debug | -D
Don't fork and print debugging to stdout.

--pid-file | -p path
Set path to the pid file.

--socket-path | -s path
Set path to the socket to listen on.

--adopt-file path
Set path to the adopt file.

--syslog-priority | -S err|warning|debug
Write log messages from this level up to syslog.

--gl-type | -g sanlock|dlm
Set global lock type to be sanlock or dlm.

--host-id | -i num
Set the local sanlock host id.

--host-id-file | -F path
A file containing the local sanlock host_id.

--sanlock-timeout | -o seconds
Override the default sanlock I/O timeout.

--adopt | -A 0|1
Enable (1) or disable (0) lock adoption.


USAGE
Initial set up
Setting up LVM to use lvmlockd and a shared VG for the first time
includes some one-time setup steps:


1. choose a lock manager
dlm
If dlm (or corosync) is already being used by other cluster software,
then select dlm. dlm uses corosync, which requires additional configu‐
ration beyond the scope of this document. See corosync and dlm docu‐
mentation for instructions on configuration, set up and usage.

sanlock
Choose sanlock if dlm/corosync are not otherwise required. sanlock
does not depend on any clustering software or configuration.


2. configure hosts to use lvmlockd
On all hosts running lvmlockd, configure lvm.conf:
use_lvmlockd = 1

sanlock
Assign each host a unique host_id in the range 1-2000 by setting
/etc/lvm/lvmlocal.conf local/host_id

3. start lvmlockd
Start the lvmlockd daemon.
Use systemctl, a cluster resource agent, or run directly, e.g.
systemctl start lvmlockd


4. start lock manager
sanlock
Start the sanlock and wdmd daemons.
Use systemctl or run directly, e.g.
systemctl start wdmd sanlock

dlm
Start the dlm and corosync daemons.
Use systemctl, a cluster resource agent, or run directly, e.g.
systemctl start corosync dlm


5. create VG on shared devices
vgcreate --shared <vgname> <devices>

The shared option sets the VG lock type to sanlock or dlm depending on
which lock manager is running. LVM commands acquire locks from lvm‐
lockd, and lvmlockd uses the chosen lock manager.


6. start VG on all hosts
vgchange --lock-start

Shared VGs must be started before they are used. Starting the VG per‐
forms lock manager initialization that is necessary to begin using
locks (i.e. creating and joining a lockspace). Starting the VG may
take some time, and until the start completes the VG may not be modi‐
fied or activated.


7. create and activate LVs
Standard lvcreate and lvchange commands are used to create and activate
LVs in a shared VG.

An LV activated exclusively on one host cannot be activated on another.
When multiple hosts need to use the same LV concurrently, the LV can be
activated with a shared lock (see lvchange options -aey vs -asy.)
(Shared locks are disallowed for certain LV types that cannot be used
from multiple hosts.)
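
As an illustration, a complete first-time sanlock setup might look
like the following (vg1, lv1 and /dev/sdb are hypothetical names, and
lvm.conf use_lvmlockd and lvmlocal.conf host_id are assumed to be
configured as in step 2):

# on all hosts
systemctl start lvmlockd
systemctl start wdmd sanlock

# on one host, using a device not yet initialized with pvcreate
vgcreate --shared vg1 /dev/sdb

# on all other hosts
vgchange --lock-start vg1

# on one host, create an LV and activate it exclusively
lvcreate -an -n lv1 -L 10G vg1
lvchange -aey vg1/lv1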


Normal start up and shut down
After initial set up, start up and shut down include the following
steps. They can be performed directly or may be automated using sys‐
temd or a cluster resource manager/agents.

• start lvmlockd
• start lock manager
• vgchange --lock-start
• activate LVs in shared VGs

The shut down sequence is the reverse:

• deactivate LVs in shared VGs
• vgchange --lock-stop
• stop lock manager
• stop lvmlockd
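
For example, performed directly on one host using sanlock (vg1/lv1
are illustrative names):

systemctl start lvmlockd
systemctl start wdmd sanlock
vgchange --lock-start
lvchange -aey vg1/lv1

lvchange -an vg1/lv1
vgchange --lock-stop
systemctl stop sanlock wdmd
systemctl stop lvmlockd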


TOPICS
Protecting VGs on shared devices
The following terms are used to describe the different ways of access‐
ing VGs on shared devices.

shared VG

A shared VG exists on shared storage that is visible to multiple hosts.
LVM acquires locks through lvmlockd to coordinate access to shared VGs.
A shared VG has lock_type "dlm" or "sanlock", which specifies the lock
manager lvmlockd will use.

When the lock manager for the lock type is not available (e.g. not
started or failed), lvmlockd is unable to acquire locks for LVM com‐
mands. In this situation, LVM commands are only allowed to read and
display the VG; changes and activation will fail.

local VG

A local VG is meant to be used by a single host. It has no lock type
or lock type "none". A local VG typically exists on local (non-shared)
devices and cannot be used concurrently from different hosts.

If a local VG does exist on shared devices, it should be owned by a
single host by having the system ID set, see lvmsystemid(7). The host
with a matching system ID can use the local VG and other hosts will ig‐
nore it. A VG with no lock type and no system ID should be excluded
from all but one host using lvm.conf filters. Without any of these
protections, a local VG on shared devices can be easily damaged or de‐
stroyed.

clvm VG

A clvm VG (or clustered VG) is a VG on shared storage (like a shared
VG) that requires clvmd for clustering and locking. See below for con‐
verting a clvm/clustered VG to a shared VG.


shared VGs from hosts not using lvmlockd
Hosts that do not use shared VGs will not be running lvmlockd. In this
case, shared VGs that are still visible to the host will be ignored
(like foreign VGs, see lvmsystemid(7).)

The --shared option for reporting and display commands causes shared
VGs to be displayed on a host not using lvmlockd, like the --foreign
option does for foreign VGs.
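
For example, to list shared VGs from a host that is not running
lvmlockd:

vgs --shared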


creating the first sanlock VG
When use_lvmlockd is first enabled in lvm.conf, and before the first
sanlock VG is created, no global lock will exist. In this initial
state, LVM commands that try to acquire the global lock will fail and
produce a warning, and some commands are disallowed. Once the first
sanlock VG is created, the global lock will be available, and LVM will
be fully operational.

When a new sanlock VG is created, its lockspace is automatically
started on the host that creates it. Other hosts need to run 'vgchange
--lock-start' to start the new VG before they can use it.

Creating the first sanlock VG is not protected by locking, so it re‐
quires special attention. This is because sanlock locks exist on stor‐
age within the VG, so they are not available until after the VG is cre‐
ated. The first sanlock VG that is created will automatically contain
the "global lock". Be aware of the following special considerations:


• The first vgcreate command needs to be given the path to a device
that has not yet been initialized with pvcreate. The pvcreate ini‐
tialization will be done by vgcreate. This is because the pvcreate
command requires the global lock, which will not be available until
after the first sanlock VG is created.


• Because the first sanlock VG will contain the global lock, this VG
needs to be accessible to all hosts that will use sanlock shared VGs.
All hosts will need to use the global lock from the first sanlock VG.


• The device and VG name used by the initial vgcreate will not be pro‐
tected from concurrent use by another vgcreate on another host.

See below for more information about managing the sanlock global
lock.


using shared VGs
In the 'vgs' command, shared VGs are indicated by "s" (for shared) in
the sixth attr field, and by "shared" in the "--options shared" report
field. The specific lock type and lock args for a shared VG can be
displayed with 'vgs -o+locktype,lockargs'.
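
Illustrative output (names and sizes are hypothetical, and the column
layout is approximated; note the "s" in the sixth attr position):

# vgs -o+locktype,lockargs vg1
  VG  #PV #LV #SN Attr   VSize   VFree  LockType LockArgs
  vg1   1   1   0 wz--ns <10.00g <9.00g sanlock  1.0.0:lvmlock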

Shared VGs need to be "started" and "stopped", unlike other types of
VGs. See the following section for a full description of starting and
stopping.

Removing a shared VG will fail if other hosts have the VG started. Run
vgchange --lock-stop <vgname> on all other hosts before vgremove. (It
may take several seconds before vgremove recognizes that all hosts have
stopped a sanlock VG.)


starting and stopping VGs
Starting a shared VG (vgchange --lock-start) causes the lock manager to
start (join) the lockspace for the VG on the host where it is run.
This makes locks for the VG available to LVM commands on the host. Be‐
fore a VG is started, only LVM commands that read/display the VG are
allowed to continue without locks (and with a warning).

Stopping a shared VG (vgchange --lock-stop) causes the lock manager to
stop (leave) the lockspace for the VG on the host where it is run.
This makes locks for the VG inaccessible to the host. A VG cannot be
stopped while it has active LVs.

When using the lock type sanlock, starting a VG can take a long time
(potentially minutes if the host was previously shut down without
cleanly stopping the VG.)

A shared VG can be started after all the following are true:
• lvmlockd is running
• the lock manager is running
• the VG's devices are visible on the system

A shared VG can be stopped if all LVs are deactivated.

All shared VGs can be started/stopped using:
vgchange --lock-start
vgchange --lock-stop


Individual VGs can be started/stopped using:
vgchange --lock-start <vgname> ...
vgchange --lock-stop <vgname> ...

To make vgchange not wait for start to complete:
vgchange --lock-start --lock-opt nowait ...

lvmlockd can be asked directly to stop all lockspaces:
lvmlockctl --stop-lockspaces

To start only selected shared VGs, use the lvm.conf activa‐
tion/lock_start_list. When defined, only VG names in this list are
started by vgchange. If the list is not defined (the default), all
visible shared VGs are started. To start only "vg1", use the following
lvm.conf configuration:

activation {
    lock_start_list = [ "vg1" ]
    ...
}


internal command locking
To optimize the use of LVM with lvmlockd, be aware of the three kinds
of locks and when they are used:

Global lock

The global lock is associated with global information, which is infor‐
mation not isolated to a single VG. This includes:

• The global VG namespace.
• The set of orphan PVs and unused devices.
• The properties of orphan PVs, e.g. PV size.

The global lock is acquired in shared mode by commands that read this
information, or in exclusive mode by commands that change it. For ex‐
ample, the command 'vgs' acquires the global lock in shared mode be‐
cause it reports the list of all VG names, and the vgcreate command ac‐
quires the global lock in exclusive mode because it creates a new VG
name, and it takes a PV from the list of unused PVs.

When an LVM command is given a tag argument, or uses select, it must
read all VGs to match the tag or selection, which causes the global
lock to be acquired.

VG lock

A VG lock is associated with each shared VG. The VG lock is acquired
in shared mode to read the VG and in exclusive mode to change the VG or
activate LVs. This lock serializes access to a VG with all other LVM
commands accessing the VG from all hosts.

The command 'vgs <vgname>' does not acquire the global lock (it does
not need the list of all VG names), but will acquire the VG lock on
each VG name argument.
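
For example, an illustrative summary of the locking described above:

vgs                    # global lock, shared (reads all VG names)
vgcreate --shared ...  # global lock, exclusive (creates a new VG name)
vgs vg1                # VG lock on vg1, shared (reads one VG)
lvcreate ... vg1       # VG lock on vg1, exclusive (changes the VG)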

LV lock

An LV lock is acquired before the LV is activated, and is released af‐
ter the LV is deactivated. If the LV lock cannot be acquired, the LV
is not activated. (LV locks are persistent and remain in place when
the activation command is done. Global and VG locks are transient, and
are held only while an LVM command is running.)

lock retries

If a request for a global or VG lock fails due to a lock conflict with
another host, lvmlockd automatically retries for a short time before
returning a failure to the LVM command. If those retries are insuffi‐
cient, the LVM command will retry the entire lock request a number of
times specified by global/lvmlockd_lock_retries before failing. If a
request for an LV lock fails due to a lock conflict, the command fails
immediately.
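
For example, to raise the retry count, set the following in lvm.conf
(the value shown is illustrative):

global {
    lvmlockd_lock_retries = 5
}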


managing the global lock in sanlock VGs
The global lock exists in one of the sanlock VGs. The first sanlock VG
created will contain the global lock. Subsequent sanlock VGs will each
contain a disabled global lock that can be enabled later if necessary.

The VG containing the global lock must be visible to all hosts using
sanlock VGs. For this reason, it can be useful to create a small san‐
lock VG, visible to all hosts, and dedicated to just holding the global
lock. While not required, this strategy can help to avoid difficulty
in the future if VGs are moved or removed.
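
For example, a small VG dedicated to holding the global lock might be
created as (the VG and device names are hypothetical):

vgcreate --shared glvg /dev/sdb1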

The vgcreate command typically acquires the global lock, but in the
case of the first sanlock VG, there will be no global lock to acquire
until the first vgcreate is complete. So, creating the first sanlock
VG is a special case that skips the global lock.

vgcreate determines that it's creating the first sanlock VG when no
other sanlock VGs are visible on the system. It is possible that other
sanlock VGs do exist, but are not visible when vgcreate checks for
them. In this case, vgcreate will create a new sanlock VG with the
global lock enabled. When another VG containing an enabled global lock
appears, lvmlockd will then see more than one VG with a global lock en‐
abled. LVM commands will report that there are duplicate global locks.

If the situation arises where more than one sanlock VG contains a
global lock, the global lock should be manually disabled in all but one
of them with the command:

lvmlockctl --gl-disable <vgname>

(The one VG with the global lock enabled must be visible to all hosts.)

An opposite problem can occur if the VG holding the global lock is re‐
moved. In this case, no global lock will exist following the vgremove,
and subsequent LVM commands will fail to acquire it. In this case, the
global lock needs to be manually enabled in one of the remaining san‐
lock VGs with the command:

lvmlockctl --gl-enable <vgname>

(Using a small sanlock VG dedicated to holding the global lock can
avoid the case where the global lock must be manually enabled after a
vgremove.)


internal lvmlock LV
A sanlock VG contains a hidden LV called "lvmlock" that holds the san‐
lock locks. vgreduce cannot yet remove the PV holding the lvmlock LV.
To remove this PV, change the VG lock type to "none", run vgreduce,
then change the VG lock type back to "sanlock". Similarly, pvmove can‐
not be used on a PV used by the lvmlock LV.

To place the lvmlock LV on a specific device, create the VG with only
that device, then use vgextend to add other devices.
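
For example (VG and device names are hypothetical):

# place the lvmlock LV on /dev/sdb
vgcreate --shared vg1 /dev/sdb
vgextend vg1 /dev/sdc /dev/sdd

# later, remove the PV holding the lvmlock LV
vgchange --lock-type none vg1
vgreduce vg1 /dev/sdb
vgchange --lock-type sanlock vg1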


LV activation
In a shared VG, LV activation involves locking through lvmlockd, and
the following values are possible with lvchange/vgchange -a:


y|ey   The command activates the LV in exclusive mode, allowing a sin‐
gle host to activate the LV. Before activating the LV, the com‐
mand uses lvmlockd to acquire an exclusive lock on the LV. If
the lock cannot be acquired, the LV is not activated and an er‐
ror is reported. This would happen if the LV is active on an‐
other host.


sy     The command activates the LV in shared mode, allowing multiple
hosts to activate the LV concurrently. Before activating the
LV, the command uses lvmlockd to acquire a shared lock on the
LV. If the lock cannot be acquired, the LV is not activated and
an error is reported. This would happen if the LV is active ex‐
clusively on another host. If the LV type prohibits shared ac‐
cess, such as a snapshot, the command will report an error and
fail. The shared mode is intended for a multi-host/cluster ap‐
plication or file system. LV types that cannot be used concur‐
rently from multiple hosts include thin, cache, raid, mirror,
and snapshot.


n      The command deactivates the LV. After deactivating the LV, the
command uses lvmlockd to release the current lock on the LV.
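
For example (vg1/lv1 is an illustrative name):

lvchange -aey vg1/lv1  # exclusive activation
lvchange -asy vg1/lv1  # shared activation
lvchange -an vg1/lv1   # deactivation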


manually repairing a shared VG
Some failure conditions may not be repairable while the VG has a shared
lock type. In these cases, it may be possible to repair the VG by
forcibly changing the lock type to "none". This is done by adding
"--lock-opt force" to the normal command for changing the lock type:
vgchange --lock-type none VG. The VG lockspace should first be stopped
on all hosts, and no hosts should be using the VG, before this is done.


recover from lost PV holding sanlock locks
In a sanlock VG, the sanlock locks are held on the hidden "lvmlock" LV.
If the PV holding this LV is lost, a new lvmlock LV needs to be cre‐
ated. To do this, ensure no hosts are using the VG, then forcibly
change the lock type to "none" (see above). Then change the lock type
back to "sanlock" with the normal command for changing the lock type:
vgchange --lock-type sanlock VG. This recreates the internal lvmlock
LV with the necessary locks.
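
For example (vg1 is an illustrative name):

vgchange --lock-type none --lock-opt force vg1
vgchange --lock-type sanlock vg1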


locking system failures
lvmlockd failure

If lvmlockd fails or is killed while holding locks, the locks are or‐
phaned in the lock manager. Orphaned locks must be cleared or adopted
before the associated resources can be accessed normally. If lock
adoption is enabled, lvmlockd keeps a record of locks in the adopt-
file. A subsequent instance of lvmlockd will then adopt locks orphaned
by the previous instance. Adoption must be enabled in both instances
(--adopt|-A 1). Without adoption, the lock manager or host would re‐
quire a reset to clear orphaned lock state.
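
For example, running the daemon with adoption enabled so a restarted
instance adopts locks orphaned by the previous one:

lvmlockd --adopt 1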

dlm/corosync failure

If dlm or corosync fails, the clustering system will fence the host us‐
ing a method configured within the dlm/corosync clustering environment.

LVM commands on other hosts will be blocked from acquiring any locks
until the dlm/corosync recovery process is complete.

sanlock lease storage failure

If the PV under a sanlock VG's lvmlock LV is disconnected, unresponsive
or too slow, sanlock cannot renew the lease for the VG's locks. After
some time, the lease will expire, and locks that the host owns in the
VG can be acquired by other hosts. The VG must be forcibly deactivated
on the host with the expiring lease before other hosts can acquire its
locks.

When the sanlock daemon detects that the lease storage is lost, it runs
the command lvmlockctl --kill <vgname>. This command emits a syslog
message stating that lease storage is lost for the VG, and LVs must be
immediately deactivated.

If no LVs are active in the VG, then the lockspace with an expiring
lease will be removed, and errors will be reported when trying to use
the VG. Use the lvmlockctl --drop command to clear the stale lockspace
from lvmlockd.

If the VG has active LVs when the lock storage is lost, the LVs must be
quickly deactivated before the lockspace lease expires. After all LVs
are deactivated, run lvmlockctl --drop <vgname> to clear the expiring
lockspace from lvmlockd. If all LVs in the VG are not deactivated
within about 40 seconds, sanlock uses wdmd and the local watchdog to
reset the host. The machine reset is effectively a severe form of "de‐
activating" LVs before they can be activated on other hosts. The reset
is considered a better alternative than having LVs used by multiple
hosts at once, which could easily damage or destroy their content.
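
For example, after lease storage is lost for vg1 (an illustrative
name):

vgchange -an vg1       # quickly deactivate all LVs in the VG
lvmlockctl --drop vg1  # clear the expiring lockspace from lvmlockd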

In the future, the lvmlockctl kill command may automatically attempt to
forcibly deactivate LVs before the sanlock lease expires. Until then,
the user must notice the syslog message and manually deactivate the VG
before sanlock resets the machine.

sanlock daemon failure

If the sanlock daemon fails or exits while a lockspace is started, the
local watchdog will reset the host. This is necessary to protect any
application resources that depend on sanlock leases.


changing dlm cluster name
When a dlm VG is created, the cluster name is saved in the VG metadata.
To use the VG, a host must be in the named dlm cluster. If the dlm
cluster name changes, or the VG is moved to a new cluster, the dlm
cluster name saved in the VG must also be changed.

To see the dlm cluster name saved in the VG, use the command:
vgs -o+locktype,lockargs <vgname>

To change the dlm cluster name in the VG when the VG is still used by
the original cluster:


• Start the VG on the host changing the lock type:
vgchange --lock-start <vgname>


• Stop the VG on all other hosts:
vgchange --lock-stop <vgname>


• Change the VG lock type to none on the host where the VG is started:
vgchange --lock-type none <vgname>


• Change the dlm cluster name on the hosts or move the VG to the new
cluster. The new dlm cluster must now be running on the host. Ver‐
ify the new name by:
cat /sys/kernel/config/dlm/cluster/cluster_name


• Change the VG lock type back to dlm, which sets the new cluster name:
vgchange --lock-type dlm <vgname>


• Start the VG on hosts to use it:
vgchange --lock-start <vgname>


To change the dlm cluster name in the VG when the dlm cluster name has
already been changed on the hosts, or the VG has already moved to a
different cluster:


• Ensure the VG is not being used by any hosts.


• The new dlm cluster must be running on the host making the change.
The current dlm cluster name can be seen by:
cat /sys/kernel/config/dlm/cluster/cluster_name


• Change the VG lock type to none:
vgchange --lock-type none --lock-opt force <vgname>


• Change the VG lock type back to dlm, which sets the new cluster name:
vgchange --lock-type dlm <vgname>


• Start the VG on hosts to use it:
vgchange --lock-start <vgname>


changing a local VG to a shared VG
All LVs must be inactive to change the lock type.

lvmlockd must be configured and running as described in USAGE.


• Change a local VG to a shared VG with the command:
vgchange --lock-type sanlock|dlm <vgname>


• Start the VG on hosts to use it:
vgchange --lock-start <vgname>


changing a shared VG to a local VG
All LVs must be inactive to change the lock type.


• Start the VG on the host making the change:
vgchange --lock-start <vgname>


• Stop the VG on all other hosts:
vgchange --lock-stop <vgname>


• Change the VG lock type to none on the host where the VG is started:
vgchange --lock-type none <vgname>


If the VG cannot be started with the previous lock type, then the lock
type can be forcibly changed to none with:

vgchange --lock-type none --lock-opt force <vgname>

To change a VG from one lock type to another (i.e. between sanlock and
dlm), first change it to a local VG, then to the new type.
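
For example, to change vg1 (an illustrative name) from sanlock to dlm,
with the VG started on the host making the change and stopped on all
other hosts:

vgchange --lock-type none vg1
vgchange --lock-type dlm vg1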


changing a clvm/clustered VG to a shared VG
All LVs must be inactive to change the lock type.

First change the clvm/clustered VG to a local VG. Within a running
clvm cluster, change a clustered VG to a local VG with the command:

vgchange -cn <vgname>

If the clvm cluster is no longer running on any nodes, then extra op‐
tions can be used to forcibly make the VG local. Caution: this is only
safe if all nodes have stopped using the VG:

vgchange --lock-type none --lock-opt force <vgname>

After the VG is local, follow the steps described in "changing a local
VG to a shared VG".


extending an LV active on multiple hosts
With lvmlockd and dlm, a special clustering procedure is used to re‐
fresh a shared LV on remote cluster nodes after it has been extended on
one node.

When an LV holding gfs2 or ocfs2 is active on multiple hosts with a
shared lock, lvextend is permitted to run with an existing shared LV
lock in place of the normal exclusive LV lock.

After lvextend has finished extending the LV, it sends a remote request
to other nodes running the dlm to run 'lvchange --refresh' on the LV.
This uses dlm_controld and corosync features.

Some special --lockopt values can be used to modify this process.
"shupdate" permits the lvextend update with an existing shared lock if
it isn't otherwise permitted. "norefresh" prevents the remote refresh
operation.
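
For example (the LV name and size are illustrative):

lvextend --lockopt shupdate -L+10g vg1/lv1
lvextend --lockopt norefresh -L+10g vg1/lv1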


limitations of shared VGs
Things that do not yet work in shared VGs:
• using external origins for thin LVs
• splitting snapshots from LVs
• splitting mirrors in sanlock VGs
• pvmove of entire PVs, or under LVs activated with shared locks
• vgsplit and vgmerge (convert to a local VG to do this)


lvmlockd changes from clvmd
(See above for converting an existing clvm VG to a shared VG.)

While lvmlockd and clvmd are entirely different systems, LVM command
usage remains similar. Differences are more notable when using lvm‐
lockd's sanlock option.

Visible usage differences between shared VGs (using lvmlockd) and
clvm/clustered VGs (using clvmd):


• lvm.conf is configured to use lvmlockd by setting use_lvmlockd=1.
clvmd used locking_type=3.


• vgcreate --shared creates a shared VG. vgcreate --clustered y cre‐
ated a clvm/clustered VG.


• lvmlockd adds the option of using sanlock for locking, avoiding the
need for network clustering.


• lvmlockd defaults to the exclusive activation mode whenever the acti‐
vation mode is unspecified, i.e. -ay means -aey, not -asy.


• lvmlockd commands always apply to the local host, and never have an
effect on a remote host. (The activation option 'l' is not used.)


• lvmlockd saves the cluster name for a shared VG using dlm. Only
hosts in the matching cluster can use the VG.


• lvmlockd requires starting/stopping shared VGs with vgchange
--lock-start and --lock-stop.


• vgremove of a sanlock VG may fail indicating that all hosts have not
stopped the VG lockspace. Stop the VG on all hosts using vgchange
--lock-stop.


• vgreduce or pvmove of a PV in a sanlock VG will fail if it holds the
internal "lvmlock" LV that holds the sanlock locks.


• lvmlockd uses lock retries instead of lock queueing, so high lock
contention may require increasing global/lvmlockd_lock_retries to
avoid transient lock failures.


• lvmlockd includes VG reporting options lock_type and lock_args, and
LV reporting option lock_args to view the corresponding metadata
fields.


• In the 'vgs' command's sixth VG attr field, "s" for "shared" is dis‐
played for shared VGs.


• If lvmlockd fails or is killed while in use, locks it held remain but
are orphaned in the lock manager. lvmlockd can be restarted with an
option to adopt the orphan locks from the previous instance of lvm‐
lockd.


Red Hat, Inc          LVM TOOLS 2.03.11(2) (2021-01-08)          LVMLOCKD(8)