LVMLOCKD(8)                                                        LVMLOCKD(8)


NAME
lvmlockd — LVM locking daemon

DESCRIPTION
LVM commands use lvmlockd to coordinate access to shared storage.
When LVM is used on devices shared by multiple hosts, locks will:

· coordinate reading and writing of LVM metadata
· validate caching of LVM metadata
· prevent conflicting activation of logical volumes

lvmlockd uses an external lock manager to perform basic locking.
Lock manager (lock type) options are:

· sanlock: places locks on disk within LVM storage.
· dlm: uses network communication and a cluster manager.

OPTIONS
lvmlockd [options]

For default settings, see lvmlockd -h.

--help | -h
    Show this help information.

--version | -V
    Show version of lvmlockd.

--test | -T
    Test mode, do not call lock manager.

--foreground | -f
    Don't fork.

--daemon-debug | -D
    Don't fork and print debugging to stdout.

--pid-file | -p path
    Set path to the pid file.

--socket-path | -s path
    Set path to the socket to listen on.

--adopt-file path
    Set path to the adopt file.

--syslog-priority | -S err|warning|debug
    Write log messages from this level up to syslog.

--gl-type | -g sanlock|dlm
    Set global lock type to be sanlock or dlm.

--host-id | -i num
    Set the local sanlock host_id.

--host-id-file | -F path
    A file containing the local sanlock host_id.

--sanlock-timeout | -o seconds
    Override the default sanlock I/O timeout.

--adopt | -A 0|1
    Enable (1) or disable (0) lock adoption.

USAGE

Initial set up
Setting up LVM to use lvmlockd and a shared VG for the first time
includes some one-time setup steps:

1. choose a lock manager

dlm
If dlm (or corosync) is already being used by other cluster software,
then select dlm. dlm uses corosync, which requires additional
configuration beyond the scope of this document. See the corosync and
dlm documentation for instructions on configuration, set up and usage.

sanlock
Choose sanlock if dlm/corosync are not otherwise required. sanlock
does not depend on any clustering software or configuration.

2. configure hosts to use lvmlockd

On all hosts running lvmlockd, configure lvm.conf:
    use_lvmlockd = 1

sanlock
Assign each host a unique host_id in the range 1-2000 by setting
/etc/lvm/lvmlocal.conf local/host_id, as in the example below.

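For example (a minimal sketch; the host_id value is illustrative and
must be unique on each host):

    # /etc/lvm/lvm.conf, on all hosts
    global {
        use_lvmlockd = 1
    }

    # /etc/lvm/lvmlocal.conf, per host (sanlock only)
    local {
        host_id = 1
    }
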
3. start lvmlockd

Start the lvmlockd daemon.
Use systemctl, a cluster resource agent, or run it directly, e.g.
    systemctl start lvmlockd

4. start the lock manager

sanlock
Start the sanlock and wdmd daemons.
Use systemctl or run them directly, e.g.
    systemctl start wdmd sanlock

dlm
Start the dlm and corosync daemons.
Use systemctl, a cluster resource agent, or run them directly, e.g.
    systemctl start corosync dlm

5. create a VG on shared devices

    vgcreate --shared <vgname> <devices>

The --shared option sets the VG lock type to sanlock or dlm, depending
on which lock manager is running. LVM commands acquire locks from
lvmlockd, and lvmlockd uses the chosen lock manager.
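
For example (a sketch; the VG and device names are illustrative):

    vgcreate --shared vg1 /dev/sdb /dev/sdc
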
123
124
125 6. start VG on all hosts
126 vgchange --lock-start
127
128 Shared VGs must be started before they are used. Starting the VG per‐
129 forms lock manager initialization that is necessary to begin using
130 locks (i.e. creating and joining a lockspace). Starting the VG may
131 take some time, and until the start completes the VG may not be modi‐
132 fied or activated.
133
134
135 7. create and activate LVs
136 Standard lvcreate and lvchange commands are used to create and activate
137 LVs in a shared VG.
138
139 An LV activated exclusively on one host cannot be activated on another.
140 When multiple hosts need to use the same LV concurrently, the LV can be
141 activated with a shared lock (see lvchange options -aey vs -asy.)
142 (Shared locks are disallowed for certain LV types that cannot be used
143 from multiple hosts.)
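
For example (a sketch; names and size are illustrative):

    lvcreate -n lv1 -L 10G vg1   (created and activated exclusively here)
    lvchange -an vg1/lv1         (release the exclusive lock)
    lvchange -asy vg1/lv1        (activate shared, run on each host)
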

Normal start up and shut down
After initial set up, start up and shut down include the following
steps. They can be performed directly or may be automated using
systemd or a cluster resource manager/agents. An example sequence
follows the lists below.

· start lvmlockd
· start the lock manager
· vgchange --lock-start
· activate LVs in shared VGs

The shut down sequence is the reverse:

· deactivate LVs in shared VGs
· vgchange --lock-stop
· stop the lock manager
· stop lvmlockd
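
For example, with sanlock and systemd (a sketch using the unit names
shown above; the VG name is illustrative):

    systemctl start lvmlockd
    systemctl start wdmd sanlock
    vgchange --lock-start
    vgchange -ay vg1

and in reverse:

    vgchange -an vg1
    vgchange --lock-stop
    systemctl stop sanlock wdmd
    systemctl stop lvmlockd
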

TOPICS

Protecting VGs on shared devices
The following terms are used to describe the different ways of
accessing VGs on shared devices.

shared VG

A shared VG exists on shared storage that is visible to multiple
hosts. LVM acquires locks through lvmlockd to coordinate access to
shared VGs. A shared VG has lock_type "dlm" or "sanlock", which
specifies the lock manager lvmlockd will use.

When the lock manager for the lock type is not available (e.g. not
started or failed), lvmlockd is unable to acquire locks for LVM
commands. In this situation, LVM commands are only allowed to read
and display the VG; changes and activation will fail.

local VG

A local VG is meant to be used by a single host. It has no lock type,
or lock type "none". A local VG typically exists on local (non-shared)
devices and cannot be used concurrently from different hosts.

If a local VG does exist on shared devices, it should be owned by a
single host by having its system ID set; see lvmsystemid(7). The host
with a matching system ID can use the local VG, and other hosts will
ignore it. A VG with no lock type and no system ID should be excluded
from all but one host using lvm.conf filters. Without any of these
protections, a local VG on shared devices can be easily damaged or
destroyed.

clvm VG

A clvm VG (or clustered VG) is a VG on shared storage (like a shared
VG) that requires clvmd for clustering and locking. See below for
converting a clvm/clustered VG to a shared VG.

shared VGs from hosts not using lvmlockd
Hosts that do not use shared VGs will not be running lvmlockd. In
this case, shared VGs that are still visible to the host will be
ignored (like foreign VGs; see lvmsystemid(7)).

The --shared option for reporting and display commands causes shared
VGs to be displayed on a host not using lvmlockd, like the --foreign
option does for foreign VGs.
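
For example, to list shared VGs from such a host:

    vgs --shared
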

creating the first sanlock VG
When use_lvmlockd is first enabled in lvm.conf, and before the first
sanlock VG is created, no global lock will exist. In this initial
state, LVM commands try and fail to acquire the global lock, producing
a warning, and some commands are disallowed. Once the first sanlock
VG is created, the global lock will be available, and LVM will be
fully operational.

When a new sanlock VG is created, its lockspace is automatically
started on the host that creates it. Other hosts need to run
'vgchange --lock-start' to start the new VG before they can use it.

Creating the first sanlock VG is not protected by locking, so it
requires special attention. This is because sanlock locks exist on
storage within the VG, so they are not available until after the VG
is created. The first sanlock VG that is created will automatically
contain the "global lock". Be aware of the following special
considerations:

· The first vgcreate command needs to be given the path to a device
that has not yet been initialized with pvcreate. The pvcreate
initialization will be done by vgcreate. This is because the pvcreate
command requires the global lock, which will not be available until
after the first sanlock VG is created.

· Because the first sanlock VG will contain the global lock, this VG
needs to be accessible to all hosts that will use sanlock shared VGs.
All hosts will need to use the global lock from the first sanlock VG.

· The device and VG name used by the initial vgcreate will not be
protected from concurrent use by another vgcreate on another host.

See below for more information about managing the sanlock global
lock.
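
Given these considerations, a minimal first-VG creation might look
like this (a sketch; the names are illustrative, and the device must
not already be a PV):

    vgcreate --shared vg1 /dev/sdb
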

using shared VGs
In the 'vgs' command, shared VGs are indicated by "s" (for shared) in
the sixth attr field, and by "shared" in the "--options shared" report
field. The specific lock type and lock args for a shared VG can be
displayed with 'vgs -o+locktype,lockargs'.

Shared VGs need to be "started" and "stopped", unlike other types of
VGs. See the following section for a full description of starting and
stopping.

Removing a shared VG will fail if other hosts have the VG started.
Run vgchange --lock-stop <vgname> on all other hosts before vgremove.
(It may take several seconds before vgremove recognizes that all
hosts have stopped a sanlock VG.)
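
For example (a sketch; the VG name is illustrative):

    vgchange --lock-stop vg1     (on every other host)
    vgremove vg1                 (on the remaining host)
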

starting and stopping VGs
Starting a shared VG (vgchange --lock-start) causes the lock manager
to start (join) the lockspace for the VG on the host where it is run.
This makes locks for the VG available to LVM commands on the host.
Before a VG is started, only LVM commands that read/display the VG
are allowed to continue without locks (and with a warning).

Stopping a shared VG (vgchange --lock-stop) causes the lock manager
to stop (leave) the lockspace for the VG on the host where it is run.
This makes locks for the VG inaccessible to the host. A VG cannot be
stopped while it has active LVs.

When using the lock type sanlock, starting a VG can take a long time
(potentially minutes if the host was previously shut down without
cleanly stopping the VG).

A shared VG can be started after all the following are true:
· lvmlockd is running
· the lock manager is running
· the VG's devices are visible on the system

A shared VG can be stopped if all LVs are deactivated.

All shared VGs can be started/stopped using:
    vgchange --lock-start
    vgchange --lock-stop

Individual VGs can be started/stopped using:
    vgchange --lock-start <vgname> ...
    vgchange --lock-stop <vgname> ...

To make vgchange not wait for start to complete:
    vgchange --lock-start --lock-opt nowait ...

lvmlockd can be asked directly to stop all lockspaces:
    lvmlockctl -S|--stop-lockspaces

To start only selected shared VGs, use the lvm.conf
activation/lock_start_list. When defined, only VG names in this list
are started by vgchange. If the list is not defined (the default),
all visible shared VGs are started. To start only "vg1", use the
following lvm.conf configuration:

    activation {
        lock_start_list = [ "vg1" ]
        ...
    }

internal command locking
To optimize the use of LVM with lvmlockd, be aware of the three kinds
of locks and when they are used:

Global lock

The global lock is associated with global information, which is
information not isolated to a single VG. This includes:

· The global VG namespace.
· The set of orphan PVs and unused devices.
· The properties of orphan PVs, e.g. PV size.

The global lock is acquired in shared mode by commands that read this
information, or in exclusive mode by commands that change it. For
example, the command 'vgs' acquires the global lock in shared mode
because it reports the list of all VG names, and the vgcreate command
acquires the global lock in exclusive mode because it creates a new
VG name, and it takes a PV from the list of unused PVs.

When an LVM command is given a tag argument, or uses select, it must
read all VGs to match the tag or selection, which causes the global
lock to be acquired.

VG lock

A VG lock is associated with each shared VG. The VG lock is acquired
in shared mode to read the VG and in exclusive mode to change the VG
or activate LVs. This lock serializes access to a VG with all other
LVM commands accessing the VG from all hosts.

The command 'vgs <vgname>' does not acquire the global lock (it does
not need the list of all VG names), but will acquire the VG lock on
each VG name argument.

LV lock

An LV lock is acquired before the LV is activated, and is released
after the LV is deactivated. If the LV lock cannot be acquired, the
LV is not activated. (LV locks are persistent and remain in place
when the activation command is done. Global and VG locks are
transient, and are held only while an LVM command is running.)
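
To summarize with examples (lock usage as described above; the names
are illustrative):

    vgs                    global lock, shared
    vgcreate vg1 /dev/sdb  global lock, exclusive
    vgs vg1                VG lock on vg1, shared
    lvchange -ay vg1/lv1   VG lock exclusive, plus an exclusive LV lock
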

lock retries

If a request for a global or VG lock fails due to a lock conflict
with another host, lvmlockd automatically retries for a short time
before returning a failure to the LVM command. If those retries are
insufficient, the LVM command will retry the entire lock request a
number of times specified by global/lvmlockd_lock_retries before
failing. If a request for an LV lock fails due to a lock conflict,
the command fails immediately.
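
The retry count can be raised in lvm.conf (a sketch; the value shown
is illustrative):

    global {
        lvmlockd_lock_retries = 5
    }
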

managing the global lock in sanlock VGs
The global lock exists in one of the sanlock VGs. The first sanlock
VG created will contain the global lock. Subsequent sanlock VGs will
each contain a disabled global lock that can be enabled later if
necessary.

The VG containing the global lock must be visible to all hosts using
sanlock VGs. For this reason, it can be useful to create a small
sanlock VG, visible to all hosts, and dedicated to just holding the
global lock. While not required, this strategy can help to avoid
difficulty in the future if VGs are moved or removed.

The vgcreate command typically acquires the global lock, but in the
case of the first sanlock VG, there will be no global lock to acquire
until the first vgcreate is complete. So, creating the first sanlock
VG is a special case that skips the global lock.

vgcreate determines that it's creating the first sanlock VG when no
other sanlock VGs are visible on the system. It is possible that
other sanlock VGs do exist, but are not visible when vgcreate checks
for them. In this case, vgcreate will create a new sanlock VG with
the global lock enabled. When another VG containing a global lock
appears, lvmlockd will then see more than one VG with a global lock
enabled, and LVM commands will report that there are duplicate global
locks.

If the situation arises where more than one sanlock VG contains a
global lock, the global lock should be manually disabled in all but
one of them with the command:

    lvmlockctl --gl-disable <vgname>

(The one VG with the global lock enabled must be visible to all
hosts.)

An opposite problem can occur if the VG holding the global lock is
removed. In this case, no global lock will exist following the
vgremove, and subsequent LVM commands will fail to acquire it. In
this case, the global lock needs to be manually enabled in one of the
remaining sanlock VGs with the command:

    lvmlockctl --gl-enable <vgname>

(Using a small sanlock VG dedicated to holding the global lock can
avoid the case where the global lock must be manually enabled after a
vgremove.)

internal lvmlock LV
A sanlock VG contains a hidden LV called "lvmlock" that holds the
sanlock locks. vgreduce cannot yet remove the PV holding the lvmlock
LV. To remove this PV, change the VG lock type to "none", run
vgreduce, then change the VG lock type back to "sanlock". Similarly,
pvmove cannot be used on a PV used by the lvmlock LV.
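
For example (a sketch; the names are illustrative, all LVs must be
inactive, and the VG lockspace should be stopped on all other hosts
first):

    vgchange --lock-type none vg1
    vgreduce vg1 /dev/sdb
    vgchange --lock-type sanlock vg1
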

To place the lvmlock LV on a specific device, create the VG with only
that device, then use vgextend to add other devices.
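
For example (names illustrative):

    vgcreate --shared vg1 /dev/sdb   (lvmlock LV is placed on /dev/sdb)
    vgextend vg1 /dev/sdc /dev/sdd
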

LV activation
In a shared VG, LV activation involves locking through lvmlockd, and
the following values are possible with lvchange/vgchange -a:

y|ey   The command activates the LV in exclusive mode, allowing a
       single host to activate the LV. Before activating the LV, the
       command uses lvmlockd to acquire an exclusive lock on the LV.
       If the lock cannot be acquired, the LV is not activated and an
       error is reported. This would happen if the LV is active on
       another host.

sy     The command activates the LV in shared mode, allowing multiple
       hosts to activate the LV concurrently. Before activating the
       LV, the command uses lvmlockd to acquire a shared lock on the
       LV. If the lock cannot be acquired, the LV is not activated
       and an error is reported. This would happen if the LV is
       active exclusively on another host. If the LV type prohibits
       shared access, such as a snapshot, the command will report an
       error and fail. The shared mode is intended for a
       multi-host/cluster application or file system. LV types that
       cannot be used concurrently from multiple hosts include thin,
       cache, raid, mirror, and snapshot.

n      The command deactivates the LV. After deactivating the LV,
       the command uses lvmlockd to release the current lock on the
       LV.
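
For example (names illustrative):

    lvchange -aey vg1/lv1    (exclusive activation on this host)
    lvchange -asy vg1/lv1    (shared activation, run on each host)
    lvchange -an vg1/lv1     (deactivate and release the LV lock)
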

manually repairing a shared VG
Some failure conditions may not be repairable while the VG has a
shared lock type. In these cases, it may be possible to repair the
VG by forcibly changing the lock type to "none". This is done by
adding "--lock-opt force" to the normal command for changing the lock
type, as shown below. The VG lockspace should first be stopped on
all hosts, and be certain that no hosts are using the VG before this
is done.
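
The forced change (the VG name is a placeholder):

    vgchange --lock-type none --lock-opt force <vgname>
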

recover from lost PV holding sanlock locks
In a sanlock VG, the sanlock locks are held on the hidden "lvmlock"
LV. If the PV holding this LV is lost, a new lvmlock LV needs to be
created. To do this, ensure no hosts are using the VG, then forcibly
change the lock type to "none" (see above). Then change the lock
type back to "sanlock" with the normal command for changing the lock
type: vgchange --lock-type sanlock VG. This recreates the internal
lvmlock LV with the necessary locks.
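
For example (the VG name is a placeholder):

    vgchange --lock-type none --lock-opt force <vgname>
    vgchange --lock-type sanlock <vgname>
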

locking system failures

lvmlockd failure

If lvmlockd fails or is killed while holding locks, the locks are
orphaned in the lock manager. Orphaned locks must be cleared or
adopted before the associated resources can be accessed normally. If
lock adoption is enabled, lvmlockd keeps a record of locks in the
adopt-file. A subsequent instance of lvmlockd will then adopt locks
orphaned by the previous instance. Adoption must be enabled in both
instances (--adopt|-A 1). Without adoption, the lock manager or host
would require a reset to clear orphaned lock state.
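
For example, restarting the daemon with adoption enabled (both the
failed and the new instance must have been started this way):

    lvmlockd --adopt 1
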

dlm/corosync failure

If dlm or corosync fail, the clustering system will fence the host
using a method configured within the dlm/corosync clustering
environment.

LVM commands on other hosts will be blocked from acquiring any locks
until the dlm/corosync recovery process is complete.

sanlock lease storage failure

If the PV under a sanlock VG's lvmlock LV is disconnected,
unresponsive or too slow, sanlock cannot renew the lease for the VG's
locks. After some time, the lease will expire, and locks that the
host owns in the VG can be acquired by other hosts. The VG must be
forcibly deactivated on the host with the expiring lease before other
hosts can acquire its locks.

When the sanlock daemon detects that the lease storage is lost, it
runs the command lvmlockctl --kill <vgname>. This command emits a
syslog message stating that lease storage is lost for the VG, and LVs
must be immediately deactivated.

If no LVs are active in the VG, then the lockspace with an expiring
lease will be removed, and errors will be reported when trying to use
the VG. Use the lvmlockctl --drop command to clear the stale
lockspace from lvmlockd.

If the VG has active LVs when the lock storage is lost, the LVs must
be quickly deactivated before the lockspace lease expires. After all
LVs are deactivated, run lvmlockctl --drop <vgname> to clear the
expiring lockspace from lvmlockd. If all LVs in the VG are not
deactivated within about 40 seconds, sanlock uses wdmd and the local
watchdog to reset the host. The machine reset is effectively a
severe form of "deactivating" LVs before they can be activated on
other hosts. The reset is considered a better alternative than
having LVs used by multiple hosts at once, which could easily damage
or destroy their content.
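
For example, after the lvmlockctl --kill message appears (a sketch;
the VG name is a placeholder):

    vgchange -an <vgname>         (quickly deactivate all LVs in the VG)
    lvmlockctl --drop <vgname>    (clear the expiring lockspace)
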

In the future, the lvmlockctl kill command may automatically attempt
to forcibly deactivate LVs before the sanlock lease expires. Until
then, the user must notice the syslog message and manually deactivate
the VG before sanlock resets the machine.

sanlock daemon failure

If the sanlock daemon fails or exits while a lockspace is started,
the local watchdog will reset the host. This is necessary to protect
any application resources that depend on sanlock leases.

changing dlm cluster name
When a dlm VG is created, the cluster name is saved in the VG
metadata. To use the VG, a host must be in the named dlm cluster.
If the dlm cluster name changes, or the VG is moved to a new cluster,
the dlm cluster name saved in the VG must also be changed.

To see the dlm cluster name saved in the VG, use the command:
    vgs -o+locktype,lockargs <vgname>

To change the dlm cluster name in the VG when the VG is still used by
the original cluster:

· Start the VG on the host changing the lock type:
    vgchange --lock-start <vgname>

· Stop the VG on all other hosts:
    vgchange --lock-stop <vgname>

· Change the VG lock type to none on the host where the VG is
started:
    vgchange --lock-type none <vgname>

· Change the dlm cluster name on the hosts or move the VG to the new
cluster. The new dlm cluster must now be running on the host.
Verify the new name with:
    cat /sys/kernel/config/dlm/cluster/cluster_name

· Change the VG lock type back to dlm, which sets the new cluster
name:
    vgchange --lock-type dlm <vgname>

· Start the VG on hosts to use it:
    vgchange --lock-start <vgname>

To change the dlm cluster name in the VG when the dlm cluster name
has already been changed on the hosts, or the VG has already moved to
a different cluster:

· Ensure the VG is not being used by any hosts.

· The new dlm cluster must be running on the host making the change.
The current dlm cluster name can be seen with:
    cat /sys/kernel/config/dlm/cluster/cluster_name

· Change the VG lock type to none:
    vgchange --lock-type none --lock-opt force <vgname>

· Change the VG lock type back to dlm, which sets the new cluster
name:
    vgchange --lock-type dlm <vgname>

· Start the VG on hosts to use it:
    vgchange --lock-start <vgname>

changing a local VG to a shared VG
All LVs must be inactive to change the lock type.

lvmlockd must be configured and running as described in USAGE.

· Change a local VG to a shared VG with the command:
    vgchange --lock-type sanlock|dlm <vgname>

· Start the VG on hosts to use it:
    vgchange --lock-start <vgname>

changing a shared VG to a local VG
All LVs must be inactive to change the lock type.

· Start the VG on the host making the change:
    vgchange --lock-start <vgname>

· Stop the VG on all other hosts:
    vgchange --lock-stop <vgname>

· Change the VG lock type to none on the host where the VG is
started:
    vgchange --lock-type none <vgname>

If the VG cannot be started with the previous lock type, then the
lock type can be forcibly changed to none with:
    vgchange --lock-type none --lock-opt force <vgname>

To change a VG from one lock type to another (i.e. between sanlock
and dlm), first change it to a local VG, then to the new type.

changing a clvm/clustered VG to a shared VG
All LVs must be inactive to change the lock type.

First change the clvm/clustered VG to a local VG. Within a running
clvm cluster, change a clustered VG to a local VG with the command:

    vgchange -cn <vgname>

If the clvm cluster is no longer running on any nodes, then extra
options can be used to forcibly make the VG local. Caution: this is
only safe if all nodes have stopped using the VG:

    vgchange --lock-type none --lock-opt force <vgname>

After the VG is local, follow the steps described in "changing a
local VG to a shared VG".

extending an LV active on multiple hosts
With lvmlockd and dlm, a special clustering procedure is used to
refresh a shared LV on remote cluster nodes after it has been
extended on one node.

When an LV holding gfs2 or ocfs2 is active on multiple hosts with a
shared lock, lvextend is permitted to run with an existing shared LV
lock in place of the normal exclusive LV lock.

After lvextend has finished extending the LV, it sends a remote
request to other nodes running the dlm to run 'lvchange --refresh' on
the LV. This uses dlm_controld and corosync features.

Some special --lockopt values can be used to modify this process.
"shupdate" permits the lvextend update with an existing shared lock
if it isn't otherwise permitted. "norefresh" prevents the remote
refresh operation.
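
For example (a sketch; the option placement, names and size are
illustrative):

    lvextend --lockopt norefresh -L +10G vg1/lv1
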

limitations of shared VGs
Things that do not yet work in shared VGs:

· using external origins for thin LVs
· splitting snapshots from LVs
· splitting mirrors in sanlock VGs
· pvmove of entire PVs, or under LVs activated with shared locks
· vgsplit and vgmerge (convert to a local VG to do this)


lvmlockd changes from clvmd
(See above for converting an existing clvm VG to a shared VG.)

While lvmlockd and clvmd are entirely different systems, LVM command
usage remains similar. Differences are more notable when using
lvmlockd's sanlock option.

Visible usage differences between shared VGs (using lvmlockd) and
clvm/clustered VGs (using clvmd):

· lvm.conf is configured to use lvmlockd by setting use_lvmlockd=1.
clvmd used locking_type=3.

· vgcreate --shared creates a shared VG. vgcreate --clustered y
created a clvm/clustered VG.

· lvmlockd adds the option of using sanlock for locking, avoiding the
need for network clustering.

· lvmlockd defaults to the exclusive activation mode whenever the
activation mode is unspecified, i.e. -ay means -aey, not -asy.

· lvmlockd commands always apply to the local host, and never have an
effect on a remote host. (The activation option 'l' is not used.)

· lvmlockd saves the cluster name for a shared VG using dlm. Only
hosts in the matching cluster can use the VG.

· lvmlockd requires starting/stopping shared VGs with vgchange
--lock-start and --lock-stop.

· vgremove of a sanlock VG may fail indicating that all hosts have
not stopped the VG lockspace. Stop the VG on all hosts using
vgchange --lock-stop.

· vgreduce or pvmove of a PV in a sanlock VG will fail if it holds
the internal "lvmlock" LV that holds the sanlock locks.

· lvmlockd uses lock retries instead of lock queueing, so high lock
contention may require increasing global/lvmlockd_lock_retries to
avoid transient lock failures.

· lvmlockd includes VG reporting options lock_type and lock_args, and
the LV reporting option lock_args, to view the corresponding metadata
fields.

· In the 'vgs' command's sixth VG attr field, "s" for "shared" is
displayed for shared VGs.

· If lvmlockd fails or is killed while in use, locks it held remain
but are orphaned in the lock manager. lvmlockd can be restarted with
an option to adopt the orphan locks from the previous instance of
lvmlockd.

Red Hat, Inc          LVM TOOLS 2.03.10(2) (2020-08-09)          LVMLOCKD(8)