LVMLOCKD(8)                                                        LVMLOCKD(8)

NAME

       lvmlockd — LVM locking daemon

DESCRIPTION

       LVM commands use lvmlockd to coordinate access to shared storage.
       When LVM is used on devices shared by multiple hosts, locks will:

       · coordinate reading and writing of LVM metadata
       · validate caching of LVM metadata
       · prevent conflicting activation of logical volumes

       lvmlockd uses an external lock manager to perform basic locking.
       Lock manager (lock type) options are:

       · sanlock: places locks on disk within LVM storage.
       · dlm: uses network communication and a cluster manager.

OPTIONS

       lvmlockd [options]

       For default settings, see lvmlockd -h.

       --help | -h
               Show this help information.

       --version | -V
               Show version of lvmlockd.

       --test | -T
               Test mode, do not call lock manager.

       --foreground | -f
               Don't fork.

       --daemon-debug | -D
               Don't fork and print debugging to stdout.

       --pid-file | -p path
               Set path to the pid file.

       --socket-path | -s path
               Set path to the socket to listen on.

       --syslog-priority | -S err|warning|debug
               Write log messages from this level up to syslog.

       --gl-type | -g sanlock|dlm
               Set global lock type to be sanlock or dlm.

       --host-id | -i num
               Set the local sanlock host id.

       --host-id-file | -F path
               A file containing the local sanlock host_id.

       --sanlock-timeout | -o seconds
               Override the default sanlock I/O timeout.
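
       For example, to run the daemon in the foreground with debugging
       output while verifying a new configuration, an invocation such as
       the following could be used (shown only as an illustration):

       lvmlockd --daemon-debug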

USAGE

   Initial set up
       Setting up LVM to use lvmlockd and a shared VG for the first time
       includes some one-time setup steps:

   1. choose a lock manager
       dlm
       If dlm (or corosync) is already being used by other cluster software,
       then select dlm.  dlm uses corosync, which requires additional
       configuration beyond the scope of this document.  See the corosync
       and dlm documentation for instructions on configuration, set up and
       usage.

       sanlock
       Choose sanlock if dlm/corosync are not otherwise required.  sanlock
       does not depend on any clustering software or configuration.

   2. configure hosts to use lvmlockd
       On all hosts running lvmlockd, configure lvm.conf:
       use_lvmlockd = 1

       sanlock
       Assign each host a unique host_id in the range 1-2000 by setting
       /etc/lvm/lvmlocal.conf local/host_id

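       As a sketch, the resulting configuration might look like the
       following (the host_id value is only an illustration; each host must
       use its own unique value):

       # /etc/lvm/lvm.conf
       global {
           use_lvmlockd = 1
       }

       # /etc/lvm/lvmlocal.conf (sanlock only)
       local {
           host_id = 1
       }
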
   3. start lvmlockd
       Start the lvmlockd daemon.
       Use systemctl, a cluster resource agent, or run directly, e.g.
       systemctl start lvmlockd

   4. start lock manager
       sanlock
       Start the sanlock and wdmd daemons.
       Use systemctl or run directly, e.g.
       systemctl start wdmd sanlock

       dlm
       Start the dlm and corosync daemons.
       Use systemctl, a cluster resource agent, or run directly, e.g.
       systemctl start corosync dlm

   5. create VG on shared devices
       vgcreate --shared <vgname> <devices>

       The shared option sets the VG lock type to sanlock or dlm depending
       on which lock manager is running.  LVM commands acquire locks from
       lvmlockd, and lvmlockd uses the chosen lock manager.

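       For example, with hypothetical device names, and using the vgs
       reporting fields described later to confirm the lock type:

       vgcreate --shared vg1 /dev/sdb /dev/sdc
       vgs -o+locktype,lockargs vg1
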
   6. start VG on all hosts
       vgchange --lock-start

       Shared VGs must be started before they are used.  Starting the VG
       performs lock manager initialization that is necessary to begin using
       locks (i.e. creating and joining a lockspace).  Starting the VG may
       take some time, and until the start completes the VG may not be
       modified or activated.

   7. create and activate LVs
       Standard lvcreate and lvchange commands are used to create and
       activate LVs in a shared VG.

       An LV activated exclusively on one host cannot be activated on
       another.  When multiple hosts need to use the same LV concurrently,
       the LV can be activated with a shared lock (see the lvchange options
       -aey vs -asy).  (Shared locks are disallowed for certain LV types
       that cannot be used from multiple hosts.)

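       As an illustration (VG and LV names are hypothetical):

       lvcreate --name lv1 --size 10G vg1
       # exclusive activation, on a single host only:
       lvchange -aey vg1/lv1
       # or shared activation, run on each host that needs the LV:
       lvchange -asy vg1/lv1
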
   Normal start up and shut down
       After initial set up, start up and shut down include the following
       steps.  They can be performed directly or may be automated using
       systemd or a cluster resource manager/agents.

       · start lvmlockd
       · start lock manager
       · vgchange --lock-start
       · activate LVs in shared VGs

       The shut down sequence is the reverse:

       · deactivate LVs in shared VGs
       · vgchange --lock-stop
       · stop lock manager
       · stop lvmlockd

       An example of both sequences is shown below.

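       For example, with the sanlock lock manager and a hypothetical shared
       VG named vg1, the sequences might be run as:

       # start up
       systemctl start lvmlockd
       systemctl start wdmd sanlock
       vgchange --lock-start
       vgchange -ay vg1

       # shut down
       vgchange -an vg1
       vgchange --lock-stop
       systemctl stop sanlock wdmd
       systemctl stop lvmlockd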

TOPICS

   Protecting VGs on shared devices
       The following terms are used to describe the different ways of
       accessing VGs on shared devices.

       shared VG

       A shared VG exists on shared storage that is visible to multiple
       hosts.  LVM acquires locks through lvmlockd to coordinate access to
       shared VGs.  A shared VG has lock_type "dlm" or "sanlock", which
       specifies the lock manager lvmlockd will use.

       When the lock manager for the lock type is not available (e.g. not
       started or failed), lvmlockd is unable to acquire locks for LVM
       commands.  In this situation, LVM commands are only allowed to read
       and display the VG; changes and activation will fail.

       local VG

       A local VG is meant to be used by a single host.  It has no lock type
       or lock type "none".  A local VG typically exists on local
       (non-shared) devices and cannot be used concurrently from different
       hosts.

       If a local VG does exist on shared devices, it should be owned by a
       single host by having the system ID set, see lvmsystemid(7).  The
       host with a matching system ID can use the local VG and other hosts
       will ignore it.  A VG with no lock type and no system ID should be
       excluded from all but one host using lvm.conf filters.  Without any
       of these protections, a local VG on shared devices can be easily
       damaged or destroyed.

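       As an illustration, a local VG residing on a shared device
       (hypothetically /dev/sdb) could be hidden from the other hosts with
       an lvm.conf filter such as:

       devices {
           filter = [ "r|/dev/sdb|" ]
       }
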
       clvm VG

       A clvm VG (or clustered VG) is a VG on shared storage (like a shared
       VG) that requires clvmd for clustering and locking.  See below for
       converting a clvm/clustered VG to a shared VG.

   shared VGs from hosts not using lvmlockd
       Hosts that do not use shared VGs will not be running lvmlockd.  In
       this case, shared VGs that are still visible to the host will be
       ignored (like foreign VGs, see lvmsystemid(7)).

       The --shared option for reporting and display commands causes shared
       VGs to be displayed on a host not using lvmlockd, like the --foreign
       option does for foreign VGs.

   creating the first sanlock VG
       When use_lvmlockd is first enabled in lvm.conf, and before the first
       sanlock VG is created, no global lock will exist.  In this initial
       state, LVM commands try and fail to acquire the global lock,
       producing a warning, and some commands are disallowed.  Once the
       first sanlock VG is created, the global lock will be available, and
       LVM will be fully operational.

       When a new sanlock VG is created, its lockspace is automatically
       started on the host that creates it.  Other hosts need to run
       'vgchange --lock-start' to start the new VG before they can use it.

       Creating the first sanlock VG is not protected by locking, so it
       requires special attention.  This is because sanlock locks exist on
       storage within the VG, so they are not available until after the VG
       is created.  The first sanlock VG that is created will automatically
       contain the "global lock".  Be aware of the following special
       considerations:

       · The first vgcreate command needs to be given the path to a device
         that has not yet been initialized with pvcreate.  The pvcreate
         initialization will be done by vgcreate.  This is because the
         pvcreate command requires the global lock, which will not be
         available until after the first sanlock VG is created.

       · Because the first sanlock VG will contain the global lock, this VG
         needs to be accessible to all hosts that will use sanlock shared
         VGs.  All hosts will need to use the global lock from the first
         sanlock VG.

       · The device and VG name used by the initial vgcreate will not be
         protected from concurrent use by another vgcreate on another host.

         See below for more information about managing the sanlock global
         lock.

   using shared VGs
       In the 'vgs' command, shared VGs are indicated by "s" (for shared) in
       the sixth attr field, and by "shared" in the "--options shared"
       report field.  The specific lock type and lock args for a shared VG
       can be displayed with 'vgs -o+locktype,lockargs'.

       Shared VGs need to be "started" and "stopped", unlike other types of
       VGs.  See the following section for a full description of starting
       and stopping.

       Removing a shared VG will fail if other hosts have the VG started.
       Run vgchange --lock-stop <vgname> on all other hosts before vgremove.
       (It may take several seconds before vgremove recognizes that all
       hosts have stopped a sanlock VG.)

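       For example, removing a hypothetical shared VG named vg1 might look
       like:

       # on every other host:
       vgchange --lock-stop vg1
       # on the host performing the removal:
       vgremove vg1
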
   starting and stopping VGs
       Starting a shared VG (vgchange --lock-start) causes the lock manager
       to start (join) the lockspace for the VG on the host where it is run.
       This makes locks for the VG available to LVM commands on the host.
       Before a VG is started, only LVM commands that read/display the VG
       are allowed to continue without locks (and with a warning).

       Stopping a shared VG (vgchange --lock-stop) causes the lock manager
       to stop (leave) the lockspace for the VG on the host where it is run.
       This makes locks for the VG inaccessible to the host.  A VG cannot be
       stopped while it has active LVs.

       When using the lock type sanlock, starting a VG can take a long time
       (potentially minutes if the host was previously shut down without
       cleanly stopping the VG).

       A shared VG can be started after all the following are true:
       · lvmlockd is running
       · the lock manager is running
       · the VG's devices are visible on the system

       A shared VG can be stopped if all LVs are deactivated.

       All shared VGs can be started/stopped using:
       vgchange --lock-start
       vgchange --lock-stop

       Individual VGs can be started/stopped using:
       vgchange --lock-start <vgname> ...
       vgchange --lock-stop <vgname> ...

       To make vgchange not wait for start to complete:
       vgchange --lock-start --lock-opt nowait ...

       lvmlockd can be asked directly to stop all lockspaces:
       lvmlockctl -S|--stop-lockspaces

       To start only selected shared VGs, use the lvm.conf
       activation/lock_start_list.  When defined, only VG names in this list
       are started by vgchange.  If the list is not defined (the default),
       all visible shared VGs are started.  To start only "vg1", use the
       following lvm.conf configuration:

       activation {
           lock_start_list = [ "vg1" ]
           ...
       }

   internal command locking
       To optimize the use of LVM with lvmlockd, be aware of the three kinds
       of locks and when they are used:

       Global lock

       The global lock is associated with global information, which is
       information not isolated to a single VG.  This includes:

       · The global VG namespace.
       · The set of orphan PVs and unused devices.
       · The properties of orphan PVs, e.g. PV size.

       The global lock is acquired in shared mode by commands that read this
       information, or in exclusive mode by commands that change it.  For
       example, the command 'vgs' acquires the global lock in shared mode
       because it reports the list of all VG names, and the vgcreate command
       acquires the global lock in exclusive mode because it creates a new
       VG name, and it takes a PV from the list of unused PVs.

       When an LVM command is given a tag argument, or uses select, it must
       read all VGs to match the tag or selection, which causes the global
       lock to be acquired.

       VG lock

       A VG lock is associated with each shared VG.  The VG lock is acquired
       in shared mode to read the VG and in exclusive mode to change the VG
       or activate LVs.  This lock serializes access to a VG with all other
       LVM commands accessing the VG from all hosts.

       The command 'vgs <vgname>' does not acquire the global lock (it does
       not need the list of all VG names), but will acquire the VG lock on
       each VG name argument.

       LV lock

       An LV lock is acquired before the LV is activated, and is released
       after the LV is deactivated.  If the LV lock cannot be acquired, the
       LV is not activated.  (LV locks are persistent and remain in place
       when the activation command is done.  Global and VG locks are
       transient, and are held only while an LVM command is running.)

       lock retries

       If a request for a global or VG lock fails due to a lock conflict
       with another host, lvmlockd automatically retries for a short time
       before returning a failure to the LVM command.  If those retries are
       insufficient, the LVM command will retry the entire lock request a
       number of times specified by global/lvmlockd_lock_retries before
       failing.  If a request for an LV lock fails due to a lock conflict,
       the command fails immediately.

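       For instance, if transient lock failures are seen under heavy lock
       contention, the retry count could be raised in lvm.conf (the value
       shown is only an illustration):

       global {
           lvmlockd_lock_retries = 5
       }
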
   managing the global lock in sanlock VGs
       The global lock exists in one of the sanlock VGs.  The first sanlock
       VG created will contain the global lock.  Subsequent sanlock VGs will
       each contain a disabled global lock that can be enabled later if
       necessary.

       The VG containing the global lock must be visible to all hosts using
       sanlock VGs.  For this reason, it can be useful to create a small
       sanlock VG, visible to all hosts, and dedicated to just holding the
       global lock.  While not required, this strategy can help to avoid
       difficulty in the future if VGs are moved or removed.

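       For example, such a dedicated VG might be created from a small device
       that every host can see (names are illustrative):

       vgcreate --shared glvg /dev/sdq
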
       The vgcreate command typically acquires the global lock, but in the
       case of the first sanlock VG, there will be no global lock to acquire
       until the first vgcreate is complete.  So, creating the first sanlock
       VG is a special case that skips the global lock.

       vgcreate determines that it's creating the first sanlock VG when no
       other sanlock VGs are visible on the system.  It is possible that
       other sanlock VGs do exist, but are not visible when vgcreate checks
       for them.  In this case, vgcreate will create a new sanlock VG with
       the global lock enabled.  When the other VG containing a global lock
       appears, lvmlockd will then see more than one VG with a global lock
       enabled.  LVM commands will report that there are duplicate global
       locks.

       If the situation arises where more than one sanlock VG contains a
       global lock, the global lock should be manually disabled in all but
       one of them with the command:

       lvmlockctl --gl-disable <vgname>

       (The one VG with the global lock enabled must be visible to all
       hosts.)

       An opposite problem can occur if the VG holding the global lock is
       removed.  In this case, no global lock will exist following the
       vgremove, and subsequent LVM commands will fail to acquire it.  In
       this case, the global lock needs to be manually enabled in one of the
       remaining sanlock VGs with the command:

       lvmlockctl --gl-enable <vgname>

       (Using a small sanlock VG dedicated to holding the global lock can
       avoid the case where the global lock must be manually enabled after a
       vgremove.)

   internal lvmlock LV
       A sanlock VG contains a hidden LV called "lvmlock" that holds the
       sanlock locks.  vgreduce cannot yet remove the PV holding the lvmlock
       LV.  To remove this PV, change the VG lock type to "none", run
       vgreduce, then change the VG lock type back to "sanlock".  Similarly,
       pvmove cannot be used on a PV used by the lvmlock LV.

       To place the lvmlock LV on a specific device, create the VG with only
       that device, then use vgextend to add other devices.

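       As a sketch of both procedures (VG and device names are
       hypothetical):

       # remove a PV that currently holds the lvmlock LV
       vgchange --lock-type none vg1
       vgreduce vg1 /dev/sdb
       vgchange --lock-type sanlock vg1

       # place the lvmlock LV on a chosen device when creating a new VG
       vgcreate --shared vg2 /dev/sdc
       vgextend vg2 /dev/sdd
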
   LV activation
       In a shared VG, LV activation involves locking through lvmlockd, and
       the following values are possible with lvchange/vgchange -a:

       y|ey   The command activates the LV in exclusive mode, allowing a
              single host to activate the LV.  Before activating the LV, the
              command uses lvmlockd to acquire an exclusive lock on the LV.
              If the lock cannot be acquired, the LV is not activated and an
              error is reported.  This would happen if the LV is active on
              another host.

       sy     The command activates the LV in shared mode, allowing multiple
              hosts to activate the LV concurrently.  Before activating the
              LV, the command uses lvmlockd to acquire a shared lock on the
              LV.  If the lock cannot be acquired, the LV is not activated
              and an error is reported.  This would happen if the LV is
              active exclusively on another host.  If the LV type prohibits
              shared access, such as a snapshot, the command will report an
              error and fail.  The shared mode is intended for a
              multi-host/cluster application or file system.  LV types that
              cannot be used concurrently from multiple hosts include thin,
              cache, raid, mirror, and snapshot.

       n      The command deactivates the LV.  After deactivating the LV,
              the command uses lvmlockd to release the current lock on the
              LV.

   manually repairing a shared VG
       Some failure conditions may not be repairable while the VG has a
       shared lock type.  In these cases, it may be possible to repair the
       VG by forcibly changing the lock type to "none".  This is done by
       adding "--lock-opt force" to the normal command for changing the lock
       type: vgchange --lock-type none VG.  The VG lockspace should first be
       stopped on all hosts, and you must be certain that no host is using
       the VG before this is done.

   recover from lost PV holding sanlock locks
       In a sanlock VG, the sanlock locks are held on the hidden "lvmlock"
       LV.  If the PV holding this LV is lost, a new lvmlock LV needs to be
       created.  To do this, ensure no hosts are using the VG, then forcibly
       change the lock type to "none" (see above).  Then change the lock
       type back to "sanlock" with the normal command for changing the lock
       type: vgchange --lock-type sanlock VG.  This recreates the internal
       lvmlock LV with the necessary locks.

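       For example, assuming no host is using the hypothetical VG vg1:

       vgchange --lock-type none --lock-opt force vg1
       vgchange --lock-type sanlock vg1
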
   locking system failures
       lvmlockd failure

       If lvmlockd fails or is killed while holding locks, the locks are
       orphaned in the lock manager.

       dlm/corosync failure

       If dlm or corosync fail, the clustering system will fence the host
       using a method configured within the dlm/corosync clustering
       environment.

       LVM commands on other hosts will be blocked from acquiring any locks
       until the dlm/corosync recovery process is complete.

       sanlock lease storage failure

       If the PV under a sanlock VG's lvmlock LV is disconnected,
       unresponsive or too slow, sanlock cannot renew the lease for the VG's
       locks.  After some time, the lease will expire, and locks that the
       host owns in the VG can be acquired by other hosts.  The VG must be
       forcibly deactivated on the host with the expiring lease before other
       hosts can acquire its locks.

       When the sanlock daemon detects that the lease storage is lost, it
       runs the command lvmlockctl --kill <vgname>.  This command emits a
       syslog message stating that lease storage is lost for the VG, and LVs
       must be immediately deactivated.

       If no LVs are active in the VG, then the lockspace with an expiring
       lease will be removed, and errors will be reported when trying to use
       the VG.  Use the lvmlockctl --drop command to clear the stale
       lockspace from lvmlockd.

       If the VG has active LVs when the lock storage is lost, the LVs must
       be quickly deactivated before the lockspace lease expires.  After all
       LVs are deactivated, run lvmlockctl --drop <vgname> to clear the
       expiring lockspace from lvmlockd.  If all LVs in the VG are not
       deactivated within about 40 seconds, sanlock uses wdmd and the local
       watchdog to reset the host.  The machine reset is effectively a
       severe form of "deactivating" LVs before they can be activated on
       other hosts.  The reset is considered a better alternative than
       having LVs used by multiple hosts at once, which could easily damage
       or destroy their content.

       In the future, the lvmlockctl kill command may automatically attempt
       to forcibly deactivate LVs before the sanlock lease expires.  Until
       then, the user must notice the syslog message and manually deactivate
       the VG before sanlock resets the machine.

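       A minimal sketch of the manual response, assuming the affected VG is
       named vg1:

       # deactivate all LVs in the VG as quickly as possible
       vgchange -an vg1
       # then clear the expiring lockspace from lvmlockd
       lvmlockctl --drop vg1
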
       sanlock daemon failure

       If the sanlock daemon fails or exits while a lockspace is started,
       the local watchdog will reset the host.  This is necessary to protect
       any application resources that depend on sanlock leases.

   changing dlm cluster name
       When a dlm VG is created, the cluster name is saved in the VG
       metadata.  To use the VG, a host must be in the named dlm cluster.
       If the dlm cluster name changes, or the VG is moved to a new cluster,
       the dlm cluster name saved in the VG must also be changed.

       To see the dlm cluster name saved in the VG, use the command:
       vgs -o+locktype,lockargs <vgname>

       To change the dlm cluster name in the VG when the VG is still used by
       the original cluster:

       · Start the VG on the host changing the lock type:
         vgchange --lock-start <vgname>

       · Stop the VG on all other hosts:
         vgchange --lock-stop <vgname>

       · Change the VG lock type to none on the host where the VG is
         started:
         vgchange --lock-type none <vgname>

       · Change the dlm cluster name on the hosts or move the VG to the new
         cluster.  The new dlm cluster must now be running on the host.
         Verify the new name by:
         cat /sys/kernel/config/dlm/cluster/cluster_name

       · Change the VG lock type back to dlm which sets the new cluster
         name:
         vgchange --lock-type dlm <vgname>

       · Start the VG on hosts to use it:
         vgchange --lock-start <vgname>

       To change the dlm cluster name in the VG when the dlm cluster name
       has already been changed on the hosts, or the VG has already moved to
       a different cluster:

       · Ensure the VG is not being used by any hosts.

       · The new dlm cluster must be running on the host making the change.
         The current dlm cluster name can be seen by:
         cat /sys/kernel/config/dlm/cluster/cluster_name

       · Change the VG lock type to none:
         vgchange --lock-type none --lock-opt force <vgname>

       · Change the VG lock type back to dlm which sets the new cluster
         name:
         vgchange --lock-type dlm <vgname>

       · Start the VG on hosts to use it:
         vgchange --lock-start <vgname>

   changing a local VG to a shared VG
       All LVs must be inactive to change the lock type.

       lvmlockd must be configured and running as described in USAGE.

       · Change a local VG to a shared VG with the command:
         vgchange --lock-type sanlock|dlm <vgname>

       · Start the VG on hosts to use it:
         vgchange --lock-start <vgname>

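       For example, converting a hypothetical local VG named vg1 to a
       sanlock VG might look like:

       vgchange -an vg1                  # ensure all LVs are inactive
       vgchange --lock-type sanlock vg1
       vgchange --lock-start vg1         # run on each host that will use it
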
   changing a shared VG to a local VG
       All LVs must be inactive to change the lock type.

       · Start the VG on the host making the change:
         vgchange --lock-start <vgname>

       · Stop the VG on all other hosts:
         vgchange --lock-stop <vgname>

       · Change the VG lock type to none on the host where the VG is
         started:
         vgchange --lock-type none <vgname>

       If the VG cannot be started with the previous lock type, then the
       lock type can be forcibly changed to none with:

       vgchange --lock-type none --lock-opt force <vgname>

       To change a VG from one lock type to another (i.e. between sanlock
       and dlm), first change it to a local VG, then to the new type.

   changing a clvm/clustered VG to a shared VG
       All LVs must be inactive to change the lock type.

       First change the clvm/clustered VG to a local VG.  Within a running
       clvm cluster, change a clustered VG to a local VG with the command:

       vgchange -cn <vgname>

       If the clvm cluster is no longer running on any nodes, then extra
       options can be used to forcibly make the VG local.  Caution: this is
       only safe if all nodes have stopped using the VG:

       vgchange --lock-type none --lock-opt force <vgname>

       After the VG is local, follow the steps described in "changing a
       local VG to a shared VG".

   extending an LV active on multiple hosts
       With lvmlockd and dlm, a special clustering procedure is used to
       refresh a shared LV on remote cluster nodes after it has been
       extended on one node.

       When an LV holding gfs2 or ocfs2 is active on multiple hosts with a
       shared lock, lvextend is permitted to run with an existing shared LV
       lock in place of the normal exclusive LV lock.

       After lvextend has finished extending the LV, it sends a remote
       request to other nodes running the dlm to run 'lvchange --refresh' on
       the LV.  This uses dlm_controld and corosync features.

       Some special --lockopt values can be used to modify this process.
       "shupdate" permits the lvextend update with an existing shared lock
       if it isn't otherwise permitted.  "norefresh" prevents the remote
       refresh operation.

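       As a sketch (VG and LV names are hypothetical), an extension that
       skips the automatic remote refresh could look like:

       lvextend --size +10G --lockopt norefresh vg1/lv1
       # then refresh manually on each remote node when convenient:
       lvchange --refresh vg1/lv1
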
   limitations of shared VGs
       Things that do not yet work in shared VGs:
       · using external origins for thin LVs
       · splitting snapshots from LVs
       · splitting mirrors in sanlock VGs
       · pvmove of entire PVs, or under LVs activated with shared locks
       · vgsplit and vgmerge (convert to a local VG to do this)

   lvmlockd changes from clvmd
       (See above for converting an existing clvm VG to a shared VG.)

       While lvmlockd and clvmd are entirely different systems, LVM command
       usage remains similar.  Differences are more notable when using
       lvmlockd's sanlock option.

       Visible usage differences between shared VGs (using lvmlockd) and
       clvm/clustered VGs (using clvmd):

       · lvm.conf is configured to use lvmlockd by setting use_lvmlockd=1.
         clvmd used locking_type=3.

       · vgcreate --shared creates a shared VG.  vgcreate --clustered y
         created a clvm/clustered VG.

       · lvmlockd adds the option of using sanlock for locking, avoiding the
         need for network clustering.

       · lvmlockd defaults to the exclusive activation mode whenever the
         activation mode is unspecified, i.e. -ay means -aey, not -asy.

       · lvmlockd commands always apply to the local host, and never have an
         effect on a remote host.  (The activation option 'l' is not used.)

       · lvmlockd saves the cluster name for a shared VG using dlm.  Only
         hosts in the matching cluster can use the VG.

       · lvmlockd requires starting/stopping shared VGs with vgchange
         --lock-start and --lock-stop.

       · vgremove of a sanlock VG may fail indicating that all hosts have
         not stopped the VG lockspace.  Stop the VG on all hosts using
         vgchange --lock-stop.

       · vgreduce or pvmove of a PV in a sanlock VG will fail if it holds
         the internal "lvmlock" LV that holds the sanlock locks.

       · lvmlockd uses lock retries instead of lock queueing, so high lock
         contention may require increasing global/lvmlockd_lock_retries to
         avoid transient lock failures.

       · lvmlockd includes VG reporting options lock_type and lock_args, and
         LV reporting option lock_args to view the corresponding metadata
         fields.

       · In the 'vgs' command's sixth VG attr field, "s" for "shared" is
         displayed for shared VGs.

       · If lvmlockd fails or is killed while in use, locks it held remain
         but are orphaned in the lock manager.  lvmlockd can be restarted
         with an option to adopt the orphan locks from the previous instance
         of lvmlockd.

Red Hat, Inc           LVM TOOLS 2.03.06(2) (2019-10-23)           LVMLOCKD(8)