LVMLOCKD(8)                                                        LVMLOCKD(8)



NAME
       lvmlockd — LVM locking daemon


DESCRIPTION
       LVM commands use lvmlockd to coordinate access to shared storage.
       When LVM is used on devices shared by multiple hosts, locks will:

       · coordinate reading and writing of LVM metadata
       · validate caching of LVM metadata
       · prevent conflicting activation of logical volumes

       lvmlockd uses an external lock manager to perform basic locking.
       Lock manager (lock type) options are:

       · sanlock: places locks on disk within LVM storage.
       · dlm: uses network communication and a cluster manager.

OPTIONS
       lvmlockd [options]

       For default settings, see lvmlockd -h.

       --help | -h
               Show this help information.

       --version | -V
               Show version of lvmlockd.

       --test | -T
               Test mode, do not call lock manager.

       --foreground | -f
               Don't fork.

       --daemon-debug | -D
               Don't fork and print debugging to stdout.

       --pid-file | -p path
               Set path to the pid file.

       --socket-path | -s path
               Set path to the socket to listen on.

       --syslog-priority | -S err|warning|debug
               Write log messages from this level up to syslog.

       --gl-type | -g sanlock|dlm
               Set global lock type to be sanlock or dlm.

       --host-id | -i num
               Set the local sanlock host id.

       --host-id-file | -F path
               A file containing the local sanlock host_id.

       --sanlock-timeout | -o seconds
               Override the default sanlock I/O timeout.

       --adopt | -A 0|1
               Adopt locks from a previous instance of lvmlockd.


USAGE
   Initial set up
       Setting up LVM to use lvmlockd and a shared VG for the first time
       includes some one-time setup steps:


   1. choose a lock manager
       dlm
       If dlm (or corosync) are already being used by other cluster software,
       then select dlm.  dlm uses corosync which requires additional
       configuration beyond the scope of this document.  See corosync and dlm
       documentation for instructions on configuration, set up and usage.

       sanlock
       Choose sanlock if dlm/corosync are not otherwise required.  sanlock
       does not depend on any clustering software or configuration.

   2. configure hosts to use lvmlockd
       On all hosts running lvmlockd, configure lvm.conf:
       locking_type = 1
       use_lvmlockd = 1

       sanlock
       Assign each host a unique host_id in the range 1-2000 by setting
       /etc/lvm/lvmlocal.conf local/host_id

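       For example, the entry in /etc/lvm/lvmlocal.conf for a host assigned
       host_id 1 (the value is chosen per host by the administrator) looks
       like:

       local {
           host_id = 1
       }
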
   3. start lvmlockd
       Start the lvmlockd daemon.
       Use systemctl, a cluster resource agent, or run directly, e.g.
       systemctl start lvmlockd


   4. start lock manager
       sanlock
       Start the sanlock and wdmd daemons.
       Use systemctl or run directly, e.g.
       systemctl start wdmd sanlock

       dlm
       Start the dlm and corosync daemons.
       Use systemctl, a cluster resource agent, or run directly, e.g.
       systemctl start corosync dlm


   5. create VG on shared devices
       vgcreate --shared <vgname> <devices>

       The shared option sets the VG lock type to sanlock or dlm depending on
       which lock manager is running.  LVM commands acquire locks from
       lvmlockd, and lvmlockd uses the chosen lock manager.

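       For example, with one of the lock managers running (the VG name and
       device paths below are hypothetical):

       # vg01, /dev/sdb and /dev/sdc are hypothetical names
       vgcreate --shared vg01 /dev/sdb /dev/sdc
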
   6. start VG on all hosts
       vgchange --lock-start

       Shared VGs must be started before they are used.  Starting the VG
       performs lock manager initialization that is necessary to begin using
       locks (i.e. creating and joining a lockspace).  Starting the VG may
       take some time, and until the start completes the VG may not be
       modified or activated.


   7. create and activate LVs
       Standard lvcreate and lvchange commands are used to create and activate
       LVs in a shared VG.

       An LV activated exclusively on one host cannot be activated on another.
       When multiple hosts need to use the same LV concurrently, the LV can be
       activated with a shared lock (see lvchange options -aey vs -asy).
       (Shared locks are disallowed for certain LV types that cannot be used
       from multiple hosts.)

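       A minimal sketch, using hypothetical names and a hypothetical size:

       # vg01 and lv01 are hypothetical names
       lvcreate -n lv01 -L 10G vg01    # created and activated (exclusive by default)
       lvchange -an vg01/lv01          # deactivate it
       lvchange -asy vg01/lv01         # re-activate with a shared lock, if the LV type allows
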
   Normal start up and shut down
       After initial set up, start up and shut down include the following
       steps.  They can be performed directly or may be automated using
       systemd or a cluster resource manager/agents.

       · start lvmlockd
       · start lock manager
       · vgchange --lock-start
       · activate LVs in shared VGs

       The shut down sequence is the reverse:

       · deactivate LVs in shared VGs
       · vgchange --lock-stop
       · stop lock manager
       · stop lvmlockd

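       As a concrete sketch for a sanlock-based setup (vg01 is a hypothetical
       VG name), a manual sequence could look like:

       # start up
       systemctl start lvmlockd
       systemctl start wdmd sanlock
       vgchange --lock-start vg01
       vgchange -ay vg01

       # shut down
       vgchange -an vg01
       vgchange --lock-stop vg01
       systemctl stop sanlock wdmd
       systemctl stop lvmlockd
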
TOPICS
   Protecting VGs on shared devices
       The following terms are used to describe the different ways of
       accessing VGs on shared devices.

       shared VG

       A shared VG exists on shared storage that is visible to multiple hosts.
       LVM acquires locks through lvmlockd to coordinate access to shared VGs.
       A shared VG has lock_type "dlm" or "sanlock", which specifies the lock
       manager lvmlockd will use.

       When the lock manager for the lock type is not available (e.g. not
       started or failed), lvmlockd is unable to acquire locks for LVM
       commands.  In this situation, LVM commands are only allowed to read and
       display the VG; changes and activation will fail.

       local VG

       A local VG is meant to be used by a single host.  It has no lock type
       or lock type "none".  A local VG typically exists on local (non-shared)
       devices and cannot be used concurrently from different hosts.

       If a local VG does exist on shared devices, it should be owned by a
       single host by having the system ID set, see lvmsystemid(7).  The host
       with a matching system ID can use the local VG and other hosts will
       ignore it.  A VG with no lock type and no system ID should be excluded
       from all but one host using lvm.conf filters.  Without any of these
       protections, a local VG on shared devices can be easily damaged or
       destroyed.

       clvm VG

       A clvm VG (or clustered VG) is a VG on shared storage (like a shared
       VG) that requires clvmd for clustering and locking.  See below for
       converting a clvm/clustered VG to a shared VG.


   shared VGs from hosts not using lvmlockd
       Hosts that do not use shared VGs will not be running lvmlockd.  In this
       case, shared VGs that are still visible to the host will be ignored
       (like foreign VGs, see lvmsystemid(7).)

       The --shared option for reporting and display commands causes shared
       VGs to be displayed on a host not using lvmlockd, like the --foreign
       option does for foreign VGs.

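       For example, on such a host, shared VGs can still be listed with:

       vgs --shared
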
   creating the first sanlock VG
       Creating the first sanlock VG is not protected by locking, so it
       requires special attention.  This is because sanlock locks exist on
       storage within the VG, so they are not available until after the VG is
       created.  The first sanlock VG that is created will automatically
       contain the "global lock".  Be aware of the following special
       considerations:

       · The first vgcreate command needs to be given the path to a device
         that has not yet been initialized with pvcreate.  The pvcreate
         initialization will be done by vgcreate.  This is because the
         pvcreate command requires the global lock, which will not be
         available until after the first sanlock VG is created.

       · Because the first sanlock VG will contain the global lock, this VG
         needs to be accessible to all hosts that will use sanlock shared VGs.
         All hosts will need to use the global lock from the first sanlock VG.

       · The device and VG name used by the initial vgcreate will not be
         protected from concurrent use by another vgcreate on another host.

         See below for more information about managing the sanlock global
         lock.


   using shared VGs
       There are some special considerations when using shared VGs.

       When use_lvmlockd is first enabled in lvm.conf, and before the first
       shared VG is created, no global lock will exist.  In this initial
       state, LVM commands try and fail to acquire the global lock, producing
       a warning, and some commands are disallowed.  Once the first shared VG
       is created, the global lock will be available, and LVM will be fully
       operational.

       When a new shared VG is created, its lockspace is automatically started
       on the host that creates it.  Other hosts need to run 'vgchange
       --lock-start' to start the new VG before they can use it.

       From the 'vgs' command, shared VGs are indicated by "s" (for shared) in
       the sixth attr field, and by "shared" in the "--options shared" report
       field.  The specific lock type and lock args for a shared VG can be
       displayed with 'vgs -o+locktype,lockargs'.

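       For a hypothetical shared VG named vg01, the output is roughly along
       these lines (exact columns and values vary by system and version); note
       the "s" in the sixth attr position:

       vgs -o name,attr vg01
       VG    Attr
       vg01  wz--ns
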
       Shared VGs need to be "started" and "stopped", unlike other types of
       VGs.  See the following section for a full description of starting and
       stopping.

       Removing a shared VG will fail if other hosts have the VG started.  Run
       vgchange --lock-stop <vgname> on all other hosts before vgremove.  (It
       may take several seconds before vgremove recognizes that all hosts have
       stopped a sanlock VG.)


   starting and stopping VGs
       Starting a shared VG (vgchange --lock-start) causes the lock manager to
       start (join) the lockspace for the VG on the host where it is run.
       This makes locks for the VG available to LVM commands on the host.
       Before a VG is started, only LVM commands that read/display the VG are
       allowed to continue without locks (and with a warning).

       Stopping a shared VG (vgchange --lock-stop) causes the lock manager to
       stop (leave) the lockspace for the VG on the host where it is run.
       This makes locks for the VG inaccessible to the host.  A VG cannot be
       stopped while it has active LVs.

       When using the lock type sanlock, starting a VG can take a long time
       (potentially minutes if the host was previously shut down without
       cleanly stopping the VG.)

       A shared VG can be started after all the following are true:
       · lvmlockd is running
       · the lock manager is running
       · the VG's devices are visible on the system

       A shared VG can be stopped if all LVs are deactivated.

       All shared VGs can be started/stopped using:
       vgchange --lock-start
       vgchange --lock-stop


       Individual VGs can be started/stopped using:
       vgchange --lock-start <vgname> ...
       vgchange --lock-stop <vgname> ...

       To make vgchange not wait for start to complete:
       vgchange --lock-start --lock-opt nowait ...

       lvmlockd can be asked directly to stop all lockspaces:
       lvmlockctl --stop-lockspaces

       To start only selected shared VGs, use the lvm.conf
       activation/lock_start_list.  When defined, only VG names in this list
       are started by vgchange.  If the list is not defined (the default), all
       visible shared VGs are started.  To start only "vg1", use the following
       lvm.conf configuration:

       activation {
           lock_start_list = [ "vg1" ]
           ...
       }


   automatic starting and automatic activation
       When system-level scripts/programs automatically start VGs, they should
       use the "auto" option.  This option indicates that the command is being
       run automatically by the system:

       vgchange --lock-start --lock-opt auto [<vgname> ...]

       The "auto" option causes the command to follow the lvm.conf
       activation/auto_lock_start_list.  If auto_lock_start_list is undefined,
       all VGs are started, just as if the auto option was not used.

       When auto_lock_start_list is defined, it lists the shared VGs that
       should be started by the auto command.  VG names that do not match an
       item in the list will be ignored by the auto start command.

       (The lock_start_list is also still used to filter VG names from all
       start commands, i.e. with or without the auto option.  When the
       lock_start_list is defined, only VGs matching a list item can be
       started with vgchange.)

       The auto_lock_start_list allows a user to select certain shared VGs
       that should be automatically started by the system (or indirectly,
       those that should not).

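       For example, to have automatic start commands bring up only "vg1"
       (mirroring the lock_start_list example above), lvm.conf could contain:

       activation {
           auto_lock_start_list = [ "vg1" ]
           ...
       }
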
   internal command locking
       To optimize the use of LVM with lvmlockd, be aware of the three kinds
       of locks and when they are used:

       Global lock

       The global lock is associated with global information, which is
       information not isolated to a single VG.  This includes:

       · The global VG namespace.
       · The set of orphan PVs and unused devices.
       · The properties of orphan PVs, e.g. PV size.

       The global lock is acquired in shared mode by commands that read this
       information, or in exclusive mode by commands that change it.  For
       example, the command 'vgs' acquires the global lock in shared mode
       because it reports the list of all VG names, and the vgcreate command
       acquires the global lock in exclusive mode because it creates a new VG
       name, and it takes a PV from the list of unused PVs.

       When an LVM command is given a tag argument, or uses select, it must
       read all VGs to match the tag or selection, which causes the global
       lock to be acquired.

       VG lock

       A VG lock is associated with each shared VG.  The VG lock is acquired
       in shared mode to read the VG and in exclusive mode to change the VG or
       activate LVs.  This lock serializes access to a VG with all other LVM
       commands accessing the VG from all hosts.

       The command 'vgs <vgname>' does not acquire the global lock (it does
       not need the list of all VG names), but will acquire the VG lock on
       each VG name argument.

       LV lock

       An LV lock is acquired before the LV is activated, and is released
       after the LV is deactivated.  If the LV lock cannot be acquired, the LV
       is not activated.  (LV locks are persistent and remain in place when
       the activation command is done.  Global and VG locks are transient, and
       are held only while an LVM command is running.)

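       To illustrate, with hypothetical names (vg01, vg02, lv01, /dev/sdc):

       vgs                      # global lock, shared mode (lists all VG names)
       vgs vg01                 # VG lock on vg01, shared mode; no global lock
       vgcreate vg02 /dev/sdc   # global lock, exclusive mode
       lvchange -aey vg01/lv01  # VG lock exclusive, plus an exclusive LV lock
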
       lock retries

       If a request for a Global or VG lock fails due to a lock conflict with
       another host, lvmlockd automatically retries for a short time before
       returning a failure to the LVM command.  If those retries are
       insufficient, the LVM command will retry the entire lock request a
       number of times specified by global/lvmlockd_lock_retries before
       failing.  If a request for an LV lock fails due to a lock conflict, the
       command fails immediately.

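       For example, a host that sees frequent transient conflicts could raise
       the retry count in lvm.conf along these lines (the value 5 is only
       illustrative; the current and default values can be checked with
       lvmconfig):

       global {
           lvmlockd_lock_retries = 5
       }
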
   managing the global lock in sanlock VGs
       The global lock exists in one of the sanlock VGs.  The first sanlock VG
       created will contain the global lock.  Subsequent sanlock VGs will each
       contain a disabled global lock that can be enabled later if necessary.

       The VG containing the global lock must be visible to all hosts using
       sanlock VGs.  For this reason, it can be useful to create a small
       sanlock VG, visible to all hosts, and dedicated to just holding the
       global lock.  While not required, this strategy can help to avoid
       difficulty in the future if VGs are moved or removed.

       The vgcreate command typically acquires the global lock, but in the
       case of the first sanlock VG, there will be no global lock to acquire
       until the first vgcreate is complete.  So, creating the first sanlock
       VG is a special case that skips the global lock.

       vgcreate determines that it's creating the first sanlock VG when no
       other sanlock VGs are visible on the system.  It is possible that other
       sanlock VGs do exist, but are not visible when vgcreate checks for
       them.  In this case, vgcreate will create a new sanlock VG with the
       global lock enabled.  When the other VG containing a global lock
       appears, lvmlockd will then see more than one VG with a global lock
       enabled.  LVM commands will report that there are duplicate global
       locks.

       If the situation arises where more than one sanlock VG contains a
       global lock, the global lock should be manually disabled in all but one
       of them with the command:

       lvmlockctl --gl-disable <vgname>

       (The one VG with the global lock enabled must be visible to all hosts.)

       An opposite problem can occur if the VG holding the global lock is
       removed.  In this case, no global lock will exist following the
       vgremove, and subsequent LVM commands will fail to acquire it.  In this
       case, the global lock needs to be manually enabled in one of the
       remaining sanlock VGs with the command:

       lvmlockctl --gl-enable <vgname>

       (Using a small sanlock VG dedicated to holding the global lock can
       avoid the case where the global lock must be manually enabled after a
       vgremove.)


   internal lvmlock LV
       A sanlock VG contains a hidden LV called "lvmlock" that holds the
       sanlock locks.  vgreduce cannot yet remove the PV holding the lvmlock
       LV.  To remove this PV, change the VG lock type to "none", run
       vgreduce, then change the VG lock type back to "sanlock".  Similarly,
       pvmove cannot be used on a PV used by the lvmlock LV.

       To place the lvmlock LV on a specific device, create the VG with only
       that device, then use vgextend to add other devices.

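       A sketch of that vgreduce procedure, using hypothetical names (all LVs
       inactive, and the VG stopped on all other hosts first):

       vgchange --lock-type none vg01      # temporarily drop the sanlock lock type
       vgreduce vg01 /dev/sdb              # remove the PV that held the lvmlock LV
       vgchange --lock-type sanlock vg01   # restore the shared lock type
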
   LV activation
       In a shared VG, LV activation involves locking through lvmlockd, and
       the following values are possible with lvchange/vgchange -a:

       y|ey   The command activates the LV in exclusive mode, allowing a
              single host to activate the LV.  Before activating the LV, the
              command uses lvmlockd to acquire an exclusive lock on the LV.
              If the lock cannot be acquired, the LV is not activated and an
              error is reported.  This would happen if the LV is active on
              another host.

       sy     The command activates the LV in shared mode, allowing multiple
              hosts to activate the LV concurrently.  Before activating the
              LV, the command uses lvmlockd to acquire a shared lock on the
              LV.  If the lock cannot be acquired, the LV is not activated and
              an error is reported.  This would happen if the LV is active
              exclusively on another host.  If the LV type prohibits shared
              access, such as a snapshot, the command will report an error and
              fail.  The shared mode is intended for a multi-host/cluster
              application or file system.  LV types that cannot be used
              concurrently from multiple hosts include thin, cache, raid, and
              snapshot.

       n      The command deactivates the LV.  After deactivating the LV, the
              command uses lvmlockd to release the current lock on the LV.


   manually repairing a shared VG
       Some failure conditions may not be repairable while the VG has a shared
       lock type.  In these cases, it may be possible to repair the VG by
       forcibly changing the lock type to "none".  This is done by adding
       "--lock-opt force" to the normal command for changing the lock type:
       vgchange --lock-type none VG.  The VG lockspace should first be stopped
       on all hosts, and it must be certain that no hosts are using the VG
       before this is done.


   recover from lost PV holding sanlock locks
       In a sanlock VG, the sanlock locks are held on the hidden "lvmlock" LV.
       If the PV holding this LV is lost, a new lvmlock LV needs to be
       created.  To do this, ensure no hosts are using the VG, then forcibly
       change the lock type to "none" (see above).  Then change the lock type
       back to "sanlock" with the normal command for changing the lock type:
       vgchange --lock-type sanlock VG.  This recreates the internal lvmlock
       LV with the necessary locks.

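       Summarized as commands, using a hypothetical VG name:

       vgchange --lock-type none --lock-opt force vg01
       vgchange --lock-type sanlock vg01
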
   locking system failures
       lvmlockd failure

       If lvmlockd fails or is killed while holding locks, the locks are
       orphaned in the lock manager.  lvmlockd can be restarted with an option
       to adopt locks in the lock manager that had been held by the previous
       instance.

       dlm/corosync failure

       If dlm or corosync fail, the clustering system will fence the host
       using a method configured within the dlm/corosync clustering
       environment.

       LVM commands on other hosts will be blocked from acquiring any locks
       until the dlm/corosync recovery process is complete.

       sanlock lease storage failure

       If the PV under a sanlock VG's lvmlock LV is disconnected, unresponsive
       or too slow, sanlock cannot renew the lease for the VG's locks.  After
       some time, the lease will expire, and locks that the host owns in the
       VG can be acquired by other hosts.  The VG must be forcibly deactivated
       on the host with the expiring lease before other hosts can acquire its
       locks.

       When the sanlock daemon detects that the lease storage is lost, it runs
       the command lvmlockctl --kill <vgname>.  This command emits a syslog
       message stating that lease storage is lost for the VG, and LVs must be
       immediately deactivated.

       If no LVs are active in the VG, then the lockspace with an expiring
       lease will be removed, and errors will be reported when trying to use
       the VG.  Use the lvmlockctl --drop command to clear the stale lockspace
       from lvmlockd.

       If the VG has active LVs when the lock storage is lost, the LVs must be
       quickly deactivated before the lockspace lease expires.  After all LVs
       are deactivated, run lvmlockctl --drop <vgname> to clear the expiring
       lockspace from lvmlockd.  If all LVs in the VG are not deactivated
       within about 40 seconds, sanlock uses wdmd and the local watchdog to
       reset the host.  The machine reset is effectively a severe form of
       "deactivating" LVs before they can be activated on other hosts.  The
       reset is considered a better alternative than having LVs used by
       multiple hosts at once, which could easily damage or destroy their
       content.

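       A sketch of that manual response, using a hypothetical VG name vg01:

       vgchange -an vg01        # deactivate all LVs in the VG as quickly as possible
       lvmlockctl --drop vg01   # then clear the expiring lockspace from lvmlockd
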
       In the future, the lvmlockctl kill command may automatically attempt to
       forcibly deactivate LVs before the sanlock lease expires.  Until then,
       the user must notice the syslog message and manually deactivate the VG
       before sanlock resets the machine.

       sanlock daemon failure

       If the sanlock daemon fails or exits while a lockspace is started, the
       local watchdog will reset the host.  This is necessary to protect any
       application resources that depend on sanlock leases.


   changing dlm cluster name
       When a dlm VG is created, the cluster name is saved in the VG metadata.
       To use the VG, a host must be in the named dlm cluster.  If the dlm
       cluster name changes, or the VG is moved to a new cluster, the dlm
       cluster name saved in the VG must also be changed.

       To see the dlm cluster name saved in the VG, use the command:
       vgs -o+locktype,lockargs <vgname>

       To change the dlm cluster name in the VG when the VG is still used by
       the original cluster:

       · Start the VG on the host changing the lock type:
         vgchange --lock-start <vgname>

       · Stop the VG on all other hosts:
         vgchange --lock-stop <vgname>

       · Change the VG lock type to none on the host where the VG is started:
         vgchange --lock-type none <vgname>

       · Change the dlm cluster name on the hosts or move the VG to the new
         cluster.  The new dlm cluster must now be running on the host.
         Verify the new name by:
         cat /sys/kernel/config/dlm/cluster/cluster_name

       · Change the VG lock type back to dlm which sets the new cluster name:
         vgchange --lock-type dlm <vgname>

       · Start the VG on hosts to use it:
         vgchange --lock-start <vgname>

       To change the dlm cluster name in the VG when the dlm cluster name has
       already been changed on the hosts, or the VG has already moved to a
       different cluster:

       · Ensure the VG is not being used by any hosts.

       · The new dlm cluster must be running on the host making the change.
         The current dlm cluster name can be seen by:
         cat /sys/kernel/config/dlm/cluster/cluster_name

       · Change the VG lock type to none:
         vgchange --lock-type none --lock-opt force <vgname>

       · Change the VG lock type back to dlm which sets the new cluster name:
         vgchange --lock-type dlm <vgname>

       · Start the VG on hosts to use it:
         vgchange --lock-start <vgname>


   changing a local VG to a shared VG
       All LVs must be inactive to change the lock type.

       lvmlockd must be configured and running as described in USAGE.

       · Change a local VG to a shared VG with the command:
         vgchange --lock-type sanlock|dlm <vgname>

       · Start the VG on hosts to use it:
         vgchange --lock-start <vgname>


   changing a shared VG to a local VG
       All LVs must be inactive to change the lock type.

       · Start the VG on the host making the change:
         vgchange --lock-start <vgname>

       · Stop the VG on all other hosts:
         vgchange --lock-stop <vgname>

       · Change the VG lock type to none on the host where the VG is started:
         vgchange --lock-type none <vgname>

       If the VG cannot be started with the previous lock type, then the lock
       type can be forcibly changed to none with:

       vgchange --lock-type none --lock-opt force <vgname>

       To change a VG from one lock type to another (i.e. between sanlock and
       dlm), first change it to a local VG, then to the new type.


   changing a clvm/clustered VG to a shared VG
       All LVs must be inactive to change the lock type.

       First change the clvm/clustered VG to a local VG.  Within a running
       clvm cluster, change a clustered VG to a local VG with the command:

       vgchange -cn <vgname>

       If the clvm cluster is no longer running on any nodes, then extra
       options can be used to forcibly make the VG local.  Caution: this is
       only safe if all nodes have stopped using the VG:

       vgchange --lock-type none --lock-opt force <vgname>

       After the VG is local, follow the steps described in "changing a local
       VG to a shared VG".


   extending an LV active on multiple hosts
       With lvmlockd, a new procedure is required to extend an LV while it is
       active on multiple hosts (e.g. when used under gfs2):

       1. On one node run the lvextend command:
          lvextend --lockopt skiplv -L Size VG/LV

       2. On each node using the LV, refresh the LV:
          lvchange --refresh VG/LV

       3. On one node extend gfs2 (or comparable for other applications):
          gfs2_grow VG/LV


   limitations of shared VGs
       Things that do not yet work in shared VGs:
       · using external origins for thin LVs
       · splitting snapshots from LVs
       · splitting mirrors in sanlock VGs
       · pvmove of entire PVs, or under LVs activated with shared locks
       · vgsplit and vgmerge (convert to a local VG to do this)


   lvmlockd changes from clvmd
       (See above for converting an existing clvm VG to a shared VG.)

       While lvmlockd and clvmd are entirely different systems, LVM command
       usage remains similar.  Differences are more notable when using
       lvmlockd's sanlock option.

       Visible usage differences between shared VGs (using lvmlockd) and
       clvm/clustered VGs (using clvmd):

       · lvm.conf is configured to use lvmlockd by setting use_lvmlockd=1.
         clvmd used locking_type=3.

       · vgcreate --shared creates a shared VG.  vgcreate --clustered y
         created a clvm/clustered VG.

       · lvmlockd adds the option of using sanlock for locking, avoiding the
         need for network clustering.

       · lvmlockd defaults to the exclusive activation mode whenever the
         activation mode is unspecified, i.e. -ay means -aey, not -asy.

       · lvmlockd commands always apply to the local host, and never have an
         effect on a remote host.  (The activation option 'l' is not used.)

       · lvmlockd saves the cluster name for a shared VG using dlm.  Only
         hosts in the matching cluster can use the VG.

       · lvmlockd requires starting/stopping shared VGs with vgchange
         --lock-start and --lock-stop.

       · vgremove of a sanlock VG may fail indicating that all hosts have not
         stopped the VG lockspace.  Stop the VG on all hosts using vgchange
         --lock-stop.

       · vgreduce or pvmove of a PV in a sanlock VG will fail if it holds the
         internal "lvmlock" LV that holds the sanlock locks.

       · lvmlockd uses lock retries instead of lock queueing, so high lock
         contention may require increasing global/lvmlockd_lock_retries to
         avoid transient lock failures.

       · lvmlockd includes VG reporting options lock_type and lock_args, and
         LV reporting option lock_args to view the corresponding metadata
         fields.

       · In the 'vgs' command's sixth VG attr field, "s" for "shared" is
         displayed for shared VGs.

       · If lvmlockd fails or is killed while in use, locks it held remain but
         are orphaned in the lock manager.  lvmlockd can be restarted with an
         option to adopt the orphan locks from the previous instance of
         lvmlockd.

Red Hat, Inc        LVM TOOLS 2.03.02(2)-RHEL8 (2019-01-04)        LVMLOCKD(8)