LVMLOCKD(8)                                                        LVMLOCKD(8)

NAME

       lvmlockd — LVM locking daemon

SYNOPSIS

       lvmlockd [options]

DESCRIPTION

       LVM commands use lvmlockd to coordinate access to shared storage.
       When LVM is used on devices shared by multiple hosts, locks will:
       • coordinate reading and writing of LVM metadata
       • validate caching of LVM metadata
       • prevent conflicting activation of logical volumes
       lvmlockd uses an external lock manager to perform basic locking.

       Lock manager (lock type) options are:
       • sanlock: places locks on disk within LVM storage.
       • dlm: uses network communication and a cluster manager.

OPTIONS

       -h|--help
              Show this help information.

       -V|--version
              Show version of lvmlockd.

       -T|--test
              Test mode, do not call lock manager.

       -f|--foreground
              Don't fork.

       -D|--daemon-debug
              Don't fork and print debugging to stdout.

       -p|--pid-file path
              Set path to the pid file.

       -s|--socket-path path
              Set path to the socket to listen on.

       --adopt-file path
              Set path to the adopt file.

       -S|--syslog-priority err|warning|debug
              Write log messages from this level up to syslog.

       -g|--gl-type sanlock|dlm
              Set global lock type to be sanlock or dlm.

       -i|--host-id num
              Set the local sanlock host id.

       -F|--host-id-file path
              A file containing the local sanlock host_id.

       -o|--sanlock-timeout seconds
              Override the default sanlock I/O timeout.

       -A|--adopt 0|1
              Enable (1) or disable (0) lock adoption.

USAGE

   Initial set up
       Setting up LVM to use lvmlockd and a shared VG for the first time
       includes some one-time setup steps:

   1. choose a lock manager
       dlm
       If dlm (or corosync) is already being used by other cluster software,
       then select dlm.  dlm uses corosync, which requires additional
       configuration beyond the scope of this document.  See the corosync
       and dlm documentation for instructions on configuration, setup and
       usage.

       sanlock
       Choose sanlock if dlm/corosync are not otherwise required.  sanlock
       does not depend on any clustering software or configuration.

   2. configure hosts to use lvmlockd
       On all hosts running lvmlockd, configure lvm.conf:
       use_lvmlockd = 1

       sanlock
       Assign each host a unique host_id in the range 1-2000 by setting
       /etc/lvm/lvmlocal.conf local/host_id
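
       For example, the host_id might be set in lvmlocal.conf as follows (the
       value 1 is illustrative; each host must use a different number):

       local {
           host_id = 1
       }
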
89
90   3. start lvmlockd
91       Start the lvmlockd daemon.
92       Use systemctl, a cluster resource agent, or run directly, e.g.
93       systemctl start lvmlockd
94
95   4. start lock manager
96       sanlock
97       Start the sanlock and wdmd daemons.
98       Use systemctl or run directly, e.g.
99       systemctl start wdmd sanlock
100
101       dlm
102       Start the dlm and corosync daemons.
103       Use systemctl, a cluster resource agent, or run directly, e.g.
104       systemctl start corosync dlm
105
106   5. create VG on shared devices
107       vgcreate --shared <vgname> <devices>
108
109       The shared option sets the VG lock type to sanlock or dlm depending  on
110       which  lock  manager  is running.  LVM commands acquire locks from lvm‐
111       lockd, and lvmlockd uses the chosen lock manager.
112
113   6. start VG on all hosts
114       vgchange --lock-start
115
116       Shared VGs must be started before they are used.  Starting the VG  per‐
117       forms  lock  manager  initialization  that  is necessary to begin using
118       locks (i.e.  creating and joining a lockspace).  Starting  the  VG  may
119       take  some  time, and until the start completes the VG may not be modi‐
120       fied or activated.
121
122   7. create and activate LVs
123       Standard lvcreate and lvchange commands are used to create and activate
124       LVs in a shared VG.

       An LV activated exclusively on one host cannot be activated on another.
       When multiple hosts need to use the same LV concurrently, the LV can be
       activated with a shared lock (see the lvchange options -aey vs -asy and
       the example below).  (Shared locks are disallowed for certain LV types
       that cannot be used from multiple hosts.)
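
       A minimal sketch (the VG name, LV name, and size are illustrative):
       create the LV without activating it, then activate it with a shared
       lock on each host that needs it:

       lvcreate -an --name lv1 --size 100G vg1
       lvchange -asy vg1/lv1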

   Normal start up and shut down
       After initial set up, start up and shut down include the following
       steps.  They can be performed directly or may be automated using
       systemd or a cluster resource manager/agents.

       • start lvmlockd
       • start lock manager
       • vgchange --lock-start
       • activate LVs in shared VGs

       The shut down sequence is the reverse:

       • deactivate LVs in shared VGs
       • vgchange --lock-stop
       • stop lock manager
       • stop lvmlockd

TOPICS

   Protecting VGs on shared devices
       The following terms are used to describe the different ways of
       accessing VGs on shared devices.

       shared VG

       A shared VG exists on shared storage that is visible to multiple hosts.
       LVM acquires locks through lvmlockd to coordinate access to shared VGs.
       A shared VG has lock_type "dlm" or "sanlock", which specifies the lock
       manager lvmlockd will use.

       When the lock manager for the lock type is not available (e.g. not
       started or failed), lvmlockd is unable to acquire locks for LVM
       commands.  In this situation, LVM commands are only allowed to read and
       display the VG; changes and activation will fail.

       local VG

       A local VG is meant to be used by a single host.  It has no lock type
       or lock type "none".  A local VG typically exists on local (non-shared)
       devices and cannot be used concurrently from different hosts.

       If a local VG does exist on shared devices, it should be owned by a
       single host by having the system ID set, see lvmsystemid(7).  The host
       with a matching system ID can use the local VG and other hosts will
       ignore it.  A VG with no lock type and no system ID should be excluded
       from all but one host using lvm.conf filters.  Without any of these
       protections, a local VG on shared devices can be easily damaged or
       destroyed.

       clvm VG

       A clvm VG (or clustered VG) is a VG on shared storage (like a shared
       VG) that requires clvmd for clustering and locking.  See below for
       converting a clvm/clustered VG to a shared VG.

   Shared VGs from hosts not using lvmlockd
       Hosts that do not use shared VGs will not be running lvmlockd.  In this
       case, shared VGs that are still visible to the host will be ignored
       (like foreign VGs, see lvmsystemid(7)).

       The --shared option for reporting and display commands causes shared
       VGs to be displayed on a host not using lvmlockd, like the --foreign
       option does for foreign VGs.
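
       For example, to list shared VGs from a host that is not running
       lvmlockd:

       vgs --shared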

   Creating the first sanlock VG
       When use_lvmlockd is first enabled in lvm.conf, and before the first
       sanlock VG is created, no global lock will exist.  In this initial
       state, LVM commands try and fail to acquire the global lock, producing
       a warning, and some commands are disallowed.  Once the first sanlock VG
       is created, the global lock will be available, and LVM will be fully
       operational.

       When a new sanlock VG is created, its lockspace is automatically
       started on the host that creates it.  Other hosts need to run 'vgchange
       --lock-start' to start the new VG before they can use it.

       Creating the first sanlock VG is not protected by locking, so it
       requires special attention.  This is because sanlock locks exist on
       storage within the VG, so they are not available until after the VG is
       created.  The first sanlock VG that is created will automatically
       contain the "global lock".  Be aware of the following special
       considerations:

       • The first vgcreate command needs to be given the path to a device
         that has not yet been initialized with pvcreate.  The pvcreate
         initialization will be done by vgcreate.  This is because the
         pvcreate command requires the global lock, which will not be
         available until after the first sanlock VG is created.

       • Because the first sanlock VG will contain the global lock, this VG
         needs to be accessible to all hosts that will use sanlock shared VGs.
         All hosts will need to use the global lock from the first sanlock VG.

       • The device and VG name used by the initial vgcreate will not be
         protected from concurrent use by another vgcreate on another host.

       See below for more information about managing the sanlock global lock.
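
       A sketch of this first creation (the VG name and device are
       illustrative; the device must not already be a PV):

       vgcreate --shared vg1 /dev/sdb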

   Using shared VGs
       In the 'vgs' command, shared VGs are indicated by "s" (for shared) in
       the sixth attr field, and by "shared" in the "--options shared" report
       field.  The specific lock type and lock args for a shared VG can be
       displayed with 'vgs -o+locktype,lockargs'.

       Shared VGs need to be "started" and "stopped", unlike other types of
       VGs.  See the following section for a full description of starting and
       stopping.

       Removing a shared VG will fail if other hosts have the VG started.  Run
       vgchange --lock-stop <vgname> on all other hosts before vgremove.  (It
       may take several seconds before vgremove recognizes that all hosts have
       stopped a sanlock VG.)

   Starting and stopping VGs
       Starting a shared VG (vgchange --lock-start) causes the lock manager to
       start (join) the lockspace for the VG on the host where it is run.
       This makes locks for the VG available to LVM commands on the host.
       Before a VG is started, only LVM commands that read/display the VG are
       allowed to continue without locks (and with a warning).

       Stopping a shared VG (vgchange --lock-stop) causes the lock manager to
       stop (leave) the lockspace for the VG on the host where it is run.
       This makes locks for the VG inaccessible to the host.  A VG cannot be
       stopped while it has active LVs.

       When using the lock type sanlock, starting a VG can take a long time
       (potentially minutes if the host was previously shut down without
       cleanly stopping the VG).

       A shared VG can be started after all the following are true:

       • lvmlockd is running
       • the lock manager is running
       • the VG's devices are visible on the system

       A shared VG can be stopped if all LVs are deactivated.

       All shared VGs can be started/stopped using:
       vgchange --lock-start
       vgchange --lock-stop

       Individual VGs can be started/stopped using:
       vgchange --lock-start <vgname> ...
       vgchange --lock-stop <vgname> ...

       To make vgchange not wait for start to complete:
       vgchange --lock-start --lock-opt nowait ...

       lvmlockd can be asked directly to stop all lockspaces:
       lvmlockctl -S|--stop-lockspaces

       To start only selected shared VGs, use the lvm.conf
       activation/lock_start_list.  When defined, only VG names in this list
       are started by vgchange.  If the list is not defined (the default),
       all visible shared VGs are started.  To start only "vg1", use the
       following lvm.conf configuration:

       activation {
           lock_start_list = [ "vg1" ]
           ...
       }

   Internal command locking
       To optimize the use of LVM with lvmlockd, be aware of the three kinds
       of locks and when they are used:

       Global lock

       The global lock is associated with global information, which is
       information not isolated to a single VG.  This includes:

       • The global VG namespace.
       • The set of orphan PVs and unused devices.
       • The properties of orphan PVs, e.g. PV size.

       The global lock is acquired in shared mode by commands that read this
       information, or in exclusive mode by commands that change it.  For
       example, the command 'vgs' acquires the global lock in shared mode
       because it reports the list of all VG names, and the vgcreate command
       acquires the global lock in exclusive mode because it creates a new VG
       name, and it takes a PV from the list of unused PVs.

       When an LVM command is given a tag argument, or uses select, it must
       read all VGs to match the tag or selection, which causes the global
       lock to be acquired.

       VG lock

       A VG lock is associated with each shared VG.  The VG lock is acquired
       in shared mode to read the VG and in exclusive mode to change the VG or
       activate LVs.  This lock serializes access to a VG with all other LVM
       commands accessing the VG from all hosts.

       The command 'vgs <vgname>' does not acquire the global lock (it does
       not need the list of all VG names), but will acquire the VG lock on
       each VG name argument.

       LV lock

       An LV lock is acquired before the LV is activated, and is released
       after the LV is deactivated.  If the LV lock cannot be acquired, the LV
       is not activated.  (LV locks are persistent and remain in place when
       the activation command is done.  Global and VG locks are transient, and
       are held only while an LVM command is running.)

       lock retries

       If a request for a global or VG lock fails due to a lock conflict with
       another host, lvmlockd automatically retries for a short time before
       returning a failure to the LVM command.  If those retries are
       insufficient, the LVM command will retry the entire lock request a
       number of times specified by global/lvmlockd_lock_retries before
       failing.  If a request for an LV lock fails due to a lock conflict, the
       command fails immediately.
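
       For example, to raise the retry count under high lock contention, set
       the following in lvm.conf (the value shown is illustrative):

       global {
           lvmlockd_lock_retries = 5
       }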

   Managing the global lock in sanlock VGs
       The global lock exists in one of the sanlock VGs.  The first sanlock VG
       created will contain the global lock.  Subsequent sanlock VGs will each
       contain a disabled global lock that can be enabled later if necessary.

       The VG containing the global lock must be visible to all hosts using
       sanlock VGs.  For this reason, it can be useful to create a small
       sanlock VG, visible to all hosts, and dedicated to just holding the
       global lock.  While not required, this strategy can help to avoid
       difficulty in the future if VGs are moved or removed.
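
       A sketch of this approach (the VG name and device are illustrative);
       creating this small VG first means it will hold the global lock:

       vgcreate --shared glvg /dev/sdb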

       The vgcreate command typically acquires the global lock, but in the
       case of the first sanlock VG, there will be no global lock to acquire
       until the first vgcreate is complete.  So, creating the first sanlock
       VG is a special case that skips the global lock.

       vgcreate determines that it's creating the first sanlock VG when no
       other sanlock VGs are visible on the system.  It is possible that other
       sanlock VGs do exist, but are not visible when vgcreate checks for
       them.  In this case, vgcreate will create a new sanlock VG with the
       global lock enabled.  When the other VG containing a global lock
       appears, lvmlockd will then see more than one VG with a global lock
       enabled.  LVM commands will report that there are duplicate global
       locks.

       If the situation arises where more than one sanlock VG contains a
       global lock, the global lock should be manually disabled in all but one
       of them with the command:

       lvmlockctl --gl-disable <vgname>

       (The one VG with the global lock enabled must be visible to all hosts.)

       An opposite problem can occur if the VG holding the global lock is
       removed.  In this case, no global lock will exist following the
       vgremove, and subsequent LVM commands will fail to acquire it.  In this
       case, the global lock needs to be manually enabled in one of the
       remaining sanlock VGs with the command:

       lvmlockctl --gl-enable <vgname>

       (Using a small sanlock VG dedicated to holding the global lock can
       avoid the case where the global lock must be manually enabled after a
       vgremove.)

   Internal lvmlock LV
       A sanlock VG contains a hidden LV called "lvmlock" that holds the
       sanlock locks.  vgreduce cannot yet remove the PV holding the lvmlock
       LV.  To remove this PV, change the VG lock type to "none", run
       vgreduce, then change the VG lock type back to "sanlock".  Similarly,
       pvmove cannot be used on a PV used by the lvmlock LV.
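
       A sketch of that sequence, run after stopping the VG on all other hosts
       (the VG and device names are illustrative):

       vgchange --lock-type none vg1
       vgreduce vg1 /dev/sdb
       vgchange --lock-type sanlock vg1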

       To place the lvmlock LV on a specific device, create the VG with only
       that device, then use vgextend to add other devices.

   LV activation
       In a shared VG, LV activation involves locking through lvmlockd, and
       the following values are possible with lvchange/vgchange -a:

       y|ey   The command activates the LV in exclusive mode, allowing a
              single host to activate the LV.  Before activating the LV, the
              command uses lvmlockd to acquire an exclusive lock on the LV.
              If the lock cannot be acquired, the LV is not activated and an
              error is reported.  This would happen if the LV is active on
              another host.

       sy     The command activates the LV in shared mode, allowing multiple
              hosts to activate the LV concurrently.  Before activating the
              LV, the command uses lvmlockd to acquire a shared lock on the
              LV.  If the lock cannot be acquired, the LV is not activated and
              an error is reported.  This would happen if the LV is active
              exclusively on another host.  If the LV type prohibits shared
              access, such as a snapshot, the command will report an error and
              fail.  The shared mode is intended for a multi-host/cluster
              application or file system.  LV types that cannot be used
              concurrently from multiple hosts include thin, cache, raid,
              mirror, and snapshot.

       n      The command deactivates the LV.  After deactivating the LV, the
              command uses lvmlockd to release the current lock on the LV.
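
       For example, activate all LVs in a VG exclusively, activate one LV with
       a shared lock, or deactivate one LV (the names are illustrative):

       vgchange -aey vg1
       lvchange -asy vg1/lv1
       lvchange -an vg1/lv1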

   Manually repairing a shared VG
       Some failure conditions may not be repairable while the VG has a shared
       lock type.  In these cases, it may be possible to repair the VG by
       forcibly changing the lock type to "none".  This is done by adding
       "--lock-opt force" to the normal command for changing the lock type:
       vgchange --lock-type none VG.  The VG lockspace should first be stopped
       on all hosts, and it should be verified that no hosts are using the VG
       before this is done.

   Recover from lost PV holding sanlock locks
       In a sanlock VG, the sanlock locks are held on the hidden "lvmlock" LV.
       If the PV holding this LV is lost, a new lvmlock LV needs to be
       created.  To do this, ensure no hosts are using the VG, then forcibly
       change the lock type to "none" (see above).  Then change the lock type
       back to "sanlock" with the normal command for changing the lock type:
       vgchange --lock-type sanlock VG.  This recreates the internal lvmlock
       LV with the necessary locks.
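
       A sketch of the recovery steps (the VG name is illustrative):

       vgchange --lock-type none --lock-opt force vg1
       vgchange --lock-type sanlock vg1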

   Locking system failures
       lvmlockd failure

       If lvmlockd fails or is killed while holding locks, the locks are
       orphaned in the lock manager.  Orphaned locks must be cleared or
       adopted before the associated resources can be accessed normally.  If
       lock adoption is enabled, lvmlockd keeps a record of locks in the
       adopt-file.  A subsequent instance of lvmlockd will then adopt locks
       orphaned by the previous instance.  Adoption must be enabled in both
       instances (--adopt|-A 1).  Without adoption, the lock manager or host
       would require a reset to clear orphaned lock state.
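
       For example, both the original daemon and its replacement would be
       started with adoption enabled:

       lvmlockd --adopt 1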

       dlm/corosync failure

       If dlm or corosync fail, the clustering system will fence the host
       using a method configured within the dlm/corosync clustering
       environment.

       LVM commands on other hosts will be blocked from acquiring any locks
       until the dlm/corosync recovery process is complete.

       sanlock lease storage failure

       If the PV under a sanlock VG's lvmlock LV is disconnected, unresponsive
       or too slow, sanlock cannot renew the lease for the VG's locks.  After
       some time, the lease will expire, and locks that the host owns in the
       VG can be acquired by other hosts.  The VG must be forcibly deactivated
       on the host with the expiring lease before other hosts can acquire its
       locks.  This is necessary for data protection.

       When the sanlock daemon detects that VG storage is lost and the VG
       lease is expiring, it runs the command lvmlockctl --kill <vgname>.
       This command emits a syslog message stating that storage is lost for
       the VG, and that LVs in the VG must be immediately deactivated.

       If no LVs are active in the VG, then the VG lockspace will be removed,
       and errors will be reported when trying to use the VG.  Use the
       lvmlockctl --drop command to clear the stale lockspace from lvmlockd.

       If the VG has active LVs, they must be quickly deactivated before the
       locks expire.  After all LVs are deactivated, run lvmlockctl --drop
       <vgname> to clear the expiring lockspace from lvmlockd.
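
       A sketch of those manual steps (the VG name is illustrative):

       vgchange -an vg1
       lvmlockctl --drop vg1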

       If all LVs in the VG are not deactivated within about 40 seconds,
       sanlock uses wdmd and the local watchdog to reset the host.  The
       machine reset is effectively a severe form of "deactivating" LVs before
       they can be activated on other hosts.  The reset is considered a better
       alternative than having LVs used by multiple hosts at once, which could
       easily damage or destroy their content.

       sanlock lease storage failure automation

       When the sanlock daemon detects that the lease storage is lost, it runs
       the command lvmlockctl --kill <vgname>.  This lvmlockctl command can be
       configured to run another command to forcibly deactivate LVs, taking
       the place of the manual process described above.  The other command is
       configured in the lvm.conf lvmlockctl_kill_command setting.  The VG
       name is appended to the end of the command specified.

       The lvmlockctl_kill_command should forcibly deactivate LVs in the VG,
       ensuring that existing writes to LVs in the VG are complete and that
       further writes to the LVs in the VG will be rejected.  If it is able to
       do this successfully, it should exit with success, otherwise it should
       exit with an error.  If lvmlockctl --kill gets a successful result from
       lvmlockctl_kill_command, it tells lvmlockd to drop locks for the VG
       (the equivalent of running lvmlockctl --drop).  If this completes in
       time, a machine reset can be avoided.

       One possible option is to create a script my_vg_kill_script.sh:
         #!/bin/bash
         VG=$1
         # replace dm table with the error target for top level LVs
         dmsetup wipe_table -S "uuid=~LVM && vgname=$VG && lv_layer=\"\""
         # check that the error target is in place
         dmsetup table -c -S "uuid=~LVM && vgname=$VG && lv_layer=\"\"" |grep -vw error
         if [[ $? -ne 0 ]] ; then
           exit 0
         fi
         exit 1

       Set in lvm.conf:
         lvmlockctl_kill_command="/usr/sbin/my_vg_kill_script.sh"

       (The script and dmsetup commands should be tested with the actual VG to
       ensure that all top level LVs are properly disabled.)

       If the lvmlockctl_kill_command is not configured, or fails, lvmlockctl
       --kill will emit syslog messages as described in the previous section,
       notifying the user to manually deactivate the VG before sanlock resets
       the machine.

       sanlock daemon failure

       If the sanlock daemon fails or exits while a lockspace is started, the
       local watchdog will reset the host.  This is necessary to protect any
       application resources that depend on sanlock leases.

   Changing dlm cluster name
       When a dlm VG is created, the cluster name is saved in the VG metadata.
       To use the VG, a host must be in the named dlm cluster.  If the dlm
       cluster name changes, or the VG is moved to a new cluster, the dlm
       cluster name saved in the VG must also be changed.

       To see the dlm cluster name saved in the VG, use the command:
       vgs -o+locktype,lockargs <vgname>

       To change the dlm cluster name in the VG when the VG is still used by
       the original cluster:

       • Start the VG on the host changing the lock type:
         vgchange --lock-start <vgname>

       • Stop the VG on all other hosts:
         vgchange --lock-stop <vgname>

       • Change the VG lock type to none on the host where the VG is started:
         vgchange --lock-type none <vgname>

       • Change the dlm cluster name on the hosts or move the VG to the new
         cluster.  The new dlm cluster must now be running on the host.
         Verify the new name with:
         cat /sys/kernel/config/dlm/cluster/cluster_name

       • Change the VG lock type back to dlm, which sets the new cluster name:
         vgchange --lock-type dlm <vgname>

       • Start the VG on hosts to use it:
         vgchange --lock-start <vgname>

       To change the dlm cluster name in the VG when the dlm cluster name has
       already been changed on the hosts, or the VG has already moved to a
       different cluster:

       • Ensure the VG is not being used by any hosts.

       • The new dlm cluster must be running on the host making the change.
         The current dlm cluster name can be seen with:
         cat /sys/kernel/config/dlm/cluster/cluster_name

       • Change the VG lock type to none:
         vgchange --lock-type none --lock-opt force <vgname>

       • Change the VG lock type back to dlm, which sets the new cluster name:
         vgchange --lock-type dlm <vgname>

       • Start the VG on hosts to use it:
         vgchange --lock-start <vgname>

   Changing a local VG to a shared VG
       All LVs must be inactive to change the lock type.

       lvmlockd must be configured and running as described in USAGE.

       • Change a local VG to a shared VG with the command:
         vgchange --lock-type sanlock|dlm <vgname>

       • Start the VG on hosts to use it:
         vgchange --lock-start <vgname>

   Changing a shared VG to a local VG
       All LVs must be inactive to change the lock type.

       • Start the VG on the host making the change:
         vgchange --lock-start <vgname>

       • Stop the VG on all other hosts:
         vgchange --lock-stop <vgname>

       • Change the VG lock type to none on the host where the VG is started:
         vgchange --lock-type none <vgname>

       If the VG cannot be started with the previous lock type, then the lock
       type can be forcibly changed to none with:
       vgchange --lock-type none --lock-opt force <vgname>

       To change a VG from one lock type to another (i.e. between sanlock and
       dlm), first change it to a local VG, then to the new type.

   Changing a clvm/clustered VG to a shared VG
       All LVs must be inactive to change the lock type.

       First change the clvm/clustered VG to a local VG.  Within a running
       clvm cluster, change a clustered VG to a local VG with the command:

       vgchange -cn <vgname>

       If the clvm cluster is no longer running on any nodes, then extra
       options can be used to forcibly make the VG local.  Caution: this is
       only safe if all nodes have stopped using the VG:

       vgchange --lock-type none --lock-opt force <vgname>

       After the VG is local, follow the steps described in "changing a local
       VG to a shared VG".

   Extending an LV active on multiple hosts
       With lvmlockd and dlm, a special clustering procedure is used to
       refresh a shared LV on remote cluster nodes after it has been extended
       on one node.

       When an LV holding gfs2 or ocfs2 is active on multiple hosts with a
       shared lock, lvextend is permitted to run with an existing shared LV
       lock in place of the normal exclusive LV lock.

       After lvextend has finished extending the LV, it sends a remote request
       to other nodes running the dlm to run 'lvchange --refresh' on the LV.
       This uses dlm_controld and corosync features.

       Some special --lockopt values can be used to modify this process.
       "shupdate" permits the lvextend update with an existing shared lock if
       it isn't otherwise permitted.  "norefresh" prevents the remote refresh
       operation.
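
       A sketch of passing one of these values to lvextend (the LV name and
       size are illustrative):

       lvextend --lockopt norefresh -L+100G vg1/lv1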

   Limitations of shared VGs
       Things that do not yet work in shared VGs:
       • using external origins for thin LVs
       • splitting snapshots from LVs
       • splitting mirrors in sanlock VGs
       • pvmove of entire PVs, or under LVs activated with shared locks
       • vgsplit and vgmerge (convert to a local VG to do this)

   lvmlockd changes from clvmd
       (See above for converting an existing clvm VG to a shared VG.)

       While lvmlockd and clvmd are entirely different systems, LVM command
       usage remains similar.  Differences are more notable when using
       lvmlockd's sanlock option.

       Visible usage differences between shared VGs (using lvmlockd) and
       clvm/clustered VGs (using clvmd):

       • lvm.conf is configured to use lvmlockd by setting use_lvmlockd=1.
         clvmd used locking_type=3.

       • vgcreate --shared creates a shared VG.  vgcreate --clustered y
         created a clvm/clustered VG.

       • lvmlockd adds the option of using sanlock for locking, avoiding the
         need for network clustering.

       • lvmlockd defaults to the exclusive activation mode whenever the
         activation mode is unspecified, i.e. -ay means -aey, not -asy.

       • lvmlockd commands always apply to the local host, and never have an
         effect on a remote host.  (The activation option 'l' is not used.)

       • lvmlockd saves the cluster name for a shared VG using dlm.  Only
         hosts in the matching cluster can use the VG.

       • lvmlockd requires starting/stopping shared VGs with vgchange
         --lock-start and --lock-stop.

       • vgremove of a sanlock VG may fail indicating that all hosts have not
         stopped the VG lockspace.  Stop the VG on all hosts using vgchange
         --lock-stop.

       • vgreduce or pvmove of a PV in a sanlock VG will fail if it holds the
         internal "lvmlock" LV that holds the sanlock locks.

       • lvmlockd uses lock retries instead of lock queueing, so high lock
         contention may require increasing global/lvmlockd_lock_retries to
         avoid transient lock failures.

       • lvmlockd includes VG reporting options lock_type and lock_args, and
         LV reporting option lock_args to view the corresponding metadata
         fields.

       • In the 'vgs' command's sixth VG attr field, "s" for "shared" is
         displayed for shared VGs.

       • If lvmlockd fails or is killed while in use, locks it held remain but
         are orphaned in the lock manager.  lvmlockd can be restarted with an
         option to adopt the orphan locks from the previous instance of
         lvmlockd.

       • The 'lvs' command does not report any remote state, because lvmlockd
         is unable to passively check the remote active or lock state of an
         LV.

SEE ALSO

       lvm(8), lvmlockctl(8)


Red Hat, Inc           LVM TOOLS 2.03.22(2) (2023-08-02)           LVMLOCKD(8)