DLM.CONF(5)                           dlm                          DLM.CONF(5)



NAME

       dlm.conf - dlm_controld configuration file

SYNOPSIS

       /etc/dlm/dlm.conf

DESCRIPTION

       The configuration options in dlm.conf mirror the dlm_controld command
       line options.  The config file additionally allows advanced fencing
       and lockspace configurations that are not supported on the command
       line.


   Command line equivalents

       If an option is specified on the command line and in the config file,
       the command line setting overrides the config file setting.  See
       dlm_controld(8) for descriptions and dlm_controld -h for defaults.
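       For example (illustrative values), with post_join_delay=10 in
       dlm.conf, starting dlm_controld with its post_join_delay command line
       option set to 5 results in an effective value of 5.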

       Format:

       setting=value

       Example:

       log_debug=1
       post_join_delay=10
       protocol=tcp

       Options:

       daemon_debug(*)
       log_debug(*)
       protocol
       bind_all
       mark
       debug_logfile(*)
       enable_plock
       plock_debug(*)
       plock_rate_limit(*)
       plock_ownership
       drop_resources_time(*)
       drop_resources_count(*)
       drop_resources_age(*)
       post_join_delay(*)
       enable_fencing
       enable_concurrent_fencing
       enable_startup_fencing
       enable_quorum_fencing(*)
       enable_quorum_lockspace(*)
       repeat_failed_fencing(*)
       enable_helper

       Options with (*) can be reloaded, see Reload config.


   Reload config

       Some dlm.conf settings can be changed while dlm_controld is running,
       using dlm_tool reload_config.  Edit dlm.conf, adding, removing,
       commenting or changing values, then run dlm_tool reload_config to
       apply the changes in dlm_controld.  dlm_tool dump_config will show
       the new settings.

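       For example (an illustrative session; plock_debug is one of the
       reloadable options marked with (*) above):

       # in /etc/dlm/dlm.conf, add or change:
       plock_debug=1

       # then apply and verify:
       dlm_tool reload_config
       dlm_tool dump_config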

   Fencing

       A fence device definition begins with a device line, followed by a
       number of connect lines, one for each node connected to the device.

       A blank line separates device definitions.

       Devices are used in the order they are listed.

       The device keyword is followed by a unique dev_name, the agent
       program to be used, and args, which are agent arguments specific to
       the device.

       The connect keyword is followed by the dev_name of the device
       section, the node ID of the connected node in the format node=nodeid,
       and args, which are agent arguments specific to the node for the
       given device.

       The format of args is key=val on both device and connect lines, each
       pair separated by a space, e.g. key1=val1 key2=val2 key3=val3.

       Format:

       device  dev_name agent [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]

       Example:

       device  foo fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo node=1 port=1
       connect foo node=2 port=2
       connect foo node=3 port=3

       device  bar fence_bar ipaddr=2.2.2.2 login=x password=y
       connect bar node=1 port=1
       connect bar node=2 port=2
       connect bar node=3 port=3

   Parallel devices
       Some devices, like dual power or dual path, must all be turned off in
       parallel for fencing to succeed.  To define multiple devices as being
       parallel to each other, use the same base dev_name with different
       suffixes and a colon separator between base name and suffix.

       Format:

       device  dev_name:1 agent [args]
       connect dev_name:1 node=nodeid [args]
       connect dev_name:1 node=nodeid [args]
       connect dev_name:1 node=nodeid [args]

       device  dev_name:2 agent [args]
       connect dev_name:2 node=nodeid [args]
       connect dev_name:2 node=nodeid [args]
       connect dev_name:2 node=nodeid [args]

       Example:

       device  foo:1 fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo:1 node=1 port=1
       connect foo:1 node=2 port=2
       connect foo:1 node=3 port=3

       device  foo:2 fence_foo ipaddr=5.5.5.5 login=x password=y
       connect foo:2 node=1 port=1
       connect foo:2 node=2 port=2
       connect foo:2 node=3 port=3

   Unfencing
       A node may sometimes need to "unfence" itself when starting.  The
       unfencing command reverses the effect of a previous fencing operation
       against it.  An example would be fencing that disables a port on a
       SAN switch.  A node could use unfencing to re-enable its switch port
       when starting up after rebooting.  (Care must be taken to ensure it's
       safe for a node to unfence itself.  A node often needs to be cleanly
       rebooted before unfencing itself.)

       To specify that a node should unfence itself for a given device, the
       unfence line is added after the connect lines.

       Format:

       device  dev_name agent [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       unfence dev_name

       Example:

       device  foo fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo node=1 port=1
       connect foo node=2 port=2
       connect foo node=3 port=3
       unfence foo

   Simple devices
       In some cases, a single fence device is used for all nodes, and it
       requires no node-specific args.  This would typically be a "bridge"
       fence device in which an agent is passing a fence request to another
       subsystem to handle.  (Note that a "node=nodeid" arg is always
       automatically included in agent args, so a node-specific nodeid is
       always present to minimally identify the victim.)

       In such a case, a simplified, single-line fence configuration is
       possible, with format:

       fence_all agent [args]

       Example:

       fence_all dlm_stonith

       A fence_all configuration is not compatible with a fence device
       configuration (above).

       Unfencing can optionally be applied with:

       fence_all agent [args]
       unfence_all

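       Example (combining the fence_all example above with unfencing):

       fence_all dlm_stonith
       unfence_all
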

   Lockspace configuration

       A lockspace definition begins with a lockspace line, followed by a
       number of master lines.  A blank line separates lockspace
       definitions.

       Format:

       lockspace ls_name [ls_args]
       master    ls_name node=nodeid [node_args]
       master    ls_name node=nodeid [node_args]
       master    ls_name node=nodeid [node_args]

   Disabling resource directory
       Lockspaces usually use a resource directory to keep track of which
       node is the master of each resource.  The dlm can operate without the
       resource directory, though, by statically assigning the master of a
       resource using a hash of the resource name.  To enable, set the
       per-lockspace nodir option to 1.

       Example:

       lockspace foo nodir=1

   Lock-server configuration
       The nodir setting can be combined with node weights to create a
       configuration where select node(s) are the master of all
       resources/locks.  These master nodes can be viewed as "lock servers"
       for the other nodes.

       Example of nodeid 1 as master of all resources:

       lockspace foo nodir=1
       master    foo node=1

       Example of nodeids 1 and 2 as masters of all resources:

       lockspace foo nodir=1
       master    foo node=1
       master    foo node=2

       Lock management will be partitioned among the available masters.
       There can be any number of masters defined.  The designated master
       nodes will master all resources/locks (according to the resource name
       hash).  When no masters are members of the lockspace, the nodes
       revert to the common fully-distributed configuration.  Recovery is
       faster, with little disruption, when a non-master node joins/leaves.

       There is no special mode in the dlm for this lock-server
       configuration; it's just a natural consequence of combining the
       "nodir" option with node weights.  When a lockspace has master nodes
       defined, each master has a default weight of 1 and all non-master
       nodes have a weight of 0.  An explicit non-zero weight can also be
       assigned to master nodes, e.g.

       lockspace foo nodir=1
       master    foo node=1 weight=2
       master    foo node=2 weight=1

       In which case node 1 will master 2/3 of the total resources and node
       2 will master the other 1/3.

253
   Node configuration
       Node configurations can be set with the node keyword followed by
       key=value pairs.

       Keys:

       mark  The mark key can be used to set a specific mark value, which is
       then used by the in-kernel DLM socket creation.  This can be used to
       match DLM-specific packets, e.g. for routing.

       Example of setting a per-socket mark value of 42 for nodeid 1:

       node id=1 mark=42

       For the local node, this value has no effect.
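       One node line can be given per nodeid; for example, to apply the same
       mark to two peer nodes (hypothetical nodeids):

       node id=1 mark=42
       node id=2 mark=42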
271

SEE ALSO

       dlm_controld(8), dlm_tool(8)


dlm                               2012-04-09                       DLM.CONF(5)