DLM.CONF(5)                           dlm                          DLM.CONF(5)


NAME
       dlm.conf - dlm_controld configuration file

SYNOPSIS
       /etc/dlm/dlm.conf

DESCRIPTION
       The configuration options in dlm.conf mirror the dlm_controld command
       line options.  The config file additionally allows advanced fencing
       and lockspace configuration that is not supported on the command
       line.

   Command line equivalents
       If an option is specified on the command line and in the config file,
       the command line setting overrides the config file setting.  See
       dlm_controld(8) for descriptions and dlm_controld -h for defaults.

       Format:

       key=val

       Example:

       log_debug=1
       post_join_delay=10
       protocol=tcp

       Options:

       daemon_debug
       log_debug
       protocol
       debug_logfile
       enable_plock
       plock_debug
       plock_rate_limit
       plock_ownership
       drop_resources_time
       drop_resources_count
       drop_resources_age
       post_join_delay
       enable_fencing
       enable_concurrent_fencing
       enable_startup_fencing
       enable_quorum_fencing
       enable_quorum_lockspace
       repeat_failed_fencing
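
       For example, several of these options can be combined in one file;
       the values here are illustrative only, not defaults or
       recommendations (the boolean enable_* options take 0 or 1):

       daemon_debug=1
       enable_fencing=1
       enable_startup_fencing=0
       post_join_delay=10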

   Fencing
       A fence device definition begins with a device line, followed by a
       number of connect lines, one for each node connected to the device.

       A blank line separates device definitions.

       Devices are used in the order they are listed.

       The device keyword is followed by a unique dev_name, the agent
       program to be used, and args, which are agent arguments specific to
       the device.

       The connect keyword is followed by the dev_name of the device
       section, the node ID of the connected node in the format node=nodeid,
       and args, which are agent arguments specific to the node for the
       given device.

       The format of args is key=val on both device and connect lines, each
       pair separated by a space, e.g. key1=val1 key2=val2 key3=val3.

       Format:

       device  dev_name agent [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]

       Example:

       device  foo fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo node=1 port=1
       connect foo node=2 port=2
       connect foo node=3 port=3

       device  bar fence_bar ipaddr=2.2.2.2 login=x password=y
       connect bar node=1 port=1
       connect bar node=2 port=2
       connect bar node=3 port=3

   Parallel devices
       Some devices, like dual power or dual path, must all be turned off in
       parallel for fencing to succeed.  To define multiple devices as being
       parallel to each other, use the same base dev_name with different
       suffixes and a colon separator between base name and suffix.

       Format:

       device  dev_name:1 agent [args]
       connect dev_name:1 node=nodeid [args]
       connect dev_name:1 node=nodeid [args]
       connect dev_name:1 node=nodeid [args]

       device  dev_name:2 agent [args]
       connect dev_name:2 node=nodeid [args]
       connect dev_name:2 node=nodeid [args]
       connect dev_name:2 node=nodeid [args]

       Example:

       device  foo:1 fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo:1 node=1 port=1
       connect foo:1 node=2 port=2
       connect foo:1 node=3 port=3

       device  foo:2 fence_foo ipaddr=5.5.5.5 login=x password=y
       connect foo:2 node=1 port=1
       connect foo:2 node=2 port=2
       connect foo:2 node=3 port=3

   Unfencing
       A node may sometimes need to "unfence" itself when starting.  The
       unfencing command reverses the effect of a previous fencing operation
       against it.  An example would be fencing that disables a port on a
       SAN switch.  A node could use unfencing to re-enable its switch port
       when starting up after rebooting.  (Care must be taken to ensure it's
       safe for a node to unfence itself.  A node often needs to be cleanly
       rebooted before unfencing itself.)

       To specify that a node should unfence itself for a given device, the
       unfence line is added after the connect lines.

       Format:

       device  dev_name agent [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       unfence dev_name

       Example:

       device  foo fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo node=1 port=1
       connect foo node=2 port=2
       connect foo node=3 port=3
       unfence foo

   Simple devices
       In some cases, a single fence device is used for all nodes, and it
       requires no node-specific args.  This would typically be a "bridge"
       fence device in which an agent passes a fence request to another
       subsystem to handle.  (Note that a "node=nodeid" arg is always
       automatically included in agent args, so a node-specific nodeid is
       always present to minimally identify the victim.)

       In such a case, a simplified, single-line fence configuration is
       possible, with format:

       fence_all agent [args]

       Example:

       fence_all dlm_stonith

       A fence_all configuration is not compatible with a fence device
       configuration (above).

       Unfencing can optionally be applied with:

       fence_all agent [args]
       unfence_all
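
       For example, a fence_all configuration with unfencing might look like
       this (reusing the dlm_stonith agent from the example above):

       fence_all dlm_stonith
       unfence_all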

   Lockspace configuration
       A lockspace definition begins with a lockspace line, followed by a
       number of master lines.  A blank line separates lockspace
       definitions.

       Format:

       lockspace ls_name [ls_args]
       master    ls_name node=nodeid [node_args]
       master    ls_name node=nodeid [node_args]
       master    ls_name node=nodeid [node_args]

   Disabling resource directory
       Lockspaces usually use a resource directory to keep track of which
       node is the master of each resource.  The dlm can operate without the
       resource directory, though, by statically assigning the master of a
       resource using a hash of the resource name.  To enable, set the
       per-lockspace nodir option to 1.

       Example:

       lockspace foo nodir=1

   Lock-server configuration
       The nodir setting can be combined with node weights to create a
       configuration where select node(s) are the master of all
       resources/locks.  These master nodes can be viewed as "lock servers"
       for the other nodes.

       Example of nodeid 1 as master of all resources:

       lockspace foo nodir=1
       master    foo node=1

       Example of nodeids 1 and 2 as masters of all resources:

       lockspace foo nodir=1
       master    foo node=1
       master    foo node=2

       Lock management will be partitioned among the available masters.
       Any number of masters can be defined.  The designated master nodes
       will master all resources/locks (according to the resource name
       hash).  When no masters are members of the lockspace, the nodes
       revert to the common fully-distributed configuration.  Recovery is
       faster, with little disruption, when a non-master node joins or
       leaves.

       There is no special mode in the dlm for this lock-server
       configuration; it's just a natural consequence of combining the nodir
       option with node weights.  When a lockspace has master nodes defined,
       each master has a default weight of 1 and all non-master nodes have a
       weight of 0.  An explicit non-zero weight can also be assigned to
       master nodes, e.g.

       lockspace foo nodir=1
       master    foo node=1 weight=2
       master    foo node=2 weight=1

       in which case node 1 will master 2/3 of the total resources and node
       2 will master the other 1/3.
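
       As a further illustration (an assumed extension of the example
       above), a master's share is proportional to its weight, i.e. its
       weight divided by the sum of the weights.  With weights 2, 1, and 1,
       node 1 would master about half of the resources, and nodes 2 and 3
       about a quarter each:

       lockspace foo nodir=1
       master    foo node=1 weight=2
       master    foo node=2 weight=1
       master    foo node=3 weight=1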

SEE ALSO
       dlm_controld(8), dlm_tool(8)



dlm                               2012-04-09                       DLM.CONF(5)