DLM.CONF(5)                           dlm                          DLM.CONF(5)



NAME
       dlm.conf - dlm_controld configuration file

SYNOPSIS
       /etc/dlm/dlm.conf

DESCRIPTION
       The configuration options in dlm.conf mirror the dlm_controld
       command line options.  The config file additionally allows advanced
       fencing and lockspace configuration that is not supported on the
       command line.

   Command line equivalents
       If an option is specified on the command line and in the config
       file, the command line setting overrides the config file setting.
       See dlm_controld(8) for descriptions and dlm_controld -h for
       defaults.
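
       For example, if dlm.conf sets post_join_delay=10 and dlm_controld is
       started with a command line post_join_delay of 30, the effective
       value is 30.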

       Format:

       key=val

       Example:

       log_debug=1
       post_join_delay=10
       protocol=tcp

       Options:

       log_debug
       protocol
       bind_all
       debug_logfile
       enable_plock
       plock_debug
       plock_rate_limit
       plock_ownership
       drop_resources_time
       drop_resources_count
       drop_resources_age
       post_join_delay
       enable_fencing
       enable_concurrent_fencing
       enable_startup_fencing
       enable_quorum_fencing
       enable_quorum_lockspace
       repeat_failed_fencing
       enable_helper

   Fencing
       A fence device definition begins with a device line, followed by a
       number of connect lines, one for each node connected to the device.

       A blank line separates device definitions.

       Devices are used in the order they are listed.

       The device keyword is followed by a unique dev_name, the agent
       program to be used, and args, which are agent arguments specific to
       the device.

       The connect keyword is followed by the dev_name of the device
       section, the node ID of the connected node in the format
       node=nodeid, and args, which are agent arguments specific to the
       node for the given device.

       The format of args is key=val on both device and connect lines, each
       pair separated by a space, e.g. key1=val1 key2=val2 key3=val3.

       Format:

       device  dev_name agent [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]

       Example:

       device  foo fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo node=1 port=1
       connect foo node=2 port=2
       connect foo node=3 port=3

       device  bar fence_bar ipaddr=2.2.2.2 login=x password=y
       connect bar node=1 port=1
       connect bar node=2 port=2
       connect bar node=3 port=3
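
       With this example, if node 2 must be fenced, fence_foo is run first
       with the combined foo args (ipaddr=1.1.1.1 login=x password=y
       port=2); fence_bar is tried only if fence_foo fails.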

   Parallel devices
       Some devices, like dual power or dual path, must all be turned off
       in parallel for fencing to succeed.  To define multiple devices as
       being parallel to each other, use the same base dev_name with
       different suffixes and a colon separator between base name and
       suffix.

       Format:

       device  dev_name:1 agent [args]
       connect dev_name:1 node=nodeid [args]
       connect dev_name:1 node=nodeid [args]
       connect dev_name:1 node=nodeid [args]

       device  dev_name:2 agent [args]
       connect dev_name:2 node=nodeid [args]
       connect dev_name:2 node=nodeid [args]
       connect dev_name:2 node=nodeid [args]

       Example:

       device  foo:1 fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo:1 node=1 port=1
       connect foo:1 node=2 port=2
       connect foo:1 node=3 port=3

       device  foo:2 fence_foo ipaddr=5.5.5.5 login=x password=y
       connect foo:2 node=1 port=1
       connect foo:2 node=2 port=2
       connect foo:2 node=3 port=3
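
       With this example, fencing node 2 succeeds only if both fence_foo
       runs succeed: one against 1.1.1.1 and one against 5.5.5.5, each with
       port=2.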

   Unfencing
       A node may sometimes need to "unfence" itself when starting.  The
       unfencing command reverses the effect of a previous fencing
       operation against it.  An example would be fencing that disables a
       port on a SAN switch.  A node could use unfencing to re-enable its
       switch port when starting up after rebooting.  (Care must be taken
       to ensure it's safe for a node to unfence itself.  A node often
       needs to be cleanly rebooted before unfencing itself.)

       To specify that a node should unfence itself for a given device, the
       unfence line is added after the connect lines.

       Format:

       device  dev_name agent [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       unfence dev_name

       Example:

       device  foo fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo node=1 port=1
       connect foo node=2 port=2
       connect foo node=3 port=3
       unfence foo

   Simple devices
       In some cases, a single fence device is used for all nodes, and it
       requires no node-specific args.  This would typically be a "bridge"
       fence device in which an agent passes a fence request to another
       subsystem to handle.  (Note that a "node=nodeid" arg is always
       automatically included in agent args, so a node-specific nodeid is
       always present to minimally identify the victim.)

       In such a case, a simplified, single-line fence configuration is
       possible, with format:

       fence_all agent [args]

       Example:

       fence_all dlm_stonith

       A fence_all configuration is not compatible with a fence device
       configuration (above).

       Unfencing can optionally be applied with:

       fence_all agent [args]
       unfence_all
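
       For example, when node 2 is fenced through the fence_all
       configuration above, the agent is run with node=2 included in its
       args, in addition to any [args] given on the fence_all line.  (How
       the agent receives the args, commonly key=val pairs on stdin for
       fence agents, depends on the agent; see its documentation.)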

   Lockspace configuration
       A lockspace definition begins with a lockspace line, followed by a
       number of master lines.  A blank line separates lockspace
       definitions.

       Format:

       lockspace ls_name [ls_args]
       master    ls_name node=nodeid [node_args]
       master    ls_name node=nodeid [node_args]
       master    ls_name node=nodeid [node_args]

   Disabling resource directory
       Lockspaces usually use a resource directory to keep track of which
       node is the master of each resource.  The dlm can operate without
       the resource directory, though, by statically assigning the master
       of a resource using a hash of the resource name.  To enable, set the
       per-lockspace nodir option to 1.

       Example:

       lockspace foo nodir=1
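
       With nodir=1 and no master lines, each resource's master is fixed by
       the hash of its name, so mastery is spread across the lockspace
       members rather than tracked in a directory.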

   Lock-server configuration
       The nodir setting can be combined with node weights to create a
       configuration where select node(s) are the master of all
       resources/locks.  These master nodes can be viewed as "lock servers"
       for the other nodes.

       Example of nodeid 1 as master of all resources:

       lockspace foo nodir=1
       master    foo node=1

       Example of nodeids 1 and 2 as masters of all resources:

       lockspace foo nodir=1
       master    foo node=1
       master    foo node=2

       Lock management will be partitioned among the available masters.
       There can be any number of masters defined.  The designated master
       nodes will master all resources/locks (according to the resource
       name hash).  When no masters are members of the lockspace, the nodes
       revert to the common fully-distributed configuration.  Recovery is
       faster, with little disruption, when a non-master node joins/leaves.

       There is no special mode in the dlm for this lock-server
       configuration; it's just a natural consequence of combining the
       "nodir" option with node weights.  When a lockspace has master nodes
       defined, each master has a default weight of 1 and all non-master
       nodes have a weight of 0.  An explicit non-zero weight can also be
       assigned to master nodes, e.g.

       lockspace foo nodir=1
       master    foo node=1 weight=2
       master    foo node=2 weight=1

       In which case node 1 will master 2/3 of the total resources and node
       2 will master the other 1/3.
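
       Putting the pieces together, a complete dlm.conf combining the
       examples above might look like the following (illustrative values
       only):

       log_debug=1
       post_join_delay=10

       fence_all dlm_stonith

       lockspace foo nodir=1
       master    foo node=1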

SEE ALSO
       dlm_controld(8), dlm_tool(8)



dlm                               2012-04-09                       DLM.CONF(5)