DLM.CONF(5)                           dlm                          DLM.CONF(5)

NAME
       dlm.conf - dlm_controld configuration file

SYNOPSIS
       /etc/dlm/dlm.conf

DESCRIPTION
       The configuration options in dlm.conf mirror the dlm_controld command
       line options.  The config file additionally allows advanced fencing
       and lockspace configuration that are not supported on the command
       line.

Command line equivalents
       If an option is specified on the command line and in the config file,
       the command line setting overrides the config file setting.  See
       dlm_controld(8) for descriptions and dlm_controld -h for defaults.

       Format:

       key=val

       Example:

       log_debug=1
       post_join_delay=10
       protocol=tcp

       Options:

       daemon_debug
       log_debug
       protocol
       debug_logfile
       enable_plock
       plock_debug
       plock_rate_limit
       plock_ownership
       drop_resources_time
       drop_resources_count
       drop_resources_age
       post_join_delay
       enable_fencing
       enable_concurrent_fencing
       enable_startup_fencing
       enable_quorum_fencing
       enable_quorum_lockspace
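
       For example, several of these options can be combined in one file,
       again using the key=val format (the values shown here are purely
       illustrative, not recommended defaults):

       daemon_debug=1
       enable_plock=1
       plock_debug=0
       enable_fencing=1
       enable_startup_fencing=0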

Fencing
       A fence device definition begins with a device line, followed by a
       number of connect lines, one for each node connected to the device.

       A blank line separates device definitions.

       Devices are used in the order they are listed.

       The device keyword is followed by a unique dev_name, the agent
       program to be used, and args, which are agent arguments specific to
       the device.

       The connect keyword is followed by the dev_name of the device
       section, the node ID of the connected node in the format node=nodeid,
       and args, which are agent arguments specific to the node for the
       given device.

       The format of args is key=val on both device and connect lines, each
       pair separated by a space, e.g. key1=val1 key2=val2 key3=val3.

       Format:

       device  dev_name agent [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]

       Example:

       device  foo fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo node=1 port=1
       connect foo node=2 port=2
       connect foo node=3 port=3

       device  bar fence_bar ipaddr=2.2.2.2 login=x password=y
       connect bar node=1 port=1
       connect bar node=2 port=2
       connect bar node=3 port=3

   Parallel devices
       Some devices, like dual power or dual path, must all be turned off in
       parallel for fencing to succeed.  To define multiple devices as being
       parallel to each other, use the same base dev_name with different
       suffixes and a colon separator between base name and suffix.

       Format:

       device  dev_name:1 agent [args]
       connect dev_name:1 node=nodeid [args]
       connect dev_name:1 node=nodeid [args]
       connect dev_name:1 node=nodeid [args]

       device  dev_name:2 agent [args]
       connect dev_name:2 node=nodeid [args]
       connect dev_name:2 node=nodeid [args]
       connect dev_name:2 node=nodeid [args]

       Example:

       device  foo:1 fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo:1 node=1 port=1
       connect foo:1 node=2 port=2
       connect foo:1 node=3 port=3

       device  foo:2 fence_foo ipaddr=5.5.5.5 login=x password=y
       connect foo:2 node=1 port=1
       connect foo:2 node=2 port=2
       connect foo:2 node=3 port=3

   Unfencing
       A node may sometimes need to "unfence" itself when starting.  The
       unfencing command reverses the effect of a previous fencing operation
       against it.  An example would be fencing that disables a port on a
       SAN switch.  A node could use unfencing to re-enable its switch port
       when starting up after rebooting.  (Care must be taken to ensure it's
       safe for a node to unfence itself.  A node often needs to be cleanly
       rebooted before unfencing itself.)

       To specify that a node should unfence itself for a given device, the
       unfence line is added after the connect lines.

       Format:

       device  dev_name agent [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       unfence dev_name

       Example:

       device  foo fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo node=1 port=1
       connect foo node=2 port=2
       connect foo node=3 port=3
       unfence foo

   Simple devices
       In some cases, a single fence device is used for all nodes, and it
       requires no node-specific args.  This would typically be a "bridge"
       fence device in which an agent passes a fence request to another
       subsystem to handle.  (Note that a "node=nodeid" arg is always
       automatically included in agent args, so a node-specific nodeid is
       always present to minimally identify the victim.)

       In such a case, a simplified, single-line fence configuration is
       possible, with format:

       fence_all agent [args]

       Example:

       fence_all dlm_stonith

       A fence_all configuration is not compatible with a fence device
       configuration (above).

       Unfencing can optionally be applied with:

       fence_all agent [args]
       unfence_all
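
       For example, combining the fence_all agent shown above with
       unfencing:

       fence_all dlm_stonith
       unfence_all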

Lockspace configuration
       A lockspace definition begins with a lockspace line, followed by a
       number of master lines.  A blank line separates lockspace
       definitions.

       Format:

       lockspace ls_name [ls_args]
       master    ls_name node=nodeid [node_args]
       master    ls_name node=nodeid [node_args]
       master    ls_name node=nodeid [node_args]
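
       For example, two lockspace definitions, here for hypothetical
       lockspaces named foo and bar, are separated by a blank line (the
       nodir and master settings used here are described in the subsections
       below):

       lockspace foo nodir=1
       master    foo node=1

       lockspace bar nodir=1
       master    bar node=1
       master    bar node=2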

   Disabling resource directory
       Lockspaces usually use a resource directory to keep track of which
       node is the master of each resource.  The dlm can operate without the
       resource directory, though, by statically assigning the master of a
       resource using a hash of the resource name.  To enable, set the
       per-lockspace nodir option to 1.

       Example:

       lockspace foo nodir=1

   Lock-server configuration
       The nodir setting can be combined with node weights to create a
       configuration where select node(s) are the master of all
       resources/locks.  These master nodes can be viewed as "lock servers"
       for the other nodes.

       Example of nodeid 1 as master of all resources:

       lockspace foo nodir=1
       master    foo node=1

       Example of nodeids 1 and 2 as masters of all resources:

       lockspace foo nodir=1
       master    foo node=1
       master    foo node=2

       Lock management will be partitioned among the available masters.
       There can be any number of masters defined.  The designated master
       nodes will master all resources/locks (according to the resource name
       hash).  When no masters are members of the lockspace, the nodes
       revert to the common fully-distributed configuration.  Recovery is
       faster, with little disruption, when a non-master node joins or
       leaves.

       There is no special mode in the dlm for this lock-server
       configuration; it is just a natural consequence of combining the
       "nodir" option with node weights.  When a lockspace has master nodes
       defined, each master has a default weight of 1 and all non-master
       nodes have a weight of 0.  An explicit non-zero weight can also be
       assigned to master nodes, e.g.

       lockspace foo nodir=1
       master    foo node=1 weight=2
       master    foo node=2 weight=1

       In this case node 1 will master 2/3 of the total resources and node 2
       will master the other 1/3.

SEE ALSO
       dlm_controld(8), dlm_tool(8)


dlm                               2012-04-09                       DLM.CONF(5)