DLM.CONF(5)                          dlm                          DLM.CONF(5)


NAME
dlm.conf - dlm_controld configuration file


SYNOPSIS
/etc/dlm/dlm.conf


DESCRIPTION
The configuration options in dlm.conf mirror the dlm_controld command
line options. The config file additionally allows advanced fencing and
lockspace configuration that is not supported on the command line.


Command line equivalents
If an option is specified both on the command line and in the config
file, the command line setting overrides the config file setting. See
dlm_controld(8) for descriptions and dlm_controld -h for defaults.

Format:

key=val

Example:

log_debug=1
post_join_delay=10
protocol=tcp

Options:

log_debug
protocol
bind_all
mark
debug_logfile
enable_plock
plock_debug
plock_rate_limit
plock_ownership
drop_resources_time
drop_resources_count
drop_resources_age
post_join_delay
enable_fencing
enable_concurrent_fencing
enable_startup_fencing
enable_quorum_fencing
enable_quorum_lockspace
repeat_failed_fencing
enable_helper

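These options can be combined freely, one key=val per line. A small
illustrative dlm.conf follows; the values shown are examples only, not
recommended settings:

log_debug=1
post_join_delay=10
enable_plock=1
plock_ownership=1

See dlm_controld(8) for what each option controls and dlm_controld -h
for the defaults.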

Fencing
A fence device definition begins with a device line, followed by a
number of connect lines, one for each node connected to the device.

A blank line separates device definitions.

Devices are used in the order they are listed.

The device keyword is followed by a unique dev_name, the agent program
to be used, and args, which are agent arguments specific to the device.

The connect keyword is followed by the dev_name of the device section,
the node ID of the connected node in the format node=nodeid, and args,
which are agent arguments specific to the node for the given device.

The format of args is key=val on both device and connect lines, each
pair separated by a space, e.g. key1=val1 key2=val2 key3=val3.

Format:

device dev_name agent [args]
connect dev_name node=nodeid [args]
connect dev_name node=nodeid [args]
connect dev_name node=nodeid [args]

Example:

device foo fence_foo ipaddr=1.1.1.1 login=x password=y
connect foo node=1 port=1
connect foo node=2 port=2
connect foo node=3 port=3

device bar fence_bar ipaddr=2.2.2.2 login=x password=y
connect bar node=1 port=1
connect bar node=2 port=2
connect bar node=3 port=3


Parallel devices
Some devices, like dual power or dual path, must all be turned off in
parallel for fencing to succeed. To define multiple devices as being
parallel to each other, use the same base dev_name with different
suffixes and a colon separator between base name and suffix.

Format:

device dev_name:1 agent [args]
connect dev_name:1 node=nodeid [args]
connect dev_name:1 node=nodeid [args]
connect dev_name:1 node=nodeid [args]

device dev_name:2 agent [args]
connect dev_name:2 node=nodeid [args]
connect dev_name:2 node=nodeid [args]
connect dev_name:2 node=nodeid [args]

Example:

device foo:1 fence_foo ipaddr=1.1.1.1 login=x password=y
connect foo:1 node=1 port=1
connect foo:1 node=2 port=2
connect foo:1 node=3 port=3

device foo:2 fence_foo ipaddr=5.5.5.5 login=x password=y
connect foo:2 node=1 port=1
connect foo:2 node=2 port=2
connect foo:2 node=3 port=3


Unfencing
A node may sometimes need to "unfence" itself when starting. The
unfencing command reverses the effect of a previous fencing operation
against it. An example would be fencing that disables a port on a SAN
switch; a node could use unfencing to re-enable its switch port when
starting up after rebooting. (Care must be taken to ensure it is safe
for a node to unfence itself; a node often needs to be cleanly
rebooted before unfencing itself.)

To specify that a node should unfence itself for a given device, add
an unfence line after the connect lines.

Format:

device dev_name agent [args]
connect dev_name node=nodeid [args]
connect dev_name node=nodeid [args]
connect dev_name node=nodeid [args]
unfence dev_name

Example:

device foo fence_foo ipaddr=1.1.1.1 login=x password=y
connect foo node=1 port=1
connect foo node=2 port=2
connect foo node=3 port=3
unfence foo

Simple devices
In some cases, a single fence device is used for all nodes, and it
requires no node-specific args. This would typically be a "bridge"
fence device in which an agent passes a fence request to another
subsystem to handle. (Note that a "node=nodeid" arg is always
automatically included in agent args, so a node-specific nodeid is
always present to minimally identify the victim.)

In such a case, a simplified, single-line fence configuration is
possible, with format:

fence_all agent [args]

Example:

fence_all dlm_stonith

A fence_all configuration is not compatible with a fence device
configuration (above).

Unfencing can optionally be applied with:

fence_all agent [args]
unfence_all
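
Following the pattern of the earlier examples, with the dlm_stonith
agent shown above this would be:

fence_all dlm_stonith
unfence_all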


Lockspaces
A lockspace definition begins with a lockspace line, followed by a
number of master lines. A blank line separates lockspace definitions.

Format:

lockspace ls_name [ls_args]
master ls_name node=nodeid [node_args]
master ls_name node=nodeid [node_args]
master ls_name node=nodeid [node_args]


Disabling resource directory
Lockspaces usually use a resource directory to keep track of which node
is the master of each resource. The dlm can operate without the
resource directory, though, by statically assigning the master of a
resource using a hash of the resource name. To enable, set the
per-lockspace nodir option to 1.

Example:

lockspace foo nodir=1


Lock-server configuration
The nodir setting can be combined with node weights to create a
configuration where select node(s) are the master of all
resources/locks. These master nodes can be viewed as "lock servers"
for the other nodes.

Example of nodeid 1 as master of all resources:

lockspace foo nodir=1
master foo node=1

Example of nodeids 1 and 2 as masters of all resources:

lockspace foo nodir=1
master foo node=1
master foo node=2

Lock management will be partitioned among the available masters. There
can be any number of masters defined. The designated master nodes
master all resources/locks (according to the resource name hash). When
no masters are members of the lockspace, the nodes revert to the
common fully-distributed configuration. Recovery is faster, with
little disruption, when a non-master node joins or leaves.

There is no special mode in the dlm for this lock-server configuration;
it is just a natural consequence of combining the "nodir" option with
node weights. When a lockspace has master nodes defined, each master
has a default weight of 1 and all non-master nodes have a weight of 0.
An explicit non-zero weight can also be assigned to master nodes, e.g.

lockspace foo nodir=1
master foo node=1 weight=2
master foo node=2 weight=1

In this case node 1 will master 2/3 of the total resources and node 2
will master the other 1/3.


Node configuration
Node configuration can be set with the node keyword followed by
key-value pairs.

Keys:

mark The mark key can be used to set a specific mark value, which is
then used by the in-kernel DLM socket creation. This can be used to
match DLM-specific packets, e.g. for routing.

Example of setting a per-socket mark value of 42 for nodeid 1:

node id=1 mark=42

For the local node this value has no effect.
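
As an illustration of routing on the mark value (the table number,
device name, and address below are hypothetical examples, not
something dlm_controld provides), the iproute2 tools could direct
traffic to nodeid 1 carrying mark 42 through a dedicated routing
table:

node id=1 mark=42

ip rule add fwmark 42 table 100
ip route add 10.0.0.0/24 dev eth1 table 100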


SEE ALSO
dlm_controld(8), dlm_tool(8)




dlm                               2012-04-09                      DLM.CONF(5)