DLM_CONTROLD(8)                      cluster                     DLM_CONTROLD(8)

NAME
       dlm_controld - daemon that configures dlm according to cluster events

SYNOPSIS
       dlm_controld [OPTIONS]

DESCRIPTION
       The dlm lives in the kernel, and the cluster infrastructure (corosync
       membership and group management) lives in user space.  The dlm in the
       kernel needs to adjust/recover for certain cluster events.  It's the
       job of dlm_controld to receive these events and reconfigure the
       kernel dlm as needed.  dlm_controld controls and configures the dlm
       through sysfs and configfs files that are considered dlm-internal
       interfaces.

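       For example, on a running node these interfaces typically appear
       under paths like the following (a sketch; the exact layout can vary
       by kernel version):

              ls /sys/kernel/config/dlm/cluster/comms    # one entry per node
              ls /sys/kernel/config/dlm/cluster/spaces   # one entry per lockspace
              ls /sys/kernel/dlm/<lockspace>             # control, event_done, id
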
       The cman init script usually starts the dlm_controld daemon.

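       For example, on Red Hat-style init systems (an assumption; other
       distributions differ) the daemon is typically brought up indirectly
       with:

              service cman start
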
OPTIONS
       Command line options override a corresponding setting in
       cluster.conf.

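       For example, starting the daemon as shown below disables the plock
       code even if cluster.conf contains <dlm enable_plock="1"/> (a
       hypothetical invocation, using the -p option described below):

              dlm_controld -p 0
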
       -D     Enable debugging to stderr and don't fork.
              See also dlm_tool dump in dlm_tool(8).

       -L     Enable debugging to log file.
              See also logging in cluster.conf(5).

       -K     Enable kernel dlm debugging messages.
              See also log_debug below.

       -r num dlm kernel lowcomms protocol: 0 tcp, 1 sctp, 2 detect.  Mode
              2 selects tcp if corosync rrp_mode is "none", otherwise sctp.
              Default 2.

       -g num groupd compatibility mode, 0 off, 1 on.
              Default 0.

       -f num Enable (1) or disable (0) fencing recovery dependency.
              Default 1.

       -q num Enable (1) or disable (0) quorum recovery dependency.
              Default 0.

       -d num Enable (1) or disable (0) deadlock detection code.
              Default 0.

       -p num Enable (1) or disable (0) plock code for cluster fs.
              Default 1.

       -l num Limit the rate of plock operations, 0 for no limit.
              Default 0.

       -o num Enable (1) or disable (0) plock ownership.
              Default 1.

       -t ms  Plock ownership drop resources time (milliseconds).
              Default 10000.

       -c num Plock ownership drop resources count.
              Default 10.

       -a ms  Plock ownership drop resources age (milliseconds).
              Default 10000.

       -P     Enable plock debugging messages (can produce excessive
              output).

       -h     Print a help message describing available options, then exit.

       -V     Print program version information, then exit.


CONFIGURATION FILE
       cluster.conf(5) is usually located at /etc/cluster/cluster.conf.  It
       is not read directly.  Other cluster components load the contents
       into memory, and the values are accessed through the libccs library.

       Configuration options for dlm (kernel) and dlm_controld are added to
       the <dlm /> section of cluster.conf, within the top level <cluster>
       section.

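       For example, a skeletal cluster.conf (hypothetical cluster and node
       names) showing where the <dlm /> section sits:

              <cluster name="alpha" config_version="1">
                     <dlm protocol="detect" enable_plock="1"/>
                     <clusternodes> ... </clusternodes>
              </cluster>
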
   Kernel options
       protocol
              The network protocol can be set to tcp, sctp or detect, which
              selects tcp or sctp based on the corosync rrp_mode
              configuration (redundant ring protocol).  The rrp_mode "none"
              results in tcp.  Default detect.

              <dlm protocol="detect"/>

       timewarn
              After waiting timewarn centiseconds, the dlm will emit a
              warning via netlink.  This only applies to lockspaces created
              with the DLM_LSFL_TIMEWARN flag, and is used for deadlock
              detection.  Default 500 (5 seconds).

              <dlm timewarn="500"/>

       log_debug
              DLM kernel debug messages can be enabled by setting log_debug
              to 1.  Default 0.

              <dlm log_debug="0"/>

       clusternode/weight
              The lock directory weight can be specified on the clusternode
              lines.  Weights would usually be used in the lock server
              configurations shown below instead.

              <clusternode name="node01" nodeid="1" weight="1"/>

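       dlm_controld writes kernel options such as protocol, timewarn and
       log_debug into configfs, where they can be inspected on a running
       node.  A minimal sketch, assuming current attribute names (these can
       vary by kernel version):

              cat /sys/kernel/config/dlm/cluster/protocol
              cat /sys/kernel/config/dlm/cluster/timewarn_cs
              cat /sys/kernel/config/dlm/cluster/log_debug
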
   Daemon options
       enable_fencing
              See command line description (-f).

              <dlm enable_fencing="1"/>

       enable_quorum
              See command line description (-q).

              <dlm enable_quorum="0"/>

       enable_deadlk
              See command line description (-d).

              <dlm enable_deadlk="0"/>

       enable_plock
              See command line description (-p).

              <dlm enable_plock="1"/>

       plock_rate_limit
              See command line description (-l).

              <dlm plock_rate_limit="0"/>

       plock_ownership
              See command line description (-o).

              <dlm plock_ownership="1"/>

       drop_resources_time
              See command line description (-t).

              <dlm drop_resources_time="10000"/>

       drop_resources_count
              See command line description (-c).

              <dlm drop_resources_count="10"/>

       drop_resources_age
              See command line description (-a).

              <dlm drop_resources_age="10000"/>

       plock_debug
              Enable (1) or disable (0) plock debugging messages (can
              produce excessive output).  Default 0.

              <dlm plock_debug="0"/>

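       Several of these attributes can be combined on a single <dlm /> line.
       For example, a hypothetical configuration tuning the plock ownership
       thresholds:

              <dlm plock_ownership="1" drop_resources_time="10000"
                   drop_resources_count="10" drop_resources_age="10000"/>
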
   Disabling resource directory
       Lockspaces usually use a resource directory to keep track of which
       node is the master of each resource.  The dlm can operate without
       the resource directory, though, by statically assigning the master
       of a resource using a hash of the resource name.  To enable, set the
       per-lockspace nodir option to 1.

              <dlm>
                     <lockspace name="foo" nodir="1"/>
              </dlm>

   Lock-server configuration
       The nodir setting can be combined with node weights to create a
       configuration where select node(s) are the master of all
       resources/locks.  These master nodes can be viewed as "lock servers"
       for the other nodes.

              <dlm>
                     <lockspace name="foo" nodir="1">
                            <master name="node01"/>
                     </lockspace>
              </dlm>

       or,

              <dlm>
                     <lockspace name="foo" nodir="1">
                            <master name="node01"/>
                            <master name="node02"/>
                     </lockspace>
              </dlm>

       Lock management will be partitioned among the available masters.
       There can be any number of masters defined.  The designated master
       nodes will master all resources/locks (according to the resource
       name hash).  When no masters are members of the lockspace, the nodes
       revert to the common fully-distributed configuration.  Recovery is
       faster, with little disruption, when a non-master node joins/leaves.

       There is no special mode in the dlm for this lock-server
       configuration; it's just a natural consequence of combining the
       "nodir" option with node weights.  When a lockspace has master nodes
       defined, each master has a default weight of 1 and all non-master
       nodes have a weight of 0.  An explicit non-zero weight can also be
       assigned to master nodes, e.g.

              <dlm>
                     <lockspace name="foo" nodir="1">
                            <master name="node01" weight="2"/>
                            <master name="node02" weight="1"/>
                     </lockspace>
              </dlm>

       In this case node01 will master 2/3 of the total resources and
       node02 will master the other 1/3.


SEE ALSO
       dlm_tool(8), fenced(8), cman(5), cluster.conf(5)

cluster                            2009-01-18                  DLM_CONTROLD(8)