CTDB-SCRIPT.OPTIONS(5)    CTDB - clustered TDB database    CTDB-SCRIPT.OPTIONS(5)

NAME

       ctdb-script.options - CTDB scripts configuration files

DESCRIPTION

       Each CTDB script has 2 possible locations for its configuration
       options:

       /etc/ctdb/script.options
           This is a catch-all global file for general purpose scripts and for
           options that are used in multiple event scripts.

       SCRIPT.options
           That is, options for SCRIPT are placed in a file alongside the
           script, with a ".options" suffix added. This style is usually
           recommended for event scripts.

           Options in this script-specific file override those in the global
           file.

       These files should include simple shell-style variable assignments and
       shell-style comments.
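
       For example, a hypothetical 10.interface.options file (the variable
       shown is documented below) might contain:

           # 10.interface.options - illustrative example
           # Lines are plain shell assignments; comments start with '#'.
           CTDB_PARTIALLY_ONLINE_INTERFACES=no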

NETWORK CONFIGURATION

   10.interface
       This event script handles monitoring of interfaces used by public IP
       addresses.

       CTDB_PARTIALLY_ONLINE_INTERFACES=yes|no
           Whether one or more offline interfaces should cause a monitor event
           to fail if there are other interfaces that are up. If this is "yes"
           and a node has some interfaces that are down then ctdb status will
           display the node as "PARTIALLYONLINE".

           Note that CTDB_PARTIALLY_ONLINE_INTERFACES=yes is not generally
           compatible with NAT gateway or LVS. NAT gateway relies on the
           interface configured by CTDB_NATGW_PUBLIC_IFACE to be up and LVS
           relies on CTDB_LVS_PUBLIC_IFACE to be up. CTDB does not check if
           these options are set in an incompatible way so care is needed to
           understand the interaction.

           Default is "no".

   11.natgw
       Provides CTDB's NAT gateway functionality.

       NAT gateway is used to configure fallback routing for nodes when they
       do not host any public IP addresses. For example, it allows unhealthy
       nodes to reliably communicate with external infrastructure. One node in
       a NAT gateway group will be designated as the NAT gateway master node
       and other (slave) nodes will be configured with fallback routes via the
       NAT gateway master node. For more information, see the NAT GATEWAY
       section in ctdb(7).

       CTDB_NATGW_DEFAULT_GATEWAY=IPADDR
           IPADDR is an alternate network gateway to use on the NAT gateway
           master node. If set, a fallback default route is added via this
           network gateway.

           No default. Setting this variable is optional - if not set then no
           route is created on the NAT gateway master node.

       CTDB_NATGW_NODES=FILENAME
           FILENAME contains the list of nodes that belong to the same NAT
           gateway group.

           File format:

               IPADDR [slave-only]

           IPADDR is the private IP address of each node in the NAT gateway
           group.

           If "slave-only" is specified then the corresponding node can not be
           the NAT gateway master node. In this case CTDB_NATGW_PUBLIC_IFACE
           and CTDB_NATGW_PUBLIC_IP are optional and unused.

           No default, usually /etc/ctdb/natgw_nodes when enabled.
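
           For example, a natgw_nodes file for a three-node group might look
           like this (the addresses are purely illustrative):

               192.168.1.1
               192.168.1.2
               192.168.1.3 slave-only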

       CTDB_NATGW_PRIVATE_NETWORK=IPADDR/MASK
           IPADDR/MASK is the private sub-network that is internally routed
           via the NAT gateway master node. This is usually the private
           network that is used for node addresses.

           No default.

       CTDB_NATGW_PUBLIC_IFACE=IFACE
           IFACE is the network interface on which the CTDB_NATGW_PUBLIC_IP
           will be configured.

           No default.

       CTDB_NATGW_PUBLIC_IP=IPADDR/MASK
           IPADDR/MASK indicates the IP address that is used for outgoing
           traffic (originating from CTDB_NATGW_PRIVATE_NETWORK) on the NAT
           gateway master node. This must not be a configured public IP
           address.

           No default.

       CTDB_NATGW_STATIC_ROUTES=IPADDR/MASK[@GATEWAY] ...
           Each IPADDR/MASK identifies a network or host to which NATGW should
           create a fallback route, instead of creating a single default
           route. This can be used when there is already a default route, via
           an interface that can not reach required infrastructure, that
           overrides the NAT gateway default route.

           If GATEWAY is specified then the corresponding route on the NATGW
           master node will be via GATEWAY. Such routes are created even if
           CTDB_NATGW_DEFAULT_GATEWAY is not specified. If GATEWAY is not
           specified for some networks then routes are only created on the
           NATGW master node for those networks if CTDB_NATGW_DEFAULT_GATEWAY
           is specified.

           This should be used with care to avoid causing traffic to
           unnecessarily double-hop through the NAT gateway master, even when
           a node is hosting public IP addresses. Each specified network or
           host should probably have a corresponding automatically created
           link route or static route to avoid this.

           No default.

       Example
               CTDB_NATGW_NODES=/etc/ctdb/natgw_nodes
               CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
               CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
               CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
               CTDB_NATGW_PUBLIC_IFACE=eth0

           A variation that ensures that infrastructure (ADS, DNS, ...)
           directly attached to the public network (10.0.0.0/24) is always
           reachable would look like this:

               CTDB_NATGW_NODES=/etc/ctdb/natgw_nodes
               CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
               CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
               CTDB_NATGW_PUBLIC_IFACE=eth0
               CTDB_NATGW_STATIC_ROUTES=10.0.0.0/24

           Note that CTDB_NATGW_DEFAULT_GATEWAY is not specified.

   13.per_ip_routing
       Provides CTDB's policy routing functionality.

       A node running CTDB may be a component of a complex network topology.
       In particular, public addresses may be spread across several different
       networks (or VLANs) and it may not be possible to route packets from
       these public addresses via the system's default route. Therefore, CTDB
       has support for policy routing via the 13.per_ip_routing eventscript.
       This allows routing to be specified for packets sourced from each
       public address. The routes are added and removed as CTDB moves public
       addresses between nodes.

       For more information, see the POLICY ROUTING section in ctdb(7).

       CTDB_PER_IP_ROUTING_CONF=FILENAME
           FILENAME contains elements for constructing the desired routes for
           each source address.

           The special FILENAME value __auto_link_local__ indicates that no
           configuration file is provided and that CTDB should generate
           reasonable link-local routes for each public IP address.

           File format:

               IPADDR DEST-IPADDR/MASK [GATEWAY-IPADDR]

           No default, usually /etc/ctdb/policy_routing when enabled.
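
           For example, a policy_routing file that adds a link-local route and
           a gateway route for a single public address might contain the
           following (all addresses are purely illustrative):

               192.168.1.99 192.168.1.0/24
               192.168.1.99 0.0.0.0/0 192.168.1.1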

       CTDB_PER_IP_ROUTING_RULE_PREF=NUM
           NUM sets the priority (or preference) for the routing rules that
           are added by CTDB.

           This should be (strictly) greater than 0 and (strictly) less than
           32766. A priority of 100 is recommended, unless this conflicts with
           a priority already in use on the system. See ip(8) for more
           details.

       CTDB_PER_IP_ROUTING_TABLE_ID_LOW=LOW-NUM,
       CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=HIGH-NUM
           CTDB determines a unique routing table number to use for the
           routing related to each public address. LOW-NUM and HIGH-NUM
           indicate the minimum and maximum routing table numbers that are
           used.

           ip(8) uses some reserved routing table numbers below 255.
           Therefore, CTDB_PER_IP_ROUTING_TABLE_ID_LOW should be (strictly)
           greater than 255.

           CTDB uses the standard file /etc/iproute2/rt_tables to maintain a
           mapping between the routing table numbers and labels. The label for
           a public address ADDR will look like ctdb.addr. This means that the
           associated rules and routes are easy to read (and manipulate).

           No default, usually 1000 and 9000.

       Example
               CTDB_PER_IP_ROUTING_CONF=/etc/ctdb/policy_routing
               CTDB_PER_IP_ROUTING_RULE_PREF=100
               CTDB_PER_IP_ROUTING_TABLE_ID_LOW=1000
               CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=9000

   91.lvs
       Provides CTDB's LVS functionality.

       For a general description see the LVS section in ctdb(7).

       CTDB_LVS_NODES=FILENAME
           FILENAME contains the list of nodes that belong to the same LVS
           group.

           File format:

               IPADDR [slave-only]

           IPADDR is the private IP address of each node in the LVS group.

           If "slave-only" is specified then the corresponding node can not be
           the LVS master node. In this case CTDB_LVS_PUBLIC_IFACE and
           CTDB_LVS_PUBLIC_IP are optional and unused.

           No default, usually /etc/ctdb/lvs_nodes when enabled.

       CTDB_LVS_PUBLIC_IFACE=INTERFACE
           INTERFACE is the network interface that clients will use to
           connect to CTDB_LVS_PUBLIC_IP. This is optional for slave-only
           nodes. No default.

       CTDB_LVS_PUBLIC_IP=IPADDR
           CTDB_LVS_PUBLIC_IP is the LVS public address. No default.
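
       Example
           A minimal sketch using purely illustrative values for the interface
           and address:

               CTDB_LVS_NODES=/etc/ctdb/lvs_nodes
               CTDB_LVS_PUBLIC_IFACE=eth0
               CTDB_LVS_PUBLIC_IP=10.1.1.230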

SERVICE CONFIGURATION

       CTDB can be configured to manage and/or monitor various NAS (and other)
       services via its eventscripts.

       In the simplest case CTDB will manage a service. This means the service
       will be started and stopped along with CTDB, CTDB will monitor the
       service and CTDB will do any required reconfiguration of the service
       when public IP addresses are failed over.

   20.multipathd
       Provides CTDB's Linux multipathd service management.

       It can monitor multipath devices to ensure that active paths are
       available.

       CTDB_MONITOR_MPDEVICES=MP-DEVICE-LIST
           MP-DEVICE-LIST is a list of multipath devices for CTDB to monitor.

           No default.
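
           For example, to have CTDB monitor two multipath devices (the device
           names are hypothetical), 20.multipathd.options could contain:

               CTDB_MONITOR_MPDEVICES="mpatha mpathb"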

   31.clamd
       This event script provides CTDB's ClamAV anti-virus service management.

       This eventscript is not enabled by default. Use ctdb enablescript to
       enable it.

       CTDB_CLAMD_SOCKET=FILENAME
           FILENAME is the socket to monitor ClamAV.

           No default.
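
           For example (the socket path shown is purely illustrative and
           depends on the local ClamAV configuration):

               CTDB_CLAMD_SOCKET=/var/run/clamd.socket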

   49.winbind
       Provides CTDB's Samba winbind service management.

       CTDB_SERVICE_WINBIND=SERVICE
           Distribution specific SERVICE for managing winbindd.

           Default is "winbind".

   50.samba
       Provides the core of CTDB's Samba file service management.

       CTDB_SAMBA_CHECK_PORTS=PORT-LIST
           When monitoring Samba, check TCP ports in space-separated
           PORT-LIST.

           Default is to monitor ports that Samba is configured to listen on.

       CTDB_SAMBA_SKIP_SHARE_CHECK=yes|no
           As part of monitoring, should CTDB skip the check for the existence
           of each directory configured as a share in Samba. This may be
           desirable if there is a large number of shares.

           Default is no.

       CTDB_SERVICE_NMB=SERVICE
           Distribution specific SERVICE for managing nmbd.

           Default is distribution-dependent.

       CTDB_SERVICE_SMB=SERVICE
           Distribution specific SERVICE for managing smbd.

           Default is distribution-dependent.
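
       Example
           A minimal sketch of a 50.samba.options file that skips the share
           check and monitors only TCP port 445 (the values are illustrative):

               CTDB_SAMBA_SKIP_SHARE_CHECK=yes
               CTDB_SAMBA_CHECK_PORTS="445"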

   60.nfs
       This event script (along with 06.nfs) provides CTDB's NFS service
       management.

       This includes parameters for the kernel NFS server. Alternative NFS
       subsystems (such as NFS-Ganesha[1]) can be integrated using
       CTDB_NFS_CALLOUT.

       CTDB_NFS_CALLOUT=COMMAND
           COMMAND specifies the path to a callout to handle interactions with
           the configured NFS system, including startup, shutdown and
           monitoring.

           Default is the included nfs-linux-kernel-callout.

       CTDB_NFS_CHECKS_DIR=DIRECTORY
           Specifies the path to a DIRECTORY containing files that describe
           how to monitor the responsiveness of NFS RPC services. See the
           README file for this directory for an explanation of the contents
           of these "check" files.

           CTDB_NFS_CHECKS_DIR can be used to point to different sets of
           checks for different NFS servers.

           One way of using this is to have it point to, say,
           /etc/ctdb/nfs-checks-enabled.d and populate it with symbolic links
           to the desired check files, as sketched below. This avoids
           duplication and is upgrade-safe.

           Default is /etc/ctdb/nfs-checks.d, which contains NFS RPC checks
           suitable for Linux kernel NFS.
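
           For example, such a directory might be set up like this (a
           hypothetical sketch; the check files chosen are illustrative):

               mkdir -p /etc/ctdb/nfs-checks-enabled.d
               ln -s /etc/ctdb/nfs-checks.d/00.portmapper.check \
                   /etc/ctdb/nfs-checks-enabled.d/
               ln -s /etc/ctdb/nfs-checks.d/20.nfs.check \
                   /etc/ctdb/nfs-checks-enabled.d/

           with CTDB_NFS_CHECKS_DIR=/etc/ctdb/nfs-checks-enabled.d set in
           60.nfs.options.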

       CTDB_NFS_SKIP_SHARE_CHECK=yes|no
           As part of monitoring, should CTDB skip the check for the existence
           of each directory exported via NFS. This may be desirable if there
           is a large number of exports.

           Default is no.

       CTDB_RPCINFO_LOCALHOST=IPADDR|HOSTNAME
           IPADDR or HOSTNAME indicates the address that rpcinfo should
           connect to when doing an rpcinfo check on an IPv4 RPC service
           during monitoring. Optimally this would be "localhost". However,
           this can add some performance overheads.

           Default is "127.0.0.1".

       CTDB_RPCINFO_LOCALHOST6=IPADDR|HOSTNAME
           IPADDR or HOSTNAME indicates the address that rpcinfo should
           connect to when doing an rpcinfo check on an IPv6 RPC service
           during monitoring. Optimally this would be "localhost6" (or
           similar). However, this can add some performance overheads.

           Default is "::1".

       CTDB_NFS_STATE_FS_TYPE=TYPE
           The type of filesystem used for a clustered NFS' shared state. No
           default.

       CTDB_NFS_STATE_MNT=DIR
           The directory where a clustered NFS' shared state will be located.
           No default.

   70.iscsi
       Provides CTDB's Linux iSCSI tgtd service management.

       CTDB_START_ISCSI_SCRIPTS=DIRECTORY
           DIRECTORY on shared storage containing scripts to start tgtd for
           each public IP address.

           No default.

DATABASE SETUP

       CTDB checks the consistency of databases during startup.

   00.ctdb
       CTDB_MAX_CORRUPT_DB_BACKUPS=NUM
           NUM is the maximum number of volatile TDB database backups to be
           kept (for each database) when a corrupt database is found during
           startup. Volatile TDBs are zeroed during startup so backups are
           needed to debug any corruption that occurs before a restart.

           Default is 10.
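
           For example, to keep at most 5 backups per database (an
           illustrative value), 00.ctdb.options could contain:

               CTDB_MAX_CORRUPT_DB_BACKUPS=5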

SYSTEM RESOURCE MONITORING

   05.system
       Provides CTDB's filesystem and memory usage monitoring.

       CTDB can experience seemingly random (performance and other) issues if
       system resources become too constrained. Options in this section can be
       enabled to allow certain system resources to be checked. They allow
       warnings to be logged and nodes to be marked unhealthy when system
       resource usage reaches the configured thresholds.

       Some checks are enabled by default. It is recommended that these checks
       remain enabled or are augmented by extra checks. There is no supported
       way of completely disabling the checks.

       CTDB_MONITOR_FILESYSTEM_USAGE=FS-LIMIT-LIST
           FS-LIMIT-LIST is a space-separated list of
           FILESYSTEM:WARN_LIMIT[:UNHEALTHY_LIMIT] triples indicating that
           warnings should be logged if the space used on FILESYSTEM reaches
           WARN_LIMIT%. If usage reaches UNHEALTHY_LIMIT then the node should
           be flagged unhealthy. Either WARN_LIMIT or UNHEALTHY_LIMIT may be
           left blank, meaning that the corresponding check will be omitted.

           Default is to warn for each filesystem containing a database
           directory (volatile database directory, persistent database
           directory, state database directory) with a threshold of 90%.

       CTDB_MONITOR_MEMORY_USAGE=MEM-LIMITS
           MEM-LIMITS takes the form WARN_LIMIT[:UNHEALTHY_LIMIT] indicating
           that warnings should be logged if memory usage reaches WARN_LIMIT%.
           If usage reaches UNHEALTHY_LIMIT then the node should be flagged
           unhealthy. Either WARN_LIMIT or UNHEALTHY_LIMIT may be left blank,
           meaning that the corresponding check will be omitted.

           Default is 80, so warnings will be logged when memory usage reaches
           80%.

       CTDB_MONITOR_SWAP_USAGE=SWAP-LIMITS
           SWAP-LIMITS takes the form WARN_LIMIT[:UNHEALTHY_LIMIT] indicating
           that warnings should be logged if swap usage reaches WARN_LIMIT%.
           If usage reaches UNHEALTHY_LIMIT then the node should be flagged
           unhealthy. Either WARN_LIMIT or UNHEALTHY_LIMIT may be left blank,
           meaning that the corresponding check will be omitted.

           Default is 25, so warnings will be logged when swap usage reaches
           25%.
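
       Example
           A hypothetical 05.system.options that warns when the filesystem
           containing /var reaches 80% usage, marks the node unhealthy at 90%
           and also tightens the memory thresholds (all values illustrative):

               CTDB_MONITOR_FILESYSTEM_USAGE="/var:80:90"
               CTDB_MONITOR_MEMORY_USAGE="70:90"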

EVENT SCRIPT DEBUGGING

   debug-hung-script.sh
       CTDB_DEBUG_HUNG_SCRIPT_STACKPAT=REGEXP
           REGEXP specifies interesting processes for which stack traces
           should be logged when debugging hung eventscripts and those
           processes are matched in pstree output. REGEXP is an extended
           regexp so choices are separated by pipes ('|'). However, REGEXP
           should not contain parentheses. See also the ctdb.conf(5) [event]
           "debug script" option.

           Default is "exportfs|rpcinfo".
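
           For example, to also capture stack traces for mount-related
           processes (an illustrative extension of the default pattern):

               CTDB_DEBUG_HUNG_SCRIPT_STACKPAT="exportfs|rpcinfo|mount"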

FILES

           /etc/ctdb/script.options

SEE ALSO

       ctdbd(1), ctdb(7), http://ctdb.samba.org/

AUTHOR

       This documentation was written by Amitay Isaacs, Martin Schwenke

       Copyright © 2007 Andrew Tridgell, Ronnie Sahlberg

       This program is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by the
       Free Software Foundation; either version 3 of the License, or (at your
       option) any later version.

       This program is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
       General Public License for more details.

       You should have received a copy of the GNU General Public License along
       with this program; if not, see http://www.gnu.org/licenses.

NOTES

        1. NFS-Ganesha
           https://github.com/nfs-ganesha/nfs-ganesha/wiki

ctdb                              05/11/2019             CTDB-SCRIPT.OPTIONS(5)