CTDB-SCRIPT.OPTIONS(5)    CTDB - clustered TDB database    CTDB-SCRIPT.OPTIONS(5)

NAME

       ctdb-script.options - CTDB scripts configuration files

DESCRIPTION

       Each CTDB script has 2 possible locations for its configuration
       options:

       /etc/ctdb/script.options
           This is a catch-all global file for general purpose scripts and for
           options that are used in multiple event scripts.

       SCRIPT.options
           That is, options for SCRIPT are placed in a file alongside the
           script, with a ".options" suffix added. This style is usually
           recommended for event scripts.

           Options in this script-specific file override those in the global
           file.

       These files should include simple shell-style variable assignments and
       shell-style comments.

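       For example, a global /etc/ctdb/script.options file might contain
       lines like the following (the option shown is described later in this
       manual page; the value is purely illustrative):

           # Limit Samba monitoring to the SMB-over-TCP port
           CTDB_SAMBA_CHECK_PORTS="445"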

NETWORK CONFIGURATION

   10.interface
       This event script handles monitoring of interfaces used by public IP
       addresses.

       CTDB_PARTIALLY_ONLINE_INTERFACES=yes|no
           Whether one or more offline interfaces should cause a monitor event
           to fail if there are other interfaces that are up. If this is "yes"
           and a node has some interfaces that are down then ctdb status will
           display the node as "PARTIALLYONLINE".

           Note that CTDB_PARTIALLY_ONLINE_INTERFACES=yes is not generally
           compatible with NAT gateway or LVS. NAT gateway relies on the
           interface configured by CTDB_NATGW_PUBLIC_IFACE to be up and LVS
           relies on CTDB_LVS_PUBLIC_IFACE to be up. CTDB does not check if
           these options are set in an incompatible way, so care is needed to
           understand the interaction.

           Default is "no".

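       For example, to have a node with one or more down interfaces shown as
       "PARTIALLYONLINE" rather than marked unhealthy, a 10.interface.options
       file could contain:

           # Treat nodes with some (but not all) interfaces down as
           # partially online instead of failing the monitor event
           CTDB_PARTIALLY_ONLINE_INTERFACES=yes
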
   11.natgw
       Provides CTDB's NAT gateway functionality.

       NAT gateway is used to configure fallback routing for nodes when they
       do not host any public IP addresses. For example, it allows unhealthy
       nodes to reliably communicate with external infrastructure. One node in
       a NAT gateway group will be designated as the NAT gateway master node
       and other (slave) nodes will be configured with fallback routes via the
       NAT gateway master node. For more information, see the NAT GATEWAY
       section in ctdb(7).

       CTDB_NATGW_DEFAULT_GATEWAY=IPADDR
           IPADDR is an alternate network gateway to use on the NAT gateway
           master node. If set, a fallback default route is added via this
           network gateway.

           No default. Setting this variable is optional - if it is not set
           then no route is created on the NAT gateway master node.

       CTDB_NATGW_NODES=FILENAME
           FILENAME contains the list of nodes that belong to the same NAT
           gateway group.

           File format:

               IPADDR [slave-only]

           IPADDR is the private IP address of each node in the NAT gateway
           group.

           If "slave-only" is specified then the corresponding node can not be
           the NAT gateway master node. In this case CTDB_NATGW_PUBLIC_IFACE
           and CTDB_NATGW_PUBLIC_IP are optional and unused.

           No default, usually /etc/ctdb/natgw_nodes when enabled.

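           For example, a natgw_nodes file for a group of three nodes on the
           192.168.1.0/24 private network used elsewhere in this section,
           where the third node should never become the NAT gateway master,
           might look like this (addresses are illustrative):

               192.168.1.1
               192.168.1.2
               192.168.1.3 slave-only
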
       CTDB_NATGW_PRIVATE_NETWORK=IPADDR/MASK
           IPADDR/MASK is the private sub-network that is internally routed
           via the NAT gateway master node. This is usually the private
           network that is used for node addresses.

           No default.

       CTDB_NATGW_PUBLIC_IFACE=IFACE
           IFACE is the network interface on which the CTDB_NATGW_PUBLIC_IP
           will be configured.

           No default.

       CTDB_NATGW_PUBLIC_IP=IPADDR/MASK
           IPADDR/MASK indicates the IP address that is used for outgoing
           traffic (originating from CTDB_NATGW_PRIVATE_NETWORK) on the NAT
           gateway master node. This must not be a configured public IP
           address.

           No default.

       CTDB_NATGW_STATIC_ROUTES=IPADDR/MASK[@GATEWAY] ...
           Each IPADDR/MASK identifies a network or host to which NATGW should
           create a fallback route, instead of creating a single default
           route. This can be used when there is already a default route, via
           an interface that can not reach required infrastructure, that
           overrides the NAT gateway default route.

           If GATEWAY is specified then the corresponding route on the NATGW
           master node will be via GATEWAY. Such routes are created even if
           CTDB_NATGW_DEFAULT_GATEWAY is not specified. If GATEWAY is not
           specified for some networks then routes are only created on the
           NATGW master node for those networks if CTDB_NATGW_DEFAULT_GATEWAY
           is specified.

           This should be used with care to avoid causing traffic to
           unnecessarily double-hop through the NAT gateway master, even when
           a node is hosting public IP addresses. Each specified network or
           host should probably have a corresponding automatically created
           link route or static route to avoid this.

           No default.

       Example
               CTDB_NATGW_NODES=/etc/ctdb/natgw_nodes
               CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
               CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
               CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
               CTDB_NATGW_PUBLIC_IFACE=eth0

           A variation that ensures that infrastructure (ADS, DNS, ...)
           directly attached to the public network (10.0.0.0/24) is always
           reachable would look like this:

               CTDB_NATGW_NODES=/etc/ctdb/natgw_nodes
               CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
               CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
               CTDB_NATGW_PUBLIC_IFACE=eth0
               CTDB_NATGW_STATIC_ROUTES=10.0.0.0/24

           Note that CTDB_NATGW_DEFAULT_GATEWAY is not specified.

   13.per_ip_routing
       Provides CTDB's policy routing functionality.

       A node running CTDB may be a component of a complex network topology.
       In particular, public addresses may be spread across several different
       networks (or VLANs) and it may not be possible to route packets from
       these public addresses via the system's default route. Therefore, CTDB
       has support for policy routing via the 13.per_ip_routing eventscript.
       This allows routing to be specified for packets sourced from each
       public address. The routes are added and removed as CTDB moves public
       addresses between nodes.

       For more information, see the POLICY ROUTING section in ctdb(7).

       CTDB_PER_IP_ROUTING_CONF=FILENAME
           FILENAME contains elements for constructing the desired routes for
           each source address.

           The special FILENAME value __auto_link_local__ indicates that no
           configuration file is provided and that CTDB should generate
           reasonable link-local routes for each public IP address.

           File format:

               IPADDR DEST-IPADDR/MASK [GATEWAY-IPADDR]

           No default, usually /etc/ctdb/policy_routing when enabled.

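           For example, a policy_routing file sending traffic from public
           address 10.0.0.227 to its attached network directly and to all
           other destinations via a gateway might look like this (addresses
           are illustrative):

               10.0.0.227 10.0.0.0/24
               10.0.0.227 0.0.0.0/0 10.0.0.1
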
       CTDB_PER_IP_ROUTING_RULE_PREF=NUM
           NUM sets the priority (or preference) for the routing rules that
           are added by CTDB.

           This should be (strictly) greater than 0 and (strictly) less than
           32766. A priority of 100 is recommended, unless this conflicts with
           a priority already in use on the system. See ip(8) for more
           details.

       CTDB_PER_IP_ROUTING_TABLE_ID_LOW=LOW-NUM,
       CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=HIGH-NUM
           CTDB determines a unique routing table number to use for the
           routing related to each public address. LOW-NUM and HIGH-NUM
           indicate the minimum and maximum routing table numbers that are
           used.

           ip(8) uses some reserved routing table numbers below 255.
           Therefore, CTDB_PER_IP_ROUTING_TABLE_ID_LOW should be (strictly)
           greater than 255.

           CTDB uses the standard file /etc/iproute2/rt_tables to maintain a
           mapping between the routing table numbers and labels. The label for
           a public address ADDR will look like ctdb.addr. This means that the
           associated rules and routes are easy to read (and manipulate).

           No default, usually 1000 and 9000.

       Example
               CTDB_PER_IP_ROUTING_CONF=/etc/ctdb/policy_routing
               CTDB_PER_IP_ROUTING_RULE_PREF=100
               CTDB_PER_IP_ROUTING_TABLE_ID_LOW=1000
               CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=9000

   91.lvs
       Provides CTDB's LVS functionality.

       For a general description see the LVS section in ctdb(7).

       CTDB_LVS_NODES=FILENAME
           FILENAME contains the list of nodes that belong to the same LVS
           group.

           File format:

               IPADDR [slave-only]

           IPADDR is the private IP address of each node in the LVS group.

           If "slave-only" is specified then the corresponding node can not be
           the LVS master node. In this case CTDB_LVS_PUBLIC_IFACE and
           CTDB_LVS_PUBLIC_IP are optional and unused.

           No default, usually /etc/ctdb/lvs_nodes when enabled.

       CTDB_LVS_PUBLIC_IFACE=INTERFACE
           INTERFACE is the network interface that clients will use to
           connect to CTDB_LVS_PUBLIC_IP. This is optional for slave-only
           nodes. No default.

       CTDB_LVS_PUBLIC_IP=IPADDR
           CTDB_LVS_PUBLIC_IP is the LVS public address. No default.

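       For example, a minimal LVS configuration in a 91.lvs.options file
       (values are illustrative) might look like this:

           CTDB_LVS_NODES=/etc/ctdb/lvs_nodes
           CTDB_LVS_PUBLIC_IP=10.1.1.100
           CTDB_LVS_PUBLIC_IFACE=eth0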

SERVICE CONFIGURATION

       CTDB can be configured to manage and/or monitor various NAS (and other)
       services via its eventscripts.

       In the simplest case CTDB will manage a service. This means the service
       will be started and stopped along with CTDB, CTDB will monitor the
       service and CTDB will do any required reconfiguration of the service
       when public IP addresses are failed over.

   20.multipathd
       Provides CTDB's Linux multipathd service management.

       It can monitor multipath devices to ensure that active paths are
       available.

       CTDB_MONITOR_MPDEVICES=MP-DEVICE-LIST
           MP-DEVICE-LIST is a list of multipath devices for CTDB to monitor.

           No default.

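           For example, assuming two multipath devices named mpatha and
           mpathb (device names are purely illustrative and depend on the
           local multipath configuration), one might use:

               CTDB_MONITOR_MPDEVICES="mpatha mpathb"
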
   31.clamd
       This event script provides CTDB's ClamAV anti-virus service management.

       This eventscript is not enabled by default. Use ctdb enablescript to
       enable it.

       CTDB_CLAMD_SOCKET=FILENAME
           FILENAME is the socket used to monitor ClamAV.

           No default.

   48.netbios
       Provides CTDB's NetBIOS service management.

       CTDB_SERVICE_NMB=SERVICE
           Distribution specific SERVICE for managing nmbd.

           Default is distribution-dependent.

   49.winbind
       Provides CTDB's Samba winbind service management.

       CTDB_SERVICE_WINBIND=SERVICE
           Distribution specific SERVICE for managing winbindd.

           Default is "winbind".

   50.samba
       Provides the core of CTDB's Samba file service management.

       CTDB_SAMBA_CHECK_PORTS=PORT-LIST
           When monitoring Samba, check TCP ports in space-separated
           PORT-LIST.

           Default is to monitor ports that Samba is configured to listen on.

       CTDB_SAMBA_SKIP_SHARE_CHECK=yes|no
           As part of monitoring, should CTDB skip the check for the existence
           of each directory configured as a share in Samba. This may be
           desirable if there is a large number of shares.

           Default is no.

       CTDB_SERVICE_SMB=SERVICE
           Distribution specific SERVICE for managing smbd.

           Default is distribution-dependent.

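       For example, a 50.samba.options file for a cluster with a large number
       of shares, checking only the SMB-over-TCP and NetBIOS session ports
       (values are illustrative), might contain:

           CTDB_SAMBA_CHECK_PORTS="445 139"
           CTDB_SAMBA_SKIP_SHARE_CHECK=yes
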
   60.nfs
       This event script (along with 06.nfs) provides CTDB's NFS service
       management.

       This includes parameters for the kernel NFS server. Alternative NFS
       subsystems (such as NFS-Ganesha[1]) can be integrated using
       CTDB_NFS_CALLOUT.

       CTDB_NFS_CALLOUT=COMMAND
           COMMAND specifies the path to a callout to handle interactions with
           the configured NFS system, including startup, shutdown and
           monitoring.

           Default is the included nfs-linux-kernel-callout.

       CTDB_NFS_CHECKS_DIR=DIRECTORY
           Specifies the path to a DIRECTORY containing files that describe
           how to monitor the responsiveness of NFS RPC services. See the
           README file for this directory for an explanation of the contents
           of these "check" files.

           CTDB_NFS_CHECKS_DIR can be used to point to different sets of
           checks for different NFS servers.

           One way of using this is to have it point to, say,
           /etc/ctdb/nfs-checks-enabled.d and populate it with symbolic links
           to the desired check files, as sketched below. This avoids
           duplication and is upgrade-safe.

           Default is /etc/ctdb/nfs-checks.d, which contains NFS RPC checks
           suitable for Linux kernel NFS.

341           As part of monitoring, should CTDB skip the check for the existence
342           of each directory exported via NFS. This may be desirable if there
343           is a large number of exports.
344
345           Default is no.
346
347       CTDB_RPCINFO_LOCALHOST=IPADDR|HOSTNAME
348           IPADDR or HOSTNAME indicates the address that rpcinfo should
349           connect to when doing rpcinfo check on IPv4 RPC service during
350           monitoring. Optimally this would be "localhost". However, this can
351           add some performance overheads.
352
353           Default is "127.0.0.1".
354
355       CTDB_RPCINFO_LOCALHOST6=IPADDR|HOSTNAME
356           IPADDR or HOSTNAME indicates the address that rpcinfo should
357           connect to when doing rpcinfo check on IPv6 RPC service during
358           monitoring. Optimally this would be "localhost6" (or similar).
359           However, this can add some performance overheads.
360
361           Default is "::1".
362
363       CTDB_NFS_STATE_FS_TYPE=TYPE
364           The type of filesystem used for a clustered NFS' shared state. No
365           default.
366
367       CTDB_NFS_STATE_MNT=DIR
368           The directory where a clustered NFS' shared state will be located.
369           No default.
370
   70.iscsi
       Provides CTDB's Linux iSCSI tgtd service management.

       CTDB_START_ISCSI_SCRIPTS=DIRECTORY
           DIRECTORY on shared storage containing scripts to start tgtd for
           each public IP address.

           No default.

DATABASE SETUP

       CTDB checks the consistency of databases during startup.

   00.ctdb
       CTDB_MAX_CORRUPT_DB_BACKUPS=NUM
           NUM is the maximum number of volatile TDB database backups to be
           kept (for each database) when a corrupt database is found during
           startup. Volatile TDBs are zeroed during startup so backups are
           needed to debug any corruption that occurs before a restart.

           Default is 10.

SYSTEM RESOURCE MONITORING

   05.system
       Provides CTDB's filesystem and memory usage monitoring.

       CTDB can experience seemingly random (performance and other) issues if
       system resources become too constrained. Options in this section can be
       enabled to allow certain system resources to be checked. They allow
       warnings to be logged and nodes to be marked unhealthy when system
       resource usage reaches the configured thresholds.

       Some checks are enabled by default. It is recommended that these checks
       remain enabled or are augmented by extra checks. There is no supported
       way of completely disabling the checks.

       CTDB_MONITOR_FILESYSTEM_USAGE=FS-LIMIT-LIST
           FS-LIMIT-LIST is a space-separated list of
           FILESYSTEM:WARN_LIMIT[:UNHEALTHY_LIMIT] triples indicating that
           warnings should be logged if the space used on FILESYSTEM reaches
           WARN_LIMIT%. If usage reaches UNHEALTHY_LIMIT then the node should
           be flagged unhealthy. Either WARN_LIMIT or UNHEALTHY_LIMIT may be
           left blank, meaning that the corresponding check will be omitted.

           Default is to warn for each filesystem containing a database
           directory (volatile database directory, persistent database
           directory, state database directory) with a threshold of 90%.

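           For example, to warn at 80% and mark the node unhealthy at 90% for
           /var, and to only warn at 90% for a hypothetical shared filesystem
           mounted at /clusterfs:

               CTDB_MONITOR_FILESYSTEM_USAGE="/var:80:90 /clusterfs:90"
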
       CTDB_MONITOR_MEMORY_USAGE=MEM-LIMITS
           MEM-LIMITS takes the form WARN_LIMIT[:UNHEALTHY_LIMIT] indicating
           that warnings should be logged if memory usage reaches WARN_LIMIT%.
           If usage reaches UNHEALTHY_LIMIT then the node should be flagged
           unhealthy. Either WARN_LIMIT or UNHEALTHY_LIMIT may be left blank,
           meaning that the corresponding check will be omitted.

           Default is 80, so warnings will be logged when memory usage reaches
           80%.
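
           For example, to log a warning when memory usage reaches 85% and
           flag the node unhealthy when it reaches 95%:

               CTDB_MONITOR_MEMORY_USAGE=85:95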

EVENT SCRIPT DEBUGGING

   debug-hung-script.sh
       CTDB_DEBUG_HUNG_SCRIPT_STACKPAT=REGEXP
           REGEXP specifies interesting processes for which stack traces
           should be logged when debugging hung eventscripts and those
           processes are matched in pstree output. REGEXP is an extended
           regexp so choices are separated by pipes ('|'). However, REGEXP
           should not contain parentheses. See also the ctdb.conf(5) [event]
           "debug script" option.

           Default is "exportfs|rpcinfo".
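
           For example, to extend the default pattern to also capture stack
           traces for a hypothetical mount.nfs process:

               CTDB_DEBUG_HUNG_SCRIPT_STACKPAT="exportfs|rpcinfo|mount.nfs"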

FILES

           /etc/ctdb/script.options

SEE ALSO

       ctdbd(1), ctdb(7), http://ctdb.samba.org/

AUTHOR

       This documentation was written by Amitay Isaacs, Martin Schwenke

COPYRIGHT
       Copyright © 2007 Andrew Tridgell, Ronnie Sahlberg

       This program is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by the
       Free Software Foundation; either version 3 of the License, or (at your
       option) any later version.

       This program is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
       General Public License for more details.

       You should have received a copy of the GNU General Public License along
       with this program; if not, see http://www.gnu.org/licenses.

NOTES

        1. NFS-Ganesha
           https://github.com/nfs-ganesha/nfs-ganesha/wiki

ctdb                              03/25/2021             CTDB-SCRIPT.OPTIONS(5)