CTDB-SCRIPT.OPTIONS(5)    CTDB - clustered TDB database   CTDB-SCRIPT.OPTIONS(5)

NAME

       ctdb-script.options - CTDB scripts configuration files

DESCRIPTION

       Each CTDB script has 2 possible locations for its configuration
       options:

       /etc/ctdb/script.options
           This is a catch-all global file for general purpose scripts and
           for options that are used in multiple event scripts.

       SCRIPT.options
           That is, options for SCRIPT are placed in a file alongside the
           script, with a ".options" suffix added. This style is usually
           recommended for event scripts.

           Options in this script-specific file override those in the global
           file.

       These files should include simple shell-style variable assignments and
       shell-style comments.

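       For example, either file might contain simple assignments like the
       following (the value shown is illustrative; the option itself is
       described under 20.multipathd below):

               # Monitor this multipath device
               CTDB_MONITOR_MPDEVICES="mpatha"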

NETWORK CONFIGURATION

   10.interface
       This event script handles monitoring of interfaces used by public IP
       addresses.

       CTDB_PARTIALLY_ONLINE_INTERFACES=yes|no
           Whether one or more offline interfaces should cause a monitor event
           to fail if there are other interfaces that are up. If this is "yes"
           and a node has some interfaces that are down then ctdb status will
           display the node as "PARTIALLYONLINE".

           Note that CTDB_PARTIALLY_ONLINE_INTERFACES=yes is not generally
           compatible with NAT gateway or LVS. NAT gateway relies on the
           interface configured by CTDB_NATGW_PUBLIC_IFACE to be up and LVS
           relies on CTDB_LVS_PUBLIC_IFACE to be up. CTDB does not check if
           these options are set in an incompatible way so care is needed to
           understand the interaction.

           Default is "no".

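           For example, to have a node with some down interfaces reported as
           "PARTIALLYONLINE" rather than failing monitoring, as described
           above:

               CTDB_PARTIALLY_ONLINE_INTERFACES=yes
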
   11.natgw
       Provides CTDB's NAT gateway functionality.

       NAT gateway is used to configure fallback routing for nodes when they
       do not host any public IP addresses. For example, it allows unhealthy
       nodes to reliably communicate with external infrastructure. One node in
       a NAT gateway group will be designated as the NAT gateway leader node
       and other (follower) nodes will be configured with fallback routes via
       the NAT gateway leader node. For more information, see the NAT GATEWAY
       section in ctdb(7).

       CTDB_NATGW_DEFAULT_GATEWAY=IPADDR
           IPADDR is an alternate network gateway to use on the NAT gateway
           leader node. If set, a fallback default route is added via this
           network gateway.

           No default. Setting this variable is optional - if not set then no
           route is created on the NAT gateway leader node.

       CTDB_NATGW_NODES=FILENAME
           FILENAME contains the list of nodes that belong to the same NAT
           gateway group.

           File format:

               IPADDR [follower-only]

           IPADDR is the private IP address of each node in the NAT gateway
           group.

           If "follower-only" is specified then the corresponding node can not
           be the NAT gateway leader node. In this case
           CTDB_NATGW_PUBLIC_IFACE and CTDB_NATGW_PUBLIC_IP are optional and
           unused.

           No default, usually /etc/ctdb/natgw_nodes when enabled.

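           For example, a natgw_nodes file for a three-node group (the
           addresses are illustrative) might contain:

               192.168.1.1
               192.168.1.2
               192.168.1.3 follower-only
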
       CTDB_NATGW_PRIVATE_NETWORK=IPADDR/MASK
           IPADDR/MASK is the private sub-network that is internally routed
           via the NAT gateway leader node. This is usually the private
           network that is used for node addresses.

           No default.

       CTDB_NATGW_PUBLIC_IFACE=IFACE
           IFACE is the network interface on which the CTDB_NATGW_PUBLIC_IP
           will be configured.

           No default.

       CTDB_NATGW_PUBLIC_IP=IPADDR/MASK
           IPADDR/MASK indicates the IP address that is used for outgoing
           traffic (originating from CTDB_NATGW_PRIVATE_NETWORK) on the NAT
           gateway leader node. This must not be a configured public IP
           address.

           No default.

       CTDB_NATGW_STATIC_ROUTES=IPADDR/MASK[@GATEWAY] ...
           Each IPADDR/MASK identifies a network or host to which NATGW should
           create a fallback route, instead of creating a single default
           route. This can be used when there is already a default route, via
           an interface that can not reach required infrastructure, that
           overrides the NAT gateway default route.

           If GATEWAY is specified then the corresponding route on the NATGW
           leader node will be via GATEWAY. Such routes are created even if
           CTDB_NATGW_DEFAULT_GATEWAY is not specified. If GATEWAY is not
           specified for some networks then routes are only created on the
           NATGW leader node for those networks if CTDB_NATGW_DEFAULT_GATEWAY
           is specified.

           This should be used with care to avoid causing traffic to
           unnecessarily double-hop through the NAT gateway leader, even when
           a node is hosting public IP addresses. Each specified network or
           host should probably have a corresponding automatically created
           link route or static route to avoid this.

           No default.

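           For example, the following illustrative value creates a fallback
           route to 10.1.0.0/16 via the gateway 10.0.0.1, and a fallback
           route to 10.2.0.0/16 only if CTDB_NATGW_DEFAULT_GATEWAY is also
           specified:

               CTDB_NATGW_STATIC_ROUTES="10.1.0.0/16@10.0.0.1 10.2.0.0/16"
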
       Example
               CTDB_NATGW_NODES=/etc/ctdb/natgw_nodes
               CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
               CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
               CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
               CTDB_NATGW_PUBLIC_IFACE=eth0

           A variation that ensures that infrastructure (ADS, DNS, ...)
           directly attached to the public network (10.0.0.0/24) is always
           reachable would look like this:

               CTDB_NATGW_NODES=/etc/ctdb/natgw_nodes
               CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
               CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
               CTDB_NATGW_PUBLIC_IFACE=eth0
               CTDB_NATGW_STATIC_ROUTES=10.0.0.0/24

           Note that CTDB_NATGW_DEFAULT_GATEWAY is not specified.

   13.per_ip_routing
       Provides CTDB's policy routing functionality.

       A node running CTDB may be a component of a complex network topology.
       In particular, public addresses may be spread across several different
       networks (or VLANs) and it may not be possible to route packets from
       these public addresses via the system's default route. Therefore, CTDB
       has support for policy routing via the 13.per_ip_routing eventscript.
       This allows routing to be specified for packets sourced from each
       public address. The routes are added and removed as CTDB moves public
       addresses between nodes.

       For more information, see the POLICY ROUTING section in ctdb(7).

       CTDB_PER_IP_ROUTING_CONF=FILENAME
           FILENAME contains elements for constructing the desired routes for
           each source address.

           The special FILENAME value __auto_link_local__ indicates that no
           configuration file is provided and that CTDB should generate
           reasonable link-local routes for each public IP address.

           File format:

               IPADDR DEST-IPADDR/MASK [GATEWAY-IPADDR]

           No default, usually /etc/ctdb/policy_routing when enabled.

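           As an illustrative sketch (the addresses are assumptions), a
           policy_routing file giving public address 10.0.0.230 a link-local
           route plus a default route via gateway 10.0.0.1 might contain:

               10.0.0.230 10.0.0.0/24
               10.0.0.230 0.0.0.0/0 10.0.0.1
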
       CTDB_PER_IP_ROUTING_RULE_PREF=NUM
           NUM sets the priority (or preference) for the routing rules that
           are added by CTDB.

           This should be (strictly) greater than 0 and (strictly) less than
           32766. A priority of 100 is recommended, unless this conflicts with
           a priority already in use on the system. See ip(8) for more
           details.

       CTDB_PER_IP_ROUTING_TABLE_ID_LOW=LOW-NUM,
       CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=HIGH-NUM
           CTDB determines a unique routing table number to use for the
           routing related to each public address. LOW-NUM and HIGH-NUM
           indicate the minimum and maximum routing table numbers that are
           used.

           ip(8) uses some reserved routing table numbers below 255.
           Therefore, CTDB_PER_IP_ROUTING_TABLE_ID_LOW should be (strictly)
           greater than 255.

           CTDB uses the standard file /etc/iproute2/rt_tables to maintain a
           mapping between the routing table numbers and labels. The label for
           a public address ADDR will look like ctdb.addr. This means that the
           associated rules and routes are easy to read (and manipulate).

           No default, usually 1000 and 9000.

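           For illustration, with the range in the example below a mapping
           for public address 10.0.0.230 might appear in
           /etc/iproute2/rt_tables as an entry like the following (the table
           number is chosen by CTDB at runtime; this line is a hypothetical
           sketch):

               1001 ctdb.10.0.0.230
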
       Example
               CTDB_PER_IP_ROUTING_CONF=/etc/ctdb/policy_routing
               CTDB_PER_IP_ROUTING_RULE_PREF=100
               CTDB_PER_IP_ROUTING_TABLE_ID_LOW=1000
               CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=9000

   91.lvs
       Provides CTDB's LVS functionality.

       For a general description see the LVS section in ctdb(7).

       CTDB_LVS_NODES=FILENAME
           FILENAME contains the list of nodes that belong to the same LVS
           group.

           File format:

               IPADDR [follower-only]

           IPADDR is the private IP address of each node in the LVS group.

           If "follower-only" is specified then the corresponding node can not
           be the LVS leader node. In this case CTDB_LVS_PUBLIC_IFACE and
           CTDB_LVS_PUBLIC_IP are optional and unused.

           No default, usually /etc/ctdb/lvs_nodes when enabled.

       CTDB_LVS_PUBLIC_IFACE=INTERFACE
           INTERFACE is the network interface that clients will use to
           connect to CTDB_LVS_PUBLIC_IP. This is optional for follower-only
           nodes. No default.

       CTDB_LVS_PUBLIC_IP=IPADDR
           CTDB_LVS_PUBLIC_IP is the LVS public address. No default.

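       Example
           A minimal sketch; the interface and addresses shown are
           illustrative assumptions:

               CTDB_LVS_NODES=/etc/ctdb/lvs_nodes
               CTDB_LVS_PUBLIC_IFACE=eth0
               CTDB_LVS_PUBLIC_IP=10.0.0.50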

SERVICE CONFIGURATION

       CTDB can be configured to manage and/or monitor various NAS (and
       other) services via its eventscripts.

       In the simplest case CTDB will manage a service. This means the
       service will be started and stopped along with CTDB, CTDB will monitor
       the service and CTDB will do any required reconfiguration of the
       service when public IP addresses are failed over.

   20.multipathd
       Provides CTDB's Linux multipathd service management.

       It can monitor multipath devices to ensure that active paths are
       available.

       CTDB_MONITOR_MPDEVICES=MP-DEVICE-LIST
           MP-DEVICE-LIST is a list of multipath devices for CTDB to monitor.

           No default.

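           For example (the device names and the space-separated form are
           illustrative):

               CTDB_MONITOR_MPDEVICES="mpatha mpathb"
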
   31.clamd
       This event script provides CTDB's ClamAV anti-virus service
       management.

       This eventscript is not enabled by default. Use ctdb enablescript to
       enable it.

       CTDB_CLAMD_SOCKET=FILENAME
           FILENAME is the socket to monitor ClamAV.

           No default.

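           For example (this socket path is an illustrative assumption;
           distributions place it differently):

               CTDB_CLAMD_SOCKET=/var/run/clamd.socket
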
   48.netbios
       Provides CTDB's NetBIOS service management.

       CTDB_SERVICE_NMB=SERVICE
           Distribution specific SERVICE for managing nmbd.

           Default is distribution-dependent.

   49.winbind
       Provides CTDB's Samba winbind service management.

       CTDB_SERVICE_WINBIND=SERVICE
           Distribution specific SERVICE for managing winbindd.

           Default is "winbind".

   50.samba
       Provides the core of CTDB's Samba file service management.

       CTDB_SAMBA_CHECK_PORTS=PORT-LIST
           When monitoring Samba, check TCP ports in space-separated
           PORT-LIST.

           Default is to monitor ports that Samba is configured to listen on.

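           For example, to explicitly check the SMB and NetBIOS session
           service ports:

               CTDB_SAMBA_CHECK_PORTS="445 139"
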
       CTDB_SAMBA_SKIP_SHARE_CHECK=yes|no
           As part of monitoring, should CTDB skip the check for the existence
           of each directory configured as a share in Samba. This may be
           desirable if there is a large number of shares.

           Default is no.

       CTDB_SERVICE_SMB=SERVICE
           Distribution specific SERVICE for managing smbd.

           Default is distribution-dependent.

   60.nfs
       This event script (along with 06.nfs) provides CTDB's NFS service
       management.

       This includes parameters for the kernel NFS server. Alternative NFS
       subsystems (such as NFS-Ganesha[1]) can be integrated using
       CTDB_NFS_CALLOUT.

       CTDB_NFS_CALLOUT=COMMAND
           COMMAND specifies the path to a callout to handle interactions
           with the configured NFS system, including startup, shutdown and
           monitoring.

           Default is the included nfs-linux-kernel-callout.

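           For example, to use a custom callout script (this path is a
           hypothetical illustration):

               CTDB_NFS_CALLOUT=/etc/ctdb/nfs-ganesha-callout
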
       CTDB_NFS_CHECKS_DIR=DIRECTORY
           Specifies the path to a DIRECTORY containing files that describe
           how to monitor the responsiveness of NFS RPC services. See the
           README file for this directory for an explanation of the contents
           of these "check" files.

           CTDB_NFS_CHECKS_DIR can be used to point to different sets of
           checks for different NFS servers.

           One way of using this is to have it point to, say,
           /etc/ctdb/nfs-checks-enabled.d and populate it with symbolic links
           to the desired check files, as sketched below. This avoids
           duplication and is upgrade-safe.

           Default is /etc/ctdb/nfs-checks.d, which contains NFS RPC checks
           suitable for Linux kernel NFS.

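           A sketch of that approach (the check file name here is
           illustrative):

               # create the directory and link in the desired checks
               mkdir /etc/ctdb/nfs-checks-enabled.d
               ln -s /etc/ctdb/nfs-checks.d/20.nfs.check \
                   /etc/ctdb/nfs-checks-enabled.d/
               # then set, in 60.nfs.options or script.options:
               # CTDB_NFS_CHECKS_DIR=/etc/ctdb/nfs-checks-enabled.d
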
       CTDB_NFS_SKIP_SHARE_CHECK=yes|no
           As part of monitoring, should CTDB skip the check for the existence
           of each directory exported via NFS. This may be desirable if there
           is a large number of exports.

           Default is no.

       CTDB_RPCINFO_LOCALHOST=IPADDR|HOSTNAME
           IPADDR or HOSTNAME indicates the address that rpcinfo should
           connect to when doing an rpcinfo check on an IPv4 RPC service
           during monitoring. Optimally this would be "localhost". However,
           this can add some performance overheads.

           Default is "127.0.0.1".

       CTDB_RPCINFO_LOCALHOST6=IPADDR|HOSTNAME
           IPADDR or HOSTNAME indicates the address that rpcinfo should
           connect to when doing an rpcinfo check on an IPv6 RPC service
           during monitoring. Optimally this would be "localhost6" (or
           similar). However, this can add some performance overheads.

           Default is "::1".

       CTDB_NFS_STATE_FS_TYPE=TYPE
           The type of filesystem used for a clustered NFS' shared state. No
           default.

       CTDB_NFS_STATE_MNT=DIR
           The directory where a clustered NFS' shared state will be located.
           No default.

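           For example (illustrative values for a cluster filesystem of type
           gpfs mounted at /clusterfs):

               CTDB_NFS_STATE_FS_TYPE=gpfs
               CTDB_NFS_STATE_MNT=/clusterfs
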
   70.iscsi
       Provides CTDB's Linux iSCSI tgtd service management.

       CTDB_START_ISCSI_SCRIPTS=DIRECTORY
           DIRECTORY on shared storage containing scripts to start tgtd for
           each public IP address.

           No default.

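           For example (the directory shown is an illustrative assumption):

               CTDB_START_ISCSI_SCRIPTS=/clusterfs/ctdb/iscsi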

DATABASE SETUP

       CTDB checks the consistency of databases during startup.

   00.ctdb
       CTDB_MAX_CORRUPT_DB_BACKUPS=NUM
           NUM is the maximum number of volatile TDB database backups to be
           kept (for each database) when a corrupt database is found during
           startup. Volatile TDBs are zeroed during startup so backups are
           needed to debug any corruption that occurs before a restart.

           Default is 10.

SYSTEM RESOURCE MONITORING

   05.system
       Provides CTDB's filesystem and memory usage monitoring.

       CTDB can experience seemingly random (performance and other) issues if
       system resources become too constrained. Options in this section can
       be enabled to allow certain system resources to be checked. They allow
       warnings to be logged and nodes to be marked unhealthy when system
       resource usage reaches the configured thresholds.

       Some checks are enabled by default. It is recommended that these
       checks remain enabled or are augmented by extra checks. There is no
       supported way of completely disabling the checks.

       CTDB_MONITOR_FILESYSTEM_USAGE=FS-LIMIT-LIST
           FS-LIMIT-LIST is a space-separated list of
           FILESYSTEM:WARN_LIMIT[:UNHEALTHY_LIMIT] triples indicating that
           warnings should be logged if the space used on FILESYSTEM reaches
           WARN_LIMIT%. If usage reaches UNHEALTHY_LIMIT then the node should
           be flagged unhealthy. Either WARN_LIMIT or UNHEALTHY_LIMIT may be
           left blank, meaning that the corresponding check will be omitted.

           Default is to warn for each filesystem containing a database
           directory (volatile database directory, persistent database
           directory, state database directory) with a threshold of 90%.

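           For example, to warn at 80% and mark the node unhealthy at 90% for
           /var, and to warn only for / (the filesystems and thresholds are
           illustrative):

               CTDB_MONITOR_FILESYSTEM_USAGE="/var:80:90 /:90"
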
       CTDB_MONITOR_MEMORY_USAGE=MEM-LIMITS
           MEM-LIMITS takes the form WARN_LIMIT[:UNHEALTHY_LIMIT] indicating
           that warnings should be logged if memory usage reaches
           WARN_LIMIT%. If usage reaches UNHEALTHY_LIMIT then the node should
           be flagged unhealthy. Either WARN_LIMIT or UNHEALTHY_LIMIT may be
           left blank, meaning that the corresponding check will be omitted.

           Default is 80, so warnings will be logged when memory usage
           reaches 80%.

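           For example, to warn at 85% memory usage and mark the node
           unhealthy at 95% (the thresholds are illustrative):

               CTDB_MONITOR_MEMORY_USAGE=85:95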

EVENT SCRIPT DEBUGGING

   debug-hung-script.sh
       CTDB_DEBUG_HUNG_SCRIPT_STACKPAT=REGEXP
           REGEXP specifies interesting processes for which stack traces
           should be logged when debugging hung eventscripts and those
           processes are matched in pstree output. REGEXP is an extended
           regexp so choices are separated by pipes ('|'). However, REGEXP
           should not contain parentheses. See also the ctdb.conf(5) [event]
           "debug script" option.

           Default is "exportfs|rpcinfo".

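           For example, to also collect stack traces for hung mount
           processes (the added name is illustrative):

               CTDB_DEBUG_HUNG_SCRIPT_STACKPAT="exportfs|rpcinfo|mount"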

FILES

       /etc/ctdb/script.options

SEE ALSO

       ctdbd(1), ctdb(7), http://ctdb.samba.org/

AUTHOR

       This documentation was written by Amitay Isaacs, Martin Schwenke

       Copyright © 2007 Andrew Tridgell, Ronnie Sahlberg

       This program is free software; you can redistribute it and/or modify
       it under the terms of the GNU General Public License as published by
       the Free Software Foundation; either version 3 of the License, or (at
       your option) any later version.

       This program is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
       General Public License for more details.

       You should have received a copy of the GNU General Public License
       along with this program; if not, see http://www.gnu.org/licenses.

NOTES

        1. NFS-Ganesha
           https://github.com/nfs-ganesha/nfs-ganesha/wiki

ctdb                              06/13/2022             CTDB-SCRIPT.OPTIONS(5)