PACEMAKER-CONTROLD(7)      Pacemaker Configuration     PACEMAKER-CONTROLD(7)


NAME
       pacemaker-controld - Pacemaker controller options

SYNOPSIS
       [dc-version=string] [cluster-infrastructure=string]
       [cluster-name=string] [dc-deadtime=time]
       [cluster-recheck-interval=time] [load-threshold=percentage]
       [node-action-limit=integer] [fence-reaction=string]
       [election-timeout=time] [shutdown-escalation=time]
       [join-integration-timeout=time] [join-finalization-timeout=time]
       [transition-delay=time] [stonith-watchdog-timeout=time]
       [stonith-max-attempts=integer] [no-quorum-policy=select]
       [shutdown-lock=boolean] [shutdown-lock-limit=time]

DESCRIPTION
       Cluster options used by Pacemaker's controller

SUPPORTED PARAMETERS
       dc-version = string [none]
           Pacemaker version on the cluster node elected Designated
           Controller (DC)

           Includes a hash which identifies the exact changeset the code
           was built from. Used for diagnostic purposes.

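           For example, the value reported by the current DC can be
           queried with crm_attribute(8):

               # Show the Pacemaker version recorded by the elected DC
               crm_attribute --type crm_config --name dc-version --query
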
       cluster-infrastructure = string [corosync]
           The messaging stack on which Pacemaker is currently running

           Used for informational and diagnostic purposes.

       cluster-name = string
           An arbitrary name for the cluster

           This optional value is mostly for users' convenience in
           administration, but it may also be used in Pacemaker
           configuration rules via the #cluster-name node attribute, and
           by higher-level tools and resource agents.

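           For example, a name can be set with crm_attribute(8) (the
           name "mycluster" below is an arbitrary placeholder):

               # Name the cluster; rules can then match it via the
               # #cluster-name node attribute
               crm_attribute --type crm_config --name cluster-name \
                   --update mycluster
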
       dc-deadtime = time [20s]
           How long to wait for a response from other nodes during
           start-up

           The optimal value will depend on the speed and load of your
           network and the type of switches used.

       cluster-recheck-interval = time [15min]
           Polling interval to recheck cluster state and evaluate rules
           with date specifications

           Pacemaker is primarily event-driven, and looks ahead to know
           when to recheck cluster state for failure timeouts and most
           time-based rules. However, it will also recheck the cluster
           after this amount of inactivity, to evaluate rules with date
           specifications and serve as a fail-safe for certain types of
           scheduler bugs. Allowed values: zero disables polling, while
           positive values are an interval in seconds (unless other
           units are specified, for example "5min").

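           For example, to poll more frequently than the default,
           specifying the interval with explicit units:

               # Recheck cluster state after 5 minutes of inactivity
               crm_attribute --type crm_config \
                   --name cluster-recheck-interval --update 5min
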
       load-threshold = percentage [80%]
           Maximum amount of system load that should be used by cluster
           nodes

           The cluster will slow down its recovery process when the
           amount of system resources used (currently CPU) approaches
           this limit.

       node-action-limit = integer [0]
           Maximum number of jobs that can be scheduled per node
           (defaults to 2x cores)

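           A minimal sketch of tuning both throttling options (the
           values shown are arbitrary examples, not recommendations):

               # Throttle recovery when CPU usage reaches 60%
               crm_attribute --type crm_config --name load-threshold \
                   --update 60%
               # Cap scheduled jobs at 4 per node instead of 2x cores
               crm_attribute --type crm_config --name node-action-limit \
                   --update 4
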
       fence-reaction = string [stop]
           How a cluster node should react if notified of its own
           fencing

           A cluster node may receive notification of its own fencing if
           fencing is misconfigured, or if fabric fencing is in use that
           doesn't cut cluster communication. Allowed values are "stop"
           to attempt to immediately stop Pacemaker and stay stopped, or
           "panic" to attempt to immediately reboot the local node,
           falling back to stop on failure.

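           For example, to prefer a reboot over stopping Pacemaker when
           the node learns of its own fencing:

               # React to notification of our own fencing by rebooting
               crm_attribute --type crm_config --name fence-reaction \
                   --update panic
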
       election-timeout = time [2min]
           *** Advanced Use Only ***

           Declare an election failed if it is not decided within this
           much time. If you need to adjust this value, it probably
           indicates the presence of a bug.

       shutdown-escalation = time [20min]
           *** Advanced Use Only ***

           Exit immediately if shutdown does not complete within this
           much time. If you need to adjust this value, it probably
           indicates the presence of a bug.

       join-integration-timeout = time [3min]
           *** Advanced Use Only ***

           If you need to adjust this value, it probably indicates the
           presence of a bug.

       join-finalization-timeout = time [30min]
           *** Advanced Use Only ***

           If you need to adjust this value, it probably indicates the
           presence of a bug.

       transition-delay = time [0s]
           *** Advanced Use Only *** Enabling this option will slow down
           cluster recovery under all conditions

           Delay cluster recovery for this much time to allow for
           additional events to occur. Useful if your configuration is
           sensitive to the order in which ping updates arrive.

       stonith-watchdog-timeout = time [0]
           How long before nodes can be assumed to be safely down when
           watchdog-based self-fencing via SBD is in use

           If this is set to a positive value, lost nodes are assumed to
           self-fence using watchdog-based SBD within this much time.
           This does not require a fencing resource to be explicitly
           configured, though a fence_watchdog resource can be
           configured to limit use to specific nodes. If this is set to
           0 (the default), the cluster will never assume watchdog-based
           self-fencing. If this is set to a negative value, the cluster
           will use twice the local value of the `SBD_WATCHDOG_TIMEOUT`
           environment variable if that is positive, or otherwise treat
           this as 0. WARNING: When used, this timeout must be larger
           than `SBD_WATCHDOG_TIMEOUT` on all nodes that use
           watchdog-based SBD, and Pacemaker will refuse to start on any
           of those nodes where this is not true for the local value or
           SBD is not active. When this is set to a negative value,
           `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all
           nodes that use SBD, otherwise data corruption or loss could
           occur.

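           A minimal sketch of enabling this with watchdog-based SBD
           (the 30s value is an arbitrary example and must exceed
           SBD_WATCHDOG_TIMEOUT on every node that uses SBD):

               # Check the local SBD watchdog timeout first
               # (the config file path varies by distribution)
               grep SBD_WATCHDOG_TIMEOUT /etc/sysconfig/sbd
               # Assume lost nodes have self-fenced after this long
               crm_attribute --type crm_config \
                   --name stonith-watchdog-timeout --update 30s
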
       stonith-max-attempts = integer [10]
           How many times fencing can fail before it will no longer be
           immediately re-attempted on a target

       no-quorum-policy = select [stop]
           What to do when the cluster does not have quorum

           Allowed values: stop, freeze, ignore, demote, suicide

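           For example, to freeze resource management instead of
           stopping resources when quorum is lost:

               # Keep already-running resources going without quorum,
               # but do not recover resources from unseen nodes
               crm_attribute --type crm_config --name no-quorum-policy \
                   --update freeze
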
       shutdown-lock = boolean [false]
           Whether to lock resources to a cleanly shut down node

           When true, resources active on a node when it is cleanly shut
           down are kept "locked" to that node (not allowed to run
           elsewhere) until they start again on that node after it
           rejoins (or for at most shutdown-lock-limit, if set). Stonith
           resources and Pacemaker Remote connections are never locked.
           Clone and bundle instances and the promoted role of
           promotable clones are currently never locked, though support
           could be added in a future release.

       shutdown-lock-limit = time [0]
           Do not lock resources to a cleanly shut down node longer than
           this

           If shutdown-lock is true and this is set to a nonzero time
           duration, shutdown locks will expire after this much time has
           passed since the shutdown was initiated, even if the node has
           not rejoined.

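           A minimal sketch of enabling shutdown locks with an expiry
           (the 30-minute limit is an arbitrary example):

               # Keep resources off other nodes during clean shutdowns
               crm_attribute --type crm_config --name shutdown-lock \
                   --update true
               # ...but give up the lock after 30 minutes
               crm_attribute --type crm_config \
                   --name shutdown-lock-limit --update 30min
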
AUTHOR
       Andrew Beekhof <andrew@beekhof.net>
           Author.



Pacemaker Configuration           11/03/2023          PACEMAKER-CONTROLD(7)