PACEMAKER-CONTROLD(7)       Pacemaker Configuration      PACEMAKER-CONTROLD(7)

NAME
       pacemaker-controld - Pacemaker controller options

SYNOPSIS
       [dc-version=string] [cluster-infrastructure=string]
       [cluster-name=string] [dc-deadtime=time]
       [cluster-recheck-interval=time] [load-threshold=percentage]
       [node-action-limit=integer] [fence-reaction=string]
       [election-timeout=time] [shutdown-escalation=time]
       [join-integration-timeout=time] [join-finalization-timeout=time]
       [transition-delay=time] [stonith-watchdog-timeout=time]
       [stonith-max-attempts=integer] [no-quorum-policy=select]
       [shutdown-lock=boolean] [shutdown-lock-limit=time]
       [node-pending-timeout=time]

DESCRIPTION
       Cluster options used by Pacemaker's controller.
       dc-version = string [none]
           Pacemaker version on the cluster node elected Designated
           Controller (DC)

           Includes a hash which identifies the exact changeset the code
           was built from. Used for diagnostic purposes.

       cluster-infrastructure = string [corosync]
           The messaging stack on which Pacemaker is currently running

           Used for informational and diagnostic purposes.

       cluster-name = string
           An arbitrary name for the cluster

           This optional value is mostly for users' administrative
           convenience, but it may also be used in Pacemaker
           configuration rules via the #cluster-name node attribute, and
           by higher-level tools and resource agents.

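           As an illustrative sketch (the name "mycluster" is an example
           value, not a default), the cluster name can be set and read
           back with Pacemaker's crm_attribute tool:

```shell
# Set an arbitrary cluster name (example value "mycluster")
crm_attribute --type crm_config --name cluster-name --update mycluster

# Read the current cluster name back
crm_attribute --type crm_config --name cluster-name --query
```

           These commands require a running cluster; rules and agents
           can then reference the value via #cluster-name.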
       dc-deadtime = time [20s]
           How long to wait for a response from other nodes during
           start-up

           The optimal value will depend on the speed and load of your
           network and the type of switches used.

       cluster-recheck-interval = time [15min]
           Polling interval to recheck cluster state and evaluate rules
           with date specifications

           Pacemaker is primarily event-driven, and looks ahead to know
           when to recheck cluster state for failure timeouts and most
           time-based rules. However, it will also recheck the cluster
           after this amount of inactivity, to evaluate rules with date
           specifications and to serve as a fail-safe for certain types
           of scheduler bugs. Allowed values: zero disables polling,
           while positive values are an interval in seconds (unless
           other units are specified, for example "5min").

       load-threshold = percentage [80%]
           Maximum amount of system load that should be used by cluster
           nodes

           The cluster will slow down its recovery process when the
           amount of system resources used (currently CPU) approaches
           this limit.

       node-action-limit = integer [0]
           Maximum number of jobs that can be scheduled per node
           (defaults to 2x cores)

       fence-reaction = string [stop]
           How a cluster node should react if notified of its own
           fencing

           A cluster node may receive notification of its own fencing if
           fencing is misconfigured, or if fabric fencing is in use that
           doesn't cut cluster communication. Allowed values are "stop"
           to attempt to immediately stop Pacemaker and stay stopped, or
           "panic" to attempt to immediately reboot the local node,
           falling back to stop on failure.

       election-timeout = time [2min]
           *** Advanced Use Only ***

           Declare an election failed if it is not decided within this
           much time. If you need to adjust this value, it probably
           indicates the presence of a bug.

       shutdown-escalation = time [20min]
           *** Advanced Use Only ***

           Exit immediately if shutdown does not complete within this
           much time. If you need to adjust this value, it probably
           indicates the presence of a bug.

       join-integration-timeout = time [3min]
           *** Advanced Use Only ***

           If you need to adjust this value, it probably indicates the
           presence of a bug.

       join-finalization-timeout = time [30min]
           *** Advanced Use Only ***

           If you need to adjust this value, it probably indicates the
           presence of a bug.

       transition-delay = time [0s]
           *** Advanced Use Only *** Enabling this option will slow down
           cluster recovery under all conditions

           Delay cluster recovery for this much time to allow for
           additional events to occur. Useful if your configuration is
           sensitive to the order in which ping updates arrive.

       stonith-watchdog-timeout = time [0]
           How long before nodes can be assumed to be safely down when
           watchdog-based self-fencing via SBD is in use

           If this is set to a positive value, lost nodes are assumed to
           self-fence using watchdog-based SBD within this much time.
           This does not require a fencing resource to be explicitly
           configured, though a fence_watchdog resource can be
           configured to limit use to specific nodes. If this is set to
           0 (the default), the cluster will never assume watchdog-based
           self-fencing. If this is set to a negative value, the cluster
           will use twice the local value of the SBD_WATCHDOG_TIMEOUT
           environment variable if that is positive, or otherwise treat
           this as 0. WARNING: When used, this timeout must be larger
           than SBD_WATCHDOG_TIMEOUT on all nodes that use
           watchdog-based SBD, and Pacemaker will refuse to start on any
           of those nodes where this is not true for the local value or
           where SBD is not active. When this is set to a negative
           value, SBD_WATCHDOG_TIMEOUT must be set to the same value on
           all nodes that use SBD, otherwise data corruption or loss
           could occur.

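           The sign handling described above can be paraphrased as a
           small shell helper (illustrative only; the function name is
           invented and is not part of Pacemaker or SBD):

```shell
# Illustrative sketch of the documented sign handling: given the
# configured stonith-watchdog-timeout and the node's local
# SBD_WATCHDOG_TIMEOUT (both in seconds), print the timeout the
# cluster would effectively assume.
effective_watchdog_timeout() {
    configured=$1
    sbd_timeout=$2
    if [ "$configured" -gt 0 ]; then
        # Positive: used as-is
        echo "$configured"
    elif [ "$configured" -lt 0 ] && [ "$sbd_timeout" -gt 0 ]; then
        # Negative: twice the local SBD_WATCHDOG_TIMEOUT, if positive
        echo $((2 * sbd_timeout))
    else
        # Zero, or negative with no usable SBD_WATCHDOG_TIMEOUT: disabled
        echo 0
    fi
}

effective_watchdog_timeout -1 5   # prints 10
```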
       stonith-max-attempts = integer [10]
           How many times fencing can fail before it will no longer be
           immediately re-attempted on a target

       no-quorum-policy = select [stop]
           What to do when the cluster does not have quorum

           Allowed values: stop, freeze, ignore, demote, suicide

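           For illustration ("freeze" is just one of the allowed values;
           pick the policy that matches your data-integrity needs), the
           policy could be changed with crm_attribute:

```shell
# Without quorum, keep already-active resources running
# but do not start or recover any others
crm_attribute --type crm_config --name no-quorum-policy --update freeze
```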
       shutdown-lock = boolean [false]
           Whether to lock resources to a cleanly shut down node

           When true, resources active on a node when it is cleanly shut
           down are kept "locked" to that node (not allowed to run
           elsewhere) until they start again on that node after it
           rejoins (or for at most shutdown-lock-limit, if set). Stonith
           resources and Pacemaker Remote connections are never locked.
           Clone and bundle instances and the promoted role of
           promotable clones are currently never locked, though support
           could be added in a future release.

       shutdown-lock-limit = time [0]
           Do not lock resources to a cleanly shut down node longer than
           this

           If shutdown-lock is true and this is set to a nonzero time
           duration, shutdown locks will expire after this much time has
           passed since the shutdown was initiated, even if the node has
           not rejoined.

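           A sketch of enabling both options together (the 30-minute
           limit is an arbitrary example, not a recommendation):

```shell
# Lock resources to a cleanly shut down node, but give up the lock
# if the node has not rejoined within 30 minutes
crm_attribute --type crm_config --name shutdown-lock --update true
crm_attribute --type crm_config --name shutdown-lock-limit --update 30min
```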
       node-pending-timeout = time [0]
           How long to wait for a node that has joined the cluster to
           join the controller process group

           Fence nodes that do not join the controller process group
           within this much time after joining the cluster, to allow the
           cluster to continue managing resources. A value of 0 means
           pending nodes are never fenced; for example, a value of 2h
           means pending nodes are fenced after two hours.

AUTHOR
       Andrew Beekhof <andrew@beekhof.net>
           Author.

Pacemaker Configuration           11/27/2023          PACEMAKER-CONTROLD(7)