PACEMAKER-SCHEDULER(7)      Pacemaker Configuration     PACEMAKER-SCHEDULER(7)



NAME

       pacemaker-schedulerd - Pacemaker scheduler options


SYNOPSIS

       [no-quorum-policy=select] [symmetric-cluster=boolean]
       [maintenance-mode=boolean] [start-failure-is-fatal=boolean]
       [enable-startup-probes=boolean] [shutdown-lock=boolean]
       [shutdown-lock-limit=time] [stonith-enabled=boolean]
       [stonith-action=select] [stonith-timeout=time] [have-watchdog=boolean]
       [concurrent-fencing=boolean] [startup-fencing=boolean]
       [priority-fencing-delay=time] [node-pending-timeout=time]
       [cluster-delay=time] [batch-limit=integer] [migration-limit=integer]
       [stop-all-resources=boolean] [stop-orphan-resources=boolean]
       [stop-orphan-actions=boolean] [remove-after-stop=boolean]
       [pe-error-series-max=integer] [pe-warn-series-max=integer]
       [pe-input-series-max=integer] [node-health-strategy=select]
       [node-health-base=integer] [node-health-green=integer]
       [node-health-yellow=integer] [node-health-red=integer]
       [placement-strategy=select]


DESCRIPTION

       Cluster options used by Pacemaker's scheduler


SUPPORTED PARAMETERS

       no-quorum-policy = select [stop]
           What to do when the cluster does not have quorum

           Allowed values: stop, freeze, ignore, demote, suicide

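       These options are set as cluster properties. As an illustrative
       sketch using the crm_attribute tool (the pcs and crmsh front ends
       offer equivalents; a running Pacemaker cluster is assumed):

```shell
# Freeze resource management when quorum is lost, rather than stopping
# all resources in the partition (illustrative only)
crm_attribute --type crm_config --name no-quorum-policy --update freeze

# Query the current value
crm_attribute --type crm_config --name no-quorum-policy --query
```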
       symmetric-cluster = boolean [true]
           Whether resources can run on any node by default

       maintenance-mode = boolean [false]
           Whether the cluster should refrain from monitoring, starting, and
           stopping resources

       start-failure-is-fatal = boolean [true]
           Whether a start failure should prevent a resource from being
           recovered on the same node

           When true, the cluster will immediately ban a resource from a node
           if it fails to start there. When false, the cluster will instead
           check the resource's fail count against its migration-threshold.

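       As a sketch of how this pairs with a resource's migration-threshold
       (the resource name "my-ip" is hypothetical; pcs is assumed):

```shell
# Let a resource retry starting on the same node up to 3 times
# before being banned from it (illustrative only)
pcs property set start-failure-is-fatal=false
pcs resource meta my-ip migration-threshold=3
```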
       enable-startup-probes = boolean [true]
           Whether the cluster should check for active resources during
           start-up

       shutdown-lock = boolean [false]
           Whether to lock resources to a cleanly shut down node

           When true, resources active on a node when it is cleanly shut down
           are kept "locked" to that node (not allowed to run elsewhere) until
           they start again on that node after it rejoins (or for at most
           shutdown-lock-limit, if set). Stonith resources and Pacemaker
           Remote connections are never locked. Clone and bundle instances and
           the promoted role of promotable clones are currently never locked,
           though support could be added in a future release.

       shutdown-lock-limit = time [0]
           Do not lock resources to a cleanly shut down node longer than this

           If shutdown-lock is true and this is set to a nonzero time
           duration, shutdown locks will expire after this much time has
           passed since the shutdown was initiated, even if the node has not
           rejoined.

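       For example, to lock resources to a cleanly shut down node for at
       most 30 minutes (a sketch using pcs; the duration is arbitrary):

```shell
# Keep a shut-down node's resources reserved for it, but not forever
pcs property set shutdown-lock=true
pcs property set shutdown-lock-limit=30min
```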
       stonith-enabled = boolean [true]
           *** Advanced Use Only *** Whether nodes may be fenced as part of
           recovery

           If false, unresponsive nodes are immediately assumed to be
           harmless, and resources that were active on them may be recovered
           elsewhere. This can result in a "split-brain" situation,
           potentially leading to data loss and/or service unavailability.

       stonith-action = select [reboot]
           Action to send to fence device when a node needs to be fenced
           ("poweroff" is a deprecated alias for "off")

           Allowed values: reboot, off, poweroff

       stonith-timeout = time [60s]
           *** Advanced Use Only *** Unused by Pacemaker

           This value is not used by Pacemaker, but is kept for backward
           compatibility, and certain legacy fence agents might use it.

       have-watchdog = boolean [false]
           Whether watchdog integration is enabled

           This is set automatically by the cluster according to whether SBD
           is detected to be in use. User-configured values are ignored. The
           value `true` is meaningful if diskless SBD is used and
           `stonith-watchdog-timeout` is nonzero. In that case, if fencing is
           required, watchdog-based self-fencing will be performed via SBD
           without requiring an explicitly configured fencing resource.

       concurrent-fencing = boolean [false]
           Allow performing fencing operations in parallel

       startup-fencing = boolean [true]
           *** Advanced Use Only *** Whether to fence unseen nodes at start-up

           Setting this to false may lead to a "split-brain" situation,
           potentially leading to data loss and/or service unavailability.

       priority-fencing-delay = time [0]
           Apply fencing delay targeting the lost nodes with the highest total
           resource priority

           Apply the specified delay to fencing actions targeting the lost
           nodes with the highest total resource priority when our cluster
           partition does not hold the majority of nodes, so that the more
           significant nodes are more likely to win any fencing match. This is
           especially meaningful in a split-brain of a 2-node cluster. A
           promoted resource instance counts as base priority + 1 in the
           calculation if its base priority is not 0. Any static/random delays
           introduced by `pcmk_delay_base/max` configured for the
           corresponding fencing resources are added to this delay, so this
           delay should be significantly greater than (safely, twice) the
           maximum `pcmk_delay_base/max`. By default, priority fencing delay
           is disabled.

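       The sizing guidance above (safely twice the maximum
       `pcmk_delay_base/max`) can be sketched in plain shell arithmetic; the
       per-device delay values are hypothetical:

```shell
# Hypothetical pcmk_delay_max values (in seconds) on two fence devices
delay_dev1=5
delay_dev2=10

# Take the larger of the two ...
max_delay=$(( delay_dev1 > delay_dev2 ? delay_dev1 : delay_dev2 ))

# ... and size priority-fencing-delay to twice that
recommended=$(( 2 * max_delay ))
echo "priority-fencing-delay=${recommended}s"
```

       The resulting value would then be set as a cluster property, e.g. with
       crm_attribute --name priority-fencing-delay --update 20s.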
       node-pending-timeout = time [2h]
           How long to wait for a node that has joined the cluster to join the
           controller process group

           Fence nodes that do not join the controller process group within
           this much time after joining the cluster, to allow the cluster to
           continue managing resources. A value of 0 means never fence pending
           nodes.

       cluster-delay = time [60s]
           Maximum time for node-to-node communication

           The node elected Designated Controller (DC) will consider an action
           failed if it does not get a response from the node executing the
           action within this time (after considering the action's own
           timeout). The "correct" value will depend on the speed and load of
           your network and cluster nodes.

       batch-limit = integer [0]
           Maximum number of jobs that the cluster may execute in parallel
           across all nodes

           The "correct" value will depend on the speed and load of your
           network and cluster nodes. If set to 0, the cluster will impose a
           dynamically calculated limit when any node has a high load.

       migration-limit = integer [-1]
           The number of live migration actions that the cluster is allowed to
           execute in parallel on a node (-1 means no limit)

       stop-all-resources = boolean [false]
           Whether the cluster should stop all active resources

       stop-orphan-resources = boolean [true]
           Whether to stop resources that were removed from the configuration

       stop-orphan-actions = boolean [true]
           Whether to cancel recurring actions removed from the configuration

       remove-after-stop = boolean [false]
           *** Deprecated *** Whether to remove stopped resources from the
           executor

           Values other than default are poorly tested and potentially
           dangerous. This option will be removed in a future release.

       pe-error-series-max = integer [-1]
           The number of scheduler inputs resulting in errors to save

           Zero to disable, -1 to store unlimited.

       pe-warn-series-max = integer [5000]
           The number of scheduler inputs resulting in warnings to save

           Zero to disable, -1 to store unlimited.

       pe-input-series-max = integer [4000]
           The number of scheduler inputs without errors or warnings to save

           Zero to disable, -1 to store unlimited.

       node-health-strategy = select [none]
           How the cluster should react to node health attributes

           Requires external entities to create node attributes (named with
           the prefix "#health") with values "red", "yellow", or "green".
           Allowed values: none, migrate-on-red, only-green, progressive,
           custom

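       Health attributes themselves are ordinary node attributes with a
       "#health" prefix, typically maintained by an external monitoring
       agent. An illustrative sketch (the node name "node1" and attribute
       name "#health-disk" are hypothetical):

```shell
# React to health attributes using the progressive strategy
crm_attribute --type crm_config --name node-health-strategy --update progressive

# A monitoring agent marking node1's disk as unhealthy until next reboot
crm_attribute --node node1 --name "#health-disk" --update red --lifetime reboot
```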
       node-health-base = integer [0]
           Base health score assigned to a node

           Only used when "node-health-strategy" is set to "progressive".

       node-health-green = integer [0]
           The score to use for a node health attribute whose value is "green"

           Only used when "node-health-strategy" is set to "custom" or
           "progressive".

       node-health-yellow = integer [0]
           The score to use for a node health attribute whose value is
           "yellow"

           Only used when "node-health-strategy" is set to "custom" or
           "progressive".

       node-health-red = integer [-INFINITY]
           The score to use for a node health attribute whose value is "red"

           Only used when "node-health-strategy" is set to "custom" or
           "progressive".

       placement-strategy = select [default]
           How the cluster should allocate resources to nodes

           Allowed values: default, utilization, minimal, balanced
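
       With the utilization strategy, placement also considers node
       capacities and resource requirements. A sketch using pcs (node,
       resource, and attribute names are hypothetical):

```shell
# Place resources according to configured capacities
pcs property set placement-strategy=utilization
pcs node utilization node1 cpu=8 memory=16384
pcs resource utilization my-db cpu=2 memory=4096
```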
231

AUTHOR

       Andrew Beekhof <andrew@beekhof.net>
           Author.



Pacemaker Configuration           11/03/2023            PACEMAKER-SCHEDULER(7)