PACEMAKER-SCHEDULER(7)      Pacemaker Configuration      PACEMAKER-SCHEDULER(7)


NAME
       pacemaker-schedulerd - Pacemaker scheduler options

SYNOPSIS
       [no-quorum-policy=select] [symmetric-cluster=boolean]
       [maintenance-mode=boolean] [start-failure-is-fatal=boolean]
       [enable-startup-probes=boolean] [shutdown-lock=boolean]
       [shutdown-lock-limit=time] [stonith-enabled=boolean]
       [stonith-action=select] [stonith-timeout=time] [have-watchdog=boolean]
       [concurrent-fencing=boolean] [startup-fencing=boolean]
       [priority-fencing-delay=time] [cluster-delay=time]
       [batch-limit=integer] [migration-limit=integer]
       [stop-all-resources=boolean] [stop-orphan-resources=boolean]
       [stop-orphan-actions=boolean] [remove-after-stop=boolean]
       [pe-error-series-max=integer] [pe-warn-series-max=integer]
       [pe-input-series-max=integer] [node-health-strategy=select]
       [node-health-base=integer] [node-health-green=integer]
       [node-health-yellow=integer] [node-health-red=integer]
       [placement-strategy=select]

DESCRIPTION
       Cluster options used by Pacemaker's scheduler

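       These options are cluster-wide properties stored in the crm_config
       section of the CIB. As a minimal, hedged sketch (the property-set and
       nvpair IDs below are arbitrary), an option can be set either with the
       crm_attribute tool or directly as an nvpair:

           # set a scheduler option from the command line
           crm_attribute --type crm_config --name no-quorum-policy --update freeze

           # equivalent CIB XML
           <crm_config>
             <cluster_property_set id="cib-bootstrap-options">
               <nvpair id="opt-no-quorum-policy" name="no-quorum-policy"
                       value="freeze"/>
             </cluster_property_set>
           </crm_config>
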
SUPPORTED PARAMETERS
       no-quorum-policy = select [stop]
           What to do when the cluster does not have quorum

           Allowed values: stop, freeze, ignore, demote, suicide

       symmetric-cluster = boolean [true]
           Whether resources can run on any node by default

       maintenance-mode = boolean [false]
           Whether the cluster should refrain from monitoring, starting, and
           stopping resources

       start-failure-is-fatal = boolean [true]
           Whether a start failure should prevent a resource from being
           recovered on the same node

           When true, the cluster will immediately ban a resource from a node
           if it fails to start there. When false, the cluster will instead
           check the resource's fail count against its migration-threshold.

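           For example (a hedged sketch; the resource name "my-ip" is
           illustrative), allowing a few start retries on the same node
           before moving elsewhere requires disabling this option and setting
           a migration-threshold on the resource:

               crm_attribute --type crm_config --name start-failure-is-fatal --update false
               crm_resource --resource my-ip --meta --set-parameter migration-threshold --parameter-value 3
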
       enable-startup-probes = boolean [true]
           Whether the cluster should check for active resources during
           start-up

       shutdown-lock = boolean [false]
           Whether to lock resources to a cleanly shut down node

           When true, resources active on a node when it is cleanly shut down
           are kept "locked" to that node (not allowed to run elsewhere) until
           they start again on that node after it rejoins (or for at most
           shutdown-lock-limit, if set). Stonith resources and Pacemaker
           Remote connections are never locked. Clone and bundle instances and
           the promoted role of promotable clones are currently never locked,
           though support could be added in a future release.

       shutdown-lock-limit = time [0]
           Do not lock resources to a cleanly shut down node longer than this

           If shutdown-lock is true and this is set to a nonzero time
           duration, shutdown locks will expire after this much time has
           passed since the shutdown was initiated, even if the node has not
           rejoined.

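           As a hedged illustration of combining the two options (the nvpair
           IDs are arbitrary), the following keeps resources locked to a
           cleanly shut down node for at most two hours:

               <cluster_property_set id="cib-bootstrap-options">
                 <nvpair id="opt-shutdown-lock" name="shutdown-lock"
                         value="true"/>
                 <nvpair id="opt-shutdown-lock-limit" name="shutdown-lock-limit"
                         value="2h"/>
               </cluster_property_set>
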
       stonith-enabled = boolean [true]
           *** Advanced Use Only *** Whether nodes may be fenced as part of
           recovery

           If false, unresponsive nodes are immediately assumed to be
           harmless, and resources that were active on them may be recovered
           elsewhere. This can result in a "split-brain" situation,
           potentially leading to data loss and/or service unavailability.

       stonith-action = select [reboot]
           Action to send to fence device when a node needs to be fenced
           ("poweroff" is a deprecated alias for "off")

           Allowed values: reboot, off, poweroff

       stonith-timeout = time [60s]
           *** Advanced Use Only *** Unused by Pacemaker

           This value is not used by Pacemaker, but is kept for backward
           compatibility, and certain legacy fence agents might use it.

       have-watchdog = boolean [false]
           Whether watchdog integration is enabled

           This is set automatically by the cluster according to whether SBD
           is detected to be in use; user-configured values are ignored. The
           value `true` is meaningful if diskless SBD is used and
           `stonith-watchdog-timeout` is nonzero. In that case, if fencing is
           required, watchdog-based self-fencing will be performed via SBD
           without requiring an explicitly configured fencing resource.

       concurrent-fencing = boolean [false]
           Allow performing fencing operations in parallel

       startup-fencing = boolean [true]
           *** Advanced Use Only *** Whether to fence unseen nodes at start-up

           Setting this to false may lead to a "split-brain" situation,
           potentially leading to data loss and/or service unavailability.

       priority-fencing-delay = time [0]
           Apply fencing delay targeting the lost nodes with the highest total
           resource priority

           Apply the specified delay to fencing actions targeting lost nodes
           with the highest total resource priority when our cluster partition
           does not hold the majority of nodes, so that the more significant
           nodes are more likely to win any fencing race. This is especially
           meaningful in a split-brain situation in a two-node cluster. A
           promoted resource instance counts as base priority + 1 in this
           calculation if its base priority is not 0. Any static or random
           delays introduced by `pcmk_delay_base/max` configured for the
           corresponding fencing resources are added to this delay, so this
           delay should be significantly greater than (safely, twice) the
           maximum `pcmk_delay_base/max`. By default, priority fencing delay
           is disabled.

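           A hedged sketch of a typical two-node setup (the resource name
           "vip" and the values are illustrative): give the important
           resource a priority, and choose a delay larger than any
           `pcmk_delay_base/max` on the fence devices:

               crm_resource --resource vip --meta --set-parameter priority --parameter-value 10
               crm_attribute --type crm_config --name priority-fencing-delay --update 60s
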
       cluster-delay = time [60s]
           Maximum time for node-to-node communication

           The node elected Designated Controller (DC) will consider an action
           failed if it does not get a response from the node executing the
           action within this time (after considering the action's own
           timeout). The "correct" value will depend on the speed and load of
           your network and cluster nodes.

       batch-limit = integer [0]
           Maximum number of jobs that the cluster may execute in parallel
           across all nodes

           The "correct" value will depend on the speed and load of your
           network and cluster nodes. If set to 0, the cluster will impose a
           dynamically calculated limit when any node has a high load.

       migration-limit = integer [-1]
           The number of live migration actions that the cluster is allowed to
           execute in parallel on a node (-1 means no limit)

       stop-all-resources = boolean [false]
           Whether the cluster should stop all active resources

       stop-orphan-resources = boolean [true]
           Whether to stop resources that were removed from the configuration

       stop-orphan-actions = boolean [true]
           Whether to cancel recurring actions removed from the configuration

       remove-after-stop = boolean [false]
           *** Deprecated *** Whether to remove stopped resources from the
           executor

           Values other than default are poorly tested and potentially
           dangerous. This option will be removed in a future release.

       pe-error-series-max = integer [-1]
           The number of scheduler inputs resulting in errors to save

           Zero to disable, -1 to store unlimited.

       pe-warn-series-max = integer [5000]
           The number of scheduler inputs resulting in warnings to save

           Zero to disable, -1 to store unlimited.

       pe-input-series-max = integer [4000]
           The number of scheduler inputs without errors or warnings to save

           Zero to disable, -1 to store unlimited.

       node-health-strategy = select [none]
           How the cluster should react to node health attributes

           Requires external entities to create node attributes (named with
           the prefix "#health") with values "red", "yellow", or "green".
           Allowed values: none, migrate-on-red, only-green, progressive,
           custom

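           As a hedged illustration (the attribute name "#health-smart" and
           node name are examples), a monitoring agent or administrator might
           set a transient health attribute and have the cluster move
           resources away from unhealthy nodes:

               crm_attribute --type crm_config --name node-health-strategy --update migrate-on-red
               crm_attribute --node node1 --lifetime reboot --name "#health-smart" --update red
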
       node-health-base = integer [0]
           Base health score assigned to a node

           Only used when node-health-strategy is set to progressive.

       node-health-green = integer [0]
           The score to use for a node health attribute whose value is "green"

           Only used when node-health-strategy is set to custom or
           progressive.

       node-health-yellow = integer [0]
           The score to use for a node health attribute whose value is
           "yellow"

           Only used when node-health-strategy is set to custom or
           progressive.

       node-health-red = integer [-INFINITY]
           The score to use for a node health attribute whose value is "red"

           Only used when node-health-strategy is set to custom or
           progressive.

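           A hedged sketch of a progressive strategy (all IDs and scores are
           illustrative): start every node at a base score and let "yellow"
           and "red" health attributes subtract from it:

               <cluster_property_set id="cib-bootstrap-options">
                 <nvpair id="opt-health-strategy" name="node-health-strategy"
                         value="progressive"/>
                 <nvpair id="opt-health-base" name="node-health-base" value="100"/>
                 <nvpair id="opt-health-yellow" name="node-health-yellow" value="-10"/>
                 <nvpair id="opt-health-red" name="node-health-red" value="-200"/>
               </cluster_property_set>
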
       placement-strategy = select [default]
           How the cluster should allocate resources to nodes

           Allowed values: default, utilization, minimal, balanced

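           A hedged sketch of utilization-based placement (node, resource, and
           attribute names are illustrative): declare node capacity and
           resource requirements for a "cpu" attribute, then switch the
           strategy:

               crm_attribute --type crm_config --name placement-strategy --update balanced
               crm_resource --resource big-db --utilization --set-parameter cpu --parameter-value 4

               <node id="1" uname="node1">
                 <utilization id="node1-utilization">
                   <nvpair id="node1-cpu" name="cpu" value="8"/>
                 </utilization>
               </node>
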
AUTHOR
       Andrew Beekhof <andrew@beekhof.net>
           Author.



Pacemaker Configuration            12/08/2022           PACEMAKER-SCHEDULER(7)