1 PCS(8) System Administration Utilities PCS(8)
2
3
4
5 NAME
6 pcs - pacemaker/corosync configuration system
7
8 SYNOPSIS
9 pcs [-f file] [-h] [commands]...
10
11 DESCRIPTION
12 Control and configure pacemaker and corosync.
13
14 OPTIONS
15 -h, --help
16 Display usage and exit.
17
18 -f file
19 Perform actions on file instead of active CIB.
20 Commands supporting the option use the initial state of the
21 specified file as their input and then overwrite the file with
22 the state reflecting the requested operation(s).
23 A few commands only use the specified file in read-only mode
24 since their effect is not a CIB modification.
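
Example (an illustrative workflow; the file name and resource name are placeholders): save the CIB to a file, modify it offline with -f, then push it back:
pcs cluster cib > tmp-cib.xml
pcs -f tmp-cib.xml resource create TestVIP ocf:heartbeat:IPaddr2 ip=192.168.0.98
pcs cluster cib-push tmp-cib.xml --config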
25
26 --debug
27 Print all network traffic and external commands run.
28
29 --version
30 Print pcs version information. List pcs capabilities if --full
31 is specified.
32
33 --request-timeout=<timeout>
34 Timeout for each outgoing request to another node in seconds.
35 Default is 60s.
36
37 Commands:
38 cluster
39 Configure cluster options and nodes.
40
41 resource
42 Manage cluster resources.
43
44 stonith
45 Manage fence devices.
46
47 constraint
48 Manage resource constraints.
49
50 property
51 Manage pacemaker properties.
52
53 acl
54 Manage pacemaker access control lists.
55
56 qdevice
57 Manage quorum device provider on the local host.
58
59 quorum
60 Manage cluster quorum settings.
61
62 booth
63 Manage booth (cluster ticket manager).
64
65 status
66 View cluster status.
67
68 config
69 View and manage cluster configuration.
70
71 pcsd
72 Manage pcs daemon.
73
74 host
75 Manage hosts known to pcs/pcsd.
76
77 node
78 Manage cluster nodes.
79
80 alert
81 Manage pacemaker alerts.
82
83 client
84 Manage pcsd client configuration.
85
86 dr
87 Manage disaster recovery configuration.
88
89 tag
90 Manage pacemaker tags.
91
92 resource
93 [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
94 Show status of all currently configured resources. If --hide-in‐
95 active is specified, only show active resources. If a resource
96 or tag id is specified, only show status of the specified re‐
97 source or resources in the specified tag. If node is specified,
98 only show status of resources configured for the specified node.
99
100 config [<resource id>]...
101 Show options of all currently configured resources or if re‐
102 source ids are specified show the options for the specified re‐
103 source ids.
104
105 list [filter] [--nodesc]
106 Show list of all available resource agents (if filter is pro‐
107 vided then only resource agents matching the filter will be
108 shown). If --nodesc is used then descriptions of resource agents
109 are not printed.
110
111 describe [<standard>:[<provider>:]]<type> [--full]
112 Show options for the specified resource. If --full is specified,
113 all options including advanced and deprecated ones are shown.
114
115 create <resource id> [<standard>:[<provider>:]]<type> [resource op‐
116 tions] [op <operation action> <operation options> [<operation action>
117 <operation options>]...] [meta <meta options>...] [clone [<clone id>]
118 [<clone options>] | promotable [<clone id>] [<promotable options>] |
119 --group <group id> [--before <resource id> | --after <resource id>] |
120 bundle <bundle id>] [--disabled] [--no-default-ops] [--wait[=n]]
121 Create specified resource. If clone is used a clone resource is
122 created. If promotable is used a promotable clone resource is
123 created. If --group is specified the resource is added to the
124 group named. You can use --before or --after to specify the po‐
125 sition of the added resource relatively to some resource already
126 existing in the group. If bundle is specified, resource will be
127 created inside of the specified bundle. If --disabled is speci‐
128 fied the resource is not started automatically. If --no-de‐
129 fault-ops is specified, only monitor operations are created for
130 the resource and all other operations use default settings. If
131 --wait is specified, pcs will wait up to 'n' seconds for the re‐
132 source to start and then return 0 if the resource is started, or
133 1 if the resource has not yet started. If 'n' is not specified
134 it defaults to 60 minutes.
135
136 Example: Create a new resource called 'VirtualIP' with IP ad‐
137 dress 192.168.0.99, netmask of 32, monitored every 30 seconds,
138 on eth2: pcs resource create VirtualIP ocf:heart‐
139 beat:IPaddr2 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor
140 interval=30s
141
142 delete <resource id|group id|bundle id|clone id>
143 Deletes the resource, group, bundle or clone (and all resources
144 within the group/bundle/clone).
145
146 remove <resource id|group id|bundle id|clone id>
147 Deletes the resource, group, bundle or clone (and all resources
148 within the group/bundle/clone).
149
150 enable <resource id | tag id>... [--wait[=n]]
151 Allow the cluster to start the resources. Depending on the rest
152 of the configuration (constraints, options, failures, etc), the
153 resources may remain stopped. If --wait is specified, pcs will
154 wait up to 'n' seconds for the resources to start and then re‐
155 turn 0 if the resources are started, or 1 if the resources have
156 not yet started. If 'n' is not specified it defaults to 60 min‐
157 utes.
158
159 disable <resource id | tag id>... [--safe [--brief] [--no-strict]]
160 [--simulate [--brief]] [--wait[=n]]
161 Attempt to stop the resources if they are running and forbid the
162 cluster from starting them again. Depending on the rest of the
163 configuration (constraints, options, failures, etc), the re‐
164 sources may remain started.
165 If --safe is specified, no changes to the cluster configuration
166 will be made if any resources other than the specified ones
167 would be affected in any way. If --brief is also specified, only
168 errors are printed.
169 If --no-strict is specified, no changes to the cluster configu‐
170 ration will be made if any resources other than the specified
171 ones would get stopped or demoted. Moving resources between nodes is allowed.
172 If --simulate is specified, no changes to the cluster configura‐
173 tion will be made and the effect of the changes will be printed
174 instead. If --brief is also specified, only a list of affected
175 resources will be printed.
176 If --wait is specified, pcs will wait up to 'n' seconds for the
177 resources to stop and then return 0 if the resources are stopped
178 or 1 if the resources have not stopped. If 'n' is not specified
179 it defaults to 60 minutes.
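
Example (illustrative; 'WebServer' is a placeholder resource id): stop a resource only if no other resources would be affected and wait up to 60 seconds for it to stop:
pcs resource disable WebServer --safe --wait=60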
180
181 safe-disable <resource id | tag id>... [--brief] [--no-strict] [--simu‐
182 late [--brief]] [--wait[=n]] [--force]
183 Attempt to stop the resources if they are running and forbid the
184 cluster from starting them again. Depending on the rest of the
185 configuration (constraints, options, failures, etc), the re‐
186 sources may remain started. No changes to the cluster configura‐
187 tion will be made if any resources other than the specified ones
188 would be affected in any way.
189 If --brief is specified, only errors are printed.
190 If --no-strict is specified, no changes to the cluster configu‐
191 ration will be made if any resources other than the specified
192 ones would get stopped or demoted. Moving resources between nodes is allowed.
193 If --simulate is specified, no changes to the cluster configura‐
194 tion will be made and the effect of the changes will be printed
195 instead. If --brief is also specified, only a list of affected
196 resources will be printed.
197 If --wait is specified, pcs will wait up to 'n' seconds for the
198 resources to stop and then return 0 if the resources are stopped
199 or 1 if the resources have not stopped. If 'n' is not specified
200 it defaults to 60 minutes.
201 If --force is specified, checks for safe disable will be
202 skipped.
203
204 restart <resource id> [node] [--wait=n]
205 Restart the resource specified. If a node is specified and if
206 the resource is a clone or bundle it will be restarted only on
207 the node specified. If --wait is specified, pcs will wait up
208 to 'n' seconds for the resource to be restarted and return 0 if
209 the restart was successful or 1 if it was not.
210
211 debug-start <resource id> [--full]
212 This command will force the specified resource to start on this
213 node ignoring the cluster recommendations and print the output
214 from starting the resource. Using --full will give more de‐
215 tailed output. This is mainly used for debugging resources that
216 fail to start.
217
218 debug-stop <resource id> [--full]
219 This command will force the specified resource to stop on this
220 node ignoring the cluster recommendations and print the output
221 from stopping the resource. Using --full will give more de‐
222 tailed output. This is mainly used for debugging resources that
223 fail to stop.
224
225 debug-promote <resource id> [--full]
226 This command will force the specified resource to be promoted on
227 this node ignoring the cluster recommendations and print the
228 output from promoting the resource. Using --full will give more
229 detailed output. This is mainly used for debugging resources
230 that fail to promote.
231
232 debug-demote <resource id> [--full]
233 This command will force the specified resource to be demoted on
234 this node ignoring the cluster recommendations and print the
235 output from demoting the resource. Using --full will give more
236 detailed output. This is mainly used for debugging resources
237 that fail to demote.
238
239 debug-monitor <resource id> [--full]
240 This command will force the specified resource to be monitored
241 on this node ignoring the cluster recommendations and print the
242 output from monitoring the resource. Using --full will give
243 more detailed output. This is mainly used for debugging re‐
244 sources that fail to be monitored.
245
246 move <resource id> [destination node] [--master] [[lifetime=<lifetime>]
247 | [--autodelete [--strict]]] [--wait[=n]]
248 Move the resource off the node it is currently running on by
249 creating a -INFINITY location constraint to ban the node. If
250 destination node is specified the resource will be moved to that
251 node by creating an INFINITY location constraint to prefer the
252 destination node. If --master is used the scope of the command
253 is limited to the master role and you must use the promotable
254 clone id (instead of the resource id).
255
256 If lifetime is specified then the constraint will expire after
257 that time, otherwise it defaults to infinity and the constraint
258 can be cleared manually with 'pcs resource clear' or 'pcs con‐
259 straint delete'. Lifetime is expected to be specified as ISO
260 8601 duration (see https://en.wikipedia.org/wiki/ISO_8601#Dura‐
261 tions).
262
263 If --autodelete is specified, a constraint needed for moving the
264 resource will be automatically removed once the resource is run‐
265 ning on its new location. The command will fail in case it is
266 not possible to verify that the resource will not be moved after
267 deleting the constraint. If --strict is specified, the command
268 will also fail if other resources would be affected. NOTE: This
269 feature is still being worked on and thus may be changed in the
270 future.
271
272 If --wait is specified, pcs will wait up to 'n' seconds for the
273 resource to move and then return 0 on success or 1 on error. If
274 'n' is not specified it defaults to 60 minutes.
275
276 If you want the resource to preferably avoid running on some
277 nodes but be able to failover to them use 'pcs constraint loca‐
278 tion avoids'.
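
Example (illustrative; resource and node names are placeholders): move a resource to node2 with a constraint that expires after one hour:
pcs resource move WebServer node2 lifetime=PT1H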
279
280 ban <resource id> [node] [--master] [lifetime=<lifetime>] [--wait[=n]]
281 Prevent the resource id specified from running on the node (or
282 on the current node it is running on if no node is specified) by
283 creating a -INFINITY location constraint. If --master is used
284 the scope of the command is limited to the master role and you
285 must use the promotable clone id (instead of the resource id).
286
287 If lifetime is specified then the constraint will expire after
288 that time, otherwise it defaults to infinity and the constraint
289 can be cleared manually with 'pcs resource clear' or 'pcs con‐
290 straint delete'. Lifetime is expected to be specified as ISO
291 8601 duration (see https://en.wikipedia.org/wiki/ISO_8601#Dura‐
292 tions).
293
294 If --wait is specified, pcs will wait up to 'n' seconds for the
295 resource to move and then return 0 on success or 1 on error. If
296 'n' is not specified it defaults to 60 minutes.
297
298 If you want the resource to preferably avoid running on some
299 nodes but be able to failover to them use 'pcs constraint loca‐
300 tion avoids'.
301
302 clear <resource id> [node] [--master] [--expired] [--wait[=n]]
303 Remove constraints created by move and/or ban on the specified
304 resource (and node if specified). If --master is used the scope
305 of the command is limited to the master role and you must use
306 the master id (instead of the resource id). If --expired is
307 specified, only constraints with expired lifetimes will be re‐
308 moved. If --wait is specified, pcs will wait up to 'n' seconds
309 for the operation to finish (including starting and/or moving
310 resources if appropriate) and then return 0 on success or 1 on
311 error. If 'n' is not specified it defaults to 60 minutes.
312
313 standards
314 List available resource agent standards supported by this in‐
315 stallation (OCF, LSB, etc.).
316
317 providers
318 List available OCF resource agent providers.
319
320 agents [standard[:provider]]
321 List available agents optionally filtered by standard and
322 provider.
323
324 update <resource id> [resource options] [op [<operation action> <opera‐
325 tion options>]...] [meta <meta options>...] [--wait[=n]]
326 Add/Change options to specified resource, clone or multi-state
327 resource. If an operation (op) is specified it will update the
328 first found operation with the same action on the specified re‐
329 source, if no operation with that action exists then a new oper‐
330 ation will be created. (WARNING: all existing options on the
331 updated operation will be reset if not specified.) If you want
332 to create multiple monitor operations you should use the 'op
333 add' & 'op remove' commands. If --wait is specified, pcs will
334 wait up to 'n' seconds for the changes to take effect and then
335 return 0 if the changes have been processed or 1 otherwise. If
336 'n' is not specified it defaults to 60 minutes.
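
Example (illustrative, reusing the 'VirtualIP' resource from the create example): change the IP address and the monitor interval:
pcs resource update VirtualIP ip=192.168.0.100 op monitor interval=60s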
337
338 op add <resource id> <operation action> [operation properties]
339 Add operation for specified resource.
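
Example (illustrative, reusing the 'VirtualIP' resource; the operation properties shown are placeholders): add a second monitor operation with a different interval:
pcs resource op add VirtualIP monitor interval=60s timeout=20s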
340
341 op delete <resource id> <operation action> [<operation properties>...]
342 Remove specified operation (note: you must specify the exact op‐
343 eration properties to properly remove an existing operation).
344
345 op delete <operation id>
346 Remove the specified operation id.
347
348 op remove <resource id> <operation action> [<operation properties>...]
349 Remove specified operation (note: you must specify the exact op‐
350 eration properties to properly remove an existing operation).
351
352 op remove <operation id>
353 Remove the specified operation id.
354
355 op defaults [config] [--all] [--full] [--no-expire-check]
356 List currently configured default values for operations. If
357 --all is specified, also list expired sets of values. If --full
358 is specified, also list ids. If --no-expire-check is specified,
359 do not evaluate whether sets of values are expired.
360
361 op defaults <name>=<value>
362 Set default values for operations.
363 NOTE: Defaults do not apply to resources which override them
364 with their own defined values.
365
366 op defaults set create [<set options>] [meta [<name>=<value>]...] [rule
367 [<expression>]]
368 Create a new set of default values for resource operations. You
369 may specify a rule describing resources and / or operations to
370 which the set applies.
371
372 Set options are: id, score
373
374 Expression looks like one of the following:
375 op <operation name> [interval=<interval>]
376 resource [<standard>]:[<provider>]:[<type>]
377 defined|not_defined <node attribute>
378 <node attribute> lt|gt|lte|gte|eq|ne [string|integer|num‐
379 ber|version] <value>
380 date gt|lt <date>
381 date in_range [<date>] to <date>
382 date in_range <date> to duration <duration options>
383 date-spec <date-spec options>
384 <expression> and|or <expression>
385 (<expression>)
386
387 You may specify all or any of 'standard', 'provider' and 'type'
388 in a resource expression. For example: 'resource ocf::' matches
389 all resources of 'ocf' standard, while 'resource ::Dummy'
390 matches all resources of 'Dummy' type regardless of their stan‐
391 dard and provider.
392
393 Dates are expected to conform to ISO 8601 format.
394
395 Duration options are: hours, monthdays, weekdays, yearsdays,
396 months, weeks, years, weekyears, moon. Value for these options
397 is an integer.
398
399 Date-spec options are: hours, monthdays, weekdays, yearsdays,
400 months, weeks, years, weekyears, moon. Value for these options
401 is an integer or a range written as integer-integer.
402
403 NOTE: Defaults do not apply to resources which override them
404 with their own defined values.
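
Example (an illustrative sketch; the set id, timeout value and agent filter are placeholders): create a set of operation defaults applied only to monitor operations of ocf:heartbeat agents:
pcs resource op defaults set create id=op-set-1 meta timeout=90s rule resource ocf:heartbeat: and op monitor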
405
406 op defaults set delete [<set id>]...
407 Delete specified options sets.
408
409 op defaults set remove [<set id>]...
410 Delete specified options sets.
411
412 op defaults set update <set id> [meta [<name>=<value>]...]
413 Add, remove or change values in specified set of default values
414 for resource operations.
415 NOTE: Defaults do not apply to resources which override them
416 with their own defined values.
417
418 op defaults update <name>=<value>...
419 Set default values for operations. This is a simplified command
420 useful for cases when you only manage one set of default values.
421 NOTE: Defaults do not apply to resources which override them
422 with their own defined values.
423
424 meta <resource id | group id | clone id> <meta options> [--wait[=n]]
425 Add specified options to the specified resource, group or clone.
426 Meta options should be in the format of name=value, options may
427 be removed by setting an option without a value. If --wait is
428 specified, pcs will wait up to 'n' seconds for the changes to
429 take effect and then return 0 if the changes have been processed
430 or 1 otherwise. If 'n' is not specified it defaults to 60 min‐
431 utes.
432 Example: pcs resource meta TestResource failure-timeout=50
433 stickiness=
434
435 group list
436 Show all currently configured resource groups and their re‐
437 sources.
438
439 group add <group id> <resource id> [resource id] ... [resource id]
440 [--before <resource id> | --after <resource id>] [--wait[=n]]
441 Add the specified resource to the group, creating the group if
442 it does not exist. If the resource is present in another group
443 it is moved to the new group. If the group remains empty after
444 move, it is deleted (for cloned groups, the clone is deleted as
445 well). The delete operation may fail in case the group is refer‐
446 enced within the configuration, e.g. by constraints. In that
447 case, use 'pcs resource ungroup' command prior to moving all re‐
448 sources out of the group.
449
450 You can use --before or --after to specify the position of the
451 added resources relative to some resource already existing in
452 the group. By adding resources to a group they are already in
453 and specifying --after or --before you can move the resources in
454 the group.
455
456 If --wait is specified, pcs will wait up to 'n' seconds for the
457 operation to finish (including moving resources if appropriate)
458 and then return 0 on success or 1 on error. If 'n' is not speci‐
459 fied it defaults to 60 minutes.
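
Example (illustrative; the group and resource ids are placeholders): create a group and then place another resource before an existing member:
pcs resource group add WebGroup VirtualIP WebServer
pcs resource group add WebGroup Database --before WebServer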
460
461 group delete <group id> [resource id]... [--wait[=n]]
462 Remove the group (note: this does not remove any resources from
463 the cluster) or if resources are specified, remove the specified
464 resources from the group. If --wait is specified, pcs will wait
465 up to 'n' seconds for the operation to finish (including moving
466 resources if appropriate) and then return 0 on success or 1 on
467 error. If 'n' is not specified it defaults to 60 minutes.
468
469 group remove <group id> [resource id]... [--wait[=n]]
470 Remove the group (note: this does not remove any resources from
471 the cluster) or if resources are specified, remove the specified
472 resources from the group. If --wait is specified, pcs will wait
473 up to 'n' seconds for the operation to finish (including moving
474 resources if appropriate) and then return 0 on success or 1 on
475 error. If 'n' is not specified it defaults to 60 minutes.
476
477 ungroup <group id> [resource id]... [--wait[=n]]
478 Remove the group (note: this does not remove any resources from
479 the cluster) or if resources are specified, remove the specified
480 resources from the group. If --wait is specified, pcs will wait
481 up to 'n' seconds for the operation to finish (including moving
482 resources if appropriate) and then return 0 on success or 1 on
483 error. If 'n' is not specified it defaults to 60 minutes.
484
485 clone <resource id | group id> [<clone id>] [clone options]...
486 [--wait[=n]]
487 Set up the specified resource or group as a clone. If --wait is
488 specified, pcs will wait up to 'n' seconds for the operation to
489 finish (including starting clone instances if appropriate) and
490 then return 0 on success or 1 on error. If 'n' is not specified
491 it defaults to 60 minutes.
492
493 promotable <resource id | group id> [<clone id>] [clone options]...
494 [--wait[=n]]
495 Set up the specified resource or group as a promotable clone.
496 This is an alias for 'pcs resource clone <resource id> pro‐
497 motable=true'.
498
499 unclone <clone id | resource id | group id> [--wait[=n]]
500 Remove the specified clone or the clone which contains the spec‐
501 ified group or resource (the resource or group will not be re‐
502 moved). If --wait is specified, pcs will wait up to 'n' seconds
503 for the operation to finish (including stopping clone instances
504 if appropriate) and then return 0 on success or 1 on error. If
505 'n' is not specified it defaults to 60 minutes.
506
507 bundle create <bundle id> container <container type> [<container op‐
508 tions>] [network <network options>] [port-map <port options>]... [stor‐
509 age-map <storage options>]... [meta <meta options>] [--disabled]
510 [--wait[=n]]
511 Create a new bundle encapsulating no resources. The bundle can
512 be used either as it is or a resource may be put into it at any
513 time. If --disabled is specified, the bundle is not started au‐
514 tomatically. If --wait is specified, pcs will wait up to 'n'
515 seconds for the bundle to start and then return 0 on success or
516 1 on error. If 'n' is not specified it defaults to 60 minutes.
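
Example (an illustrative sketch assuming a docker container runtime; the bundle id, image name and option values are placeholders):
pcs resource bundle create httpd-bundle container docker image=localhost/httpd-img replicas=2 port-map port=80 --disabled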
517
518 bundle reset <bundle id> [container <container options>] [network <net‐
519 work options>] [port-map <port options>]... [storage-map <storage op‐
520 tions>]... [meta <meta options>] [--disabled] [--wait[=n]]
521 Configure specified bundle with given options. Unlike bundle up‐
522 date, this command resets the bundle according to the given op‐
523 tions - no previous options are kept. Resources inside the bundle are
524 kept as they are. If --disabled is specified, the bundle is not
525 started automatically. If --wait is specified, pcs will wait up
526 to 'n' seconds for the bundle to start and then return 0 on suc‐
527 cess or 1 on error. If 'n' is not specified it defaults to 60
528 minutes.
529
530 bundle update <bundle id> [container <container options>] [network
531 <network options>] [port-map (add <port options>) | (delete | remove
532 <id>...)]... [storage-map (add <storage options>) | (delete | remove
533 <id>...)]... [meta <meta options>] [--wait[=n]]
534 Add, remove or change options to specified bundle. If you wish
535 to update a resource encapsulated in the bundle, use the 'pcs
536 resource update' command instead and specify the resource id.
537 If --wait is specified, pcs will wait up to 'n' seconds for the
538 operation to finish (including moving resources if appropriate)
539 and then return 0 on success or 1 on error. If 'n' is not spec‐
540 ified it defaults to 60 minutes.
541
542 manage <resource id | tag id>... [--monitor]
543 Set resources listed to managed mode (default). If --monitor is
544 specified, enable all monitor operations of the resources.
545
546 unmanage <resource id | tag id>... [--monitor]
547 Set resources listed to unmanaged mode. When a resource is in
548 unmanaged mode, the cluster is not allowed to start nor stop the
549 resource. If --monitor is specified, disable all monitor opera‐
550 tions of the resources.
551
552 defaults [config] [--all] [--full] [--no-expire-check]
553 List currently configured default values for resources. If --all
554 is specified, also list expired sets of values. If --full is
555 specified, also list ids. If --no-expire-check is specified, do
556 not evaluate whether sets of values are expired.
557
558 defaults <name>=<value>
559 Set default values for resources.
560 NOTE: Defaults do not apply to resources which override them
561 with their own defined values.
562
563 defaults set create [<set options>] [meta [<name>=<value>]...] [rule
564 [<expression>]]
565 Create a new set of default values for resources. You may spec‐
566 ify a rule describing resources to which the set applies.
567
568 Set options are: id, score
569
570 Expression looks like one of the following:
571 resource [<standard>]:[<provider>]:[<type>]
572 date gt|lt <date>
573 date in_range [<date>] to <date>
574 date in_range <date> to duration <duration options>
575 date-spec <date-spec options>
576 <expression> and|or <expression>
577 (<expression>)
578
579 You may specify all or any of 'standard', 'provider' and 'type'
580 in a resource expression. For example: 'resource ocf::' matches
581 all resources of 'ocf' standard, while 'resource ::Dummy'
582 matches all resources of 'Dummy' type regardless of their stan‐
583 dard and provider.
584
585 Dates are expected to conform to ISO 8601 format.
586
587 Duration options are: hours, monthdays, weekdays, yearsdays,
588 months, weeks, years, weekyears, moon. Value for these options
589 is an integer.
590
591 Date-spec options are: hours, monthdays, weekdays, yearsdays,
592 months, weeks, years, weekyears, moon. Value for these options
593 is an integer or a range written as integer-integer.
594
595 NOTE: Defaults do not apply to resources which override them
596 with their own defined values.
597
598 defaults set delete [<set id>]...
599 Delete specified options sets.
600
601 defaults set remove [<set id>]...
602 Delete specified options sets.
603
604 defaults set update <set id> [meta [<name>=<value>]...]
605 Add, remove or change values in specified set of default values
606 for resources.
607 NOTE: Defaults do not apply to resources which override them
608 with their own defined values.
609
610 defaults update <name>=<value>...
611 Set default values for resources. This is a simplified command
612 useful for cases when you only manage one set of default values.
613 NOTE: Defaults do not apply to resources which override them
614 with their own defined values.
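
Example (illustrative; the values are placeholders): set cluster-wide resource defaults:
pcs resource defaults update resource-stickiness=100 migration-threshold=3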
615
616 cleanup [<resource id>] [node=<node>] [operation=<operation> [inter‐
617 val=<interval>]] [--strict]
618 Make the cluster forget failed operations from history of the
619 resource and re-detect its current state. This can be useful to
620 purge knowledge of past failures that have since been resolved.
621 If the named resource is part of a group, or one numbered in‐
622 stance of a clone or bundled resource, the clean-up applies to
623 the whole collective resource unless --strict is given.
624 If a resource id is not specified then all resources / stonith
625 devices will be cleaned up.
626 If a node is not specified then resources / stonith devices on
627 all nodes will be cleaned up.
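
Example (illustrative; the resource and node names are placeholders): forget failed monitor operations of a single resource on one node:
pcs resource cleanup VirtualIP node=node1 operation=monitor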
628
629 refresh [<resource id>] [node=<node>] [--strict]
630 Make the cluster forget the complete operation history (includ‐
631 ing failures) of the resource and re-detect its current state.
632 If you are interested in forgetting failed operations only, use
633 the 'pcs resource cleanup' command.
634 If the named resource is part of a group, or one numbered in‐
635 stance of a clone or bundled resource, the refresh applies to
636 the whole collective resource unless --strict is given.
637 If a resource id is not specified then all resources / stonith
638 devices will be refreshed.
639 If a node is not specified then resources / stonith devices on
640 all nodes will be refreshed.
641
642 failcount show [<resource id>] [node=<node>] [operation=<operation>
643 [interval=<interval>]] [--full]
644 Show current failcount for resources, optionally filtered by a
645 resource, node, operation and its interval. If --full is speci‐
646 fied do not sum failcounts per resource and node. Use 'pcs re‐
647 source cleanup' or 'pcs resource refresh' to reset failcounts.
648
649 relocate dry-run [resource1] [resource2] ...
650 The same as 'relocate run' but has no effect on the cluster.
651
652 relocate run [resource1] [resource2] ...
653 Relocate specified resources to their preferred nodes. If no
654 resources are specified, relocate all resources. This command
655 calculates the preferred node for each resource while ignoring
656 resource stickiness. Then it creates location constraints which
657 will cause the resources to move to their preferred nodes. Once
658 the resources have been moved the constraints are deleted auto‐
659 matically. Note that the preferred node is calculated based on
660 current cluster status, constraints, location of resources and
661 other settings and thus it might change over time.
662
663 relocate show
664 Display current status of resources and their optimal node ig‐
665 noring resource stickiness.
666
667 relocate clear
668 Remove all constraints created by the 'relocate run' command.
669
670 utilization [<resource id> [<name>=<value> ...]]
671 Add specified utilization options to specified resource. If re‐
672 source is not specified, shows utilization of all resources. If
673 utilization options are not specified, shows utilization of
674 specified resource. Utilization options should be in the format
675 name=value and the value has to be an integer. Options may be removed by
676 setting an option without a value. Example: pcs resource uti‐
677 lization TestResource cpu= ram=20
678
679 relations <resource id> [--full]
680 Display relations of a resource specified by its id with other
681 resources in a tree structure. Supported types of resource rela‐
682 tions are: ordering constraints, ordering set constraints, rela‐
683 tions defined by resource hierarchy (clones, groups, bundles).
684 If --full is used, more verbose output will be printed.
685
686 cluster
687 setup <cluster name> (<node name> [addr=<node address>]...)... [trans‐
688 port knet|udp|udpu [<transport options>] [link <link options>]... [com‐
689 pression <compression options>] [crypto <crypto options>]] [totem
690 <totem options>] [quorum <quorum options>] ([--enable] [--start
691 [--wait[=<n>]]] [--no-keys-sync]) | [--corosync_conf <path>]
692 Create a cluster from the listed nodes and synchronize cluster
693 configuration files to them. If --corosync_conf is specified, do
694 not connect to other nodes and save corosync.conf to the speci‐
695 fied path; see 'Local only mode' below for details.
696
697 Nodes are specified by their names and optionally their ad‐
698 dresses. If no addresses are specified for a node, pcs will con‐
699 figure corosync to communicate with that node using an address
700 provided in 'pcs host auth' command. Otherwise, pcs will config‐
701 ure corosync to communicate with the node using the specified
702 addresses.
703
704 Transport knet:
705 This is the default transport. It allows configuring traffic en‐
706 cryption and compression as well as using multiple addresses
707 (links) for nodes.
708 Transport options are: ip_version, knet_pmtud_interval,
709 link_mode
710 Link options are: link_priority, linknumber, mcastport, ping_in‐
711 terval, ping_precision, ping_timeout, pong_count, transport (udp
712 or sctp)
713 Each 'link' followed by options sets options for one link in the
714 order the links are defined by nodes' addresses. You can set
715 link options for a subset of links using a linknumber. See exam‐
716 ples below.
717 Compression options are: level, model, threshold
718 Crypto options are: cipher, hash, model
719 By default, encryption is enabled with cipher=aes256 and
720 hash=sha256. To disable encryption, set cipher=none and
721 hash=none.
722
723 Transports udp and udpu:
724 These transports are limited to one address per node. They do
725 not support traffic encryption nor compression.
726 Transport options are: ip_version, netmtu
727 Link options are: bindnetaddr, broadcast, mcastaddr, mcastport,
728 ttl
729
730 Totem and quorum can be configured regardless of used transport.
731 Totem options are: block_unlisted_ips, consensus, downcheck,
732 fail_recv_const, heartbeat_failures_allowed, hold, join,
733 max_messages, max_network_delay, merge, miss_count_const,
734 send_join, seqno_unchanged_const, token, token_coefficient, to‐
735 ken_retransmit, token_retransmits_before_loss_const, window_size
736 Quorum options are: auto_tie_breaker, last_man_standing,
737 last_man_standing_window, wait_for_all
738
739 Transports and their options, link, compression, crypto and
740 totem options are all documented in corosync.conf(5) man page;
741 knet link options are prefixed 'knet_' there, compression op‐
742 tions are prefixed 'knet_compression_' and crypto options are
743 prefixed 'crypto_'. Quorum options are documented in votequo‐
744 rum(5) man page.
745
746 --enable will configure the cluster to start on node boot.
747 --start will start the cluster right after creating it. --wait
748 will wait up to 'n' seconds for the cluster to start.
749 --no-keys-sync will skip creating and distributing pcsd SSL cer‐
750 tificate and key and corosync and pacemaker authkey files. Use
751 this if you provide your own certificates and keys.
752
753 Local only mode:
754 By default, pcs connects to all specified nodes to verify they
755 can be used in the new cluster and to send cluster configuration
756 files to them. If this is not what you want, specify
757 --corosync_conf option followed by a file path. Pcs will save
758 corosync.conf to the specified file and will not connect to
759 cluster nodes. These are the tasks pcs skips in that case:
760 * make sure the nodes are not running or configured to run a
761 cluster already
762 * make sure cluster packages are installed on all nodes and
763 their versions are compatible
764 * make sure there are no cluster configuration files on any node
765 (run 'pcs cluster destroy' and remove pcs_settings.conf file on
766 all nodes)
767 * synchronize corosync and pacemaker authkeys, /etc/corosync/au‐
768 thkey and /etc/pacemaker/authkey respectively, and the
769 corosync.conf file
770 * authenticate the cluster nodes against each other ('pcs clus‐
771 ter auth' or 'pcs host auth' command)
772 * synchronize pcsd certificates (so that pcs web UI can be used
773 in an HA mode)
774
775 Examples:
776 Create a cluster with default settings:
777 pcs cluster setup newcluster node1 node2
778 Create a cluster using two links:
779 pcs cluster setup newcluster node1 addr=10.0.1.11
780 addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12
781 Set link options for all links. Link options are matched to the
782 links in order. The first link (link 0) has sctp transport, the
783 second link (link 1) has mcastport 55405:
784 pcs cluster setup newcluster node1 addr=10.0.1.11
785 addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12 transport
786 knet link transport=sctp link mcastport=55405
787 Set link options for the second and fourth links only. Link op‐
788 tions are matched to the links based on the linknumber option
789 (the first link is link 0):
790 pcs cluster setup newcluster node1 addr=10.0.1.11
791 addr=10.0.2.11 addr=10.0.3.11 addr=10.0.4.11 node2
792 addr=10.0.1.12 addr=10.0.2.12 addr=10.0.3.12 addr=10.0.4.12
793 transport knet link linknumber=3 mcastport=55405 link linknum‐
794 ber=1 transport=sctp
795 Create a cluster using udp transport with a non-default port:
796 pcs cluster setup newcluster node1 node2 transport udp link
797 mcastport=55405
798
799 config [show] [--output-format <cmd|json|text>] [--corosync_conf
800 <path>]
801 Show cluster configuration. There are 3 formats of output avail‐
802 able: 'cmd', 'json' and 'text', default is 'text'. Format 'text'
803 is a human friendly output. Format 'cmd' prints a cluster setup
804 command which recreates a cluster with the same configuration.
805 Format 'json' is a machine oriented output with cluster configu‐
806 ration. If --corosync_conf is specified, configuration file
807 specified by <path> is used instead of the current cluster con‐
808 figuration.
809
810 config update [transport <transport options>] [compression <compression
811 options>] [crypto <crypto options>] [totem <totem options>]
812 [--corosync_conf <path>]
813 Update cluster configuration. If --corosync_conf is specified,
814 update cluster configuration in a file specified by <path>. All
815 options are documented in corosync.conf(5) man page. There are
816 different transport options for transport types. Compression and
817 crypto options are only available for knet transport. Totem op‐
818 tions can be set regardless of the transport type.
819 Transport options for knet transport are: ip_version, knet_pm‐
820 tud_interval, link_mode
821 Transport options for udp and udpu transports are: ip_version,
822 netmtu
823 Compression options are: level, model, threshold
824 Crypto options are: cipher, hash, model
825 Totem options are: block_unlisted_ips, consensus, downcheck,
826 fail_recv_const, heartbeat_failures_allowed, hold, join,
827 max_messages, max_network_delay, merge, miss_count_const,
828 send_join, seqno_unchanged_const, token, token_coefficient, to‐
829 ken_retransmit, token_retransmits_before_loss_const, window_size
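
Example (illustrative values): raise the totem token timeout and make the crypto settings explicit:
pcs cluster config update totem token=10000 crypto cipher=aes256 hash=sha256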
830
831 authkey corosync [<path>]
832 Generate a new corosync authkey and distribute it to all cluster
833 nodes. If <path> is specified, do not generate a key and use key
834 from the file.
835
836 start [--all | <node>... ] [--wait[=<n>]] [--request-timeout=<seconds>]
837 Start a cluster on specified node(s). If no nodes are specified
838 then start a cluster on the local node. If --all is specified
839 then start a cluster on all nodes. If the cluster has many nodes
840 then the start request may time out. In that case you should
841 consider setting --request-timeout to a suitable value. If
842 --wait is specified, pcs waits up to 'n' seconds for the cluster
843 to get ready to provide services after the cluster has success‐
844 fully started.
845
846 stop [--all | <node>... ] [--request-timeout=<seconds>]
847 Stop a cluster on specified node(s). If no nodes are specified
848 then stop a cluster on the local node. If --all is specified
849 then stop a cluster on all nodes. If the cluster is running re‐
850 sources which take long time to stop then the stop request may
851 time out before the cluster actually stops. In that case you
852 should consider setting --request-timeout to a suitable value.
853
854 kill Force corosync and pacemaker daemons to stop on the local node
855 (performs kill -9). Note that the init system (e.g. systemd) can
856 detect that the cluster is not running and start it again. If you
857 want to stop the cluster on a node, run 'pcs cluster stop' on that node.
858
859 enable [--all | <node>... ]
860 Configure cluster to run on node boot on specified node(s). If
861 node is not specified then cluster is enabled on the local node.
862 If --all is specified then cluster is enabled on all nodes.
863
864 disable [--all | <node>... ]
865 Configure cluster to not run on node boot on specified node(s).
866 If node is not specified then cluster is disabled on the local
867 node. If --all is specified then cluster is disabled on all
868 nodes.
869
870 auth [-u <username>] [-p <password>]
871 Authenticate pcs/pcsd to pcsd on nodes configured in the local
872 cluster.
873
874 status View current cluster status (an alias of 'pcs status cluster').
875
876 sync Sync cluster configuration (files which are supported by all
877 subcommands of this command) to all cluster nodes.
878
879 sync corosync
880 Sync corosync configuration to all nodes found from current
881 corosync.conf file.
882
883 cib [filename] [scope=<scope> | --config]
884 Get the raw xml from the CIB (Cluster Information Base). If a
885 filename is provided, the CIB is saved to that file, otherwise
886 the CIB is printed. Specify scope to get a specific section of
887 the CIB. Valid values of the scope are: acls, alerts, configura‐
888 tion, constraints, crm_config, fencing-topology, nodes, op_de‐
889 faults, resources, rsc_defaults, tags. --config is the same as
890 scope=configuration. Do not specify a scope if you want to edit
891 the saved CIB using pcs (pcs -f <command>).
892
893 cib-push <filename> [--wait[=<n>]] [diff-against=<filename_original> |
894 scope=<scope> | --config]
895 Push the raw xml from <filename> to the CIB (Cluster Information
896 Base). You can obtain the CIB by running the 'pcs cluster cib'
897 command, which is the recommended first step when you want to
898 perform the desired modifications (pcs -f <command>) for a one-off
899 push.
900 If diff-against is specified, pcs diffs contents of filename
901 against contents of filename_original and pushes the result to
902 the CIB.
903 Specify scope to push a specific section of the CIB. Valid val‐
904 ues of the scope are: acls, alerts, configuration, constraints,
905 crm_config, fencing-topology, nodes, op_defaults, resources,
906 rsc_defaults, tags. --config is the same as scope=configuration.
907 Use of --config is recommended. Do not specify a scope if you
908 need to push the whole CIB or be warned in the case of outdated
909 CIB.
910 If --wait is specified wait up to 'n' seconds for changes to be
911 applied.
912 WARNING: the selected scope of the CIB will be overwritten by
913 the current content of the specified file.
914
915 Example:
916 pcs cluster cib > original.xml
917 cp original.xml new.xml
918 pcs -f new.xml constraint location apache prefers node2
919 pcs cluster cib-push new.xml diff-against=original.xml
920
921 cib-upgrade
922 Upgrade the CIB to conform to the latest version of the document
923 schema.
924
925 edit [scope=<scope> | --config]
926 Edit the cib in the editor specified by the $EDITOR environment
927 variable and push out any changes upon saving. Specify scope to
928 edit a specific section of the CIB. Valid values of the scope
929 are: acls, alerts, configuration, constraints, crm_config, fenc‐
930 ing-topology, nodes, op_defaults, resources, rsc_defaults, tags.
931 --config is the same as scope=configuration. Use of --config is
932 recommended. Do not specify a scope if you need to edit the
933 whole CIB or be warned in the case of outdated CIB.
934
935 node add <node name> [addr=<node address>]... [watchdog=<watchdog
936 path>] [device=<SBD device path>]... [--start [--wait[=<n>]]] [--en‐
937 able] [--no-watchdog-validation]
938 Add the node to the cluster and synchronize all relevant config‐
939 uration files to the new node. This command can only be run on
940 an existing cluster node.
941
942 The new node is specified by its name and optionally its ad‐
943 dresses. If no addresses are specified for the node, pcs will
944 configure corosync to communicate with the node using an address
945 provided in 'pcs host auth' command. Otherwise, pcs will config‐
946 ure corosync to communicate with the node using the specified
947 addresses.
948
949 Use 'watchdog' to specify a path to a watchdog on the new node,
950 when SBD is enabled in the cluster. If SBD is configured with
951 shared storage, use 'device' to specify path to shared device(s)
952 on the new node.
953
954 If --start is specified also start cluster on the new node, if
955 --wait is specified wait up to 'n' seconds for the new node to
956 start. If --enable is specified configure cluster to start on
957 the new node on boot. If --no-watchdog-validation is specified,
958 validation of watchdog will be skipped.
959
960 WARNING: By default, it is tested whether the specified watchdog
961 is supported. This may cause a restart of the system when a
962 watchdog with no-way-out-feature enabled is present. Use
963 --no-watchdog-validation to skip watchdog validation.
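
Example (illustrative; the node name and addresses are placeholders): add a node with two link addresses, start it and enable it on boot:
pcs cluster node add node3 addr=10.0.1.13 addr=10.0.2.13 --start --wait=120 --enable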
964
965 node delete <node name> [<node name>]...
966 Shutdown specified nodes and remove them from the cluster.
967
968 node remove <node name> [<node name>]...
969 Shutdown specified nodes and remove them from the cluster.
970
971 node add-remote <node name> [<node address>] [options] [op <operation
972 action> <operation options> [<operation action> <operation op‐
973 tions>]...] [meta <meta options>...] [--wait[=<n>]]
974 Add the node to the cluster as a remote node. Sync all relevant
975 configuration files to the new node. Start the node and config‐
976 ure it to start the cluster on boot. Options are port and recon‐
977 nect_interval. Operations and meta belong to an underlying con‐
978 nection resource (ocf:pacemaker:remote). If node address is not
979 specified for the node, pcs will configure pacemaker to communi‐
980 cate with the node using an address provided in 'pcs host auth'
981 command. Otherwise, pcs will configure pacemaker to communicate
982 with the node using the specified addresses. If --wait is speci‐
983 fied, wait up to 'n' seconds for the node to start.
984
985 node delete-remote <node identifier>
986 Shutdown specified remote node and remove it from the cluster.
987 The node-identifier can be the name of the node or the address
988 of the node.
989
990 node remove-remote <node identifier>
991 Shutdown specified remote node and remove it from the cluster.
992 The node-identifier can be the name of the node or the address
993 of the node.
994
995 node add-guest <node name> <resource id> [options] [--wait[=<n>]]
996 Make the specified resource a guest node resource. Sync all rel‐
997 evant configuration files to the new node. Start the node and
998 configure it to start the cluster on boot. Options are re‐
999 mote-addr, remote-port and remote-connect-timeout. If re‐
1000 mote-addr is not specified for the node, pcs will configure
1001 pacemaker to communicate with the node using an address provided
1002 in 'pcs host auth' command. Otherwise, pcs will configure pace‐
1003 maker to communicate with the node using the specified ad‐
1004 dresses. If --wait is specified, wait up to 'n' seconds for the
1005 node to start.
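
Example (illustrative; 'guest1' and the VM resource id 'vm-guest1' are placeholders): turn an existing VM resource into a guest node:
pcs cluster node add-guest guest1 vm-guest1 remote-addr=10.0.1.31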
1006
1007 node delete-guest <node identifier>
1008 Shutdown specified guest node and remove it from the cluster.
1009 The node-identifier can be the name of the node or the address
1010 of the node or id of the resource that is used as the guest
1011 node.
1012
1013 node remove-guest <node identifier>
1014 Shutdown specified guest node and remove it from the cluster.
1015 The node-identifier can be the name of the node or the address
1016 of the node or id of the resource that is used as the guest
1017 node.
1018
1019 node clear <node name>
1020 Remove specified node from various cluster caches. Use this if a
1021 removed node is still considered by the cluster to be a member
1022 of the cluster.
1023
1024 link add <node_name>=<node_address>... [options <link options>]
1025 Add a corosync link. One address must be specified for each
1026 cluster node. If no linknumber is specified, pcs will use the
1027 lowest available linknumber.
1028 Link options (documented in corosync.conf(5) man page) are:
1029 link_priority, linknumber, mcastport, ping_interval, ping_preci‐
1030 sion, ping_timeout, pong_count, transport (udp or sctp)
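
Example (illustrative addresses, assuming knet transport): add a new link as link 2 using sctp:
pcs cluster link add node1=10.0.5.11 node2=10.0.5.12 options linknumber=2 transport=sctp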
1031
1032 link delete <linknumber> [<linknumber>]...
1033 Remove specified corosync links.
1034
1035 link remove <linknumber> [<linknumber>]...
1036 Remove specified corosync links.
1037
1038 link update <linknumber> [<node_name>=<node_address>...] [options <link
1039 options>]
1040 Change node addresses / link options of an existing corosync
1041 link. Use this only if you cannot add / remove links, which is
1042 the preferred way.
1043 Link options (documented in corosync.conf(5) man page) are:
1044 for knet transport: link_priority, mcastport, ping_interval,
1045 ping_precision, ping_timeout, pong_count, transport (udp or
1046 sctp)
1047 for udp and udpu transports: bindnetaddr, broadcast, mcastaddr,
1048 mcastport, ttl
1049
1050 uidgid List the currently configured uids and gids of users allowed to
1051 connect to corosync.
1052
1053 uidgid add [uid=<uid>] [gid=<gid>]
1054 Add the specified uid and/or gid to the list of users/groups al‐
1055 lowed to connect to corosync.
1056
1057 uidgid delete [uid=<uid>] [gid=<gid>]
1058 Remove the specified uid and/or gid from the list of
1059 users/groups allowed to connect to corosync.
1060
1061 uidgid remove [uid=<uid>] [gid=<gid>]
1062 Remove the specified uid and/or gid from the list of
1063 users/groups allowed to connect to corosync.
1064
1065 corosync [node]
1066 Get the corosync.conf from the specified node or from the cur‐
1067 rent node if node not specified.
1068
1069 reload corosync
1070 Reload the corosync configuration on the current node.
1071
1072 destroy [--all]
1073 Permanently destroy the cluster on the current node, killing all
1074 cluster processes and removing all cluster configuration files.
1075 Using --all will attempt to destroy the cluster on all nodes in
1076 the local cluster.
1077
1078 WARNING: This command permanently removes any cluster configura‐
1079 tion that has been created. It is recommended to run 'pcs clus‐
1080 ter stop' before destroying the cluster.
1081
1082 verify [--full] [-f <filename>]
1083 Checks the pacemaker configuration (CIB) for syntax and common
1084 conceptual errors. If no filename is specified the check is per‐
1085 formed on the currently running cluster. If --full is used more
1086 verbose output will be printed.
1087
1088 report [--from "YYYY-M-D H:M:S" [--to "YYYY-M-D H:M:S"]] <dest>
1089 Create a tarball containing everything needed when reporting
1090 cluster problems. If --from and --to are not used, the report
1091 will include the past 24 hours.
1092
1093 stonith
1094 [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
1095 Show status of all currently configured stonith devices. If
1096 --hide-inactive is specified, only show active stonith devices.
1097 If a resource or tag id is specified, only show status of the
1098 specified resource or resources in the specified tag. If node is
1099 specified, only show status of resources configured for the
1100 specified node.
1101
1102 config [<stonith id>]...
1103 Show options of all currently configured stonith devices or if
1104 stonith ids are specified show the options for the specified
1105 stonith device ids.
1106
1107 list [filter] [--nodesc]
1108 Show list of all available stonith agents (if filter is provided
1109 then only stonith agents matching the filter will be shown). If
1110 --nodesc is used then descriptions of stonith agents are not
1111 printed.
1112
1113 describe <stonith agent> [--full]
1114 Show options for specified stonith agent. If --full is speci‐
1115 fied, all options including advanced and deprecated ones are
1116 shown.
1117
1118 create <stonith id> <stonith device type> [stonith device options] [op
1119 <operation action> <operation options> [<operation action> <operation
1120 options>]...] [meta <meta options>...] [--group <group id> [--before
1121 <stonith id> | --after <stonith id>]] [--disabled] [--wait[=n]]
1122 Create stonith device with specified type and options. If
1123 --group is specified the stonith device is added to the group
1124 named. You can use --before or --after to specify the position
1125 of the added stonith device relatively to some stonith device
1126 already existing in the group. If--disabled is specified the
1127 stonith device is not used. If --wait is specified, pcs will
1128 wait up to 'n' seconds for the stonith device to start and then
1129 return 0 if the stonith device is started, or 1 if the stonith
1130 device has not yet started. If 'n' is not specified it defaults
1131 to 60 minutes.
1132
1133 Example: Create a device for nodes node1 and node2
1134 pcs stonith create MyFence fence_virt pcmk_host_list=node1,node2
1135 Example: Use port p1 for node n1 and ports p2 and p3 for node n2
1136 pcs stonith create MyFence fence_virt
1137 'pcmk_host_map=n1:p1;n2:p2,p3'
1138
1139 update <stonith id> [stonith device options]
1140 Add/Change options to specified stonith id.
1141
1142 update-scsi-devices <stonith id> (set <device-path> [<device-path>...])
1143 | (add <device-path> [<device-path>...] delete|remove <device-path>
1144 [<device-path>...] )
1145 Update scsi fencing devices without affecting other resources.
1146 You must specify either a list of devices to set or at least one
1147 device to add or delete/remove. The stonith resource must be
1148 running on one cluster node. Each device will be unfenced on
1149 each cluster node running the cluster. Supported fence agents:
1150 fence_scsi.
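
Example (illustrative; the stonith id and device paths are placeholders): add one shared device and remove another without affecting other resources:
pcs stonith update-scsi-devices fence-scsi add /dev/sdc delete /dev/sdb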
1151
1152 delete <stonith id>
1153 Remove stonith id from configuration.
1154
1155 remove <stonith id>
1156 Remove stonith id from configuration.
1157
1158 enable <stonith id>... [--wait[=n]]
1159 Allow the cluster to use the stonith devices. If --wait is spec‐
1160 ified, pcs will wait up to 'n' seconds for the stonith devices
1161 to start and then return 0 if the stonith devices are started,
1162 or 1 if the stonith devices have not yet started. If 'n' is not
1163 specified it defaults to 60 minutes.
1164
1165 disable <stonith id>... [--wait[=n]]
1166 Attempt to stop the stonith devices if they are running and dis‐
1167 allow the cluster to use them. If --wait is specified, pcs will
1168 wait up to 'n' seconds for the stonith devices to stop and then
1169 return 0 if the stonith devices are stopped or 1 if the stonith
1170 devices have not stopped. If 'n' is not specified it defaults to
1171 60 minutes.
1172
1173 cleanup [<stonith id>] [--node <node>] [--strict]
1174 Make the cluster forget failed operations from history of the
1175 stonith device and re-detect its current state. This can be use‐
1176 ful to purge knowledge of past failures that have since been re‐
1177 solved.
1178 If the named stonith device is part of a group, or one numbered
1179 instance of a clone or bundled resource, the clean-up applies to
1180 the whole collective resource unless --strict is given.
1181 If a stonith id is not specified then all resources / stonith
1182 devices will be cleaned up.
1183 If a node is not specified then resources / stonith devices on
1184 all nodes will be cleaned up.
1185
1186 refresh [<stonith id>] [--node <node>] [--strict]
1187 Make the cluster forget the complete operation history (includ‐
1188 ing failures) of the stonith device and re-detect its current
1189 state. If you are interested in forgetting failed operations
1190 only, use the 'pcs stonith cleanup' command.
1191 If the named stonith device is part of a group, or one numbered
1192 instance of a clone or bundled resource, the refresh applies to
1193 the whole collective resource unless --strict is given.
1194 If a stonith id is not specified then all resources / stonith
1195 devices will be refreshed.
1196 If a node is not specified then resources / stonith devices on
1197 all nodes will be refreshed.
1198
1199 level [config]
1200 Lists all of the fencing levels currently configured.
1201
1202 level add <level> <target> <stonith id> [stonith id]...
1203 Add the fencing level for the specified target with the list of
1204 stonith devices to attempt for that target at that level. Fence
1205 levels are attempted in numerical order (starting with 1). If a
1206 level succeeds (meaning all devices are successfully fenced in
1207 that level) then no other levels are tried, and the target is
1208 considered fenced. Target may be a node name <node_name> or
1209 %<node_name> or node%<node_name>, a node name regular expression
1210 regexp%<node_pattern> or a node attribute value at‐
1211 trib%<name>=<value>.
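
Example (illustrative, reusing 'MyFence' from the create example; 'BackupFence' is a placeholder): try MyFence first for node1 and fall back to BackupFence:
pcs stonith level add 1 node1 MyFence
pcs stonith level add 2 node1 BackupFence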
1212
1213 level delete <level> [target <target>] [stonith <stonith id>...]
1214 Removes the fence level for the level, target and/or devices
1215 specified. If no target or devices are specified then the fence
1216 level is removed. Target may be a node name <node_name> or
1217 %<node_name> or node%<node_name>, a node name regular expression
1218 regexp%<node_pattern> or a node attribute value at‐
1219 trib%<name>=<value>.
1220
1221 level remove <level> [target <target>] [stonith <stonith id>...]
1222 Removes the fence level for the level, target and/or devices
1223 specified. If no target or devices are specified then the fence
1224 level is removed. Target may be a node name <node_name> or
1225 %<node_name> or node%<node_name>, a node name regular expression
1226 regexp%<node_pattern> or a node attribute value at‐
1227 trib%<name>=<value>.
1228
1229 level clear [target <target> | stonith <stonith id>...]
1230 Clears the fence levels on the target (or stonith id) specified
1231 or clears all fence levels if a target/stonith id is not speci‐
1232 fied. Target may be a node name <node_name> or %<node_name> or
1233 node%<node_name>, a node name regular expression reg‐
1234 exp%<node_pattern> or a node attribute value at‐
1235 trib%<name>=<value>. Example: pcs stonith level clear stonith
1236 dev_a dev_b
1237
1238 level verify
1239              Verifies that all fence devices and nodes specified in fence
1240              levels exist.
1241
1242 fence <node> [--off]
1243 Fence the node specified (if --off is specified, use the 'off'
1244 API call to stonith which will turn the node off instead of re‐
1245 booting it).
1246
1247 confirm <node> [--force]
1248 Confirm to the cluster that the specified node is powered off.
1249 This allows the cluster to recover from a situation where no
1250 stonith device is able to fence the node. This command should
1251 ONLY be used after manually ensuring that the node is powered
1252 off and has no access to shared resources.
1253
1254 WARNING: If this node is not actually powered off or it does
1255 have access to shared resources, data corruption/cluster failure
1256 can occur. To prevent accidental running of this command,
1257 --force or interactive user response is required in order to
1258 proceed.
1259
1260 NOTE: It is not checked if the specified node exists in the
1261 cluster in order to be able to work with nodes not visible from
1262 the local cluster partition.
1263
1264 history [show [<node>]]
1265 Show fencing history for the specified node or all nodes if no
1266              node is specified.
1267
1268 history cleanup [<node>]
1269              Clean up fence history of the specified node or all nodes if no
1270              node is specified.
1271
1272 history update
1273 Update fence history from all nodes.
1274
1275 sbd enable [watchdog=<path>[@<node>]]... [device=<path>[@<node>]]...
1276 [<SBD_OPTION>=<value>]... [--no-watchdog-validation]
1277 Enable SBD in cluster. Default path for watchdog device is
1278 /dev/watchdog. Allowed SBD options: SBD_WATCHDOG_TIMEOUT (de‐
1279 fault: 5), SBD_DELAY_START (default: no), SBD_STARTMODE (de‐
1280 fault: always) and SBD_TIMEOUT_ACTION. SBD options are docu‐
1281 mented in sbd(8) man page. It is possible to specify up to 3 de‐
1282 vices per node. If --no-watchdog-validation is specified, vali‐
1283 dation of watchdogs will be skipped.
1284
1285 WARNING: Cluster has to be restarted in order to apply these
1286 changes.
1287
1288 WARNING: By default, it is tested whether the specified watchdog
1289 is supported. This may cause a restart of the system when a
1290 watchdog with no-way-out-feature enabled is present. Use
1291 --no-watchdog-validation to skip watchdog validation.
1292
1293              Example of enabling SBD in a cluster where the watchdog on node1
1294              is /dev/watchdog2, on node2 /dev/watchdog1 and /dev/watchdog0 on
1295              all other nodes, the device on node1 is /dev/sdb and /dev/sda on
1296              all other nodes, and the watchdog timeout is set to 10 seconds:
1297
1298 pcs stonith sbd enable watchdog=/dev/watchdog2@node1 watch‐
1299 dog=/dev/watchdog1@node2 watchdog=/dev/watchdog0 de‐
1300 vice=/dev/sdb@node1 device=/dev/sda SBD_WATCHDOG_TIMEOUT=10
1301
1302
1303 sbd disable
1304 Disable SBD in cluster.
1305
1306 WARNING: Cluster has to be restarted in order to apply these
1307 changes.
1308
1309 sbd device setup device=<path> [device=<path>]... [watchdog-time‐
1310 out=<integer>] [allocate-timeout=<integer>] [loop-timeout=<integer>]
1311 [msgwait-timeout=<integer>]
1312 Initialize SBD structures on device(s) with specified timeouts.
1313
1314 WARNING: All content on device(s) will be overwritten.
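
                  A minimal sketch, assuming a dedicated block device at the
                  hypothetical path /dev/sdb that may safely be overwritten:

                      pcs stonith sbd device setup device=/dev/sdb watchdog-timeout=10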
1315
1316 sbd device message <device-path> <node> <message-type>
1317 Manually set a message of the specified type on the device for
1318 the node. Possible message types (they are documented in sbd(8)
1319 man page): test, reset, off, crashdump, exit, clear
1320
1321 sbd status [--full]
1322              Show status of SBD services in the cluster and of the configured
1323              local device(s). If --full is specified, a dump of SBD headers
1324              on the device(s) will also be shown.
1325
1326 sbd config
1327 Show SBD configuration in cluster.
1328
1329
1330 sbd watchdog list
1331 Show all available watchdog devices on the local node.
1332
1333 WARNING: Listing available watchdogs may cause a restart of the
1334 system when a watchdog with no-way-out-feature enabled is
1335 present.
1336
1337
1338 sbd watchdog test [<watchdog-path>]
1339 This operation is expected to force-reboot the local system
1340 without following any shutdown procedures using a watchdog. If
1341              no watchdog is specified, the available watchdog will be used,
1342              provided only one watchdog device is available on the local system.
1343
1344
1345 acl
1346 [config | show]
1347 List all current access control lists.
1348
1349 enable Enable access control lists.
1350
1351 disable
1352 Disable access control lists.
1353
1354 role create <role id> [description=<description>] [((read | write |
1355 deny) (xpath <query> | id <id>))...]
1356 Create a role with the id and (optional) description specified.
1357 Each role can also have an unlimited number of permissions
1358 (read/write/deny) applied to either an xpath query or the id of
1359 a specific element in the cib.
1360 Permissions are applied to the selected XML element's entire XML
1361 subtree (all elements enclosed within it). Write permission
1362 grants the ability to create, modify, or remove the element and
1363 its subtree, and also the ability to create any "scaffolding"
1364 elements (enclosing elements that do not have attributes other
1365 than an ID). Permissions for more specific matches (more deeply
1366 nested elements) take precedence over more general ones. If mul‐
1367 tiple permissions are configured for the same match (for exam‐
1368 ple, in different roles applied to the same user), any deny per‐
1369 mission takes precedence, then write, then lastly read.
1370 An xpath may include an attribute expression to select only ele‐
1371 ments that match the expression, but the permission still ap‐
1372 plies to the entire element (and its subtree), not to the attri‐
1373 bute alone. For example, using the xpath "//*[@name]" to give
1374 write permission would allow changes to the entirety of all ele‐
1375 ments that have a "name" attribute and everything enclosed by
1376 those elements. There is no way currently to give permissions
1377 for just one attribute of an element. That is to say, you can
1378 not define an ACL that allows someone to read just the dc-uuid
1379 attribute of the cib tag - that would select the cib element and
1380 give read access to the entire CIB.
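
                  A minimal sketch (the role id and xpath queries are chosen
                  here for illustration only): create a role that may read the
                  whole CIB but is denied access to the ACL configuration:

                      pcs acl role create readonly description="read only"
                      read xpath /cib deny xpath //acls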
1381
1382 role delete <role id>
1383 Delete the role specified and remove it from any users/groups it
1384 was assigned to.
1385
1386 role remove <role id>
1387 Delete the role specified and remove it from any users/groups it
1388 was assigned to.
1389
1390 role assign <role id> [to] [user|group] <username/group>
1391 Assign a role to a user or group already created with 'pcs acl
1392              user/group create'. If a user and a group with the same id exist
1393              and it is not specified which should be used, the user will be
1394              prioritized. In such cases, specify whether the user or the
1395              group should be used.
1396
1397 role unassign <role id> [from] [user|group] <username/group>
1398              Remove a role from the specified user or group. If a user and a
1399              group with the same id exist and it is not specified which
1400              should be used, the user will be prioritized. In such cases,
1401              specify whether the user or the group should be used.
1402
1403 user create <username> [<role id>]...
1404 Create an ACL for the user specified and assign roles to the
1405 user.
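
                  For example, assuming the hypothetical 'readonly' role
                  sketched above and a system user named 'alice':

                      pcs acl user create alice readonly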
1406
1407 user delete <username>
1408 Remove the user specified (and roles assigned will be unassigned
1409 for the specified user).
1410
1411 user remove <username>
1412 Remove the user specified (and roles assigned will be unassigned
1413 for the specified user).
1414
1415 group create <group> [<role id>]...
1416 Create an ACL for the group specified and assign roles to the
1417 group.
1418
1419 group delete <group>
1420 Remove the group specified (and roles assigned will be unas‐
1421 signed for the specified group).
1422
1423 group remove <group>
1424 Remove the group specified (and roles assigned will be unas‐
1425 signed for the specified group).
1426
1427 permission add <role id> ((read | write | deny) (xpath <query> | id
1428 <id>))...
1429 Add the listed permissions to the role specified. Permissions
1430 are applied to either an xpath query or the id of a specific el‐
1431 ement in the CIB.
1432 Permissions are applied to the selected XML element's entire XML
1433 subtree (all elements enclosed within it). Write permission
1434 grants the ability to create, modify, or remove the element and
1435 its subtree, and also the ability to create any "scaffolding"
1436 elements (enclosing elements that do not have attributes other
1437 than an ID). Permissions for more specific matches (more deeply
1438 nested elements) take precedence over more general ones. If mul‐
1439 tiple permissions are configured for the same match (for exam‐
1440 ple, in different roles applied to the same user), any deny per‐
1441 mission takes precedence, then write, then lastly read.
1442 An xpath may include an attribute expression to select only ele‐
1443 ments that match the expression, but the permission still ap‐
1444 plies to the entire element (and its subtree), not to the attri‐
1445 bute alone. For example, using the xpath "//*[@name]" to give
1446 write permission would allow changes to the entirety of all ele‐
1447 ments that have a "name" attribute and everything enclosed by
1448 those elements. There is no way currently to give permissions
1449 for just one attribute of an element. That is to say, you can
1450 not define an ACL that allows someone to read just the dc-uuid
1451 attribute of the cib tag - that would select the cib element and
1452 give read access to the entire CIB.
1453
1454 permission delete <permission id>
1455 Remove the permission id specified (permission id's are listed
1456 in parenthesis after permissions in 'pcs acl' output).
1457
1458 permission remove <permission id>
1459 Remove the permission id specified (permission id's are listed
1460 in parenthesis after permissions in 'pcs acl' output).
1461
1462 property
1463 [config | list | show [<property> | --all | --defaults]] | [--all |
1464 --defaults]
1465 List property settings (default: lists configured properties).
1466              If --defaults is specified, all property defaults will be shown.
1467              If --all is specified, currently configured properties will be
1468              shown with unset properties and their defaults. See pacemaker-con‐
1469 trold(7) and pacemaker-schedulerd(7) man pages for a description
1470 of the properties.
1471
1472 set <property>=[<value>] ... [--force]
1473 Set specific pacemaker properties (if the value is blank then
1474 the property is removed from the configuration). If a property
1475              is not recognized by pcs, the property will not be created un‐
1476              less --force is used. See pacemaker-controld(7) and pacemaker-
1477 schedulerd(7) man pages for a description of the properties.
1478
1479 unset <property> ...
1480 Remove property from configuration. See pacemaker-controld(7)
1481 and pacemaker-schedulerd(7) man pages for a description of the
1482 properties.
1483
1484 constraint
1485 [config | list | show] [--full] [--all]
1486 List all current constraints that are not expired. If --all is
1487 specified also show expired constraints. If --full is specified
1488 also list the constraint ids.
1489
1490 location <resource> prefers <node>[=<score>] [<node>[=<score>]]...
1491 Create a location constraint on a resource to prefer the speci‐
1492 fied node with score (default score: INFINITY). Resource may be
1493 either a resource id <resource_id> or %<resource_id> or re‐
1494 source%<resource_id>, or a resource name regular expression reg‐
1495 exp%<resource_pattern>.
1496
1497 location <resource> avoids <node>[=<score>] [<node>[=<score>]]...
1498 Create a location constraint on a resource to avoid the speci‐
1499 fied node with score (default score: INFINITY). Resource may be
1500 either a resource id <resource_id> or %<resource_id> or re‐
1501 source%<resource_id>, or a resource name regular expression reg‐
1502 exp%<resource_pattern>.
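
                  For illustration of the two forms above (resource and node
                  names are hypothetical), prefer node1 for 'VirtualIP' and
                  keep it away from node3:

                      pcs constraint location VirtualIP prefers node1=100
                      pcs constraint location VirtualIP avoids node3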
1503
1504 location <resource> rule [id=<rule id>] [resource-discovery=<option>]
1505 [role=master|slave] [constraint-id=<id>] [score=<score> | score-attri‐
1506 bute=<attribute>] <expression>
1507 Creates a location constraint with a rule on the specified re‐
1508 source where expression looks like one of the following:
1509 defined|not_defined <node attribute>
1510 <node attribute> lt|gt|lte|gte|eq|ne [string|integer|num‐
1511 ber|version] <value>
1512 date gt|lt <date>
1513 date in_range <date> to <date>
1514 date in_range <date> to duration <duration options>...
1515 date-spec <date spec options>...
1516 <expression> and|or <expression>
1517 ( <expression> )
1518 where duration options and date spec options are: hours, month‐
1519 days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1520 Resource may be either a resource id <resource_id> or %<re‐
1521 source_id> or resource%<resource_id>, or a resource name regular
1522 expression regexp%<resource_pattern>. If score is omitted it de‐
1523 faults to INFINITY. If id is omitted one is generated from the
1524 resource id. If resource-discovery is omitted it defaults to
1525 'always'.
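
                  A minimal sketch, assuming a node attribute named 'pingd' is
                  maintained on the nodes (the attribute and resource names
                  are illustrative only):

                      pcs constraint location VirtualIP rule score=-INFINITY
                      not_defined pingd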
1526
1527 location [config | show [resources [<resource>...]] | [nodes
1528 [<node>...]]] [--full] [--all]
1529 List all the current location constraints that are not expired.
1530 If 'resources' is specified, location constraints are displayed
1531 per resource (default). If 'nodes' is specified, location con‐
1532 straints are displayed per node. If specific nodes or resources
1533 are specified then we only show information about them. Resource
1534 may be either a resource id <resource_id> or %<resource_id> or
1535 resource%<resource_id>, or a resource name regular expression
1536 regexp%<resource_pattern>. If --full is specified show the in‐
1537 ternal constraint id's as well. If --all is specified show the
1538 expired constraints.
1539
1540 location add <id> <resource> <node> <score> [resource-discovery=<op‐
1541 tion>]
1542 Add a location constraint with the appropriate id for the speci‐
1543 fied resource, node name and score. Resource may be either a re‐
1544 source id <resource_id> or %<resource_id> or resource%<re‐
1545 source_id>, or a resource name regular expression regexp%<re‐
1546 source_pattern>.
1547
1548 location delete <id>
1549 Remove a location constraint with the appropriate id.
1550
1551 location remove <id>
1552 Remove a location constraint with the appropriate id.
1553
1554 order [config | show] [--full]
1555 List all current ordering constraints (if --full is specified
1556 show the internal constraint id's as well).
1557
1558 order [action] <resource id> then [action] <resource id> [options]
1559 Add an ordering constraint specifying actions (start, stop, pro‐
1560              mote, demote); if no action is specified, the default action
1561              will be start. Available options are kind=Optional/Manda‐
1562 tory/Serialize, symmetrical=true/false, require-all=true/false
1563 and id=<constraint-id>.
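
                  For example, to start a hypothetical 'Database' resource
                  before a hypothetical 'WebSite' resource:

                      pcs constraint order start Database then start WebSite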
1564
1565 order set <resource1> [resourceN]... [options] [set <resourceX> ...
1566 [options]] [setoptions [constraint_options]]
1567 Create an ordered set of resources. Available options are se‐
1568 quential=true/false, require-all=true/false and ac‐
1569 tion=start/promote/demote/stop. Available constraint_options are
1570 id=<constraint-id>, kind=Optional/Mandatory/Serialize and sym‐
1571 metrical=true/false.
1572
1573 order delete <resource1> [resourceN]...
1574 Remove resource from any ordering constraint
1575
1576 order remove <resource1> [resourceN]...
1577 Remove resource from any ordering constraint
1578
1579 colocation [config | show] [--full]
1580 List all current colocation constraints (if --full is specified
1581 show the internal constraint id's as well).
1582
1583 colocation add [<role>] <source resource id> with [<role>] <target re‐
1584 source id> [score] [options] [id=constraint-id]
1585 Request <source resource> to run on the same node where pace‐
1586 maker has determined <target resource> should run. Positive
1587 values of score mean the resources should be run on the same
1588 node, negative values mean the resources should not be run on
1589 the same node. Specifying 'INFINITY' (or '-INFINITY') for the
1590 score forces <source resource> to run (or not run) with <target
1591 resource> (score defaults to "INFINITY"). A role can be: 'Mas‐
1592 ter', 'Slave', 'Started', 'Stopped' (if no role is specified, it
1593 defaults to 'Started').
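
                  For example, to keep a hypothetical 'WebSite' resource on
                  the same node as 'VirtualIP':

                      pcs constraint colocation add WebSite with VirtualIP INFINITY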
1594
1595 colocation set <resource1> [resourceN]... [options] [set <resourceX>
1596 ... [options]] [setoptions [constraint_options]]
1597 Create a colocation constraint with a resource set. Available
1598 options are sequential=true/false and role=Stopped/Started/Mas‐
1599 ter/Slave. Available constraint_options are id and either of:
1600 score, score-attribute, score-attribute-mangle.
1601
1602 colocation delete <source resource id> <target resource id>
1603 Remove colocation constraints with specified resources.
1604
1605 colocation remove <source resource id> <target resource id>
1606 Remove colocation constraints with specified resources.
1607
1608 ticket [config | show] [--full]
1609 List all current ticket constraints (if --full is specified show
1610 the internal constraint id's as well).
1611
1612 ticket add <ticket> [<role>] <resource id> [<options>] [id=<con‐
1613 straint-id>]
1614 Create a ticket constraint for <resource id>. Available option
1615 is loss-policy=fence/stop/freeze/demote. A role can be master,
1616 slave, started or stopped.
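
                  For example, to stop a hypothetical 'WebSite' resource when
                  the ticket 'ticketA' is lost:

                      pcs constraint ticket add ticketA WebSite loss-policy=stop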
1617
1618 ticket set <resource1> [<resourceN>]... [<options>] [set <resourceX>
1619 ... [<options>]] setoptions <constraint_options>
1620 Create a ticket constraint with a resource set. Available op‐
1621 tions are role=Stopped/Started/Master/Slave. Required constraint
1622 option is ticket=<ticket>. Optional constraint options are
1623 id=<constraint-id> and loss-policy=fence/stop/freeze/demote.
1624
1625 ticket delete <ticket> <resource id>
1626 Remove all ticket constraints with <ticket> from <resource id>.
1627
1628 ticket remove <ticket> <resource id>
1629 Remove all ticket constraints with <ticket> from <resource id>.
1630
1631 delete <constraint id>...
1632 Remove constraint(s) or constraint rules with the specified
1633 id(s).
1634
1635 remove <constraint id>...
1636 Remove constraint(s) or constraint rules with the specified
1637 id(s).
1638
1639 ref <resource>...
1640 List constraints referencing specified resource.
1641
1642 rule add <constraint id> [id=<rule id>] [role=master|slave]
1643 [score=<score>|score-attribute=<attribute>] <expression>
1644 Add a rule to a location constraint specified by 'constraint id'
1645 where the expression looks like one of the following:
1646 defined|not_defined <node attribute>
1647 <node attribute> lt|gt|lte|gte|eq|ne [string|integer|num‐
1648 ber|version] <value>
1649 date gt|lt <date>
1650 date in_range <date> to <date>
1651 date in_range <date> to duration <duration options>...
1652 date-spec <date spec options>...
1653 <expression> and|or <expression>
1654 ( <expression> )
1655 where duration options and date spec options are: hours, month‐
1656 days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1657 If score is omitted it defaults to INFINITY. If id is omitted
1658 one is generated from the constraint id.
1659
1660 rule delete <rule id>
1661 Remove a rule from its location constraint and if it's the last
1662 rule, the constraint will also be removed.
1663
1664 rule remove <rule id>
1665 Remove a rule from its location constraint and if it's the last
1666 rule, the constraint will also be removed.
1667
1668 qdevice
1669 status <device model> [--full] [<cluster name>]
1670 Show runtime status of specified model of quorum device
1671 provider. Using --full will give more detailed output. If
1672 <cluster name> is specified, only information about the speci‐
1673 fied cluster will be displayed.
1674
1675 setup model <device model> [--enable] [--start]
1676 Configure specified model of quorum device provider. Quorum de‐
1677 vice then can be added to clusters by running "pcs quorum device
1678 add" command in a cluster. --start will also start the
1679 provider. --enable will configure the provider to start on
1680 boot.
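
                  For example, to configure the 'net' model provider on the
                  local host and have it start now and on boot:

                      pcs qdevice setup model net --enable --start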
1681
1682 destroy <device model>
1683 Disable and stop specified model of quorum device provider and
1684 delete its configuration files.
1685
1686 start <device model>
1687 Start specified model of quorum device provider.
1688
1689 stop <device model>
1690 Stop specified model of quorum device provider.
1691
1692 kill <device model>
1693 Force specified model of quorum device provider to stop (per‐
1694              forms kill -9). Note that the init system (e.g. systemd) may de‐
1695              tect that the qdevice is not running and start it again. If you
1696              want to stop the qdevice, run the "pcs qdevice stop" command.
1697
1698 enable <device model>
1699 Configure specified model of quorum device provider to start on
1700 boot.
1701
1702 disable <device model>
1703 Configure specified model of quorum device provider to not start
1704 on boot.
1705
1706 quorum
1707 [config]
1708 Show quorum configuration.
1709
1710 status Show quorum runtime status.
1711
1712 device add [<generic options>] model <device model> [<model options>]
1713 [heuristics <heuristics options>]
1714 Add a quorum device to the cluster. Quorum device should be con‐
1715 figured first with "pcs qdevice setup". It is not possible to
1716 use more than one quorum device in a cluster simultaneously.
1717 Currently the only supported model is 'net'. It requires model
1718 options 'algorithm' and 'host' to be specified. Options are doc‐
1719 umented in corosync-qdevice(8) man page; generic options are
1720 'sync_timeout' and 'timeout', for model net options check the
1721 quorum.device.net section, for heuristics options see the quo‐
1722 rum.device.heuristics section. Pcs automatically creates and
1723 distributes TLS certificates and sets the 'tls' model option to
1724 the default value 'on'.
1725 Example: pcs quorum device add model net algorithm=lms
1726 host=qnetd.internal.example.com
1727
1728 device heuristics delete
1729 Remove all heuristics settings of the configured quorum device.
1730
1731 device heuristics remove
1732 Remove all heuristics settings of the configured quorum device.
1733
1734 device delete
1735 Remove a quorum device from the cluster.
1736
1737 device remove
1738 Remove a quorum device from the cluster.
1739
1740 device status [--full]
1741 Show quorum device runtime status. Using --full will give more
1742 detailed output.
1743
1744 device update [<generic options>] [model <model options>] [heuristics
1745 <heuristics options>]
1746 Add/Change quorum device options. Requires the cluster to be
1747 stopped. Model and options are all documented in corosync-qde‐
1748 vice(8) man page; for heuristics options check the quorum.de‐
1749 vice.heuristics subkey section, for model options check the quo‐
1750 rum.device.<device model> subkey sections.
1751
1752              WARNING: If you want to change the "host" option of qdevice model
1753              net, use the "pcs quorum device remove" and "pcs quorum device
1754              add" commands to set up the configuration properly, unless the
1755              old and new host are the same machine.
1756
1757 expected-votes <votes>
1758 Set expected votes in the live cluster to specified value. This
1759              only affects the live cluster; it does not change any configura‐
1760              tion files.
1761
1762 unblock [--force]
1763 Cancel waiting for all nodes when establishing quorum. Useful
1764 in situations where you know the cluster is inquorate, but you
1765 are confident that the cluster should proceed with resource man‐
1766 agement regardless. This command should ONLY be used when nodes
1767 which the cluster is waiting for have been confirmed to be pow‐
1768 ered off and to have no access to shared resources.
1769
1770 WARNING: If the nodes are not actually powered off or they do
1771 have access to shared resources, data corruption/cluster failure
1772 can occur. To prevent accidental running of this command,
1773 --force or interactive user response is required in order to
1774 proceed.
1775
1776 update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]]
1777 [last_man_standing_window=[<time in ms>]] [wait_for_all=[0|1]]
1778 Add/Change quorum options. At least one option must be speci‐
1779 fied. Options are documented in corosync's votequorum(5) man
1780 page. Requires the cluster to be stopped.
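
                  For example, to make the cluster wait for all nodes on its
                  first start:

                      pcs quorum update wait_for_all=1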
1781
1782 booth
1783 setup sites <address> <address> [<address>...] [arbitrators <address>
1784 ...] [--force]
1785 Write new booth configuration with specified sites and arbitra‐
1786 tors. Total number of peers (sites and arbitrators) must be
1787              odd. When the configuration file already exists, the command
1788              fails unless --force is specified.
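
                  For example, using placeholder addresses for two sites and
                  one arbitrator:

                      pcs booth setup sites 192.0.2.11 192.0.2.21 arbitrators
                      192.0.2.31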
1789
1790 destroy
1791 Remove booth configuration files.
1792
1793 ticket add <ticket> [<name>=<value> ...]
1794              Add a new ticket to the current configuration. Ticket options
1795              are described in the booth manpage.
1796
1797 ticket delete <ticket>
1798 Remove the specified ticket from the current configuration.
1799
1800 ticket remove <ticket>
1801 Remove the specified ticket from the current configuration.
1802
1803 config [<node>]
1804 Show booth configuration from the specified node or from the
1805 current node if node not specified.
1806
1807 create ip <address>
1808              Make the cluster run the booth service on the specified ip ad‐
1809              dress as a cluster resource. Typically this is used to run a
1810              booth site.
1811
1812 delete Remove booth resources created by the "pcs booth create" com‐
1813 mand.
1814
1815 remove Remove booth resources created by the "pcs booth create" com‐
1816 mand.
1817
1818 restart
1819 Restart booth resources created by the "pcs booth create" com‐
1820 mand.
1821
1822 ticket grant <ticket> [<site address>]
1823 Grant the ticket to the site specified by the address, hence to
1824              the booth formation this site is a member of. When the address
1825              is omitted, the site address that was specified with the 'pcs
1826              booth create' command is used. Specifying the site address is
1827              therefore mandatory when running this command on a host in an
1828              arbitrator role.
1829              Note that the ticket must not already be granted in the given
1830              booth formation; barring direct intervention at the sites, the
1831              ticket needs to be revoked first and only then can it be granted
1832              at another site again (an ad-hoc and, in the worst case, abrupt
1833              change, since there is no direct atomicity).
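
                  For example, to grant a hypothetical ticket 'ticketA' to the
                  site with the placeholder address 192.0.2.11:

                      pcs booth ticket grant ticketA 192.0.2.11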
1834
1835 ticket revoke <ticket> [<site address>]
1836 Revoke the ticket in the booth formation as identified with one
1837              of its member sites specified by the address. When the address
1838              is omitted, the site address that was specified with a prior
1839              'pcs booth create' command is used. Specifying the site address
1840              is therefore mandatory when running this command on a host in an
1841              arbitrator role.
1842
1843 status Print current status of booth on the local node.
1844
1845 pull <node>
1846 Pull booth configuration from the specified node.
1847
1848 sync [--skip-offline]
1849 Send booth configuration from the local node to all nodes in the
1850 cluster.
1851
1852 enable Enable booth arbitrator service.
1853
1854 disable
1855 Disable booth arbitrator service.
1856
1857 start Start booth arbitrator service.
1858
1859 stop Stop booth arbitrator service.
1860
1861 status
1862 [status] [--full] [--hide-inactive]
1863 View all information about the cluster and resources (--full
1864 provides more details, --hide-inactive hides inactive re‐
1865 sources).
1866
1867 resources [<resource id | tag id>] [node=<node>] [--hide-inactive]
1868 Show status of all currently configured resources. If --hide-in‐
1869 active is specified, only show active resources. If a resource
1870 or tag id is specified, only show status of the specified re‐
1871 source or resources in the specified tag. If node is specified,
1872 only show status of resources configured for the specified node.
1873
1874 cluster
1875 View current cluster status.
1876
1877 corosync
1878 View current membership information as seen by corosync.
1879
1880 quorum View current quorum status.
1881
1882 qdevice <device model> [--full] [<cluster name>]
1883 Show runtime status of specified model of quorum device
1884 provider. Using --full will give more detailed output. If
1885 <cluster name> is specified, only information about the speci‐
1886 fied cluster will be displayed.
1887
1888 booth Print current status of booth on the local node.
1889
1890 nodes [corosync | both | config]
1891 View current status of nodes from pacemaker. If 'corosync' is
1892 specified, view current status of nodes from corosync instead.
1893 If 'both' is specified, view current status of nodes from both
1894 corosync & pacemaker. If 'config' is specified, print nodes from
1895 corosync & pacemaker configuration.
1896
1897 pcsd [<node>]...
1898 Show current status of pcsd on nodes specified, or on all nodes
1899 configured in the local cluster if no nodes are specified.
1900
1901 xml View xml version of status (output from crm_mon -r -1 -X).
1902
1903 config
1904 [show] View full cluster configuration.
1905
1906 backup [filename]
1907              Creates a tarball containing the cluster configuration files. If
1908              filename is not specified, the standard output will be used.
1909
1910 restore [--local] [filename]
1911 Restores the cluster configuration files on all nodes from the
1912 backup. If filename is not specified the standard input will be
1913 used. If --local is specified only the files on the current
1914 node will be restored.
1915
1916 checkpoint
1917 List all available configuration checkpoints.
1918
1919 checkpoint view <checkpoint_number>
1920 Show specified configuration checkpoint.
1921
1922 checkpoint diff <checkpoint_number> <checkpoint_number>
1923 Show differences between the two specified checkpoints. Use
1924 checkpoint number 'live' to compare a checkpoint to the current
1925 live configuration.
1926
1927 checkpoint restore <checkpoint_number>
1928 Restore cluster configuration to specified checkpoint.
1929
1930 pcsd
1931 certkey <certificate file> <key file>
1932 Load custom certificate and key files for use in pcsd.
1933
1934 status [<node>]...
1935 Show current status of pcsd on nodes specified, or on all nodes
1936 configured in the local cluster if no nodes are specified.
1937
1938 sync-certificates
1939 Sync pcsd certificates to all nodes in the local cluster.
1940
1941 deauth [<token>]...
1942 Delete locally stored authentication tokens used by remote sys‐
1943 tems to connect to the local pcsd instance. If no tokens are
1944 specified all tokens will be deleted. After this command is run
1945 other nodes will need to re-authenticate against this node to be
1946 able to connect to it.
1947
1948 host
1949 auth (<host name> [addr=<address>[:<port>]])... [-u <username>] [-p
1950 <password>]
1951 Authenticate local pcs/pcsd against pcsd on specified hosts. It
1952 is possible to specify an address and a port via which pcs/pcsd
1953 will communicate with each host. If an address is not specified
1954 a host name will be used. If a port is not specified 2224 will
1955 be used.
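
                  For example, using placeholder host names and addresses and
                  the 'hacluster' user:

                      pcs host auth node1 addr=192.0.2.11 node2 addr=192.0.2.12
                      -u hacluster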
1956
1957 deauth [<host name>]...
1958 Delete authentication tokens which allow pcs/pcsd on the current
1959 system to connect to remote pcsd instances on specified host
1960 names. If the current system is a member of a cluster, the to‐
1961 kens will be deleted from all nodes in the cluster. If no host
1962 names are specified all tokens will be deleted. After this com‐
1963 mand is run this node will need to re-authenticate against other
1964 nodes to be able to connect to them.
1965
1966 node
1967 attribute [[<node>] [--name <name>] | <node> <name>=<value> ...]
1968 Manage node attributes. If no parameters are specified, show
1969 attributes of all nodes. If one parameter is specified, show
1970 attributes of specified node. If --name is specified, show
1971 specified attribute's value from all nodes. If more parameters
1972 are specified, set attributes of specified node. Attributes can
1973 be removed by setting an attribute without a value.
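
                  For example, to set a hypothetical 'rack' attribute on node1
                  and later remove it:

                      pcs node attribute node1 rack=1
                      pcs node attribute node1 rack=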
1974
1975 maintenance [--all | <node>...] [--wait[=n]]
1976 Put specified node(s) into maintenance mode, if no nodes or op‐
1977 tions are specified the current node will be put into mainte‐
1978 nance mode, if --all is specified all nodes will be put into
1979 maintenance mode. If --wait is specified, pcs will wait up to
1980 'n' seconds for the node(s) to be put into maintenance mode and
1981              then return 0 on success or 1 if the operation has not succeeded
1982              yet. If 'n' is not specified it defaults to 60 minutes.
1983
1984 unmaintenance [--all | <node>...] [--wait[=n]]
1985 Remove node(s) from maintenance mode, if no nodes or options are
1986 specified the current node will be removed from maintenance
1987 mode, if --all is specified all nodes will be removed from main‐
1988 tenance mode. If --wait is specified, pcs will wait up to 'n'
1989 seconds for the node(s) to be removed from maintenance mode and
1990              then return 0 on success or 1 if the operation has not succeeded
1991              yet. If 'n' is not specified it defaults to 60 minutes.
1992
1993 standby [--all | <node>...] [--wait[=n]]
1994 Put specified node(s) into standby mode (the node specified will
1995 no longer be able to host resources), if no nodes or options are
1996 specified the current node will be put into standby mode, if
1997 --all is specified all nodes will be put into standby mode. If
1998 --wait is specified, pcs will wait up to 'n' seconds for the
1999 node(s) to be put into standby mode and then return 0 on success
2000              or 1 if the operation has not succeeded yet. If 'n' is not spec‐
2001              ified it defaults to 60 minutes.
2002
2003 unstandby [--all | <node>...] [--wait[=n]]
2004 Remove node(s) from standby mode (the node specified will now be
2005 able to host resources), if no nodes or options are specified
2006 the current node will be removed from standby mode, if --all is
2007 specified all nodes will be removed from standby mode. If --wait
2008 is specified, pcs will wait up to 'n' seconds for the node(s) to
2009 be removed from standby mode and then return 0 on success or 1
2010              if the operation has not succeeded yet. If 'n' is not specified
2011              it defaults to 60 minutes.
2012
2013 utilization [[<node>] [--name <name>] | <node> <name>=<value> ...]
2014 Add specified utilization options to specified node. If node is
2015 not specified, shows utilization of all nodes. If --name is
2016 specified, shows specified utilization value from all nodes. If
2017 utilization options are not specified, shows utilization of
2018              specified node. Utilization options should be in the format
2019              name=value, where value must be an integer. Options may be removed by
2020 setting an option without a value. Example: pcs node utiliza‐
2021 tion node1 cpu=4 ram=
2022
2023 alert
2024 [config|show]
2025 Show all configured alerts.
2026
2027 create path=<path> [id=<alert-id>] [description=<description>] [options
2028 [<option>=<value>]...] [meta [<meta-option>=<value>]...]
2029 Define an alert handler with specified path. Id will be automat‐
2030 ically generated if it is not specified.
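
                  For example, assuming a handler script installed at the
                  hypothetical path /usr/local/bin/alert_handler.sh:

                      pcs alert create path=/usr/local/bin/alert_handler.sh
                      id=my_alert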
2031
2032 update <alert-id> [path=<path>] [description=<description>] [options
2033 [<option>=<value>]...] [meta [<meta-option>=<value>]...]
2034 Update an existing alert handler with specified id.
2035
2036 delete <alert-id> ...
2037 Remove alert handlers with specified ids.
2038
2039 remove <alert-id> ...
2040 Remove alert handlers with specified ids.
2041
2042 recipient add <alert-id> value=<recipient-value> [id=<recipient-id>]
2043 [description=<description>] [options [<option>=<value>]...] [meta
2044 [<meta-option>=<value>]...]
2045 Add new recipient to specified alert handler.
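
                  For example, adding a recipient to the hypothetical
                  'my_alert' handler sketched above:

                      pcs alert recipient add my_alert value=admin@example.com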
2046
2047 recipient update <recipient-id> [value=<recipient-value>] [descrip‐
2048 tion=<description>] [options [<option>=<value>]...] [meta [<meta-op‐
2049 tion>=<value>]...]
2050 Update an existing recipient identified by its id.
2051
2052 recipient delete <recipient-id> ...
2053 Remove specified recipients.
2054
2055 recipient remove <recipient-id> ...
2056 Remove specified recipients.
2057
2058 client
2059 local-auth [<pcsd-port>] [-u <username>] [-p <password>]
2060 Authenticate current user to local pcsd. This is required to run
2061              some pcs commands which may require permissions of the root
2062              user, such as 'pcs cluster start'.
2063
2064 dr
2065 config Display disaster-recovery configuration from the local node.
2066
2067 status [--full] [--hide-inactive]
2068 Display status of the local and the remote site cluster (--full
2069 provides more details, --hide-inactive hides inactive re‐
2070 sources).
2071
2072 set-recovery-site <recovery site node>
2073 Set up disaster-recovery with the local cluster being the pri‐
2074 mary site. The recovery site is defined by a name of one of its
2075 nodes.
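
                  For example, if 'node-r1' is a node of the intended recovery
                  cluster (the name is illustrative):

                      pcs dr set-recovery-site node-r1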
2076
2077 destroy
2078 Permanently destroy disaster-recovery configuration on all
2079 sites.
2080
2081 tag
2082 [config|list [<tag id>...]]
2083 Display configured tags.
2084
2085 create <tag id> <id> [<id>]...
2086 Create a tag containing the specified ids.
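
                  For example, to group the hypothetical resources 'VirtualIP'
                  and 'WebSite' under one tag:

                      pcs tag create web-services VirtualIP WebSite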
2087
2088 delete <tag id>...
2089 Delete specified tags.
2090
2091 remove <tag id>...
2092 Delete specified tags.
2093
2094 update <tag id> [add <id> [<id>]... [--before <id> | --after <id>]]
2095 [remove <id> [<id>]...]
2096 Update a tag using the specified ids. Ids can be added, removed
2097 or moved in a tag. You can use --before or --after to specify
2098              the position of the added ids relative to some id already exist‐
2099              ing in the tag. By adding ids to a tag they are already in and
2100              specifying --after or --before, you can move the ids within the
2101              tag.
2102
2104 Show all resources
2105 # pcs resource config
2106
2107 Show options specific to the 'VirtualIP' resource
2108 # pcs resource config VirtualIP
2109
2110 Create a new resource called 'VirtualIP' with options
2111 # pcs resource create VirtualIP ocf:heartbeat:IPaddr2
2112 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
2113
2114 Create a new resource called 'VirtualIP' with options
2115 # pcs resource create VirtualIP IPaddr2 ip=192.168.0.99
2116 cidr_netmask=32 nic=eth2 op monitor interval=30s
2117
2118 Change the ip address of VirtualIP and remove the nic option
2119 # pcs resource update VirtualIP ip=192.168.0.98 nic=
2120
2121 Delete the VirtualIP resource
2122 # pcs resource delete VirtualIP
2123
2124 Create the MyStonith stonith fence_virt device which can fence host
2125 'f1'
2126 # pcs stonith create MyStonith fence_virt pcmk_host_list=f1
2127
2128 Set the stonith-enabled property to false on the cluster (which dis‐
2129 ables stonith)
2130 # pcs property set stonith-enabled=false
2131
2133 Various pcs commands accept the --force option. Its purpose is to over‐
2134 ride some of the checks that pcs performs or some of the errors that may
2135 occur when a pcs command is run. When such an error occurs, pcs prints
2136 the error with a note that it may be overridden. The exact behavior of
2137 the option is different for each pcs command. Using the --force option
2138 can lead to situations that would normally be prevented by the logic of
2139 pcs commands and therefore its use is strongly discouraged unless you
2140 know what you are doing.
2141
2143 EDITOR
2144 Path to a plain-text editor. This is used when pcs is requested
2145 to present a text for the user to edit.
2146
2147 no_proxy, https_proxy, all_proxy, NO_PROXY, HTTPS_PROXY, ALL_PROXY
2148 These environment variables (listed according to their priori‐
2149 ties) control how pcs handles proxy servers when connecting to
2150 cluster nodes. See curl(1) man page for details.
2151
2153 This section summarizes the most important changes in commands done in
2154 pcs-0.10.x compared to pcs-0.9.x. For detailed description of current
2155 commands see above.
2156
2157 acl
2158 show The 'pcs acl show' command has been deprecated and will be re‐
2159 moved. Please use 'pcs acl config' instead. Applicable in
2160 pcs-0.10.9 and newer.
2161
2162 alert
2163 show The 'pcs alert show' command has been deprecated and will be re‐
2164 moved. Please use 'pcs alert config' instead. Applicable in
2165 pcs-0.10.9 and newer.
2166
2167 cluster
2168 auth The 'pcs cluster auth' command only authenticates nodes in a lo‐
2169 cal cluster and does not accept a node list. The new command for
2170              authentication is 'pcs host auth'. It allows specifying host
2171              names, addresses and pcsd ports.
2172
2173 node add
2174 Custom node names and Corosync 3.x with knet are fully supported
2175 now, therefore the syntax has been completely changed.
2176 The --device and --watchdog options have been replaced with 'de‐
2177 vice' and 'watchdog' options, respectively.
2178
2179 pcsd-status
2180 The 'pcs cluster pcsd-status' command has been deprecated and
2181 will be removed. Please use 'pcs pcsd status' or 'pcs status
2182 pcsd' instead. Applicable in pcs-0.10.9 and newer.
2183
2184 quorum This command has been replaced with 'pcs quorum'.
2185
2186 remote-node add
2187 This command has been replaced with 'pcs cluster node
2188 add-guest'.
2189
2190 remote-node remove
2191 This command has been replaced with 'pcs cluster node
2192 delete-guest' and its alias 'pcs cluster node remove-guest'.
2193
2194 setup Custom node names and Corosync 3.x with knet are fully supported
2195 now, therefore the syntax has been completely changed.
2196 The --name option has been removed. The first parameter of the
2197 command is the cluster name now.
2198 The --local option has been replaced with --corosync_conf
2199 <path>.
2200
2201 standby
2202 This command has been replaced with 'pcs node standby'.
2203
2204 uidgid rm
2205 This command has been deprecated, use 'pcs cluster uidgid
2206 delete' or 'pcs cluster uidgid remove' instead.
2207
2208 unstandby
2209 This command has been replaced with 'pcs node unstandby'.
2210
2211 verify The -V option has been replaced with --full.
2212 To specify a filename, use the -f option.
2213
2214 constraint
2215 list The 'pcs constraint list' command, as well as its variants 'pcs
2216 constraint [location | colocation | order | ticket] list', has
2217 been deprecated and will be removed. Please use 'pcs constraint
2218 [location | colocation | order | ticket] config' instead. Appli‐
2219 cable in pcs-0.10.9 and newer.
2220
2221 show The 'pcs constraint show' command, as well as its variants 'pcs
2222 constraint [location | colocation | order | ticket] show', has
2223 been deprecated and will be removed. Please use 'pcs constraint
2224 [location | colocation | order | ticket] config' instead. Appli‐
2225 cable in pcs-0.10.9 and newer.
2226
2227 pcsd
2228 clear-auth
2229 This command has been replaced with 'pcs host deauth' and 'pcs
2230 pcsd deauth'.
2231
2232 property
2233 list The 'pcs property list' command has been deprecated and will be
2234 removed. Please use 'pcs property config' instead. Applicable in
2235 pcs-0.10.9 and newer.
2236
2237 set The --node option is no longer supported. Use the 'pcs node at‐
2238 tribute' command to set node attributes.
2239
2240 show The --node option is no longer supported. Use the 'pcs node at‐
2241 tribute' command to view node attributes.
2242 The 'pcs property show' command has been deprecated and will be
2243 removed. Please use 'pcs property config' instead. Applicable in
2244 pcs-0.10.9 and newer.
2245
2246 unset The --node option is no longer supported. Use the 'pcs node at‐
2247 tribute' command to unset node attributes.
2248
2249 resource
2250 create The 'master' keyword has been changed to 'promotable'.
2251
2252 failcount reset
2253              The command has been removed as 'pcs resource cleanup' does ex‐
2254              actly the same job.
2255
2256 master This command has been replaced with 'pcs resource promotable'.
2257
2258 show Previously, this command displayed either status or configura‐
2259 tion of resources depending on the parameters specified. This
2260 was confusing, therefore the command was replaced by several new
2261 commands. To display resources status, run 'pcs resource' or
2262 'pcs resource status'. To display resources configuration, run
2263 'pcs resource config' or 'pcs resource config <resource name>'.
2264 To display configured resource groups, run 'pcs resource group
2265 list'.
2266
2267 status
2268 groups This command has been replaced with 'pcs resource group list'.
2269
2270 stonith
2271 level add | clear | delete | remove
2272 Delimiting stonith devices with a comma is deprecated, use a
2273 space instead. Applicable in pcs-0.10.9 and newer.
2274
2275 level clear
2276 Syntax of the command has been fixed so that it is not ambiguous
2277 any more. New syntax is 'pcs stonith level clear [target <tar‐
2278 get> | stonith <stonith id>...]'. Old syntax 'pcs stonith level
2279 clear [<target> | <stonith ids>]' is deprecated but still func‐
2280 tional in pcs-0.10.x. Applicable in pcs-0.10.9 and newer.
2281
2282 level delete | remove
2283 Syntax of the command has been fixed so that it is not ambiguous
2284 any more. New syntax is 'pcs stonith level delete | remove [tar‐
2285 get <target>] [stonith <stonith id>]...'. Old syntax 'pcs
2286 stonith level delete | remove [<target>] [<stonith id>]...' is
2287 deprecated but still functional in pcs-0.10.x. Applicable in
2288 pcs-0.10.9 and newer.
2289
2290 sbd device setup
2291 The --device option has been replaced with the 'device' option.
2292
2293 sbd enable
2294 The --device and --watchdog options have been replaced with 'de‐
2295 vice' and 'watchdog' options, respectively.
2296
2297 show Previously, this command displayed either status or configura‐
2298 tion of stonith resources depending on the parameters specified.
2299 This was confusing, therefore the command was replaced by sev‐
2300 eral new commands. To display stonith resources status, run 'pcs
2301 stonith' or 'pcs stonith status'. To display stonith resources
2302 configuration, run 'pcs stonith config' or 'pcs stonith config
2303 <stonith name>'.
2304
2305 tag
2306 list The 'pcs tag list' command has been deprecated and will be re‐
2307 moved. Please use 'pcs tag config' instead. Applicable in
2308 pcs-0.10.9 and newer.
2309
2311 http://clusterlabs.org/doc/
2312
2313 pcsd(8), pcs_snmp_agent(8)
2314
2315 corosync_overview(8), votequorum(5), corosync.conf(5), corosync-qde‐
2316 vice(8), corosync-qdevice-tool(8), corosync-qnetd(8),
2317 corosync-qnetd-tool(8)
2318
2319 pacemaker-controld(7), pacemaker-fenced(7), pacemaker-schedulerd(7),
2320 crm_mon(8), crm_report(8), crm_simulate(8)
2321
2322 boothd(8), sbd(8)
2323
2324 clufter(1)
2325
2326
2327
2328pcs 0.10.11 2021-10-05 PCS(8)