PCS(8)                    System Administration Utilities                    PCS(8)

NAME
       pcs - pacemaker/corosync configuration system

SYNOPSIS
       pcs [-f file] [-h] [commands]...

DESCRIPTION
       Control and configure pacemaker and corosync.

OPTIONS
       -h, --help
              Display usage and exit.

       -f file
              Perform actions on file instead of active CIB.

       --debug
              Print all network traffic and external commands run.

       --version
              Print pcs version information. List pcs capabilities if --full
              is specified.

       --request-timeout=<timeout>
              Timeout for each outgoing request to another node in seconds.
              Default is 60s.

32 Commands:
33 cluster
34 Configure cluster options and nodes.
35
36 resource
37 Manage cluster resources.
38
39 stonith
40 Manage fence devices.
41
42 constraint
43 Manage resource constraints.
44
45 property
46 Manage pacemaker properties.
47
48 acl
49 Manage pacemaker access control lists.
50
51 qdevice
52 Manage quorum device provider on the local host.
53
54 quorum
55 Manage cluster quorum settings.
56
57 booth
58 Manage booth (cluster ticket manager).
59
60 status
61 View cluster status.
62
63 config
64 View and manage cluster configuration.
65
66 pcsd
67 Manage pcs daemon.
68
69 host
70 Manage hosts known to pcs/pcsd.
71
72 node
73 Manage cluster nodes.
74
75 alert
76 Manage pacemaker alerts.
77
78 client
79 Manage pcsd client configuration.
80
81 resource
82 [status [--hide-inactive]]
83 Show status of all currently configured resources. If
84 --hide-inactive is specified, only show active resources.
85
86 config [<resource id>]...
87 Show options of all currently configured resources or if
88 resource ids are specified show the options for the specified
89 resource ids.
90
91 list [filter] [--nodesc]
92 Show list of all available resource agents (if filter is pro‐
93 vided then only resource agents matching the filter will be
94 shown). If --nodesc is used then descriptions of resource agents
95 are not printed.
96
97 describe [<standard>:[<provider>:]]<type> [--full]
98 Show options for the specified resource. If --full is specified,
99 all options including advanced and deprecated ones are shown.
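
       For example, one might inspect the parameters of the IPaddr2 agent
       used later in this page (any agent listed by 'pcs resource list' can
       be substituted):
       pcs resource describe ocf:heartbeat:IPaddr2 --full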
100
101 create <resource id> [<standard>:[<provider>:]]<type> [resource
102 options] [op <operation action> <operation options> [<operation action>
103 <operation options>]...] [meta <meta options>...] [clone [<clone
104 options>] | promotable <promotable options> | --group <group id>
105 [--before <resource id> | --after <resource id>] | bundle <bundle id>]
106 [--disabled] [--no-default-ops] [--wait[=n]]
107 Create specified resource. If clone is used a clone resource is
108 created. If promotable is used a promotable clone resource is
109 created. If --group is specified the resource is added to the
110 group named. You can use --before or --after to specify the
position of the added resource relative to some resource
112 already existing in the group. If bundle is specified, resource
113 will be created inside of the specified bundle. If --disabled is
114 specified the resource is not started automatically. If
115 --no-default-ops is specified, only monitor operations are cre‐
116 ated for the resource and all other operations use default set‐
117 tings. If --wait is specified, pcs will wait up to 'n' seconds
118 for the resource to start and then return 0 if the resource is
119 started, or 1 if the resource has not yet started. If 'n' is not
120 specified it defaults to 60 minutes.
121
Example: Create a new resource called 'VirtualIP' with IP
address 192.168.0.99, netmask of 32, monitored every 30
seconds, on eth2:
pcs resource create VirtualIP ocf:heartbeat:IPaddr2
ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
127
128 delete <resource id|group id|bundle id|clone id>
129 Deletes the resource, group, bundle or clone (and all resources
130 within the group/bundle/clone).
131
132 remove <resource id|group id|bundle id|clone id>
133 Deletes the resource, group, bundle or clone (and all resources
134 within the group/bundle/clone).
135
136 enable <resource id>... [--wait[=n]]
137 Allow the cluster to start the resources. Depending on the rest
138 of the configuration (constraints, options, failures, etc), the
139 resources may remain stopped. If --wait is specified, pcs will
140 wait up to 'n' seconds for the resources to start and then
141 return 0 if the resources are started, or 1 if the resources
142 have not yet started. If 'n' is not specified it defaults to 60
143 minutes.
144
145 disable <resource id>... [--wait[=n]]
146 Attempt to stop the resources if they are running and forbid the
147 cluster from starting them again. Depending on the rest of the
148 configuration (constraints, options, failures, etc), the
149 resources may remain started. If --wait is specified, pcs will
150 wait up to 'n' seconds for the resources to stop and then return
151 0 if the resources are stopped or 1 if the resources have not
152 stopped. If 'n' is not specified it defaults to 60 minutes.
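
       For example, assuming the 'VirtualIP' resource from the create example
       above, the resource could be stopped and later re-enabled with:
       pcs resource disable VirtualIP --wait
       pcs resource enable VirtualIP --wait=60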
153
154 restart <resource id> [node] [--wait=n]
155 Restart the resource specified. If a node is specified and if
156 the resource is a clone or bundle it will be restarted only on
the node specified. If --wait is specified, pcs will wait up
158 to 'n' seconds for the resource to be restarted and return 0 if
159 the restart was successful or 1 if it was not.
160
161 debug-start <resource id> [--full]
162 This command will force the specified resource to start on this
163 node ignoring the cluster recommendations and print the output
164 from starting the resource. Using --full will give more
165 detailed output. This is mainly used for debugging resources
166 that fail to start.
167
168 debug-stop <resource id> [--full]
169 This command will force the specified resource to stop on this
170 node ignoring the cluster recommendations and print the output
171 from stopping the resource. Using --full will give more
172 detailed output. This is mainly used for debugging resources
173 that fail to stop.
174
175 debug-promote <resource id> [--full]
176 This command will force the specified resource to be promoted on
177 this node ignoring the cluster recommendations and print the
178 output from promoting the resource. Using --full will give more
179 detailed output. This is mainly used for debugging resources
180 that fail to promote.
181
182 debug-demote <resource id> [--full]
183 This command will force the specified resource to be demoted on
184 this node ignoring the cluster recommendations and print the
185 output from demoting the resource. Using --full will give more
186 detailed output. This is mainly used for debugging resources
187 that fail to demote.
188
189 debug-monitor <resource id> [--full]
190 This command will force the specified resource to be monitored
191 on this node ignoring the cluster recommendations and print the
192 output from monitoring the resource. Using --full will give
193 more detailed output. This is mainly used for debugging
194 resources that fail to be monitored.
195
196 move <resource id> [destination node] [--master] [lifetime=<lifetime>]
197 [--wait[=n]]
198 Move the resource off the node it is currently running on by
199 creating a -INFINITY location constraint to ban the node. If
200 destination node is specified the resource will be moved to that
201 node by creating an INFINITY location constraint to prefer the
202 destination node. If --master is used the scope of the command
203 is limited to the master role and you must use the promotable
204 clone id (instead of the resource id). If lifetime is specified
205 then the constraint will expire after that time, otherwise it
206 defaults to infinity and the constraint can be cleared manually
207 with 'pcs resource clear' or 'pcs constraint delete'. If --wait
208 is specified, pcs will wait up to 'n' seconds for the resource
209 to move and then return 0 on success or 1 on error. If 'n' is
210 not specified it defaults to 60 minutes. If you want the
211 resource to preferably avoid running on some nodes but be able
212 to failover to them use 'pcs constraint location avoids'.
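
       For example, assuming a resource 'VirtualIP' and a node 'node2' exist
       in the cluster, the resource could be moved and the resulting
       constraint removed afterwards with:
       pcs resource move VirtualIP node2
       pcs resource clear VirtualIP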
213
214 ban <resource id> [node] [--master] [lifetime=<lifetime>] [--wait[=n]]
215 Prevent the resource id specified from running on the node (or
216 on the current node it is running on if no node is specified) by
217 creating a -INFINITY location constraint. If --master is used
218 the scope of the command is limited to the master role and you
219 must use the promotable clone id (instead of the resource id).
220 If lifetime is specified then the constraint will expire after
221 that time, otherwise it defaults to infinity and the constraint
222 can be cleared manually with 'pcs resource clear' or 'pcs con‐
223 straint delete'. If --wait is specified, pcs will wait up to 'n'
224 seconds for the resource to move and then return 0 on success or
225 1 on error. If 'n' is not specified it defaults to 60 minutes.
226 If you want the resource to preferably avoid running on some
227 nodes but be able to failover to them use 'pcs constraint loca‐
228 tion avoids'.
229
230 clear <resource id> [node] [--master] [--wait[=n]]
231 Remove constraints created by move and/or ban on the specified
232 resource (and node if specified). If --master is used the scope
233 of the command is limited to the master role and you must use
234 the master id (instead of the resource id). If --wait is speci‐
235 fied, pcs will wait up to 'n' seconds for the operation to fin‐
236 ish (including starting and/or moving resources if appropriate)
237 and then return 0 on success or 1 on error. If 'n' is not spec‐
238 ified it defaults to 60 minutes.
239
240 standards
241 List available resource agent standards supported by this
242 installation (OCF, LSB, etc.).
243
244 providers
245 List available OCF resource agent providers.
246
247 agents [standard[:provider]]
248 List available agents optionally filtered by standard and
249 provider.
250
251 update <resource id> [resource options] [op [<operation action> <opera‐
tion options>]...] [meta <meta options>...] [--wait[=n]]
253 Add/Change options to specified resource, clone or multi-state
254 resource. If an operation (op) is specified it will update the
255 first found operation with the same action on the specified
256 resource, if no operation with that action exists then a new
257 operation will be created. (WARNING: all existing options on
258 the updated operation will be reset if not specified.) If you
259 want to create multiple monitor operations you should use the
260 'op add' & 'op remove' commands. If --wait is specified, pcs
261 will wait up to 'n' seconds for the changes to take effect and
262 then return 0 if the changes have been processed or 1 otherwise.
263 If 'n' is not specified it defaults to 60 minutes.
264
265 op add <resource id> <operation action> [operation properties]
266 Add operation for specified resource.
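
       For example, assuming the 'VirtualIP' resource already has a monitor
       operation with interval=30s, a second monitor with a different
       interval could be added with:
       pcs resource op add VirtualIP monitor interval=60s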
267
268 op delete <resource id> <operation action> [<operation properties>...]
269 Remove specified operation (note: you must specify the exact
270 operation properties to properly remove an existing operation).
271
272 op delete <operation id>
273 Remove the specified operation id.
274
275 op remove <resource id> <operation action> [<operation properties>...]
276 Remove specified operation (note: you must specify the exact
277 operation properties to properly remove an existing operation).
278
279 op remove <operation id>
280 Remove the specified operation id.
281
282 op defaults [options]
Set default values for operations. If no options are passed,
284 lists currently configured defaults. Defaults do not apply to
285 resources which override them with their own defined operations.
286
287 meta <resource id | group id | clone id> <meta options> [--wait[=n]]
288 Add specified options to the specified resource, group or clone.
289 Meta options should be in the format of name=value, options may
290 be removed by setting an option without a value. If --wait is
291 specified, pcs will wait up to 'n' seconds for the changes to
292 take effect and then return 0 if the changes have been processed
293 or 1 otherwise. If 'n' is not specified it defaults to 60 min‐
294 utes.
295 Example: pcs resource meta TestResource failure-timeout=50
296 stickiness=
297
298 group list
299 Show all currently configured resource groups and their
300 resources.
301
302 group add <group id> <resource id> [resource id] ... [resource id]
303 [--before <resource id> | --after <resource id>] [--wait[=n]]
304 Add the specified resource to the group, creating the group if
305 it does not exist. If the resource is present in another group
306 it is moved to the new group. You can use --before or --after to
specify the position of the added resources relative to some
308 resource already existing in the group. By adding resources to a
309 group they are already in and specifying --after or --before you
310 can move the resources in the group. If --wait is specified, pcs
311 will wait up to 'n' seconds for the operation to finish (includ‐
312 ing moving resources if appropriate) and then return 0 on suc‐
313 cess or 1 on error. If 'n' is not specified it defaults to 60
314 minutes.
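
       For example, assuming resources 'VirtualIP' and 'WebServer' exist (the
       names are illustrative), a group could be created and extended with:
       pcs resource group add WebGroup VirtualIP
       pcs resource group add WebGroup WebServer --after VirtualIP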
315
316 group delete <group id> <resource id> [resource id] ... [resource id]
317 [--wait[=n]]
318 Remove the specified resource(s) from the group, removing the
319 group if no resources remain in it. If --wait is specified, pcs
320 will wait up to 'n' seconds for the operation to finish (includ‐
321 ing moving resources if appropriate) and then return 0 on suc‐
322 cess or 1 on error. If 'n' is not specified it defaults to 60
323 minutes.
324
325 group remove <group id> <resource id> [resource id] ... [resource id]
326 [--wait[=n]]
327 Remove the specified resource(s) from the group, removing the
328 group if no resources remain in it. If --wait is specified, pcs
329 will wait up to 'n' seconds for the operation to finish (includ‐
330 ing moving resources if appropriate) and then return 0 on suc‐
331 cess or 1 on error. If 'n' is not specified it defaults to 60
332 minutes.
333
334 ungroup <group id> [resource id] ... [resource id] [--wait[=n]]
335 Remove the group (note: this does not remove any resources from
336 the cluster) or if resources are specified, remove the specified
337 resources from the group. If --wait is specified, pcs will wait
338 up to 'n' seconds for the operation to finish (including moving
resources if appropriate) and then return 0 on success or 1 on
340 error. If 'n' is not specified it defaults to 60 minutes.
341
342 clone <resource id | group id> [clone options]... [--wait[=n]]
343 Set up the specified resource or group as a clone. If --wait is
344 specified, pcs will wait up to 'n' seconds for the operation to
345 finish (including starting clone instances if appropriate) and
346 then return 0 on success or 1 on error. If 'n' is not specified
347 it defaults to 60 minutes.
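
       For example, assuming a resource 'WebServer' exists, it could be
       cloned with a limit of two instances (clone-max is a standard clone
       option):
       pcs resource clone WebServer clone-max=2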
348
349 promotable <resource id | group id> [clone options]... [--wait[=n]]
350 Set up the specified resource or group as a promotable clone.
351 This is an alias for 'pcs resource clone <resource id> pro‐
352 motable=true'.
353
354 unclone <resource id | group id> [--wait[=n]]
355 Remove the clone which contains the specified group or resource
356 (the resource or group will not be removed). If --wait is spec‐
357 ified, pcs will wait up to 'n' seconds for the operation to fin‐
358 ish (including stopping clone instances if appropriate) and then
359 return 0 on success or 1 on error. If 'n' is not specified it
360 defaults to 60 minutes.
361
362 bundle create <bundle id> container <container type> [<container
363 options>] [network <network options>] [port-map <port options>]...
364 [storage-map <storage options>]... [meta <meta options>] [--disabled]
365 [--wait[=n]]
366 Create a new bundle encapsulating no resources. The bundle can
367 be used either as it is or a resource may be put into it at any
368 time. If --disabled is specified, the bundle is not started
369 automatically. If --wait is specified, pcs will wait up to 'n'
370 seconds for the bundle to start and then return 0 on success or
371 1 on error. If 'n' is not specified it defaults to 60 minutes.
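
       A minimal sketch, assuming a docker-capable cluster ('image' and
       'replicas' are container options; the image name is illustrative):
       pcs resource bundle create web-bundle container docker image=httpd:2.4 replicas=2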
372
373 bundle update <bundle id> [container <container options>] [network
374 <network options>] [port-map (add <port options>) | (delete | remove
375 <id>...)]... [storage-map (add <storage options>) | (delete | remove
376 <id>...)]... [meta <meta options>] [--wait[=n]]
377 Add, remove or change options to specified bundle. If you wish
378 to update a resource encapsulated in the bundle, use the 'pcs
379 resource update' command instead and specify the resource id.
380 If --wait is specified, pcs will wait up to 'n' seconds for the
381 operation to finish (including moving resources if appropriate)
382 and then return 0 on success or 1 on error. If 'n' is not spec‐
383 ified it defaults to 60 minutes.
384
385 manage <resource id>... [--monitor]
386 Set resources listed to managed mode (default). If --monitor is
387 specified, enable all monitor operations of the resources.
388
389 unmanage <resource id>... [--monitor]
390 Set resources listed to unmanaged mode. When a resource is in
391 unmanaged mode, the cluster is not allowed to start nor stop the
392 resource. If --monitor is specified, disable all monitor opera‐
393 tions of the resources.
394
395 defaults [options]
Set default values for resources. If no options are passed,
397 lists currently configured defaults. Defaults do not apply to
398 resources which override them with their own defined values.
399
400 cleanup [<resource id>] [node=<node>] [operation=<operation> [inter‐
401 val=<interval>]]
402 Make the cluster forget failed operations from history of the
403 resource and re-detect its current state. This can be useful to
404 purge knowledge of past failures that have since been resolved.
405 If a resource id is not specified then all resources / stonith
406 devices will be cleaned up. If a node is not specified then
407 resources / stonith devices on all nodes will be cleaned up.
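
       For example, assuming a resource 'VirtualIP' and a node 'node1', the
       failed monitor history on that node could be cleared with:
       pcs resource cleanup VirtualIP node=node1 operation=monitor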
408
409 refresh [<resource id>] [node=<node>] [--full]
410 Make the cluster forget the complete operation history (includ‐
411 ing failures) of the resource and re-detect its current state.
412 If you are interested in forgetting failed operations only, use
413 the 'pcs resource cleanup' command. If a resource id is not
414 specified then all resources / stonith devices will be
415 refreshed. If a node is not specified then resources / stonith
416 devices on all nodes will be refreshed. Use --full to refresh a
417 resource on all nodes, otherwise only nodes where the resource's
418 state is known will be considered.
419
420 failcount show [<resource id>] [node=<node>] [operation=<operation>
421 [interval=<interval>]] [--full]
422 Show current failcount for resources, optionally filtered by a
423 resource, node, operation and its interval. If --full is speci‐
424 fied do not sum failcounts per resource and node. Use 'pcs
425 resource cleanup' or 'pcs resource refresh' to reset failcounts.
426
427 relocate dry-run [resource1] [resource2] ...
428 The same as 'relocate run' but has no effect on the cluster.
429
430 relocate run [resource1] [resource2] ...
431 Relocate specified resources to their preferred nodes. If no
432 resources are specified, relocate all resources. This command
433 calculates the preferred node for each resource while ignoring
434 resource stickiness. Then it creates location constraints which
435 will cause the resources to move to their preferred nodes. Once
436 the resources have been moved the constraints are deleted auto‐
437 matically. Note that the preferred node is calculated based on
438 current cluster status, constraints, location of resources and
439 other settings and thus it might change over time.
440
441 relocate show
442 Display current status of resources and their optimal node
443 ignoring resource stickiness.
444
445 relocate clear
446 Remove all constraints created by the 'relocate run' command.
447
448 utilization [<resource id> [<name>=<value> ...]]
449 Add specified utilization options to specified resource. If
450 resource is not specified, shows utilization of all resources.
451 If utilization options are not specified, shows utilization of
specified resource. Utilization options should be in the format
name=value, and the value must be an integer. Options may be removed by
454 setting an option without a value. Example: pcs resource uti‐
455 lization TestResource cpu= ram=20
456
457 cluster
458 setup <cluster name> (<node name> [addr=<node address>]...)... [trans‐
459 port knet|udp|udpu [<transport options>] [link <link options>] [com‐
460 pression <compression options>] [crypto <crypto options>]] [totem
461 <totem options>] [quorum <quorum options>] [--enable] [--start
462 [--wait[=<n>]]] [--no-keys-sync]
463 Create a cluster from the listed nodes and synchronize cluster
464 configuration files to them.
465 Nodes are specified by their names and optionally their
466 addresses. If no addresses are specified for a node, pcs will
467 configure corosync to communicate with that node using an
468 address provided in 'pcs host auth' command. Otherwise, pcs will
469 configure corosync to communicate with the node using the speci‐
470 fied addresses.
471
472 Transport knet:
473 This is the default transport. It allows configuring traffic
474 encryption and compression as well as using multiple addresses
475 (links) for nodes.
476 Transport options are: ip_version, knet_pmtud_interval,
477 link_mode
478 Link options are: ip_version, link_priority, linknumber, mcast‐
479 port, ping_interval, ping_precision, ping_timeout, pong_count,
480 transport (udp or sctp)
481 Compression options are: level, model, threshold
482 Crypto options are: cipher, hash, model
483 By default, encryption is enabled with cipher=aes256 and
484 hash=sha256. To disable encryption, set cipher=none and
485 hash=none.
486
487 Transports udp and udpu:
488 These transports are limited to one address per node. They do
489 not support traffic encryption nor compression.
490 Transport options are: ip_version, netmtu
491 Link options are: bindnetaddr, broadcast, mcastaddr, mcastport,
492 ttl
493
Totem and quorum can be configured regardless of the transport used.
495 Totem options are: consensus, downcheck, fail_recv_const, heart‐
496 beat_failures_allowed, hold, join, max_messages, max_net‐
497 work_delay, merge, miss_count_const, send_join,
498 seqno_unchanged_const, token, token_coefficient, token_retrans‐
499 mit, token_retransmits_before_loss_const, window_size
500 Quorum options are: auto_tie_breaker, last_man_standing,
501 last_man_standing_window, wait_for_all
502
503 Transports and their options, link, compression, crypto and
504 totem options are all documented in corosync.conf(5) man page;
505 knet link options are prefixed 'knet_' there, compression
506 options are prefixed 'knet_compression_' and crypto options are
507 prefixed 'crypto_'. Quorum options are documented in votequo‐
508 rum(5) man page.
509
--enable will configure the cluster to start when nodes boot.
511 --start will start the cluster right after creating it. --wait
512 will wait up to 'n' seconds for the cluster to start.
513 --no-keys-sync will skip creating and distributing pcsd SSL cer‐
514 tificate and key and corosync and pacemaker authkey files. Use
515 this if you provide your own certificates and keys.
516
517 Examples:
518 Create a cluster with default settings:
519 pcs cluster setup newcluster node1 node2
520 Create a cluster using two links:
521 pcs cluster setup newcluster node1 addr=10.0.1.11
522 addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12
523 Create a cluster using udp transport with a non-default port:
524 pcs cluster setup newcluster node1 node2 transport udp link
525 mcastport=55405
526
527 start [--all | <node>... ] [--wait[=<n>]] [--request-timeout=<seconds>]
528 Start a cluster on specified node(s). If no nodes are specified
529 then start a cluster on the local node. If --all is specified
530 then start a cluster on all nodes. If the cluster has many nodes
531 then the start request may time out. In that case you should
532 consider setting --request-timeout to a suitable value. If
533 --wait is specified, pcs waits up to 'n' seconds for the cluster
534 to get ready to provide services after the cluster has success‐
535 fully started.
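
       For example, the whole cluster could be started while waiting up to
       two minutes for it to become ready:
       pcs cluster start --all --wait=120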
536
537 stop [--all | <node>... ] [--request-timeout=<seconds>]
538 Stop a cluster on specified node(s). If no nodes are specified
539 then stop a cluster on the local node. If --all is specified
540 then stop a cluster on all nodes. If the cluster is running
541 resources which take long time to stop then the stop request may
542 time out before the cluster actually stops. In that case you
543 should consider setting --request-timeout to a suitable value.
544
545 kill Force corosync and pacemaker daemons to stop on the local node
(performs kill -9). Note that the init system (e.g. systemd) can
detect that the cluster is not running and start it again. If you
want to stop the cluster on a node, run 'pcs cluster stop' on that
549 node.
550
551 enable [--all | <node>... ]
552 Configure cluster to run on node boot on specified node(s). If
553 node is not specified then cluster is enabled on the local node.
554 If --all is specified then cluster is enabled on all nodes.
555
556 disable [--all | <node>... ]
557 Configure cluster to not run on node boot on specified node(s).
558 If node is not specified then cluster is disabled on the local
559 node. If --all is specified then cluster is disabled on all
560 nodes.
561
562 auth [-u <username>] [-p <password>]
563 Authenticate pcs/pcsd to pcsd on nodes configured in the local
564 cluster.
565
566 status View current cluster status (an alias of 'pcs status cluster').
567
568 pcsd-status [<node>]...
569 Show current status of pcsd on nodes specified, or on all nodes
570 configured in the local cluster if no nodes are specified.
571
572 sync Sync cluster configuration (files which are supported by all
573 subcommands of this command) to all cluster nodes.
574
575 sync corosync
576 Sync corosync configuration to all nodes found from current
577 corosync.conf file.
578
579 cib [filename] [scope=<scope> | --config]
580 Get the raw xml from the CIB (Cluster Information Base). If a
581 filename is provided, we save the CIB to that file, otherwise
582 the CIB is printed. Specify scope to get a specific section of
583 the CIB. Valid values of the scope are: configuration, nodes,
584 resources, constraints, crm_config, rsc_defaults, op_defaults,
585 status. --config is the same as scope=configuration. Do not
586 specify a scope if you want to edit the saved CIB using pcs (pcs
587 -f <command>).
588
589 cib-push <filename> [--wait[=<n>]] [diff-against=<filename_original> |
590 scope=<scope> | --config]
591 Push the raw xml from <filename> to the CIB (Cluster Information
592 Base). You can obtain the CIB by running the 'pcs cluster cib'
command, which is the recommended first step when you want to per‐
594 form desired modifications (pcs -f <command>) for the one-off
595 push. If diff-against is specified, pcs diffs contents of file‐
596 name against contents of filename_original and pushes the result
597 to the CIB. Specify scope to push a specific section of the
598 CIB. Valid values of the scope are: configuration, nodes,
599 resources, constraints, crm_config, rsc_defaults, op_defaults.
600 --config is the same as scope=configuration. Use of --config is
601 recommended. Do not specify a scope if you need to push the
602 whole CIB or be warned in the case of outdated CIB. If --wait
603 is specified wait up to 'n' seconds for changes to be applied.
604 WARNING: the selected scope of the CIB will be overwritten by
605 the current content of the specified file.
606
607 Example:
608 pcs cluster cib > original.xml
609 cp original.xml new.xml
610 pcs -f new.xml constraint location apache prefers node2
611 pcs cluster cib-push new.xml diff-against=original.xml
612
613 cib-upgrade
614 Upgrade the CIB to conform to the latest version of the document
615 schema.
616
617 edit [scope=<scope> | --config]
618 Edit the cib in the editor specified by the $EDITOR environment
619 variable and push out any changes upon saving. Specify scope to
620 edit a specific section of the CIB. Valid values of the scope
621 are: configuration, nodes, resources, constraints, crm_config,
622 rsc_defaults, op_defaults. --config is the same as scope=con‐
623 figuration. Use of --config is recommended. Do not specify a
624 scope if you need to edit the whole CIB or be warned in the case
625 of outdated CIB.
626
627 node add <node name> [addr=<node address>]... [watchdog=<watchdog
628 path>] [device=<SBD device path>]... [--start [--wait[=<n>]]]
629 [--enable] [--no-watchdog-validation]
630 Add the node to the cluster and synchronize all relevant config‐
631 uration files to the new node. This command can only be run on
632 an existing cluster node.
633
634 The new node is specified by its name and optionally its
635 addresses. If no addresses are specified for the node, pcs will
636 configure corosync to communicate with the node using an address
637 provided in 'pcs host auth' command. Otherwise, pcs will config‐
638 ure corosync to communicate with the node using the specified
639 addresses.
640
641 Use 'watchdog' to specify a path to a watchdog on the new node,
642 when SBD is enabled in the cluster. If SBD is configured with
643 shared storage, use 'device' to specify path to shared device(s)
644 on the new node.
645
646 If --start is specified also start cluster on the new node, if
647 --wait is specified wait up to 'n' seconds for the new node to
648 start. If --enable is specified configure cluster to start on
649 the new node on boot. If --no-watchdog-validation is specified,
650 validation of watchdog will be skipped.
651
652 WARNING: By default, it is tested whether the specified watchdog
653 is supported. This may cause a restart of the system when a
654 watchdog with no-way-out-feature enabled is present. Use
655 --no-watchdog-validation to skip watchdog validation.
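
       For example, assuming a new host 'node3' that has already been
       authenticated with 'pcs host auth' (the name and address are
       illustrative), it could be added, started and enabled with:
       pcs cluster node add node3 addr=10.0.1.13 --start --enable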
656
657 node delete <node name> [<node name>]...
658 Shutdown specified nodes and remove them from the cluster.
659
660 node remove <node name> [<node name>]...
661 Shutdown specified nodes and remove them from the cluster.
662
663 node add-remote <node name> [<node address>] [options] [op <operation
664 action> <operation options> [<operation action> <operation
665 options>]...] [meta <meta options>...] [--wait[=<n>]]
666 Add the node to the cluster as a remote node. Sync all relevant
667 configuration files to the new node. Start the node and config‐
668 ure it to start the cluster on boot. Options are port and recon‐
669 nect_interval. Operations and meta belong to an underlying con‐
670 nection resource (ocf:pacemaker:remote). If node address is not
671 specified for the node, pcs will configure pacemaker to communi‐
672 cate with the node using an address provided in 'pcs host auth'
673 command. Otherwise, pcs will configure pacemaker to communicate
674 with the node using the specified addresses. If --wait is speci‐
675 fied, wait up to 'n' seconds for the node to start.
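
       For example, assuming a host 'rnode1' reachable at 192.168.0.50 (both
       illustrative), it could be added as a remote node with:
       pcs cluster node add-remote rnode1 192.168.0.50 reconnect_interval=60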
676
677 node delete-remote <node identifier>
678 Shutdown specified remote node and remove it from the cluster.
679 The node-identifier can be the name of the node or the address
680 of the node.
681
682 node remove-remote <node identifier>
683 Shutdown specified remote node and remove it from the cluster.
684 The node-identifier can be the name of the node or the address
685 of the node.
686
687 node add-guest <node name> <resource id> [options] [--wait[=<n>]]
688 Make the specified resource a guest node resource. Sync all rel‐
689 evant configuration files to the new node. Start the node and
690 configure it to start the cluster on boot. Options are
691 remote-addr, remote-port and remote-connect-timeout. If
692 remote-addr is not specified for the node, pcs will configure
693 pacemaker to communicate with the node using an address provided
694 in 'pcs host auth' command. Otherwise, pcs will configure pace‐
695 maker to communicate with the node using the specified
696 addresses. If --wait is specified, wait up to 'n' seconds for
697 the node to start.
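
       For example, assuming an existing VM resource 'vm-web' and a guest
       address 192.168.0.60 (both illustrative), the VM could be turned into
       a guest node with:
       pcs cluster node add-guest guest1 vm-web remote-addr=192.168.0.60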
698
699 node delete-guest <node identifier>
700 Shutdown specified guest node and remove it from the cluster.
701 The node-identifier can be the name of the node or the address
702 of the node or id of the resource that is used as the guest
703 node.
704
705 node remove-guest <node identifier>
706 Shutdown specified guest node and remove it from the cluster.
707 The node-identifier can be the name of the node or the address
708 of the node or id of the resource that is used as the guest
709 node.
710
711 node clear <node name>
712 Remove specified node from various cluster caches. Use this if a
713 removed node is still considered by the cluster to be a member
714 of the cluster.
715
716 uidgid List the current configured uids and gids of users allowed to
717 connect to corosync.
718
719 uidgid add [uid=<uid>] [gid=<gid>]
720 Add the specified uid and/or gid to the list of users/groups
721 allowed to connect to corosync.
722
723 uidgid delete [uid=<uid>] [gid=<gid>]
724 Remove the specified uid and/or gid from the list of
725 users/groups allowed to connect to corosync.
726
727 uidgid remove [uid=<uid>] [gid=<gid>]
728 Remove the specified uid and/or gid from the list of
729 users/groups allowed to connect to corosync.
730
731 corosync [node]
732 Get the corosync.conf from the specified node or from the cur‐
733 rent node if node not specified.
734
735 reload corosync
736 Reload the corosync configuration on the current node.
737
738 destroy [--all]
739 Permanently destroy the cluster on the current node, killing all
740 cluster processes and removing all cluster configuration files.
741 Using --all will attempt to destroy the cluster on all nodes in
742 the local cluster.
743
744 WARNING: This command permanently removes any cluster configura‐
745 tion that has been created. It is recommended to run 'pcs clus‐
746 ter stop' before destroying the cluster.
747
748 verify [--full] [-f <filename>]
749 Checks the pacemaker configuration (CIB) for syntax and common
750 conceptual errors. If no filename is specified the check is per‐
751 formed on the currently running cluster. If --full is used more
752 verbose output will be printed.
753
754 report [--from "YYYY-M-D H:M:S" [--to "YYYY-M-D H:M:S"]] <dest>
755 Create a tarball containing everything needed when reporting
756 cluster problems. If --from and --to are not used, the report
757 will include the past 24 hours.
758
759 stonith
760 [status [--hide-inactive]]
761 Show status of all currently configured stonith devices. If
762 --hide-inactive is specified, only show active stonith devices.
763
764 config [<stonith id>]...
765 Show options of all currently configured stonith devices or if
766 stonith ids are specified show the options for the specified
767 stonith device ids.
768
769 list [filter] [--nodesc]
770 Show list of all available stonith agents (if filter is provided
771 then only stonith agents matching the filter will be shown). If
772 --nodesc is used then descriptions of stonith agents are not
773 printed.
774
775 describe <stonith agent> [--full]
776 Show options for specified stonith agent. If --full is speci‐
777 fied, all options including advanced and deprecated ones are
778 shown.
779
780 create <stonith id> <stonith device type> [stonith device options] [op
781 <operation action> <operation options> [<operation action> <operation
782 options>]...] [meta <meta options>...] [--group <group id> [--before
783 <stonith id> | --after <stonith id>]] [--disabled] [--wait[=n]]
784 Create stonith device with specified type and options. If
785 --group is specified the stonith device is added to the group
named. You can use --before or --after to specify the position
of the added stonith device relative to some stonith device
already existing in the group. If --disabled is specified the
789 stonith device is not used. If --wait is specified, pcs will
790 wait up to 'n' seconds for the stonith device to start and then
791 return 0 if the stonith device is started, or 1 if the stonith
792 device has not yet started. If 'n' is not specified it defaults
793 to 60 minutes.
794
795 Example: Create a device for nodes node1 and node2
796 pcs stonith create MyFence fence_virt pcmk_host_list=node1,node2
797 Example: Use port p1 for node n1 and ports p2 and p3 for node n2
798 pcs stonith create MyFence fence_virt
799 'pcmk_host_map=n1:p1;n2:p2,p3'
800
801 update <stonith id> [stonith device options]
802 Add/Change options to specified stonith id.
803
804 delete <stonith id>
805 Remove stonith id from configuration.
806
807 remove <stonith id>
808 Remove stonith id from configuration.
809
810 enable <stonith id>... [--wait[=n]]
811 Allow the cluster to use the stonith devices. If --wait is spec‐
812 ified, pcs will wait up to 'n' seconds for the stonith devices
813 to start and then return 0 if the stonith devices are started,
814 or 1 if the stonith devices have not yet started. If 'n' is not
815 specified it defaults to 60 minutes.
816
817 disable <stonith id>... [--wait[=n]]
818 Attempt to stop the stonith devices if they are running and dis‐
819 allow the cluster to use them. If --wait is specified, pcs will
820 wait up to 'n' seconds for the stonith devices to stop and then
821 return 0 if the stonith devices are stopped or 1 if the stonith
822 devices have not stopped. If 'n' is not specified it defaults to
823 60 minutes.
824
825 cleanup [<stonith id>] [--node <node>]
826 Make the cluster forget failed operations from history of the
827 stonith device and re-detect its current state. This can be use‐
828 ful to purge knowledge of past failures that have since been
829 resolved. If a stonith id is not specified then all resources /
830 stonith devices will be cleaned up. If a node is not specified
831 then resources / stonith devices on all nodes will be cleaned
832 up.
833
834 refresh [<stonith id>] [--node <node>] [--full]
835 Make the cluster forget the complete operation history (includ‐
836 ing failures) of the stonith device and re-detect its current
837 state. If you are interested in forgetting failed operations
838 only, use the 'pcs stonith cleanup' command. If a stonith id is
839 not specified then all resources / stonith devices will be
840 refreshed. If a node is not specified then resources / stonith
841 devices on all nodes will be refreshed. Use --full to refresh a
842 stonith device on all nodes, otherwise only nodes where the
843 stonith device's state is known will be considered.
844
845 level [config]
846 Lists all of the fencing levels currently configured.
847
848 level add <level> <target> <stonith id> [stonith id]...
849 Add the fencing level for the specified target with the list of
850 stonith devices to attempt for that target at that level. Fence
851 levels are attempted in numerical order (starting with 1). If a
852 level succeeds (meaning all devices are successfully fenced in
853 that level) then no other levels are tried, and the target is
854 considered fenced. Target may be a node name <node_name> or
855 %<node_name> or node%<node_name>, a node name regular expression
856 regexp%<node_pattern> or a node attribute value
857 attrib%<name>=<value>.
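
       For example, assuming the 'MyFence' device from the create example
       above and a hypothetical second device 'MyBackupFence', two levels
       could be configured for node1 with:
       pcs stonith level add 1 node1 MyFence
       pcs stonith level add 2 node1 MyBackupFence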
858
859 level delete <level> [target] [stonith id]...
860 Removes the fence level for the level, target and/or devices
861 specified. If no target or devices are specified then the fence
862 level is removed. Target may be a node name <node_name> or
863 %<node_name> or node%<node_name>, a node name regular expression
864 regexp%<node_pattern> or a node attribute value
865 attrib%<name>=<value>.
866
867 level remove <level> [target] [stonith id]...
868 Removes the fence level for the level, target and/or devices
869 specified. If no target or devices are specified then the fence
870 level is removed. Target may be a node name <node_name> or
871 %<node_name> or node%<node_name>, a node name regular expression
872 regexp%<node_pattern> or a node attribute value
873 attrib%<name>=<value>.
874
875 level clear [target|stonith id(s)]
876 Clears the fence levels on the target (or stonith id) specified
877 or clears all fence levels if a target/stonith id is not speci‐
878 fied. If more than one stonith id is specified they must be sep‐
879 arated by a comma and no spaces. Target may be a node name
880 <node_name> or %<node_name> or node%<node_name>, a node name
881 regular expression regexp%<node_pattern> or a node attribute
882 value attrib%<name>=<value>. Example: pcs stonith level clear
883 dev_a,dev_b
884
885 level verify
886 Verifies all fence devices and nodes specified in fence levels
887 exist.
888
889 fence <node> [--off]
890 Fence the node specified (if --off is specified, use the 'off'
891 API call to stonith which will turn the node off instead of
892 rebooting it).
893
894 confirm <node> [--force]
895 Confirm to the cluster that the specified node is powered off.
896 This allows the cluster to recover from a situation where no
897 stonith device is able to fence the node. This command should
898 ONLY be used after manually ensuring that the node is powered
899 off and has no access to shared resources.
900
901 WARNING: If this node is not actually powered off or it does
902 have access to shared resources, data corruption/cluster failure
903 can occur. To prevent accidental running of this command,
904 --force or interactive user response is required in order to
905 proceed.
906
907 NOTE: It is not checked if the specified node exists in the
908 cluster in order to be able to work with nodes not visible from
909 the local cluster partition.
910
911 history [show [<node>]]
912 Show fencing history for the specified node or all nodes if no
913 node specified.
914
915 history cleanup [<node>]
916 Cleanup fence history of the specified node or all nodes if no
917 node specified.
918
919 history update
920 Update fence history from all nodes.
921
922 sbd enable [watchdog=<path>[@<node>]]... [device=<path>[@<node>]]...
923 [<SBD_OPTION>=<value>]... [--no-watchdog-validation]
924 Enable SBD in cluster. Default path for watchdog device is
925 /dev/watchdog. Allowed SBD options: SBD_WATCHDOG_TIMEOUT
926 (default: 5), SBD_DELAY_START (default: no) and SBD_STARTMODE
927 (default: always). It is possible to specify up to 3 devices per
928 node. If --no-watchdog-validation is specified, validation of
929 watchdogs will be skipped.
930
931 WARNING: Cluster has to be restarted in order to apply these
932 changes.
933
934 WARNING: By default, it is tested whether the specified watchdog
935 is supported. This may cause a restart of the system when a
936 watchdog with no-way-out-feature enabled is present. Use
937 --no-watchdog-validation to skip watchdog validation.
938
Example of enabling SBD in a cluster with watchdog /dev/watchdog2
on node1, /dev/watchdog1 on node2 and /dev/watchdog0 on all other
nodes, with device /dev/sdb on node1 and device /dev/sda on all
other nodes, and with the watchdog timeout set to 10 seconds:
943
944 pcs stonith sbd enable watchdog=/dev/watchdog2@node1 watch‐
945 dog=/dev/watchdog1@node2 watchdog=/dev/watchdog0
946 device=/dev/sdb@node1 device=/dev/sda SBD_WATCHDOG_TIMEOUT=10
947
948
949 sbd disable
950 Disable SBD in cluster.
951
952 WARNING: Cluster has to be restarted in order to apply these
953 changes.
954
955 sbd device setup device=<path> [device=<path>]... [watchdog-time‐
956 out=<integer>] [allocate-timeout=<integer>] [loop-timeout=<integer>]
957 [msgwait-timeout=<integer>]
958 Initialize SBD structures on device(s) with specified timeouts.
959
960 WARNING: All content on device(s) will be overwritten.
961
962 sbd device message <device-path> <node> <message-type>
963 Manually set a message of the specified type on the device for
964 the node. Possible message types (they are documented in sbd(8)
965 man page): test, reset, off, crashdump, exit, clear
966
967 sbd status [--full]
968 Show status of SBD services in cluster and local device(s) con‐
figured. If --full is specified, a dump of SBD headers on the
device(s) will also be shown.
971
972 sbd config
973 Show SBD configuration in cluster.
974
975
976 sbd watchdog list
977 Show all available watchdog devices on the local node.
978
979 WARNING: Listing available watchdogs may cause a restart of the
980 system when a watchdog with no-way-out-feature enabled is
981 present.
982
983
984 sbd watchdog test [<watchdog-path>]
985 This operation is expected to force-reboot the local system
986 without following any shutdown procedures using a watchdog. If
no watchdog is specified, the available watchdog will be used,
provided only one watchdog device is available on the local system.
989
990
991 acl
992 [show] List all current access control lists.
993
994 enable Enable access control lists.
995
996 disable
997 Disable access control lists.
998
999 role create <role id> [description=<description>] [((read | write |
1000 deny) (xpath <query> | id <id>))...]
1001 Create a role with the id and (optional) description specified.
1002 Each role can also have an unlimited number of permissions
1003 (read/write/deny) applied to either an xpath query or the id of
1004 a specific element in the cib.
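
       A minimal sketch of an ACL setup (the role, user and xpath are
       illustrative):
       pcs acl role create readonly read xpath /cib
       pcs acl user create alice readonly
       pcs acl enable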
1005
1006 role delete <role id>
1007 Delete the role specified and remove it from any users/groups it
1008 was assigned to.
1009
1010 role remove <role id>
1011 Delete the role specified and remove it from any users/groups it
1012 was assigned to.
1013
1014 role assign <role id> [to] [user|group] <username/group>
1015 Assign a role to a user or group already created with 'pcs acl
user/group create'. If there is a user and a group with the same
id and it is not specified which should be used, the user will be
prioritized. In such cases, specify whether the user or the group
should be used.
1020
1021 role unassign <role id> [from] [user|group] <username/group>
Remove a role from the specified user or group. If there is a
user and a group with the same id and it is not specified which
should be used, the user will be prioritized. In such cases,
specify whether the user or the group should be used.
1026
1027 user create <username> [<role id>]...
1028 Create an ACL for the user specified and assign roles to the
1029 user.
1030
1031 user delete <username>
1032 Remove the user specified (and roles assigned will be unassigned
1033 for the specified user).
1034
1035 user remove <username>
1036 Remove the user specified (and roles assigned will be unassigned
1037 for the specified user).
1038
1039 group create <group> [<role id>]...
1040 Create an ACL for the group specified and assign roles to the
1041 group.
1042
1043 group delete <group>
1044 Remove the group specified (and roles assigned will be unas‐
1045 signed for the specified group).
1046
1047 group remove <group>
1048 Remove the group specified (and roles assigned will be unas‐
1049 signed for the specified group).
1050
1051 permission add <role id> ((read | write | deny) (xpath <query> | id
1052 <id>))...
1053 Add the listed permissions to the role specified.
1054
1055 permission delete <permission id>
1056 Remove the permission id specified (permission id's are listed
in parentheses after permissions in 'pcs acl' output).
1058
1059 permission remove <permission id>
1060 Remove the permission id specified (permission id's are listed
in parentheses after permissions in 'pcs acl' output).
1062
1063 property
1064 [list|show [<property> | --all | --defaults]] | [--all | --defaults]
1065 List property settings (default: lists configured properties).
If --defaults is specified, all property defaults are shown; if
--all is specified, currently configured properties are shown
along with unset properties and their defaults. See the
pacemaker-controld(7) and pacemaker-schedulerd(7) man pages for a
description of the properties.
1071
1072 set <property>=[<value>] ... [--force]
1073 Set specific pacemaker properties (if the value is blank then
1074 the property is removed from the configuration). If a property
1075 is not recognized by pcs the property will not be created unless
1076 the --force is used. See pacemaker-controld(7) and pacemaker-
1077 schedulerd(7) man pages for a description of the properties.
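
       For example, maintenance-mode could be turned on and later removed
       again by setting it to a blank value:
       pcs property set maintenance-mode=true
       pcs property set maintenance-mode=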
1078
1079 unset <property> ...
1080 Remove property from configuration. See pacemaker-controld(7)
1081 and pacemaker-schedulerd(7) man pages for a description of the
1082 properties.
1083
1084 constraint
[list|show] [--full]
1086 List all current constraints. If --full is specified also list
1087 the constraint ids.
1088
1089 location <resource> prefers <node>[=<score>] [<node>[=<score>]]...
1090 Create a location constraint on a resource to prefer the speci‐
1091 fied node with score (default score: INFINITY). Resource may be
1092 either a resource id <resource_id> or %<resource_id> or
1093 resource%<resource_id>, or a resource name regular expression
1094 regexp%<resource_pattern>.
1095
1096 location <resource> avoids <node>[=<score>] [<node>[=<score>]]...
1097 Create a location constraint on a resource to avoid the speci‐
1098 fied node with score (default score: INFINITY). Resource may be
1099 either a resource id <resource_id> or %<resource_id> or
1100 resource%<resource_id>, or a resource name regular expression
1101 regexp%<resource_pattern>.
1102
1103 location <resource> rule [id=<rule id>] [resource-discovery=<option>]
1104 [role=master|slave] [constraint-id=<id>] [score=<score> |
1105 score-attribute=<attribute>] <expression>
1106 Creates a location rule on the specified resource where the
1107 expression looks like one of the following:
1108 defined|not_defined <attribute>
1109 <attribute> lt|gt|lte|gte|eq|ne [string|integer|version]
1110 <value>
1111 date gt|lt <date>
1112 date in_range <date> to <date>
1113 date in_range <date> to duration <duration options>...
1114 date-spec <date spec options>...
1115 <expression> and|or <expression>
1116 ( <expression> )
1117 where duration options and date spec options are: hours, month‐
1118 days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1119 Resource may be either a resource id <resource_id> or
1120 %<resource_id> or resource%<resource_id>, or a resource name
1121 regular expression regexp%<resource_pattern>. If score is omit‐
1122 ted it defaults to INFINITY. If id is omitted one is generated
1123 from the resource id. If resource-discovery is omitted it
1124 defaults to 'always'.
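
       For example, assuming a resource 'WebServer' and a node attribute
       'datacenter' (both illustrative), a rule preferring nodes in one
       datacenter could be created with:
       pcs constraint location WebServer rule score=500 datacenter eq dc1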
1125
1126 location [show [resources|nodes [<node>|<resource>]...] [--full]]
1127 List all the current location constraints. If 'resources' is
1128 specified, location constraints are displayed per resource
1129 (default). If 'nodes' is specified, location constraints are
1130 displayed per node. If specific nodes or resources are specified
1131 then we only show information about them. Resource may be either
1132 a resource id <resource_id> or %<resource_id> or
1133 resource%<resource_id>, or a resource name regular expression
1134 regexp%<resource_pattern>. If --full is specified show the
1135 internal constraint id's as well.
1136
1137 location add <id> <resource> <node> <score> [resource-discov‐
1138 ery=<option>]
1139 Add a location constraint with the appropriate id for the speci‐
1140 fied resource, node name and score. Resource may be either a
1141 resource id <resource_id> or %<resource_id> or
1142 resource%<resource_id>, or a resource name regular expression
1143 regexp%<resource_pattern>.
1144
1145 location delete <id>
1146 Remove a location constraint with the appropriate id.
1147
1148 location remove <id>
1149 Remove a location constraint with the appropriate id.
1150
1151 order [show] [--full]
1152 List all current ordering constraints (if --full is specified
1153 show the internal constraint id's as well).
1154
1155 order [action] <resource id> then [action] <resource id> [options]
1156 Add an ordering constraint specifying actions (start, stop, pro‐
1157 mote, demote) and if no action is specified the default action
1158 will be start. Available options are kind=Optional/Manda‐
1159 tory/Serialize, symmetrical=true/false, require-all=true/false
1160 and id=<constraint-id>.
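
       For example, assuming resources 'Database' and 'WebServer' (names are
       illustrative), a mandatory start order could be created with:
       pcs constraint order start Database then start WebServer kind=Mandatory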
1161
1162 order set <resource1> [resourceN]... [options] [set <resourceX> ...
1163 [options]] [setoptions [constraint_options]]
1164 Create an ordered set of resources. Available options are
1165 sequential=true/false, require-all=true/false and
1166 action=start/promote/demote/stop. Available constraint_options
1167 are id=<constraint-id>, kind=Optional/Mandatory/Serialize and
1168 symmetrical=true/false.
1169
1170 order delete <resource1> [resourceN]...
1171 Remove resource from any ordering constraint
1172
1173 order remove <resource1> [resourceN]...
1174 Remove resource from any ordering constraint
1175
1176 colocation [show] [--full]
1177 List all current colocation constraints (if --full is specified
1178 show the internal constraint id's as well).
1179
1180 colocation add [master|slave] <source resource id> with [master|slave]
1181 <target resource id> [score] [options] [id=constraint-id]
1182 Request <source resource> to run on the same node where pace‐
1183 maker has determined <target resource> should run. Positive
1184 values of score mean the resources should be run on the same
1185 node, negative values mean the resources should not be run on
1186 the same node. Specifying 'INFINITY' (or '-INFINITY') for the
1187 score forces <source resource> to run (or not run) with <target
1188 resource> (score defaults to "INFINITY"). A role can be master
1189 or slave (if no role is specified, it defaults to 'started').
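
       For example, assuming resources 'WebServer' and 'VirtualIP' (names are
       illustrative), they could be forced onto the same node with:
       pcs constraint colocation add WebServer with VirtualIP INFINITY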
1190
1191 colocation set <resource1> [resourceN]... [options] [set <resourceX>
1192 ... [options]] [setoptions [constraint_options]]
1193 Create a colocation constraint with a resource set. Available
1194 options are sequential=true/false and role=Stopped/Started/Mas‐
1195 ter/Slave. Available constraint_options are id and either of:
1196 score, score-attribute, score-attribute-mangle.
1197
1198 colocation delete <source resource id> <target resource id>
1199 Remove colocation constraints with specified resources.
1200
1201 colocation remove <source resource id> <target resource id>
1202 Remove colocation constraints with specified resources.
1203
1204 ticket [show] [--full]
1205 List all current ticket constraints (if --full is specified show
1206 the internal constraint id's as well).
1207
1208 ticket add <ticket> [<role>] <resource id> [<options>] [id=<con‐
1209 straint-id>]
1210 Create a ticket constraint for <resource id>. Available option
1211 is loss-policy=fence/stop/freeze/demote. A role can be master,
1212 slave, started or stopped.
1213
1214 ticket set <resource1> [<resourceN>]... [<options>] [set <resourceX>
1215 ... [<options>]] setoptions <constraint_options>
1216 Create a ticket constraint with a resource set. Available
1217 options are role=Stopped/Started/Master/Slave. Required con‐
1218 straint option is ticket=<ticket>. Optional constraint options
1219 are id=<constraint-id> and loss-policy=fence/stop/freeze/demote.
1220
1221 ticket delete <ticket> <resource id>
1222 Remove all ticket constraints with <ticket> from <resource id>.
1223
1224 ticket remove <ticket> <resource id>
1225 Remove all ticket constraints with <ticket> from <resource id>.
1226
1227 delete <constraint id>...
1228 Remove constraint(s) or constraint rules with the specified
1229 id(s).
1230
1231 remove <constraint id>...
1232 Remove constraint(s) or constraint rules with the specified
1233 id(s).
1234
1235 ref <resource>...
1236 List constraints referencing specified resource.
1237
1238 rule add <constraint id> [id=<rule id>] [role=master|slave]
1239 [score=<score>|score-attribute=<attribute>] <expression>
1240 Add a rule to a constraint where the expression looks like one
1241 of the following:
1242 defined|not_defined <attribute>
1243 <attribute> lt|gt|lte|gte|eq|ne [string|integer|version]
1244 <value>
1245 date gt|lt <date>
1246 date in_range <date> to <date>
1247 date in_range <date> to duration <duration options>...
1248 date-spec <date spec options>...
1249 <expression> and|or <expression>
1250 ( <expression> )
1251 where duration options and date spec options are: hours, month‐
1252 days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1253 If score is omitted it defaults to INFINITY. If id is omitted
1254 one is generated from the constraint id.
1255
1256 rule delete <rule id>
1257 Remove the rule with the specified id. If the rule is the last
1258 rule in its constraint, the constraint will be removed as well.
1259
1260 rule remove <rule id>
1261 Remove the rule with the specified id. If the rule is the last
1262 rule in its constraint, the constraint will be removed as well.
1263
1264 qdevice
1265 status <device model> [--full] [<cluster name>]
1266 Show runtime status of specified model of quorum device
1267 provider. Using --full will give more detailed output. If
1268 <cluster name> is specified, only information about the speci‐
1269 fied cluster will be displayed.
1270
1271 setup model <device model> [--enable] [--start]
1272 Configure the specified model of quorum device provider. The
1273 quorum device can then be added to clusters by running the "pcs
1274 quorum device add" command in a cluster. --start will also
1275 start the provider. --enable will configure the provider to
1276 start on boot.
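     The following configures the 'net' model and makes the provider
     start now as well as on boot.
     Example: pcs qdevice setup model net --enable --start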
1277
1278 destroy <device model>
1279 Disable and stop specified model of quorum device provider and
1280 delete its configuration files.
1281
1282 start <device model>
1283 Start specified model of quorum device provider.
1284
1285 stop <device model>
1286 Stop specified model of quorum device provider.
1287
1288 kill <device model>
1289 Force the specified model of quorum device provider to stop
1290 (performs kill -9). Note that the init system (e.g. systemd) can
1291 detect that the qdevice is not running and start it again. To
1292 stop the qdevice cleanly, run the "pcs qdevice stop" command.
1293
1294 enable <device model>
1295 Configure specified model of quorum device provider to start on
1296 boot.
1297
1298 disable <device model>
1299 Configure specified model of quorum device provider to not start
1300 on boot.
1301
1302 quorum
1303 [config]
1304 Show quorum configuration.
1305
1306 status Show quorum runtime status.
1307
1308 device add [<generic options>] model <device model> [<model options>]
1309 [heuristics <heuristics options>]
1310 Add a quorum device to the cluster. Quorum device should be con‐
1311 figured first with "pcs qdevice setup". It is not possible to
1312 use more than one quorum device in a cluster simultaneously.
1313 Currently the only supported model is 'net'. It requires model
1314 options 'algorithm' and 'host' to be specified. Options are doc‐
1315 umented in corosync-qdevice(8) man page; generic options are
1316 'sync_timeout' and 'timeout', for model net options check the
1317 quorum.device.net section, for heuristics options see the quo‐
1318 rum.device.heuristics section. Pcs automatically creates and
1319 distributes TLS certificates and sets the 'tls' model option to
1320 the default value 'on'.
1321 Example: pcs quorum device add model net algorithm=lms
1322 host=qnetd.internal.example.com
1323
1324 device heuristics delete
1325 Remove all heuristics settings of the configured quorum device.
1326
1327 device heuristics remove
1328 Remove all heuristics settings of the configured quorum device.
1329
1330 device delete
1331 Remove a quorum device from the cluster.
1332
1333 device remove
1334 Remove a quorum device from the cluster.
1335
1336 device status [--full]
1337 Show quorum device runtime status. Using --full will give more
1338 detailed output.
1339
1340 device update [<generic options>] [model <model options>] [heuristics
1341 <heuristics options>]
1342 Add/Change quorum device options. Requires the cluster to be
1343 stopped. The model and options are all documented in the
1344 corosync-qdevice(8) man page; for heuristics options check the
1345 quorum.device.heuristics subkey section, for model options check
1346 the quorum.device.<device model> subkey sections.
1347
1348 WARNING: If you want to change the "host" option of qdevice
1349 model net, use the "pcs quorum device remove" and "pcs quorum
1350 device add" commands to set up the configuration properly,
1351 unless the old and the new host are the same machine.
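     The following changes a generic option; the value shown is purely
     illustrative.
     Example: pcs quorum device update timeout=20000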
1352
1353 expected-votes <votes>
1354 Set expected votes in the live cluster to the specified value.
1355 This only affects the live cluster; it does not change any
1356 configuration files.
1357
1358 unblock [--force]
1359 Cancel waiting for all nodes when establishing quorum. Useful
1360 in situations where you know the cluster is inquorate, but you
1361 are confident that the cluster should proceed with resource man‐
1362 agement regardless. This command should ONLY be used when nodes
1363 which the cluster is waiting for have been confirmed to be pow‐
1364 ered off and to have no access to shared resources.
1365
1366 WARNING: If the nodes are not actually powered off or they do
1367 have access to shared resources, data corruption/cluster failure
1368 can occur. To prevent accidental running of this command,
1369 --force or interactive user response is required in order to
1370 proceed.
1371
1372 update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]]
1373 [last_man_standing_window=[<time in ms>]] [wait_for_all=[0|1]]
1374 Add/Change quorum options. At least one option must be speci‐
1375 fied. Options are documented in corosync's votequorum(5) man
1376 page. Requires the cluster to be stopped.
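     The following enables the wait_for_all option (the cluster must be
     stopped first).
     Example: pcs quorum update wait_for_all=1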
1377
1378 booth
1379 setup sites <address> <address> [<address>...] [arbitrators <address>
1380 ...] [--force]
1381 Write new booth configuration with specified sites and arbitra‐
1382 tors. Total number of peers (sites and arbitrators) must be
1383 odd. When the configuration file already exists, the command
1384 fails unless --force is specified.
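     A minimal sketch using documentation-only addresses for two sites
     and one arbitrator.
     Example: pcs booth setup sites 192.0.2.1 192.0.2.2 arbitrators
     192.0.2.3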
1385
1386 destroy
1387 Remove booth configuration files.
1388
1389 ticket add <ticket> [<name>=<value> ...]
1390 Add a new ticket to the current configuration. Ticket options
1391 are described in the booth man page.
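     A minimal sketch with a hypothetical ticket name; options, if any,
     would be appended as name=value pairs.
     Example: pcs booth ticket add ticketA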
1392
1393 ticket delete <ticket>
1394 Remove the specified ticket from the current configuration.
1395
1396 ticket remove <ticket>
1397 Remove the specified ticket from the current configuration.
1398
1399 config [<node>]
1400 Show booth configuration from the specified node, or from the
1401 current node if no node is specified.
1402
1403 create ip <address>
1404 Make the cluster run the booth service on the specified IP
1405 address as a cluster resource. Typically this is used to run a
1406 booth site.
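     A minimal sketch using a documentation-only address.
     Example: pcs booth create ip 192.0.2.10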
1407
1408 delete Remove booth resources created by the "pcs booth create" com‐
1409 mand.
1410
1411 remove Remove booth resources created by the "pcs booth create" com‐
1412 mand.
1413
1414 restart
1415 Restart booth resources created by the "pcs booth create" com‐
1416 mand.
1417
1418 ticket grant <ticket> [<site address>]
1419 Grant the ticket for the site specified by address. The site
1420 address specified with the 'pcs booth create' command is used
1421 if 'site address' is omitted. Specifying the site address is
1422 mandatory when running this command on an arbitrator.
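     The following grants a hypothetical ticket to the site with the
     given documentation-only address.
     Example: pcs booth ticket grant ticketA 192.0.2.1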
1423
1424 ticket revoke <ticket> [<site address>]
1425 Revoke the ticket for the site specified by address. The site
1426 address specified with the 'pcs booth create' command is used
1427 if 'site address' is omitted. Specifying the site address is
1428 mandatory when running this command on an arbitrator.
1429
1430 status Print current status of booth on the local node.
1431
1432 pull <node>
1433 Pull booth configuration from the specified node.
1434
1435 sync [--skip-offline]
1436 Send booth configuration from the local node to all nodes in the
1437 cluster.
1438
1439 enable Enable booth arbitrator service.
1440
1441 disable
1442 Disable booth arbitrator service.
1443
1444 start Start booth arbitrator service.
1445
1446 stop Stop booth arbitrator service.
1447
1448 status
1449 [status] [--full | --hide-inactive]
1450 View all information about the cluster and resources (--full
1451 provides more details, --hide-inactive hides inactive
1452 resources).
1453
1454 resources [--hide-inactive]
1455 Show status of all currently configured resources. If
1456 --hide-inactive is specified, only show active resources.
1457
1458 cluster
1459 View current cluster status.
1460
1461 corosync
1462 View current membership information as seen by corosync.
1463
1464 quorum View current quorum status.
1465
1466 qdevice <device model> [--full] [<cluster name>]
1467 Show runtime status of specified model of quorum device
1468 provider. Using --full will give more detailed output. If
1469 <cluster name> is specified, only information about the speci‐
1470 fied cluster will be displayed.
1471
1472 booth Print current status of booth on the local node.
1473
1474 nodes [corosync | both | config]
1475 View current status of nodes from pacemaker. If 'corosync' is
1476 specified, view current status of nodes from corosync instead.
1477 If 'both' is specified, view current status of nodes from both
1478 corosync & pacemaker. If 'config' is specified, print nodes from
1479 corosync & pacemaker configuration.
1480
1481 pcsd [<node>]...
1482 Show current status of pcsd on nodes specified, or on all nodes
1483 configured in the local cluster if no nodes are specified.
1484
1485 xml View xml version of status (output from crm_mon -r -1 -X).
1486
1487 config
1488 [show] View full cluster configuration.
1489
1490 backup [filename]
1491 Creates a tarball containing the cluster configuration files.
1492 If filename is not specified, the standard output will be used.
1493
1494 restore [--local] [filename]
1495 Restores the cluster configuration files on all nodes from the
1496 backup. If filename is not specified the standard input will be
1497 used. If --local is specified only the files on the current
1498 node will be restored.
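     A minimal sketch restoring from a hypothetical backup file on the
     current node only.
     Example: pcs config restore --local mycluster-backup.tar.bz2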
1499
1500 checkpoint
1501 List all available configuration checkpoints.
1502
1503 checkpoint view <checkpoint_number>
1504 Show specified configuration checkpoint.
1505
1506 checkpoint restore <checkpoint_number>
1507 Restore cluster configuration to specified checkpoint.
1508
1509 import-cman output=<filename> [input=<filename>] [--interactive] [out‐
1510 put-format=corosync.conf] [dist=<dist>]
1511 Converts CMAN cluster configuration to Pacemaker cluster config‐
1512 uration. Converted configuration will be saved to 'output' file.
1513 To send the configuration to the cluster nodes the 'pcs config
1514 restore' command can be used. If --interactive is specified you
1515 will be prompted to solve incompatibilities manually. If no
1516 input is specified /etc/cluster/cluster.conf will be used.
1517 Optionally you can specify the output version by setting the 'dist'
1518 option, e.g. redhat,7.3 or debian,7 or ubuntu,trusty. You can
1519 get the list of supported dist values by running the "clufter
1520 --list-dists" command. If 'dist' is not specified, it defaults
1521 to this node's version.
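     The following converts a CMAN configuration using the default input
     file; the dist value is illustrative.
     Example: pcs config import-cman output=corosync.conf
     input=/etc/cluster/cluster.conf dist=redhat,7.3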
1522
1523 import-cman output=<filename> [input=<filename>] [--interactive] out‐
1524 put-format=pcs-commands|pcs-commands-verbose [dist=<dist>]
1525 Converts CMAN cluster configuration to a list of pcs commands
1526 which, when executed, recreate the same cluster as a Pacemaker
1527 cluster. Commands will be saved to the 'output' file. For other
1528 options see above.
1529
1530 export pcs-commands|pcs-commands-verbose [output=<filename>]
1531 [dist=<dist>]
1532 Creates a list of pcs commands which, upon execution, recreate
1533 the current cluster running on this node. Commands will be saved
1534 to the 'output' file, or written to stdout if 'output' is not
1535 specified. Use pcs-commands to get a simple list of commands,
1536 whereas pcs-commands-verbose creates a list including comments
1537 and debug messages. Optionally specify the output version by
1538 setting the 'dist' option, e.g. redhat,7.3 or debian,7 or
1539 ubuntu,trusty. You can get the list of supported dist values by
1540 running the "clufter --list-dists" command. If 'dist' is not
1541 specified, it defaults to this node's version.
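     The following writes the generated commands to a hypothetical file.
     Example: pcs config export pcs-commands output=cluster_setup.sh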
1542
1543 pcsd
1544 certkey <certificate file> <key file>
1545 Load custom certificate and key files for use in pcsd.
1546
1547 sync-certificates
1548 Sync pcsd certificates to all nodes in the local cluster.
1549
1550 deauth [<token>]...
1551 Delete locally stored authentication tokens used by remote sys‐
1552 tems to connect to the local pcsd instance. If no tokens are
1553 specified all tokens will be deleted. After this command is run
1554 other nodes will need to re-authenticate against this node to be
1555 able to connect to it.
1556
1557 host
1558 auth (<host name> [addr=<address>[:<port>]])... [-u <username>] [-p
1559 <password>]
1560 Authenticate local pcs/pcsd against pcsd on specified hosts. It
1561 is possible to specify an address and a port via which pcs/pcsd
1562 will communicate with each host. If an address is not specified
1563 a host name will be used. If a port is not specified 2224 will
1564 be used.
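     A minimal sketch using hypothetical host names, documentation-only
     addresses and the user name commonly used by pcsd.
     Example: pcs host auth node1 addr=192.0.2.11 node2 addr=192.0.2.12
     -u hacluster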
1565
1566 deauth [<host name>]...
1567 Delete authentication tokens which allow pcs/pcsd on the current
1568 system to connect to remote pcsd instances on specified host
1569 names. If the current system is a member of a cluster, the
1570 tokens will be deleted from all nodes in the cluster. If no host
1571 names are specified all tokens will be deleted. After this com‐
1572 mand is run this node will need to re-authenticate against other
1573 nodes to be able to connect to them.
1574
1575 node
1576 attribute [[<node>] [--name <name>] | <node> <name>=<value> ...]
1577 Manage node attributes. If no parameters are specified, show
1578 attributes of all nodes. If one parameter is specified, show
1579 attributes of specified node. If --name is specified, show
1580 specified attribute's value from all nodes. If more parameters
1581 are specified, set attributes of specified node. Attributes can
1582 be removed by setting an attribute without a value.
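     The following sets a hypothetical attribute on a node and then
     removes it.
     Example: pcs node attribute node1 rack=1
     Example: pcs node attribute node1 rack=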
1583
1584 maintenance [--all | <node>...] [--wait[=n]]
1585 Put the specified node(s) into maintenance mode. If no nodes or
1586 options are specified, the current node will be put into
1587 maintenance mode; if --all is specified, all nodes will be put
1588 into maintenance mode. If --wait is specified, pcs will wait up
1589 to 'n' seconds for the node(s) to be put into maintenance mode
1590 and then return 0 on success or 1 if the operation has not
1591 succeeded yet. If 'n' is not specified it defaults to 60 minutes.
1592
1593 unmaintenance [--all | <node>...] [--wait[=n]]
1594 Remove node(s) from maintenance mode. If no nodes or options are
1595 specified, the current node will be removed from maintenance
1596 mode; if --all is specified, all nodes will be removed from
1597 maintenance mode. If --wait is specified, pcs will wait up to
1598 'n' seconds for the node(s) to be removed from maintenance mode
1599 and then return 0 on success or 1 if the operation has not
1600 succeeded yet. If 'n' is not specified it defaults to 60 minutes.
1601
1602 standby [--all | <node>...] [--wait[=n]]
1603 Put the specified node(s) into standby mode (the node(s) will no
1604 longer be able to host resources). If no nodes or options are
1605 specified, the current node will be put into standby mode; if
1606 --all is specified, all nodes will be put into standby mode. If
1607 --wait is specified, pcs will wait up to 'n' seconds for the
1608 node(s) to be put into standby mode and then return 0 on success
1609 or 1 if the operation has not succeeded yet. If 'n' is not
1610 specified it defaults to 60 minutes.
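     The following puts a hypothetical node into standby mode and waits
     for resources to move off it.
     Example: pcs node standby node2 --wait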
1611
1612 unstandby [--all | <node>...] [--wait[=n]]
1612 Remove node(s) from standby mode (the node(s) will again be able
1613 to host resources). If no nodes or options are specified, the
1614 current node will be removed from standby mode; if --all is
1615 specified, all nodes will be removed from standby mode. If
1616 --wait is specified, pcs will wait up to 'n' seconds for the
1617 node(s) to be removed from standby mode and then return 0 on
1618 success or 1 if the operation has not succeeded yet. If 'n' is
1619 not specified it defaults to 60 minutes.
1621
1622 utilization [[<node>] [--name <name>] | <node> <name>=<value> ...]
1623 Add specified utilization options to the specified node. If the
1624 node is not specified, shows utilization of all nodes. If --name
1625 is specified, shows the specified utilization value from all
1626 nodes. If utilization options are not specified, shows
1627 utilization of the specified node. Utilization options should be
1628 in the format name=value; the value has to be an integer.
1629 Options may be removed by setting an option without a value.
1630 Example: pcs node utilization node1 cpu=4 ram=
1631
1632 alert
1633 [config|show]
1634 Show all configured alerts.
1635
1636 create path=<path> [id=<alert-id>] [description=<description>] [options
1637 [<option>=<value>]...] [meta [<meta-option>=<value>]...]
1638 Define an alert handler with the specified path. An id will be
1639 automatically generated if it is not specified.
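     A minimal sketch with a hypothetical handler script path.
     Example: pcs alert create path=/usr/local/bin/cluster_alert.sh
     id=my_alert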
1640
1641 update <alert-id> [path=<path>] [description=<description>] [options
1642 [<option>=<value>]...] [meta [<meta-option>=<value>]...]
1643 Update an existing alert handler with specified id.
1644
1645 delete <alert-id> ...
1646 Remove alert handlers with specified ids.
1647
1648 remove <alert-id> ...
1649 Remove alert handlers with specified ids.
1650
1651 recipient add <alert-id> value=<recipient-value> [id=<recipient-id>]
1652 [description=<description>] [options [<option>=<value>]...] [meta
1653 [<meta-option>=<value>]...]
1654 Add a new recipient to the specified alert handler.
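     The following adds a hypothetical recipient to an alert handler with
     id 'my_alert'.
     Example: pcs alert recipient add my_alert value=admin@example.com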
1655
1656 recipient update <recipient-id> [value=<recipient-value>] [descrip‐
1657 tion=<description>] [options [<option>=<value>]...] [meta
1658 [<meta-option>=<value>]...]
1659 Update an existing recipient identified by its id.
1660
1661 recipient delete <recipient-id> ...
1662 Remove specified recipients.
1663
1664 recipient remove <recipient-id> ...
1665 Remove specified recipients.
1666
1667 client
1668 local-auth [<pcsd-port>] [-u <username>] [-p <password>]
1669 Authenticate the current user to the local pcsd. This is
1670 required to run some pcs commands which may require permissions
1671 of the root user, such as 'pcs cluster start'.
1672
1673EXAMPLES
1674 Show all resources
1675 # pcs resource config
1676
1677 Show options specific to the 'VirtualIP' resource
1678 # pcs resource config VirtualIP
1679
1680 Create a new resource called 'VirtualIP' with options
1681 # pcs resource create VirtualIP ocf:heartbeat:IPaddr2
1682 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
1683
1684 Create a new resource called 'VirtualIP' with options
1685 # pcs resource create VirtualIP IPaddr2 ip=192.168.0.99
1686 cidr_netmask=32 nic=eth2 op monitor interval=30s
1687
1688 Change the ip address of VirtualIP and remove the nic option
1689 # pcs resource update VirtualIP ip=192.168.0.98 nic=
1690
1691 Delete the VirtualIP resource
1692 # pcs resource delete VirtualIP
1693
1694 Create the MyStonith stonith fence_virt device which can fence host
1695 'f1'
1696 # pcs stonith create MyStonith fence_virt pcmk_host_list=f1
1697
1698 Set the stonith-enabled property to false on the cluster (which dis‐
1699 ables stonith)
1700 # pcs property set stonith-enabled=false
1701
1702USING --FORCE IN PCS COMMANDS
1703 Various pcs commands accept the --force option. Its purpose is to
1704 override some of the checks that pcs performs or some of the errors
1705 that may occur when a pcs command is run. When such an error occurs,
1706 pcs will print the error with a note that it may be overridden. The
1707 exact behavior of the option is different for each pcs command. Using
1708 the --force option can lead to situations that would normally be
1709 prevented by the logic of pcs commands, and therefore its use is
1710 strongly discouraged unless you know what you are doing.
1711
1712ENVIRONMENT VARIABLES
1713 EDITOR
1714 Path to a plain-text editor. This is used when pcs is requested
1715 to present a text for the user to edit.
1716
1717 no_proxy, https_proxy, all_proxy, NO_PROXY, HTTPS_PROXY, ALL_PROXY
1718 These environment variables (listed according to their priori‐
1719 ties) control how pcs handles proxy servers when connecting to
1720 cluster nodes. See curl(1) man page for details.
1721
1722SEE ALSO
1723 http://clusterlabs.org/doc/
1724
1725 pcsd(8), pcs_snmp_agent(8)
1726
1727 corosync_overview(8), votequorum(5), corosync.conf(5), corosync-qde‐
1728 vice(8), corosync-qdevice-tool(8), corosync-qnetd(8),
1729 corosync-qnetd-tool(8)
1730
1731 pacemaker-controld(7), pacemaker-fenced(7), pacemaker-schedulerd(7),
1732 crm_mon(8), crm_report(8), crm_simulate(8)
1733
1734 boothd(8), sbd(8)
1735
1736 clufter(1)
1737
1738
1739
1740pcs 0.10.1 November 2018 PCS(8)