PCS(8)                  System Administration Utilities                  PCS(8)

NAME
       pcs - pacemaker/corosync configuration system

SYNOPSIS
       pcs [-f file] [-h] [commands]...

DESCRIPTION
       Control and configure pacemaker and corosync.

OPTIONS
       -h, --help
              Display usage and exit.
17
18 -f file
19 Perform actions on file instead of active CIB.
20 Commands supporting the option use the initial state of the
21 specified file as their input and then overwrite the file with
22 the state reflecting the requested operation(s).
23 A few commands only use the specified file in read-only mode
24 since their effect is not a CIB modification.
25
26 --debug
27 Print all network traffic and external commands run.
28
29 --version
30 Print pcs version information. List pcs capabilities if --full
31 is specified.
32
33 --request-timeout=<timeout>
34 Timeout for each outgoing request to another node in seconds.
35 Default is 60s.
36
37 Commands:
38 cluster
39 Configure cluster options and nodes.
40
41 resource
42 Manage cluster resources.
43
44 stonith
45 Manage fence devices.
46
47 constraint
48 Manage resource constraints.
49
50 property
51 Manage pacemaker properties.
52
53 acl
54 Manage pacemaker access control lists.
55
56 qdevice
57 Manage quorum device provider on the local host.
58
59 quorum
60 Manage cluster quorum settings.
61
62 booth
63 Manage booth (cluster ticket manager).
64
65 status
66 View cluster status.
67
68 config
69 View and manage cluster configuration.
70
71 pcsd
72 Manage pcs daemon.
73
74 host
75 Manage hosts known to pcs/pcsd.
76
77 node
78 Manage cluster nodes.
79
80 alert
81 Manage pacemaker alerts.
82
83 client
84 Manage pcsd client configuration.
85
86 dr
87 Manage disaster recovery configuration.
88
89 resource
90 [status [--hide-inactive]]
91 Show status of all currently configured resources. If
92 --hide-inactive is specified, only show active resources.
93
94 config [<resource id>]...
95 Show options of all currently configured resources or if
96 resource ids are specified show the options for the specified
97 resource ids.
98
99 list [filter] [--nodesc]
100 Show list of all available resource agents (if filter is pro‐
101 vided then only resource agents matching the filter will be
102 shown). If --nodesc is used then descriptions of resource agents
103 are not printed.
104
105 describe [<standard>:[<provider>:]]<type> [--full]
106 Show options for the specified resource. If --full is specified,
107 all options including advanced and deprecated ones are shown.
108
109 create <resource id> [<standard>:[<provider>:]]<type> [resource
110 options] [op <operation action> <operation options> [<operation action>
111 <operation options>]...] [meta <meta options>...] [clone [<clone
112 options>] | promotable [<promotable options>] | --group <group id>
113 [--before <resource id> | --after <resource id>] | bundle <bundle id>]
114 [--disabled] [--no-default-ops] [--wait[=n]]
115 Create specified resource. If clone is used a clone resource is
116 created. If promotable is used a promotable clone resource is
117 created. If --group is specified the resource is added to the
118 group named. You can use --before or --after to specify the
119 position of the added resource relatively to some resource
120 already existing in the group. If bundle is specified, resource
121 will be created inside of the specified bundle. If --disabled is
122 specified the resource is not started automatically. If
123 --no-default-ops is specified, only monitor operations are cre‐
124 ated for the resource and all other operations use default set‐
125 tings. If --wait is specified, pcs will wait up to 'n' seconds
126 for the resource to start and then return 0 if the resource is
127 started, or 1 if the resource has not yet started. If 'n' is not
128 specified it defaults to 60 minutes.
129
Example: Create a new resource called 'VirtualIP' with IP address
192.168.0.99, netmask of 32, monitored every 30 seconds, on eth2:
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99
cidr_netmask=32 nic=eth2 op monitor interval=30s
135
136 delete <resource id|group id|bundle id|clone id>
137 Deletes the resource, group, bundle or clone (and all resources
138 within the group/bundle/clone).
139
140 remove <resource id|group id|bundle id|clone id>
141 Deletes the resource, group, bundle or clone (and all resources
142 within the group/bundle/clone).
143
144 enable <resource id>... [--wait[=n]]
145 Allow the cluster to start the resources. Depending on the rest
146 of the configuration (constraints, options, failures, etc), the
147 resources may remain stopped. If --wait is specified, pcs will
148 wait up to 'n' seconds for the resources to start and then
149 return 0 if the resources are started, or 1 if the resources
150 have not yet started. If 'n' is not specified it defaults to 60
151 minutes.
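
Example (illustrative; 'VirtualIP' refers to the resource created in the
earlier example): enable the resource and wait up to 30 seconds for it to
start:
pcs resource enable VirtualIP --wait=30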
152
153 disable <resource id>... [--safe [--no-strict]] [--simulate]
154 [--wait[=n]]
155 Attempt to stop the resources if they are running and forbid the
156 cluster from starting them again. Depending on the rest of the
157 configuration (constraints, options, failures, etc), the
158 resources may remain started.
159 If --safe is specified, no changes to the cluster configuration
160 will be made if other than specified resources would be affected
161 in any way.
162 If --no-strict is specified, no changes to the cluster configu‐
163 ration will be made if other than specified resources would get
164 stopped or demoted. Moving resources between nodes is allowed.
165 If --simulate is specified, no changes to the cluster configura‐
166 tion will be made and the effect of the changes will be printed
167 instead.
168 If --wait is specified, pcs will wait up to 'n' seconds for the
169 resources to stop and then return 0 if the resources are stopped
170 or 1 if the resources have not stopped. If 'n' is not specified
171 it defaults to 60 minutes.
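
Example (an illustrative sketch; 'WebSite' and 'Database' are placeholder
resource ids): stop both resources only if no other resources would be
affected, waiting up to 60 seconds for them to stop:
pcs resource disable WebSite Database --safe --wait=60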
172
173 safe-disable <resource id>... [--no-strict] [--simulate] [--wait[=n]]
174 [--force]
175 Attempt to stop the resources if they are running and forbid the
176 cluster from starting them again. Depending on the rest of the
177 configuration (constraints, options, failures, etc), the
178 resources may remain started. No changes to the cluster configu‐
179 ration will be made if other than specified resources would be
180 affected in any way.
181 If --no-strict is specified, no changes to the cluster configu‐
182 ration will be made if other than specified resources would get
183 stopped or demoted. Moving resources between nodes is allowed.
184 If --simulate is specified, no changes to the cluster configura‐
185 tion will be made and the effect of the changes will be printed
186 instead.
187 If --wait is specified, pcs will wait up to 'n' seconds for the
188 resources to stop and then return 0 if the resources are stopped
189 or 1 if the resources have not stopped. If 'n' is not specified
190 it defaults to 60 minutes.
191 If --force is specified, checks for safe disable will be
192 skipped.
193
194 restart <resource id> [node] [--wait=n]
195 Restart the resource specified. If a node is specified and if
196 the resource is a clone or bundle it will be restarted only on
the node specified. If --wait is specified, pcs will wait up
198 to 'n' seconds for the resource to be restarted and return 0 if
199 the restart was successful or 1 if it was not.
200
201 debug-start <resource id> [--full]
202 This command will force the specified resource to start on this
203 node ignoring the cluster recommendations and print the output
204 from starting the resource. Using --full will give more
205 detailed output. This is mainly used for debugging resources
206 that fail to start.
207
208 debug-stop <resource id> [--full]
209 This command will force the specified resource to stop on this
210 node ignoring the cluster recommendations and print the output
211 from stopping the resource. Using --full will give more
212 detailed output. This is mainly used for debugging resources
213 that fail to stop.
214
215 debug-promote <resource id> [--full]
216 This command will force the specified resource to be promoted on
217 this node ignoring the cluster recommendations and print the
218 output from promoting the resource. Using --full will give more
219 detailed output. This is mainly used for debugging resources
220 that fail to promote.
221
222 debug-demote <resource id> [--full]
223 This command will force the specified resource to be demoted on
224 this node ignoring the cluster recommendations and print the
225 output from demoting the resource. Using --full will give more
226 detailed output. This is mainly used for debugging resources
227 that fail to demote.
228
229 debug-monitor <resource id> [--full]
230 This command will force the specified resource to be monitored
231 on this node ignoring the cluster recommendations and print the
232 output from monitoring the resource. Using --full will give
233 more detailed output. This is mainly used for debugging
234 resources that fail to be monitored.
235
236 move <resource id> [destination node] [--master] [lifetime=<lifetime>]
237 [--wait[=n]]
238 Move the resource off the node it is currently running on by
239 creating a -INFINITY location constraint to ban the node. If
240 destination node is specified the resource will be moved to that
241 node by creating an INFINITY location constraint to prefer the
242 destination node. If --master is used the scope of the command
243 is limited to the master role and you must use the promotable
244 clone id (instead of the resource id). If lifetime is specified
245 then the constraint will expire after that time, otherwise it
246 defaults to infinity and the constraint can be cleared manually
247 with 'pcs resource clear' or 'pcs constraint delete'. If --wait
248 is specified, pcs will wait up to 'n' seconds for the resource
249 to move and then return 0 on success or 1 on error. If 'n' is
250 not specified it defaults to 60 minutes. If you want the
251 resource to preferably avoid running on some nodes but be able
252 to failover to them use 'pcs constraint location avoids'.
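
Example (illustrative; 'VirtualIP' and 'node2' are placeholders and the
lifetime is given as an ISO 8601 duration): move the resource to node2
and let the constraint expire after one hour:
pcs resource move VirtualIP node2 lifetime=PT1H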
253
254 ban <resource id> [node] [--master] [lifetime=<lifetime>] [--wait[=n]]
255 Prevent the resource id specified from running on the node (or
256 on the current node it is running on if no node is specified) by
257 creating a -INFINITY location constraint. If --master is used
258 the scope of the command is limited to the master role and you
259 must use the promotable clone id (instead of the resource id).
260 If lifetime is specified then the constraint will expire after
261 that time, otherwise it defaults to infinity and the constraint
262 can be cleared manually with 'pcs resource clear' or 'pcs con‐
263 straint delete'. If --wait is specified, pcs will wait up to 'n'
264 seconds for the resource to move and then return 0 on success or
265 1 on error. If 'n' is not specified it defaults to 60 minutes.
266 If you want the resource to preferably avoid running on some
267 nodes but be able to failover to them use 'pcs constraint loca‐
268 tion avoids'.
269
270 clear <resource id> [node] [--master] [--expired] [--wait[=n]]
271 Remove constraints created by move and/or ban on the specified
272 resource (and node if specified). If --master is used the scope
273 of the command is limited to the master role and you must use
274 the master id (instead of the resource id). If --expired is
275 specified, only constraints with expired lifetimes will be
276 removed. If --wait is specified, pcs will wait up to 'n' seconds
277 for the operation to finish (including starting and/or moving
278 resources if appropriate) and then return 0 on success or 1 on
279 error. If 'n' is not specified it defaults to 60 minutes.
280
281 standards
282 List available resource agent standards supported by this
283 installation (OCF, LSB, etc.).
284
285 providers
286 List available OCF resource agent providers.
287
288 agents [standard[:provider]]
289 List available agents optionally filtered by standard and
290 provider.
291
update <resource id> [resource options] [op [<operation action> <operation
options>]...] [meta <meta options>...] [--wait[=n]]
294 Add/Change options to specified resource, clone or multi-state
295 resource. If an operation (op) is specified it will update the
296 first found operation with the same action on the specified
resource; if no operation with that action exists then a new
298 operation will be created. (WARNING: all existing options on
299 the updated operation will be reset if not specified.) If you
300 want to create multiple monitor operations you should use the
301 'op add' & 'op remove' commands. If --wait is specified, pcs
302 will wait up to 'n' seconds for the changes to take effect and
303 then return 0 if the changes have been processed or 1 otherwise.
304 If 'n' is not specified it defaults to 60 minutes.
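
Example (illustrative; reuses the 'VirtualIP' resource from the create
example): change the IP address and the monitor interval:
pcs resource update VirtualIP ip=192.168.0.100 op monitor interval=60s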
305
306 op add <resource id> <operation action> [operation properties]
307 Add operation for specified resource.
308
309 op delete <resource id> <operation action> [<operation properties>...]
310 Remove specified operation (note: you must specify the exact
311 operation properties to properly remove an existing operation).
312
313 op delete <operation id>
314 Remove the specified operation id.
315
316 op remove <resource id> <operation action> [<operation properties>...]
317 Remove specified operation (note: you must specify the exact
318 operation properties to properly remove an existing operation).
319
320 op remove <operation id>
321 Remove the specified operation id.
322
323 op defaults [options]
324 Set default values for operations, if no options are passed,
325 lists currently configured defaults. Defaults do not apply to
326 resources which override them with their own defined operations.
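
Example (illustrative): set a default operation timeout of 240 seconds:
pcs resource op defaults timeout=240s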
327
328 meta <resource id | group id | clone id> <meta options> [--wait[=n]]
329 Add specified options to the specified resource, group or clone.
330 Meta options should be in the format of name=value, options may
331 be removed by setting an option without a value. If --wait is
332 specified, pcs will wait up to 'n' seconds for the changes to
333 take effect and then return 0 if the changes have been processed
334 or 1 otherwise. If 'n' is not specified it defaults to 60 min‐
335 utes.
336 Example: pcs resource meta TestResource failure-timeout=50
337 stickiness=
338
339 group list
340 Show all currently configured resource groups and their
341 resources.
342
343 group add <group id> <resource id> [resource id] ... [resource id]
344 [--before <resource id> | --after <resource id>] [--wait[=n]]
345 Add the specified resource to the group, creating the group if
346 it does not exist. If the resource is present in another group
347 it is moved to the new group. You can use --before or --after to
348 specify the position of the added resources relatively to some
resource already existing in the group. By adding resources to a
group they are already in and specifying --after or --before, you
can move the resources within the group. If --wait is specified, pcs
352 will wait up to 'n' seconds for the operation to finish (includ‐
353 ing moving resources if appropriate) and then return 0 on suc‐
354 cess or 1 on error. If 'n' is not specified it defaults to 60
355 minutes.
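
Example (illustrative; 'MyGroup', 'VirtualIP' and 'WebSite' are
placeholders): add two resources to a group, creating it if needed:
pcs resource group add MyGroup VirtualIP WebSite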
356
357 group delete <group id> <resource id> [resource id] ... [resource id]
358 [--wait[=n]]
359 Remove the specified resource(s) from the group, removing the
360 group if no resources remain in it. If --wait is specified, pcs
361 will wait up to 'n' seconds for the operation to finish (includ‐
362 ing moving resources if appropriate) and then return 0 on suc‐
363 cess or 1 on error. If 'n' is not specified it defaults to 60
364 minutes.
365
366 group remove <group id> <resource id> [resource id] ... [resource id]
367 [--wait[=n]]
368 Remove the specified resource(s) from the group, removing the
369 group if no resources remain in it. If --wait is specified, pcs
370 will wait up to 'n' seconds for the operation to finish (includ‐
371 ing moving resources if appropriate) and then return 0 on suc‐
372 cess or 1 on error. If 'n' is not specified it defaults to 60
373 minutes.
374
375 ungroup <group id> [resource id] ... [resource id] [--wait[=n]]
376 Remove the group (note: this does not remove any resources from
377 the cluster) or if resources are specified, remove the specified
378 resources from the group. If --wait is specified, pcs will wait
379 up to 'n' seconds for the operation to finish (including moving
resources if appropriate) and then return 0 on success or 1 on
381 error. If 'n' is not specified it defaults to 60 minutes.
382
383 clone <resource id | group id> [clone options]... [--wait[=n]]
384 Set up the specified resource or group as a clone. If --wait is
385 specified, pcs will wait up to 'n' seconds for the operation to
386 finish (including starting clone instances if appropriate) and
387 then return 0 on success or 1 on error. If 'n' is not specified
388 it defaults to 60 minutes.
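
Example (illustrative; 'WebSite' is a placeholder; clone-max and
clone-node-max are standard clone meta options): run at most two clone
instances, one per node:
pcs resource clone WebSite clone-max=2 clone-node-max=1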
389
390 promotable <resource id | group id> [clone options]... [--wait[=n]]
391 Set up the specified resource or group as a promotable clone.
392 This is an alias for 'pcs resource clone <resource id> pro‐
393 motable=true'.
394
395 unclone <resource id | group id> [--wait[=n]]
396 Remove the clone which contains the specified group or resource
397 (the resource or group will not be removed). If --wait is spec‐
398 ified, pcs will wait up to 'n' seconds for the operation to fin‐
399 ish (including stopping clone instances if appropriate) and then
400 return 0 on success or 1 on error. If 'n' is not specified it
401 defaults to 60 minutes.
402
403 bundle create <bundle id> container <container type> [<container
404 options>] [network <network options>] [port-map <port options>]...
405 [storage-map <storage options>]... [meta <meta options>] [--disabled]
406 [--wait[=n]]
407 Create a new bundle encapsulating no resources. The bundle can
408 be used either as it is or a resource may be put into it at any
409 time. If --disabled is specified, the bundle is not started
410 automatically. If --wait is specified, pcs will wait up to 'n'
411 seconds for the bundle to start and then return 0 on success or
412 1 on error. If 'n' is not specified it defaults to 60 minutes.
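
Example (a minimal sketch; the bundle id, container image and addresses
are placeholders):
pcs resource bundle create httpd-bundle container docker
image=localhost/httpd replicas=3 network ip-range-start=192.168.122.131
port-map port=80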
413
414 bundle reset <bundle id> [container <container options>] [network <net‐
415 work options>] [port-map <port options>]... [storage-map <storage
416 options>]... [meta <meta options>] [--disabled] [--wait[=n]]
Configure specified bundle with given options. Unlike bundle
update, this command resets the bundle according to the given
options - no previous options are kept. Resources inside the bundle are
420 kept as they are. If --disabled is specified, the bundle is not
421 started automatically. If --wait is specified, pcs will wait up
422 to 'n' seconds for the bundle to start and then return 0 on suc‐
423 cess or 1 on error. If 'n' is not specified it defaults to 60
424 minutes.
425
426 bundle update <bundle id> [container <container options>] [network
427 <network options>] [port-map (add <port options>) | (delete | remove
428 <id>...)]... [storage-map (add <storage options>) | (delete | remove
429 <id>...)]... [meta <meta options>] [--wait[=n]]
430 Add, remove or change options to specified bundle. If you wish
431 to update a resource encapsulated in the bundle, use the 'pcs
432 resource update' command instead and specify the resource id.
433 If --wait is specified, pcs will wait up to 'n' seconds for the
434 operation to finish (including moving resources if appropriate)
435 and then return 0 on success or 1 on error. If 'n' is not spec‐
436 ified it defaults to 60 minutes.
437
438 manage <resource id>... [--monitor]
439 Set resources listed to managed mode (default). If --monitor is
440 specified, enable all monitor operations of the resources.
441
442 unmanage <resource id>... [--monitor]
443 Set resources listed to unmanaged mode. When a resource is in
unmanaged mode, the cluster is not allowed to start or stop the
445 resource. If --monitor is specified, disable all monitor opera‐
446 tions of the resources.
447
448 defaults [options]
449 Set default values for resources, if no options are passed,
450 lists currently configured defaults. Defaults do not apply to
451 resources which override them with their own defined values.
452
453 cleanup [<resource id>] [node=<node>] [operation=<operation> [inter‐
454 val=<interval>]] [--strict]
455 Make the cluster forget failed operations from history of the
456 resource and re-detect its current state. This can be useful to
457 purge knowledge of past failures that have since been resolved.
458 If the named resource is part of a group, or one numbered
459 instance of a clone or bundled resource, the clean-up applies to
460 the whole collective resource unless --strict is given.
461 If a resource id is not specified then all resources / stonith
462 devices will be cleaned up.
463 If a node is not specified then resources / stonith devices on
464 all nodes will be cleaned up.
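
Example (illustrative; resource and node names are placeholders):
pcs resource cleanup VirtualIP node=node1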
465
466 refresh [<resource id>] [node=<node>] [--strict]
467 Make the cluster forget the complete operation history (includ‐
468 ing failures) of the resource and re-detect its current state.
469 If you are interested in forgetting failed operations only, use
470 the 'pcs resource cleanup' command.
471 If the named resource is part of a group, or one numbered
472 instance of a clone or bundled resource, the clean-up applies to
473 the whole collective resource unless --strict is given.
474 If a resource id is not specified then all resources / stonith
475 devices will be refreshed.
476 If a node is not specified then resources / stonith devices on
477 all nodes will be refreshed.
478
479 failcount show [<resource id>] [node=<node>] [operation=<operation>
480 [interval=<interval>]] [--full]
481 Show current failcount for resources, optionally filtered by a
482 resource, node, operation and its interval. If --full is speci‐
483 fied do not sum failcounts per resource and node. Use 'pcs
484 resource cleanup' or 'pcs resource refresh' to reset failcounts.
485
486 relocate dry-run [resource1] [resource2] ...
487 The same as 'relocate run' but has no effect on the cluster.
488
489 relocate run [resource1] [resource2] ...
490 Relocate specified resources to their preferred nodes. If no
491 resources are specified, relocate all resources. This command
492 calculates the preferred node for each resource while ignoring
493 resource stickiness. Then it creates location constraints which
494 will cause the resources to move to their preferred nodes. Once
495 the resources have been moved the constraints are deleted auto‐
496 matically. Note that the preferred node is calculated based on
497 current cluster status, constraints, location of resources and
498 other settings and thus it might change over time.
499
500 relocate show
501 Display current status of resources and their optimal node
502 ignoring resource stickiness.
503
504 relocate clear
505 Remove all constraints created by the 'relocate run' command.
506
507 utilization [<resource id> [<name>=<value> ...]]
508 Add specified utilization options to specified resource. If
509 resource is not specified, shows utilization of all resources.
510 If utilization options are not specified, shows utilization of
511 specified resource. Utilization option should be in format
512 name=value, value has to be integer. Options may be removed by
513 setting an option without a value. Example: pcs resource uti‐
514 lization TestResource cpu= ram=20
515
516 relations <resource id> [--full]
517 Display relations of a resource specified by its id with other
518 resources in a tree structure. Supported types of resource rela‐
519 tions are: ordering constraints, ordering set constraints, rela‐
520 tions defined by resource hierarchy (clones, groups, bundles).
521 If --full is used, more verbose output will be printed.
522
523 cluster
524 setup <cluster name> (<node name> [addr=<node address>]...)... [trans‐
525 port knet|udp|udpu [<transport options>] [link <link options>]... [com‐
526 pression <compression options>] [crypto <crypto options>]] [totem
527 <totem options>] [quorum <quorum options>] [--enable] [--start
528 [--wait[=<n>]]] [--no-keys-sync]
529 Create a cluster from the listed nodes and synchronize cluster
530 configuration files to them.
531 Nodes are specified by their names and optionally their
532 addresses. If no addresses are specified for a node, pcs will
533 configure corosync to communicate with that node using an
534 address provided in 'pcs host auth' command. Otherwise, pcs will
535 configure corosync to communicate with the node using the speci‐
536 fied addresses.
537
538 Transport knet:
539 This is the default transport. It allows configuring traffic
540 encryption and compression as well as using multiple addresses
541 (links) for nodes.
542 Transport options are: ip_version, knet_pmtud_interval,
543 link_mode
544 Link options are: link_priority, linknumber, mcastport,
545 ping_interval, ping_precision, ping_timeout, pong_count, trans‐
546 port (udp or sctp)
547 Each 'link' followed by options sets options for one link in the
548 order the links are defined by nodes' addresses. You can set
549 link options for a subset of links using a linknumber. See exam‐
550 ples below.
551 Compression options are: level, model, threshold
552 Crypto options are: cipher, hash, model
553 By default, encryption is enabled with cipher=aes256 and
554 hash=sha256. To disable encryption, set cipher=none and
555 hash=none.
556
557 Transports udp and udpu:
558 These transports are limited to one address per node. They do
559 not support traffic encryption nor compression.
560 Transport options are: ip_version, netmtu
561 Link options are: bindnetaddr, broadcast, mcastaddr, mcastport,
562 ttl
563
564 Totem and quorum can be configured regardless of used transport.
565 Totem options are: consensus, downcheck, fail_recv_const, heart‐
566 beat_failures_allowed, hold, join, max_messages, max_net‐
567 work_delay, merge, miss_count_const, send_join,
568 seqno_unchanged_const, token, token_coefficient, token_retrans‐
569 mit, token_retransmits_before_loss_const, window_size
570 Quorum options are: auto_tie_breaker, last_man_standing,
571 last_man_standing_window, wait_for_all
572
573 Transports and their options, link, compression, crypto and
574 totem options are all documented in corosync.conf(5) man page;
575 knet link options are prefixed 'knet_' there, compression
576 options are prefixed 'knet_compression_' and crypto options are
577 prefixed 'crypto_'. Quorum options are documented in votequo‐
578 rum(5) man page.
579
--enable will configure the cluster to start on node boot.
581 --start will start the cluster right after creating it. --wait
582 will wait up to 'n' seconds for the cluster to start.
583 --no-keys-sync will skip creating and distributing pcsd SSL cer‐
584 tificate and key and corosync and pacemaker authkey files. Use
585 this if you provide your own certificates and keys.
586
587 Examples:
588 Create a cluster with default settings:
589 pcs cluster setup newcluster node1 node2
590 Create a cluster using two links:
591 pcs cluster setup newcluster node1 addr=10.0.1.11
592 addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12
593 Set link options for all links. Link options are matched to the
594 links in order. The first link (link 0) has sctp transport, the
595 second link (link 1) has mcastport 55405:
596 pcs cluster setup newcluster node1 addr=10.0.1.11
597 addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12 transport
598 knet link transport=sctp link mcastport=55405
599 Set link options for the second and fourth links only. Link
600 options are matched to the links based on the linknumber option
601 (the first link is link 0):
602 pcs cluster setup newcluster node1 addr=10.0.1.11
603 addr=10.0.2.11 addr=10.0.3.11 addr=10.0.4.11 node2
604 addr=10.0.1.12 addr=10.0.2.12 addr=10.0.3.12 addr=10.0.4.12
605 transport knet link linknumber=3 mcastport=55405 link linknum‐
606 ber=1 transport=sctp
607 Create a cluster using udp transport with a non-default port:
608 pcs cluster setup newcluster node1 node2 transport udp link
609 mcastport=55405
610
611 start [--all | <node>... ] [--wait[=<n>]] [--request-timeout=<seconds>]
612 Start a cluster on specified node(s). If no nodes are specified
613 then start a cluster on the local node. If --all is specified
614 then start a cluster on all nodes. If the cluster has many nodes
615 then the start request may time out. In that case you should
616 consider setting --request-timeout to a suitable value. If
617 --wait is specified, pcs waits up to 'n' seconds for the cluster
618 to get ready to provide services after the cluster has success‐
619 fully started.
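
Example (illustrative): start the cluster on all nodes and wait up to 120
seconds for it to be ready to provide services:
pcs cluster start --all --wait=120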
620
621 stop [--all | <node>... ] [--request-timeout=<seconds>]
622 Stop a cluster on specified node(s). If no nodes are specified
623 then stop a cluster on the local node. If --all is specified
624 then stop a cluster on all nodes. If the cluster is running
625 resources which take long time to stop then the stop request may
626 time out before the cluster actually stops. In that case you
627 should consider setting --request-timeout to a suitable value.
628
629 kill Force corosync and pacemaker daemons to stop on the local node
(performs kill -9). Note that the init system (e.g. systemd) may
detect that the cluster is not running and start it again. If you
632 want to stop cluster on a node, run pcs cluster stop on that
633 node.
634
635 enable [--all | <node>... ]
636 Configure cluster to run on node boot on specified node(s). If
637 node is not specified then cluster is enabled on the local node.
638 If --all is specified then cluster is enabled on all nodes.
639
640 disable [--all | <node>... ]
641 Configure cluster to not run on node boot on specified node(s).
642 If node is not specified then cluster is disabled on the local
643 node. If --all is specified then cluster is disabled on all
644 nodes.
645
646 auth [-u <username>] [-p <password>]
647 Authenticate pcs/pcsd to pcsd on nodes configured in the local
648 cluster.
649
650 status View current cluster status (an alias of 'pcs status cluster').
651
652 pcsd-status [<node>]...
653 Show current status of pcsd on nodes specified, or on all nodes
654 configured in the local cluster if no nodes are specified.
655
656 sync Sync cluster configuration (files which are supported by all
657 subcommands of this command) to all cluster nodes.
658
659 sync corosync
660 Sync corosync configuration to all nodes found from current
661 corosync.conf file.
662
663 cib [filename] [scope=<scope> | --config]
664 Get the raw xml from the CIB (Cluster Information Base). If a
665 filename is provided, we save the CIB to that file, otherwise
666 the CIB is printed. Specify scope to get a specific section of
667 the CIB. Valid values of the scope are: configuration, nodes,
668 resources, constraints, crm_config, rsc_defaults, op_defaults,
669 status. --config is the same as scope=configuration. Do not
670 specify a scope if you want to edit the saved CIB using pcs (pcs
671 -f <command>).
672
673 cib-push <filename> [--wait[=<n>]] [diff-against=<filename_original> |
674 scope=<scope> | --config]
675 Push the raw xml from <filename> to the CIB (Cluster Information
676 Base). You can obtain the CIB by running the 'pcs cluster cib'
command, which is the recommended first step when you want to
perform the desired modifications (pcs -f <command>) for a one-off
push. If diff-against is specified, pcs diffs contents of file‐
680 name against contents of filename_original and pushes the result
681 to the CIB. Specify scope to push a specific section of the
682 CIB. Valid values of the scope are: configuration, nodes,
683 resources, constraints, crm_config, rsc_defaults, op_defaults.
684 --config is the same as scope=configuration. Use of --config is
685 recommended. Do not specify a scope if you need to push the
686 whole CIB or be warned in the case of outdated CIB. If --wait
687 is specified wait up to 'n' seconds for changes to be applied.
688 WARNING: the selected scope of the CIB will be overwritten by
689 the current content of the specified file.
690
691 Example:
692 pcs cluster cib > original.xml
693 cp original.xml new.xml
694 pcs -f new.xml constraint location apache prefers node2
695 pcs cluster cib-push new.xml diff-against=original.xml
696
697 cib-upgrade
698 Upgrade the CIB to conform to the latest version of the document
699 schema.
700
701 edit [scope=<scope> | --config]
702 Edit the cib in the editor specified by the $EDITOR environment
703 variable and push out any changes upon saving. Specify scope to
704 edit a specific section of the CIB. Valid values of the scope
705 are: configuration, nodes, resources, constraints, crm_config,
706 rsc_defaults, op_defaults. --config is the same as scope=con‐
707 figuration. Use of --config is recommended. Do not specify a
708 scope if you need to edit the whole CIB or be warned in the case
709 of outdated CIB.
710
711 node add <node name> [addr=<node address>]... [watchdog=<watchdog
712 path>] [device=<SBD device path>]... [--start [--wait[=<n>]]]
713 [--enable] [--no-watchdog-validation]
714 Add the node to the cluster and synchronize all relevant config‐
715 uration files to the new node. This command can only be run on
716 an existing cluster node.
717
718 The new node is specified by its name and optionally its
719 addresses. If no addresses are specified for the node, pcs will
720 configure corosync to communicate with the node using an address
721 provided in 'pcs host auth' command. Otherwise, pcs will config‐
722 ure corosync to communicate with the node using the specified
723 addresses.
724
725 Use 'watchdog' to specify a path to a watchdog on the new node,
726 when SBD is enabled in the cluster. If SBD is configured with
727 shared storage, use 'device' to specify path to shared device(s)
728 on the new node.
729
730 If --start is specified also start cluster on the new node, if
731 --wait is specified wait up to 'n' seconds for the new node to
732 start. If --enable is specified configure cluster to start on
733 the new node on boot. If --no-watchdog-validation is specified,
734 validation of watchdog will be skipped.
735
736 WARNING: By default, it is tested whether the specified watchdog
737 is supported. This may cause a restart of the system when a
738 watchdog with no-way-out-feature enabled is present. Use
739 --no-watchdog-validation to skip watchdog validation.
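
Example (illustrative; node name and address are placeholders): add a
node, start the cluster on it and enable it to start on boot:
pcs cluster node add node3 addr=10.0.1.13 --start --enable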
740
741 node delete <node name> [<node name>]...
742 Shutdown specified nodes and remove them from the cluster.
743
744 node remove <node name> [<node name>]...
745 Shutdown specified nodes and remove them from the cluster.
746
747 node add-remote <node name> [<node address>] [options] [op <operation
748 action> <operation options> [<operation action> <operation
749 options>]...] [meta <meta options>...] [--wait[=<n>]]
750 Add the node to the cluster as a remote node. Sync all relevant
751 configuration files to the new node. Start the node and config‐
752 ure it to start the cluster on boot. Options are port and recon‐
753 nect_interval. Operations and meta belong to an underlying con‐
754 nection resource (ocf:pacemaker:remote). If node address is not
755 specified for the node, pcs will configure pacemaker to communi‐
756 cate with the node using an address provided in 'pcs host auth'
757 command. Otherwise, pcs will configure pacemaker to communicate
758 with the node using the specified addresses. If --wait is speci‐
759 fied, wait up to 'n' seconds for the node to start.
760
761 node delete-remote <node identifier>
762 Shutdown specified remote node and remove it from the cluster.
763 The node-identifier can be the name of the node or the address
764 of the node.
765
766 node remove-remote <node identifier>
767 Shutdown specified remote node and remove it from the cluster.
768 The node-identifier can be the name of the node or the address
769 of the node.
770
771 node add-guest <node name> <resource id> [options] [--wait[=<n>]]
772 Make the specified resource a guest node resource. Sync all rel‐
773 evant configuration files to the new node. Start the node and
774 configure it to start the cluster on boot. Options are
775 remote-addr, remote-port and remote-connect-timeout. If
776 remote-addr is not specified for the node, pcs will configure
777 pacemaker to communicate with the node using an address provided
778 in 'pcs host auth' command. Otherwise, pcs will configure pace‐
779 maker to communicate with the node using the specified
780 addresses. If --wait is specified, wait up to 'n' seconds for
781 the node to start.
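
Example (illustrative; 'guest1' is a placeholder node name and 'vm-guest1'
an existing resource managing the guest, e.g. a virtual machine):
pcs cluster node add-guest guest1 vm-guest1 remote-addr=192.168.122.50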
782
783 node delete-guest <node identifier>
784 Shutdown specified guest node and remove it from the cluster.
785 The node-identifier can be the name of the node or the address
786 of the node or id of the resource that is used as the guest
787 node.
788
789 node remove-guest <node identifier>
790 Shutdown specified guest node and remove it from the cluster.
791 The node-identifier can be the name of the node or the address
792 of the node or id of the resource that is used as the guest
793 node.
794
795 node clear <node name>
796 Remove specified node from various cluster caches. Use this if a
797 removed node is still considered by the cluster to be a member
798 of the cluster.
799
800 link add <node_name>=<node_address>... [options <link options>]
801 Add a corosync link. One address must be specified for each
802 cluster node. If no linknumber is specified, pcs will use the
803 lowest available linknumber.
804 Link options (documented in corosync.conf(5) man page) are:
805 link_priority, linknumber, mcastport, ping_interval, ping_preci‐
806 sion, ping_timeout, pong_count, transport (udp or sctp)
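
Example (illustrative; node names and addresses are placeholders): add a
new link with an explicit linknumber:
pcs cluster link add node1=10.0.5.11 node2=10.0.5.12 options linknumber=5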
807
808 link delete <linknumber> [<linknumber>]...
809 Remove specified corosync links.
810
811 link remove <linknumber> [<linknumber>]...
812 Remove specified corosync links.
813
814 link update <linknumber> [<node_name>=<node_address>...] [options <link
815 options>]
816 Change node addresses / link options of an existing corosync
link. Adding / removing links is the preferred way; use this
command only if that is not possible.
819 Link options (documented in corosync.conf(5) man page) are:
820 for knet transport: link_priority, mcastport, ping_interval,
821 ping_precision, ping_timeout, pong_count, transport (udp or
822 sctp)
823 for udp and udpu transports: bindnetaddr, broadcast, mcastaddr,
824 mcastport, ttl
825
826 uidgid List the current configured uids and gids of users allowed to
827 connect to corosync.
828
829 uidgid add [uid=<uid>] [gid=<gid>]
830 Add the specified uid and/or gid to the list of users/groups
831 allowed to connect to corosync.
832
833 uidgid delete [uid=<uid>] [gid=<gid>]
834 Remove the specified uid and/or gid from the list of
835 users/groups allowed to connect to corosync.
836
837 uidgid remove [uid=<uid>] [gid=<gid>]
838 Remove the specified uid and/or gid from the list of
839 users/groups allowed to connect to corosync.
840
841 corosync [node]
Get the corosync.conf from the specified node or from the
current node if no node is specified.
844
845 reload corosync
846 Reload the corosync configuration on the current node.
847
848 destroy [--all]
849 Permanently destroy the cluster on the current node, killing all
850 cluster processes and removing all cluster configuration files.
851 Using --all will attempt to destroy the cluster on all nodes in
852 the local cluster.
853
854 WARNING: This command permanently removes any cluster configura‐
855 tion that has been created. It is recommended to run 'pcs clus‐
856 ter stop' before destroying the cluster.
857
858 verify [--full] [-f <filename>]
859 Checks the pacemaker configuration (CIB) for syntax and common
860 conceptual errors. If no filename is specified the check is per‐
861 formed on the currently running cluster. If --full is used more
862 verbose output will be printed.
863
864 report [--from "YYYY-M-D H:M:S" [--to "YYYY-M-D H:M:S"]] <dest>
865 Create a tarball containing everything needed when reporting
866 cluster problems. If --from and --to are not used, the report
867 will include the past 24 hours.
868
869 stonith
870 [status [--hide-inactive]]
871 Show status of all currently configured stonith devices. If
872 --hide-inactive is specified, only show active stonith devices.
873
874 config [<stonith id>]...
875 Show options of all currently configured stonith devices or if
876 stonith ids are specified show the options for the specified
877 stonith device ids.
878
879 list [filter] [--nodesc]
880 Show list of all available stonith agents (if filter is provided
881 then only stonith agents matching the filter will be shown). If
882 --nodesc is used then descriptions of stonith agents are not
883 printed.
884
885 describe <stonith agent> [--full]
886 Show options for specified stonith agent. If --full is speci‐
887 fied, all options including advanced and deprecated ones are
888 shown.
889
890 create <stonith id> <stonith device type> [stonith device options] [op
891 <operation action> <operation options> [<operation action> <operation
892 options>]...] [meta <meta options>...] [--group <group id> [--before
893 <stonith id> | --after <stonith id>]] [--disabled] [--wait[=n]]
894 Create stonith device with specified type and options. If
895 --group is specified the stonith device is added to the group
896 named. You can use --before or --after to specify the position
897 of the added stonith device relatively to some stonith device
already existing in the group. If --disabled is specified the
899 stonith device is not used. If --wait is specified, pcs will
900 wait up to 'n' seconds for the stonith device to start and then
901 return 0 if the stonith device is started, or 1 if the stonith
902 device has not yet started. If 'n' is not specified it defaults
903 to 60 minutes.
904
905 Example: Create a device for nodes node1 and node2
906 pcs stonith create MyFence fence_virt pcmk_host_list=node1,node2
907 Example: Use port p1 for node n1 and ports p2 and p3 for node n2
908 pcs stonith create MyFence fence_virt
909 'pcmk_host_map=n1:p1;n2:p2,p3'
910
911 update <stonith id> [stonith device options]
912 Add/Change options to specified stonith id.
913
914 delete <stonith id>
915 Remove stonith id from configuration.
916
917 remove <stonith id>
918 Remove stonith id from configuration.
919
920 enable <stonith id>... [--wait[=n]]
921 Allow the cluster to use the stonith devices. If --wait is spec‐
922 ified, pcs will wait up to 'n' seconds for the stonith devices
923 to start and then return 0 if the stonith devices are started,
924 or 1 if the stonith devices have not yet started. If 'n' is not
925 specified it defaults to 60 minutes.
926
927 disable <stonith id>... [--wait[=n]]
928 Attempt to stop the stonith devices if they are running and dis‐
929 allow the cluster to use them. If --wait is specified, pcs will
930 wait up to 'n' seconds for the stonith devices to stop and then
931 return 0 if the stonith devices are stopped or 1 if the stonith
932 devices have not stopped. If 'n' is not specified it defaults to
933 60 minutes.
934
935 cleanup [<stonith id>] [--node <node>] [--strict]
936 Make the cluster forget failed operations from history of the
937 stonith device and re-detect its current state. This can be use‐
938 ful to purge knowledge of past failures that have since been
939 resolved.
940 If the named stonith device is part of a group, or one numbered
941 instance of a clone or bundled resource, the clean-up applies to
942 the whole collective resource unless --strict is given.
943 If a stonith id is not specified then all resources / stonith
944 devices will be cleaned up.
945 If a node is not specified then resources / stonith devices on
946 all nodes will be cleaned up.
947
948 refresh [<stonith id>] [--node <node>] [--strict]
949 Make the cluster forget the complete operation history (includ‐
950 ing failures) of the stonith device and re-detect its current
951 state. If you are interested in forgetting failed operations
952 only, use the 'pcs stonith cleanup' command.
953 If the named stonith device is part of a group, or one numbered
954 instance of a clone or bundled resource, the clean-up applies to
955 the whole collective resource unless --strict is given.
956 If a stonith id is not specified then all resources / stonith
957 devices will be refreshed.
958 If a node is not specified then resources / stonith devices on
959 all nodes will be refreshed.
960
961 level [config]
962 Lists all of the fencing levels currently configured.
963
964 level add <level> <target> <stonith id> [stonith id]...
965 Add the fencing level for the specified target with the list of
966 stonith devices to attempt for that target at that level. Fence
967 levels are attempted in numerical order (starting with 1). If a
968 level succeeds (meaning all devices are successfully fenced in
969 that level) then no other levels are tried, and the target is
970 considered fenced. Target may be a node name <node_name> or
971 %<node_name> or node%<node_name>, a node name regular expression
972 regexp%<node_pattern> or a node attribute value
973 attrib%<name>=<value>.
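
Example (illustrative; reuses the 'MyFence' device from the stonith
create example): try MyFence first when fencing node1:
pcs stonith level add 1 node1 MyFence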
974
975 level delete <level> [target] [stonith id]...
976 Removes the fence level for the level, target and/or devices
977 specified. If no target or devices are specified then the fence
978 level is removed. Target may be a node name <node_name> or
979 %<node_name> or node%<node_name>, a node name regular expression
980 regexp%<node_pattern> or a node attribute value
981 attrib%<name>=<value>.
982
983 level remove <level> [target] [stonith id]...
984 Removes the fence level for the level, target and/or devices
985 specified. If no target or devices are specified then the fence
986 level is removed. Target may be a node name <node_name> or
987 %<node_name> or node%<node_name>, a node name regular expression
988 regexp%<node_pattern> or a node attribute value
989 attrib%<name>=<value>.
990
991 level clear [target|stonith id(s)]
992 Clears the fence levels on the target (or stonith id) specified
993 or clears all fence levels if a target/stonith id is not speci‐
994 fied. If more than one stonith id is specified they must be sep‐
995 arated by a comma and no spaces. Target may be a node name
996 <node_name> or %<node_name> or node%<node_name>, a node name
997 regular expression regexp%<node_pattern> or a node attribute
998 value attrib%<name>=<value>. Example: pcs stonith level clear
999 dev_a,dev_b
1000
1001 level verify
1002 Verifies all fence devices and nodes specified in fence levels
1003 exist.
1004
1005 fence <node> [--off]
1006 Fence the node specified (if --off is specified, use the 'off'
1007 API call to stonith which will turn the node off instead of
1008 rebooting it).
1009
1010 confirm <node> [--force]
1011 Confirm to the cluster that the specified node is powered off.
1012 This allows the cluster to recover from a situation where no
1013 stonith device is able to fence the node. This command should
1014 ONLY be used after manually ensuring that the node is powered
1015 off and has no access to shared resources.
1016
1017 WARNING: If this node is not actually powered off or it does
1018 have access to shared resources, data corruption/cluster failure
1019 can occur. To prevent accidental running of this command,
1020 --force or interactive user response is required in order to
1021 proceed.
1022
NOTE: Whether the specified node exists in the cluster is not
checked, so that this command can be used with nodes not visible
from the local cluster partition.
1026
1027 history [show [<node>]]
1028 Show fencing history for the specified node or all nodes if no
1029 node specified.
1030
1031 history cleanup [<node>]
1032 Cleanup fence history of the specified node or all nodes if no
1033 node specified.
1034
1035 history update
1036 Update fence history from all nodes.
1037
1038 sbd enable [watchdog=<path>[@<node>]]... [device=<path>[@<node>]]...
1039 [<SBD_OPTION>=<value>]... [--no-watchdog-validation]
1040 Enable SBD in cluster. Default path for watchdog device is
1041 /dev/watchdog. Allowed SBD options: SBD_WATCHDOG_TIMEOUT
1042 (default: 5), SBD_DELAY_START (default: no), SBD_STARTMODE
1043 (default: always) and SBD_TIMEOUT_ACTION. SBD options are docu‐
1044 mented in sbd(8) man page. It is possible to specify up to 3
1045 devices per node. If --no-watchdog-validation is specified, val‐
1046 idation of watchdogs will be skipped.
1047
1048 WARNING: Cluster has to be restarted in order to apply these
1049 changes.
1050
1051 WARNING: By default, it is tested whether the specified watchdog
1052 is supported. This may cause a restart of the system when a
1053 watchdog with no-way-out-feature enabled is present. Use
1054 --no-watchdog-validation to skip watchdog validation.
1055
Example of enabling SBD in a cluster with watchdog /dev/watchdog2
on node1, /dev/watchdog1 on node2 and /dev/watchdog0 on all other
nodes, device /dev/sdb on node1, device /dev/sda on all other
nodes, and the watchdog timeout set to 10 seconds:
1060
1061 pcs stonith sbd enable watchdog=/dev/watchdog2@node1 watch‐
1062 dog=/dev/watchdog1@node2 watchdog=/dev/watchdog0
1063 device=/dev/sdb@node1 device=/dev/sda SBD_WATCHDOG_TIMEOUT=10
1064
1065
1066 sbd disable
1067 Disable SBD in cluster.
1068
1069 WARNING: Cluster has to be restarted in order to apply these
1070 changes.
1071
1072 sbd device setup device=<path> [device=<path>]... [watchdog-time‐
1073 out=<integer>] [allocate-timeout=<integer>] [loop-timeout=<integer>]
1074 [msgwait-timeout=<integer>]
1075 Initialize SBD structures on device(s) with specified timeouts.
1076
1077 WARNING: All content on device(s) will be overwritten.
1078
1079 sbd device message <device-path> <node> <message-type>
1080 Manually set a message of the specified type on the device for
1081 the node. Possible message types (they are documented in sbd(8)
1082 man page): test, reset, off, crashdump, exit, clear
1083
1084 sbd status [--full]
1085 Show status of SBD services in cluster and local device(s) con‐
1086 figured. If --full is specified, also dump of SBD headers on
1087 device(s) will be shown.
1088
1089 sbd config
1090 Show SBD configuration in cluster.
1091
1092
1093 sbd watchdog list
1094 Show all available watchdog devices on the local node.
1095
1096 WARNING: Listing available watchdogs may cause a restart of the
1097 system when a watchdog with no-way-out-feature enabled is
1098 present.
1099
1100
1101 sbd watchdog test [<watchdog-path>]
1102 This operation is expected to force-reboot the local system
1103 without following any shutdown procedures using a watchdog. If
no watchdog is specified, the available watchdog will be used,
provided only one watchdog device is available on the local system.
1106
1107
1108 acl
1109 [show] List all current access control lists.
1110
1111 enable Enable access control lists.
1112
1113 disable
1114 Disable access control lists.
1115
1116 role create <role id> [description=<description>] [((read | write |
1117 deny) (xpath <query> | id <id>))...]
1118 Create a role with the id and (optional) description specified.
1119 Each role can also have an unlimited number of permissions
1120 (read/write/deny) applied to either an xpath query or the id of
1121 a specific element in the cib.
1122 Permissions are applied to the selected XML element's entire XML
1123 subtree (all elements enclosed within it). Write permission
1124 grants the ability to create, modify, or remove the element and
1125 its subtree, and also the ability to create any "scaffolding"
1126 elements (enclosing elements that do not have attributes other
1127 than an ID). Permissions for more specific matches (more deeply
1128 nested elements) take precedence over more general ones. If mul‐
1129 tiple permissions are configured for the same match (for exam‐
1130 ple, in different roles applied to the same user), any deny per‐
1131 mission takes precedence, then write, then lastly read.
1132 An xpath may include an attribute expression to select only ele‐
1133 ments that match the expression, but the permission still
1134 applies to the entire element (and its subtree), not to the
1135 attribute alone. For example, using the xpath "//*[@name]" to
1136 give write permission would allow changes to the entirety of all
1137 elements that have a "name" attribute and everything enclosed by
1138 those elements. There is no way currently to give permissions
1139 for just one attribute of an element. That is to say, you can
1140 not define an ACL that allows someone to read just the dc-uuid
1141 attribute of the cib tag - that would select the cib element and
1142 give read access to the entire CIB.
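
Example (a minimal sketch; the role id and xpath are illustrative):
pcs acl role create read_only description='read-only access' read xpath /cib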
1143
1144 role delete <role id>
1145 Delete the role specified and remove it from any users/groups it
1146 was assigned to.
1147
1148 role remove <role id>
1149 Delete the role specified and remove it from any users/groups it
1150 was assigned to.
1151
1152 role assign <role id> [to] [user|group] <username/group>
1153 Assign a role to a user or group already created with 'pcs acl
user/group create'. If a user and a group with the same id exist
and it is not specified which should be used, the user will take
precedence. In such cases, specify explicitly whether a user or a
group is meant.
1158
1159 role unassign <role id> [from] [user|group] <username/group>
Remove a role from the specified user. If a user and a group with
the same id exist and it is not specified which should be used,
the user will take precedence. In such cases, specify explicitly
whether a user or a group is meant.
1164
1165 user create <username> [<role id>]...
1166 Create an ACL for the user specified and assign roles to the
1167 user.
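
Example (illustrative; assumes the 'read_only' role from the role create
example):
pcs acl user create alice read_only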
1168
1169 user delete <username>
1170 Remove the user specified (and roles assigned will be unassigned
1171 for the specified user).
1172
1173 user remove <username>
1174 Remove the user specified (and roles assigned will be unassigned
1175 for the specified user).
1176
1177 group create <group> [<role id>]...
1178 Create an ACL for the group specified and assign roles to the
1179 group.
1180
1181 group delete <group>
1182 Remove the group specified (and roles assigned will be unas‐
1183 signed for the specified group).
1184
1185 group remove <group>
1186 Remove the group specified (and roles assigned will be unas‐
1187 signed for the specified group).
1188
1189 permission add <role id> ((read | write | deny) (xpath <query> | id
1190 <id>))...
1191 Add the listed permissions to the role specified. Permissions
1192 are applied to either an xpath query or the id of a specific
1193 element in the CIB.
1194 Permissions are applied to the selected XML element's entire XML
1195 subtree (all elements enclosed within it). Write permission
1196 grants the ability to create, modify, or remove the element and
1197 its subtree, and also the ability to create any "scaffolding"
1198 elements (enclosing elements that do not have attributes other
1199 than an ID). Permissions for more specific matches (more deeply
1200 nested elements) take precedence over more general ones. If mul‐
1201 tiple permissions are configured for the same match (for exam‐
1202 ple, in different roles applied to the same user), any deny per‐
1203 mission takes precedence, then write, then lastly read.
1204 An xpath may include an attribute expression to select only ele‐
1205 ments that match the expression, but the permission still
1206 applies to the entire element (and its subtree), not to the
1207 attribute alone. For example, using the xpath "//*[@name]" to
1208 give write permission would allow changes to the entirety of all
1209 elements that have a "name" attribute and everything enclosed by
1210 those elements. There is no way currently to give permissions
1211 for just one attribute of an element. That is to say, you can
1212 not define an ACL that allows someone to read just the dc-uuid
1213 attribute of the cib tag - that would select the cib element and
1214 give read access to the entire CIB.
1215
1216 permission delete <permission id>
1217 Remove the permission id specified (permission id's are listed
1218 in parenthesis after permissions in 'pcs acl' output).
1219
1220 permission remove <permission id>
1221 Remove the permission id specified (permission id's are listed
1222 in parenthesis after permissions in 'pcs acl' output).
1223
1224 property
1225 [list|show [<property> | --all | --defaults]] | [--all | --defaults]
1226 List property settings (default: lists configured properties).
1227 If --defaults is specified, all property defaults are shown; if
1228 --all is specified, currently configured properties are shown
1229 together with unset properties and their defaults. See the
1230 pacemaker-controld(7) and pacemaker-schedulerd(7) man pages for
1231 a description of the properties.
1232
1233 set <property>=[<value>] ... [--force]
1234 Set specific pacemaker properties (if the value is blank then
1235 the property is removed from the configuration). If a property
1236 is not recognized by pcs, the property will not be created
1237 unless --force is used. See pacemaker-controld(7) and pacemaker-
1238 schedulerd(7) man pages for a description of the properties.
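Example (illustrative): set the maintenance-mode property and
later remove it again by setting a blank value:
pcs property set maintenance-mode=true
pcs property set maintenance-mode=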
1239
1240 unset <property> ...
1241 Remove property from configuration. See pacemaker-controld(7)
1242 and pacemaker-schedulerd(7) man pages for a description of the
1243 properties.
1244
1245 constraint
1246 [list|show] [--full] [--all]
1247 List all current constraints that are not expired. If --all is
1248 specified also show expired constraints. If --full is specified
1249 also list the constraint ids.
1250
1251 location <resource> prefers <node>[=<score>] [<node>[=<score>]]...
1252 Create a location constraint on a resource to prefer the speci‐
1253 fied node with score (default score: INFINITY). Resource may be
1254 either a resource id <resource_id> or %<resource_id> or
1255 resource%<resource_id>, or a resource name regular expression
1256 regexp%<resource_pattern>.
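Example (resource and node names are illustrative):
pcs constraint location WebSite prefers node1=INFINITY node2=50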
1257
1258 location <resource> avoids <node>[=<score>] [<node>[=<score>]]...
1259 Create a location constraint on a resource to avoid the speci‐
1260 fied node with score (default score: INFINITY). Resource may be
1261 either a resource id <resource_id> or %<resource_id> or
1262 resource%<resource_id>, or a resource name regular expression
1263 regexp%<resource_pattern>.
1264
1265 location <resource> rule [id=<rule id>] [resource-discovery=<option>]
1266 [role=master|slave] [constraint-id=<id>] [score=<score> |
1267 score-attribute=<attribute>] <expression>
1268 Creates a location constraint with a rule on the specified
1269 resource where expression looks like one of the following:
1270 defined|not_defined <attribute>
1271 <attribute> lt|gt|lte|gte|eq|ne [string|integer|version]
1272 <value>
1273 date gt|lt <date>
1274 date in_range <date> to <date>
1275 date in_range <date> to duration <duration options>...
1276 date-spec <date spec options>...
1277 <expression> and|or <expression>
1278 ( <expression> )
1279 where duration options and date spec options are: hours, month‐
1280 days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1281 Resource may be either a resource id <resource_id> or
1282 %<resource_id> or resource%<resource_id>, or a resource name
1283 regular expression regexp%<resource_pattern>. If score is omit‐
1284 ted it defaults to INFINITY. If id is omitted one is generated
1285 from the resource id. If resource-discovery is omitted it
1286 defaults to 'always'.
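Example (the resource id and the node attribute name are
illustrative):
pcs constraint location WebSite rule score=-INFINITY datacenter ne east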
1287
1288 location [show [resources [<resource>...]] | [nodes [<node>...]]]
1289 [--full] [--all]
1290 List all the current location constraints that are not expired.
1291 If 'resources' is specified, location constraints are displayed
1292 per resource (default). If 'nodes' is specified, location con‐
1293 straints are displayed per node. If specific nodes or resources
1294 are specified, only information about them is shown. Resource
1295 may be either a resource id <resource_id> or %<resource_id> or
1296 resource%<resource_id>, or a resource name regular expression
1297 regexp%<resource_pattern>. If --full is specified show the
1298 internal constraint ids as well. If --all is specified show the
1299 expired constraints.
1300
1301 location add <id> <resource> <node> <score> [resource-discov‐
1302 ery=<option>]
1303 Add a location constraint with the appropriate id for the speci‐
1304 fied resource, node name and score. Resource may be either a
1305 resource id <resource_id> or %<resource_id> or
1306 resource%<resource_id>, or a resource name regular expression
1307 regexp%<resource_pattern>.
1308
1309 location delete <id>
1310 Remove a location constraint with the appropriate id.
1311
1312 location remove <id>
1313 Remove a location constraint with the appropriate id.
1314
1315 order [show] [--full]
1316 List all current ordering constraints (if --full is specified
1317 show the internal constraint ids as well).
1318
1319 order [action] <resource id> then [action] <resource id> [options]
1320 Add an ordering constraint specifying actions (start, stop,
1321 promote, demote); if no action is specified, the default action
1322 is start. Available options are kind=Optional/Manda‐
1323 tory/Serialize, symmetrical=true/false, require-all=true/false
1324 and id=<constraint-id>.
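Example (resource ids are illustrative): start 'Database' before
'WebSite':
pcs constraint order start Database then start WebSite kind=Mandatory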
1325
1326 order set <resource1> [resourceN]... [options] [set <resourceX> ...
1327 [options]] [setoptions [constraint_options]]
1328 Create an ordered set of resources. Available options are
1329 sequential=true/false, require-all=true/false and
1330 action=start/promote/demote/stop. Available constraint_options
1331 are id=<constraint-id>, kind=Optional/Mandatory/Serialize and
1332 symmetrical=true/false.
1333
1334 order delete <resource1> [resourceN]...
1335 Remove the resource(s) from any ordering constraints.
1336
1337 order remove <resource1> [resourceN]...
1338 Remove the resource(s) from any ordering constraints.
1339
1340 colocation [show] [--full]
1341 List all current colocation constraints (if --full is specified
1342 show the internal constraint ids as well).
1343
1344 colocation add [<role>] <source resource id> with [<role>] <target
1345 resource id> [score] [options] [id=constraint-id]
1346 Request <source resource> to run on the same node where pace‐
1347 maker has determined <target resource> should run. Positive
1348 values of score mean the resources should be run on the same
1349 node, negative values mean the resources should not be run on
1350 the same node. Specifying 'INFINITY' (or '-INFINITY') for the
1351 score forces <source resource> to run (or not run) with <target
1352 resource> (score defaults to "INFINITY"). A role can be: 'Mas‐
1353 ter', 'Slave', 'Started', 'Stopped' (if no role is specified, it
1354 defaults to 'Started').
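Example (resource ids are illustrative): keep 'WebSite' on the
same node as 'VirtualIP':
pcs constraint colocation add WebSite with VirtualIP INFINITY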
1355
1356 colocation set <resource1> [resourceN]... [options] [set <resourceX>
1357 ... [options]] [setoptions [constraint_options]]
1358 Create a colocation constraint with a resource set. Available
1359 options are sequential=true/false and role=Stopped/Started/Mas‐
1360 ter/Slave. Available constraint_options are id and either of:
1361 score, score-attribute, score-attribute-mangle.
1362
1363 colocation delete <source resource id> <target resource id>
1364 Remove colocation constraints with specified resources.
1365
1366 colocation remove <source resource id> <target resource id>
1367 Remove colocation constraints with specified resources.
1368
1369 ticket [show] [--full]
1370 List all current ticket constraints (if --full is specified show
1371 the internal constraint ids as well).
1372
1373 ticket add <ticket> [<role>] <resource id> [<options>] [id=<con‐
1374 straint-id>]
1375 Create a ticket constraint for <resource id>. Available option
1376 is loss-policy=fence/stop/freeze/demote. A role can be master,
1377 slave, started or stopped.
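Example (the ticket name and resource id are illustrative):
pcs constraint ticket add webticket WebSite loss-policy=fence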
1378
1379 ticket set <resource1> [<resourceN>]... [<options>] [set <resourceX>
1380 ... [<options>]] setoptions <constraint_options>
1381 Create a ticket constraint with a resource set. Available
1382 options are role=Stopped/Started/Master/Slave. Required con‐
1383 straint option is ticket=<ticket>. Optional constraint options
1384 are id=<constraint-id> and loss-policy=fence/stop/freeze/demote.
1385
1386 ticket delete <ticket> <resource id>
1387 Remove all ticket constraints with <ticket> from <resource id>.
1388
1389 ticket remove <ticket> <resource id>
1390 Remove all ticket constraints with <ticket> from <resource id>.
1391
1392 delete <constraint id>...
1393 Remove constraint(s) or constraint rules with the specified
1394 id(s).
1395
1396 remove <constraint id>...
1397 Remove constraint(s) or constraint rules with the specified
1398 id(s).
1399
1400 ref <resource>...
1401 List constraints referencing specified resource.
1402
1403 rule add <constraint id> [id=<rule id>] [role=master|slave]
1404 [score=<score>|score-attribute=<attribute>] <expression>
1405 Add a rule to a location constraint specified by 'constraint id'
1406 where the expression looks like one of the following:
1407 defined|not_defined <attribute>
1408 <attribute> lt|gt|lte|gte|eq|ne [string|integer|version]
1409 <value>
1410 date gt|lt <date>
1411 date in_range <date> to <date>
1412 date in_range <date> to duration <duration options>...
1413 date-spec <date spec options>...
1414 <expression> and|or <expression>
1415 ( <expression> )
1416 where duration options and date spec options are: hours, month‐
1417 days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1418 If score is omitted it defaults to INFINITY. If id is omitted
1419 one is generated from the constraint id.
1420
1421 rule delete <rule id>
1422 Remove a rule from its location constraint and if it's the last
1423 rule, the constraint will also be removed.
1424
1425 rule remove <rule id>
1426 Remove a rule from its location constraint and if it's the last
1427 rule, the constraint will also be removed.
1428
1429 qdevice
1430 status <device model> [--full] [<cluster name>]
1431 Show runtime status of specified model of quorum device
1432 provider. Using --full will give more detailed output. If
1433 <cluster name> is specified, only information about the speci‐
1434 fied cluster will be displayed.
1435
1436 setup model <device model> [--enable] [--start]
1437 Configure the specified model of quorum device provider. The
1438 quorum device can then be added to clusters by running the "pcs
1439 quorum device add" command in a cluster. --start will also start the
1440 provider. --enable will configure the provider to start on
1441 boot.
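Example (a typical invocation on the host that is to provide the
quorum device; 'net' is currently the only supported model):
pcs qdevice setup model net --enable --start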
1442
1443 destroy <device model>
1444 Disable and stop specified model of quorum device provider and
1445 delete its configuration files.
1446
1447 start <device model>
1448 Start specified model of quorum device provider.
1449
1450 stop <device model>
1451 Stop specified model of quorum device provider.
1452
1453 kill <device model>
1454 Force specified model of quorum device provider to stop (per‐
1455 forms kill -9). Note that the init system (e.g. systemd) may
1456 detect that the qdevice is not running and start it again. If
1457 you want to stop the qdevice, run the "pcs qdevice stop" command.
1458
1459 enable <device model>
1460 Configure specified model of quorum device provider to start on
1461 boot.
1462
1463 disable <device model>
1464 Configure specified model of quorum device provider to not start
1465 on boot.
1466
1467 quorum
1468 [config]
1469 Show quorum configuration.
1470
1471 status Show quorum runtime status.
1472
1473 device add [<generic options>] model <device model> [<model options>]
1474 [heuristics <heuristics options>]
1475 Add a quorum device to the cluster. Quorum device should be con‐
1476 figured first with "pcs qdevice setup". It is not possible to
1477 use more than one quorum device in a cluster simultaneously.
1478 Currently the only supported model is 'net'. It requires model
1479 options 'algorithm' and 'host' to be specified. Options are doc‐
1480 umented in corosync-qdevice(8) man page; generic options are
1481 'sync_timeout' and 'timeout', for model net options check the
1482 quorum.device.net section, for heuristics options see the quo‐
1483 rum.device.heuristics section. Pcs automatically creates and
1484 distributes TLS certificates and sets the 'tls' model option to
1485 the default value 'on'.
1486 Example: pcs quorum device add model net algorithm=lms
1487 host=qnetd.internal.example.com
1488
1489 device heuristics delete
1490 Remove all heuristics settings of the configured quorum device.
1491
1492 device heuristics remove
1493 Remove all heuristics settings of the configured quorum device.
1494
1495 device delete
1496 Remove a quorum device from the cluster.
1497
1498 device remove
1499 Remove a quorum device from the cluster.
1500
1501 device status [--full]
1502 Show quorum device runtime status. Using --full will give more
1503 detailed output.
1504
1505 device update [<generic options>] [model <model options>] [heuristics
1506 <heuristics options>]
1507 Add/Change quorum device options. Requires the cluster to be
1508 stopped. Model and options are all documented in corosync-qde‐
1509 vice(8) man page; for heuristics options check the quo‐
1510 rum.device.heuristics subkey section, for model options check
1511 the quorum.device.<device model> subkey sections.
1512
1513 WARNING: If you want to change the "host" option of qdevice
1514 model net, use the "pcs quorum device remove" and "pcs quorum
1515 device add" commands to set up the configuration properly,
1516 unless the old and the new host are the same machine.
1517
1518 expected-votes <votes>
1519 Set expected votes in the live cluster to the specified value.
1520 This only affects the live cluster; it does not change any
1521 configuration files.
1522
1523 unblock [--force]
1524 Cancel waiting for all nodes when establishing quorum. Useful
1525 in situations where you know the cluster is inquorate, but you
1526 are confident that the cluster should proceed with resource man‐
1527 agement regardless. This command should ONLY be used when nodes
1528 which the cluster is waiting for have been confirmed to be pow‐
1529 ered off and to have no access to shared resources.
1530
1531 WARNING: If the nodes are not actually powered off or they do
1532 have access to shared resources, data corruption/cluster failure
1533 can occur. To prevent accidental running of this command,
1534 --force or interactive user response is required in order to
1535 proceed.
1536
1537 update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]]
1538 [last_man_standing_window=[<time in ms>]] [wait_for_all=[0|1]]
1539 Add/Change quorum options. At least one option must be speci‐
1540 fied. Options are documented in corosync's votequorum(5) man
1541 page. Requires the cluster to be stopped.
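Example (illustrative; run while the cluster is stopped):
pcs quorum update wait_for_all=1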
1542
1543 booth
1544 setup sites <address> <address> [<address>...] [arbitrators <address>
1545 ...] [--force]
1546 Write new booth configuration with specified sites and arbitra‐
1547 tors. Total number of peers (sites and arbitrators) must be
1548 odd. When the configuration file already exists, command fails
1549 unless --force is specified.
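Example (the addresses are placeholders): two sites and one
arbitrator:
pcs booth setup sites 192.168.1.10 192.168.2.10 arbitrators 192.168.3.10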
1550
1551 destroy
1552 Remove booth configuration files.
1553
1554 ticket add <ticket> [<name>=<value> ...]
1555 Add a new ticket to the current configuration. Ticket options
1556 are described in the booth man page.
1557
1558 ticket delete <ticket>
1559 Remove the specified ticket from the current configuration.
1560
1561 ticket remove <ticket>
1562 Remove the specified ticket from the current configuration.
1563
1564 config [<node>]
1565 Show booth configuration from the specified node, or from the
1566 current node if no node is specified.
1567
1568 create ip <address>
1569 Make the cluster run booth service on the specified ip address
1570 as a cluster resource. Typically this is used to run booth
1571 site.
1572
1573 delete Remove booth resources created by the "pcs booth create" com‐
1574 mand.
1575
1576 remove Remove booth resources created by the "pcs booth create" com‐
1577 mand.
1578
1579 restart
1580 Restart booth resources created by the "pcs booth create" com‐
1581 mand.
1582
1583 ticket grant <ticket> [<site address>]
1584 Grant the ticket to the site specified by the address and hence
1585 to the booth formation this site is a member of. When the
1586 address is omitted, the site address specified with the 'pcs
1587 booth create' command is used; specifying a site address is
1588 therefore mandatory when running this command on a host in an
1589 arbitrator role.
1590 Note that the ticket must not already be granted in the given
1591 booth formation. Barring direct intervention at the sites, the
1592 only way to move a granted ticket (an ad-hoc and, for lack of
1593 direct atomicity, potentially abrupt change) is to revoke it
1594 first and only then grant it at another site.
1596
1597 ticket revoke <ticket> [<site address>]
1598 Revoke the ticket in the booth formation as identified with one
1599 of its member sites specified by the address. When the address
1600 is omitted, the site address specified with a prior 'pcs booth
1601 create' command is used; specifying a site address is therefore
1602 mandatory when running this command on a host in an arbitrator
1603 role.
1604
1605 status Print current status of booth on the local node.
1606
1607 pull <node>
1608 Pull booth configuration from the specified node.
1609
1610 sync [--skip-offline]
1611 Send booth configuration from the local node to all nodes in the
1612 cluster.
1613
1614 enable Enable booth arbitrator service.
1615
1616 disable
1617 Disable booth arbitrator service.
1618
1619 start Start booth arbitrator service.
1620
1621 stop Stop booth arbitrator service.
1622
1623 status
1624 [status] [--full] [--hide-inactive]
1625 View all information about the cluster and resources (--full
1626 provides more details, --hide-inactive hides inactive
1627 resources).
1628
1629 resources [--hide-inactive]
1630 Show status of all currently configured resources. If
1631 --hide-inactive is specified, only show active resources.
1632
1633 cluster
1634 View current cluster status.
1635
1636 corosync
1637 View current membership information as seen by corosync.
1638
1639 quorum View current quorum status.
1640
1641 qdevice <device model> [--full] [<cluster name>]
1642 Show runtime status of specified model of quorum device
1643 provider. Using --full will give more detailed output. If
1644 <cluster name> is specified, only information about the speci‐
1645 fied cluster will be displayed.
1646
1647 booth Print current status of booth on the local node.
1648
1649 nodes [corosync | both | config]
1650 View current status of nodes from pacemaker. If 'corosync' is
1651 specified, view current status of nodes from corosync instead.
1652 If 'both' is specified, view current status of nodes from both
1653 corosync & pacemaker. If 'config' is specified, print nodes from
1654 corosync & pacemaker configuration.
1655
1656 pcsd [<node>]...
1657 Show current status of pcsd on nodes specified, or on all nodes
1658 configured in the local cluster if no nodes are specified.
1659
1660 xml View xml version of status (output from crm_mon -r -1 -X).
1661
1662 config
1663 [show] View full cluster configuration.
1664
1665 backup [filename]
1666 Creates a tarball containing the cluster configuration files.
1667 If a filename is not specified, the standard output is used.
1668
1669 restore [--local] [filename]
1670 Restores the cluster configuration files on all nodes from the
1671 backup. If a filename is not specified, the standard input is
1672 used. If --local is specified only the files on the current
1673 node will be restored.
1674
1675 checkpoint
1676 List all available configuration checkpoints.
1677
1678 checkpoint view <checkpoint_number>
1679 Show specified configuration checkpoint.
1680
1681 checkpoint diff <checkpoint_number> <checkpoint_number>
1682 Show differences between the two specified checkpoints. Use
1683 checkpoint number 'live' to compare a checkpoint to the current
1684 live configuration.
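Example (checkpoint numbers depend on the local checkpoint
history): compare checkpoint 5 with the live configuration:
pcs config checkpoint diff 5 live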
1685
1686 checkpoint restore <checkpoint_number>
1687 Restore cluster configuration to specified checkpoint.
1688
1689 import-cman output=<filename> [input=<filename>] [--interactive] [out‐
1690 put-format=corosync.conf] [dist=<dist>]
1691 Converts CMAN cluster configuration to Pacemaker cluster config‐
1692 uration. Converted configuration will be saved to 'output' file.
1693 To send the configuration to the cluster nodes the 'pcs config
1694 restore' command can be used. If --interactive is specified you
1695 will be prompted to solve incompatibilities manually. If no
1696 input is specified /etc/cluster/cluster.conf will be used.
1697 Optionally you can specify the output version by setting the
1698 'dist' option, e.g. redhat,7.3 or debian,7 or ubuntu,trusty. You can
1699 get the list of supported dist values by running the "clufter
1700 --list-dists" command. If 'dist' is not specified, it defaults
1701 to this node's version.
1702
1703 import-cman output=<filename> [input=<filename>] [--interactive] out‐
1704 put-format=pcs-commands|pcs-commands-verbose [dist=<dist>]
1705 Converts CMAN cluster configuration to a list of pcs commands
1706 which, when executed, recreate the same cluster as a Pacemaker
1707 cluster. Commands will be saved to the 'output' file. For other
1708 options see above.
1709
1710 export pcs-commands|pcs-commands-verbose [output=<filename>]
1711 [dist=<dist>]
1712 Creates a list of pcs commands which upon execution recreates
1713 the current cluster running on this node. Commands will be saved
1714 to 'output' file or written to stdout if 'output' is not speci‐
1715 fied. Use pcs-commands to get a simple list of commands, whereas
1716 pcs-commands-verbose creates a list including comments and debug
1717 messages. Optionally specify the output version by setting the
1718 'dist' option, e.g. redhat,7.3 or debian,7 or ubuntu,trusty. You can
1719 get the list of supported dist values by running the "clufter
1720 --list-dists" command. If 'dist' is not specified, it defaults
1721 to this node's version.
1722
1723 pcsd
1724 certkey <certificate file> <key file>
1725 Load custom certificate and key files for use in pcsd.
1726
1727 sync-certificates
1728 Sync pcsd certificates to all nodes in the local cluster.
1729
1730 deauth [<token>]...
1731 Delete locally stored authentication tokens used by remote sys‐
1732 tems to connect to the local pcsd instance. If no tokens are
1733 specified all tokens will be deleted. After this command is run
1734 other nodes will need to re-authenticate against this node to be
1735 able to connect to it.
1736
1737 host
1738 auth (<host name> [addr=<address>[:<port>]])... [-u <username>] [-p
1739 <password>]
1740 Authenticate local pcs/pcsd against pcsd on specified hosts. It
1741 is possible to specify an address and a port via which pcs/pcsd
1742 will communicate with each host. If an address is not specified
1743 a host name will be used. If a port is not specified 2224 will
1744 be used.
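Example (host names, addresses and the user name are
placeholders):
pcs host auth node1 addr=192.168.1.11 node2 addr=192.168.1.12:2224 -u hacluster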
1745
1746 deauth [<host name>]...
1747 Delete authentication tokens which allow pcs/pcsd on the current
1748 system to connect to remote pcsd instances on specified host
1749 names. If the current system is a member of a cluster, the
1750 tokens will be deleted from all nodes in the cluster. If no host
1751 names are specified all tokens will be deleted. After this com‐
1752 mand is run this node will need to re-authenticate against other
1753 nodes to be able to connect to them.
1754
1755 node
1756 attribute [[<node>] [--name <name>] | <node> <name>=<value> ...]
1757 Manage node attributes. If no parameters are specified, show
1758 attributes of all nodes. If one parameter is specified, show
1759 attributes of specified node. If --name is specified, show
1760 specified attribute's value from all nodes. If more parameters
1761 are specified, set attributes of specified node. Attributes can
1762 be removed by setting an attribute without a value.
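Example (node and attribute names are illustrative): set an
attribute and then remove it by setting it without a value:
pcs node attribute node1 datacenter=east
pcs node attribute node1 datacenter=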
1763
1764 maintenance [--all | <node>...] [--wait[=n]]
1765 Put the specified node(s) into maintenance mode. If no nodes or
1766 options are specified, the current node is put into maintenance
1767 mode; if --all is specified, all nodes are put into maintenance
1768 mode. If --wait is specified, pcs will wait up to 'n' seconds
1769 for the node(s) to be put into maintenance mode and then return
1770 0 on success or 1 if the operation has not succeeded yet. If 'n'
1771 is not specified it defaults to 60 minutes.
1772
1773 unmaintenance [--all | <node>...] [--wait[=n]]
1774 Remove node(s) from maintenance mode. If no nodes or options are
1775 specified, the current node is removed from maintenance mode; if
1776 --all is specified, all nodes are removed from maintenance mode.
1777 If --wait is specified, pcs will wait up to 'n' seconds for the
1778 node(s) to be removed from maintenance mode and then return 0 on
1779 success or 1 if the operation has not succeeded yet. If 'n' is
1780 not specified it defaults to 60 minutes.
1781
1782 standby [--all | <node>...] [--wait[=n]]
1783 Put the specified node(s) into standby mode (the node specified
1784 will no longer be able to host resources). If no nodes or
1785 options are specified, the current node is put into standby
1786 mode; if --all is specified, all nodes are put into standby
1787 mode. If --wait is specified, pcs will wait up to 'n' seconds
1788 for the node(s) to be put into standby mode and then return 0 on
1789 success or 1 if the operation has not succeeded yet. If 'n' is
1790 not specified it defaults to 60 minutes.
1791
1792 unstandby [--all | <node>...] [--wait[=n]]
1793 Remove node(s) from standby mode (the node specified will now be
1794 able to host resources). If no nodes or options are specified,
1795 the current node is removed from standby mode; if --all is
1796 specified, all nodes are removed from standby mode. If --wait is
1797 specified, pcs will wait up to 'n' seconds for the node(s) to be
1798 removed from standby mode and then return 0 on success or 1 if
1799 the operation has not succeeded yet. If 'n' is not specified it
1800 defaults to 60 minutes.
1801
1802 utilization [[<node>] [--name <name>] | <node> <name>=<value> ...]
1803 Add specified utilization options to specified node. If node is
1804 not specified, shows utilization of all nodes. If --name is
1805 specified, shows specified utilization value from all nodes. If
1806 utilization options are not specified, shows utilization of
1807 specified node. Utilization options must be in the form
1808 name=value, where value has to be an integer. Options may be
1809 removed by setting an option without a value. Example: pcs node
1810 utilization node1 cpu=4 ram=
1811
1812 alert
1813 [config|show]
1814 Show all configured alerts.
1815
1816 create path=<path> [id=<alert-id>] [description=<description>] [options
1817 [<option>=<value>]...] [meta [<meta-option>=<value>]...]
1818 Define an alert handler with the specified path. An id will be
1819 automatically generated if it is not specified.
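Example (the handler path and alert id are placeholders; any
executable alert agent may be used):
pcs alert create path=/usr/local/bin/alert_handler.sh id=my_alert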
1820
1821 update <alert-id> [path=<path>] [description=<description>] [options
1822 [<option>=<value>]...] [meta [<meta-option>=<value>]...]
1823 Update an existing alert handler with specified id.
1824
1825 delete <alert-id> ...
1826 Remove alert handlers with specified ids.
1827
1828 remove <alert-id> ...
1829 Remove alert handlers with specified ids.
1830
1831 recipient add <alert-id> value=<recipient-value> [id=<recipient-id>]
1832 [description=<description>] [options [<option>=<value>]...] [meta
1833 [<meta-option>=<value>]...]
1834 Add a new recipient to the specified alert handler.
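Example (the alert id and recipient value are placeholders,
continuing the previous example):
pcs alert recipient add my_alert value=/var/log/alerts.log id=my_alert_log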
1835
1836 recipient update <recipient-id> [value=<recipient-value>] [descrip‐
1837 tion=<description>] [options [<option>=<value>]...] [meta
1838 [<meta-option>=<value>]...]
1839 Update an existing recipient identified by its id.
1840
1841 recipient delete <recipient-id> ...
1842 Remove specified recipients.
1843
1844 recipient remove <recipient-id> ...
1845 Remove specified recipients.
1846
1847 client
1848 local-auth [<pcsd-port>] [-u <username>] [-p <password>]
1849 Authenticate current user to local pcsd. This is required to run
1850 some pcs commands which may require root user permissions, such
1851 as 'pcs cluster start'.
1852
1853 dr
1854 config Display disaster-recovery configuration from the local node.
1855
1856 status [--full] [--hide-inactive]
1857 Display status of the local and the remote site cluster (--full
1858 provides more details, --hide-inactive hides inactive
1859 resources).
1860
1861 set-recovery-site <recovery site node>
1862 Set up disaster-recovery with the local cluster being the pri‐
1863 mary site. The recovery site is defined by a name of one of its
1864 nodes.
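Example (the node name is a placeholder for a node of the
recovery site cluster):
pcs dr set-recovery-site recovery-node1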
1865
1866 destroy
1867 Permanently destroy disaster-recovery configuration on all
1868 sites.
1869
1869
1870EXAMPLES
1871 Show all resources
1872 # pcs resource config
1873
1874 Show options specific to the 'VirtualIP' resource
1875 # pcs resource config VirtualIP
1876
1877 Create a new resource called 'VirtualIP' with options
1878 # pcs resource create VirtualIP ocf:heartbeat:IPaddr2
1879 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
1880
1881 Create the same 'VirtualIP' resource specifying only the agent name (standard and provider are determined automatically)
1882 # pcs resource create VirtualIP IPaddr2 ip=192.168.0.99
1883 cidr_netmask=32 nic=eth2 op monitor interval=30s
1884
1885 Change the ip address of VirtualIP and remove the nic option
1886 # pcs resource update VirtualIP ip=192.168.0.98 nic=
1887
1888 Delete the VirtualIP resource
1889 # pcs resource delete VirtualIP
1890
1891 Create the MyStonith stonith fence_virt device which can fence host
1892 'f1'
1893 # pcs stonith create MyStonith fence_virt pcmk_host_list=f1
1894
1895 Set the stonith-enabled property to false on the cluster (which dis‐
1896 ables stonith)
1897 # pcs property set stonith-enabled=false
1898
1899USING --FORCE IN PCS COMMANDS
1900 Various pcs commands accept the --force option. Its purpose is to
1901 override some of the checks that pcs performs or some of the errors
1902 that may occur when a pcs command is run. When such an error occurs,
1903 pcs prints the error with a note that it may be overridden. The exact
1904 behavior of the option differs for each pcs command. Using the --force
1905 option can lead to situations that would normally be prevented by the
1906 logic of pcs commands, and therefore its use is strongly discouraged
1907 unless you know what you are doing.
1908
1909ENVIRONMENT VARIABLES
1910 EDITOR
1911 Path to a plain-text editor. This is used when pcs is requested
1912 to present a text for the user to edit.
1913
1914 no_proxy, https_proxy, all_proxy, NO_PROXY, HTTPS_PROXY, ALL_PROXY
1915 These environment variables (listed according to their priori‐
1916 ties) control how pcs handles proxy servers when connecting to
1917 cluster nodes. See curl(1) man page for details.
1918
1919CHANGES IN PCS-0.10
1920 This section summarizes the most important changes to commands in
1921 pcs-0.10.x compared to pcs-0.9.x. For a detailed description of the
1922 current commands see above.
1923
1924 cluster
1925 auth The 'pcs cluster auth' command only authenticates nodes in a
1926 local cluster and does not accept a node list. The new command
1927 for authentication is 'pcs host auth'. It allows specifying host
1928 names, addresses and pcsd ports.
1929
1930 node add
1931 Custom node names and Corosync 3.x with knet are now fully
1932 supported; therefore, the syntax has been completely changed.
1933 The --device and --watchdog options have been replaced with
1934 'device' and 'watchdog' options, respectively.
1935
1936 quorum This command has been replaced with 'pcs quorum'.
1937
1938 remote-node add
1939 This command has been replaced with 'pcs cluster node
1940 add-guest'.
1941
1942 remote-node remove
1943 This command has been replaced with 'pcs cluster node
1944 delete-guest' and its alias 'pcs cluster node remove-guest'.
1945
1946 setup Custom node names and Corosync 3.x with knet are now fully
1947 supported; therefore, the syntax has been completely changed.
1948 The --name option has been removed. The first parameter of the
1949 command is the cluster name now.
1950
1951 standby
1952 This command has been replaced with 'pcs node standby'.
1953
1954 uidgid rm
1955 This command has been deprecated, use 'pcs cluster uidgid
1956 delete' or 'pcs cluster uidgid remove' instead.
1957
1958 unstandby
1959 This command has been replaced with 'pcs node unstandby'.
1960
1961 verify The -V option has been replaced with --full.
1962 To specify a filename, use the -f option.
1963
1964 pcsd
1965 clear-auth
1966 This command has been replaced with 'pcs host deauth' and 'pcs
1967 pcsd deauth'.
1968
1969 property
1970 set The --node option is no longer supported. Use the 'pcs node
1971 attribute' command to set node attributes.
1972
1973 show The --node option is no longer supported. Use the 'pcs node
1974 attribute' command to view node attributes.
1975
1976 unset The --node option is no longer supported. Use the 'pcs node
1977 attribute' command to unset node attributes.
1978
1979 resource
1980 create The 'master' keyword has been changed to 'promotable'.
1981
1982 failcount reset
1983 The command has been removed as 'pcs resource cleanup' does
1984 exactly the same job.
1985
1986 master This command has been replaced with 'pcs resource promotable'.
1987
1988 show Previously, this command displayed either status or configura‐
1989 tion of resources depending on the parameters specified. This
1990 was confusing, therefore the command was replaced by several new
1991 commands. To display resources status, run 'pcs resource' or
1992 'pcs resource status'. To display resources configuration, run
1993 'pcs resource config' or 'pcs resource config <resource name>'.
1994 To display configured resource groups, run 'pcs resource group
1995 list'.
1996
1997 status
1998 groups This command has been replaced with 'pcs resource group list'.
1999
2000 stonith
2001 sbd device setup
2002 The --device option has been replaced with the 'device' option.
2003
2004 sbd enable
2005 The --device and --watchdog options have been replaced with
2006 'device' and 'watchdog' options, respectively.
2007
2008 show Previously, this command displayed either status or configura‐
2009 tion of stonith resources depending on the parameters specified.
2010 This was confusing, therefore the command was replaced by sev‐
2011 eral new commands. To display stonith resources status, run 'pcs
2012 stonith' or 'pcs stonith status'. To display stonith resources
2013 configuration, run 'pcs stonith config' or 'pcs stonith config
2014 <stonith name>'.
2015
2016SEE ALSO
2017 http://clusterlabs.org/doc/
2018
2019 pcsd(8), pcs_snmp_agent(8)
2020
2021 corosync_overview(8), votequorum(5), corosync.conf(5), corosync-qde‐
2022 vice(8), corosync-qdevice-tool(8), corosync-qnetd(8),
2023 corosync-qnetd-tool(8)
2024
2025 pacemaker-controld(7), pacemaker-fenced(7), pacemaker-schedulerd(7),
2026 crm_mon(8), crm_report(8), crm_simulate(8)
2027
2028 boothd(8), sbd(8)
2029
2030 clufter(1)
2031
2032
2033
2034pcs 0.10.5 March 2020 PCS(8)