PCS(8)                   System Administration Utilities                   PCS(8)

NAME
       pcs - pacemaker/corosync configuration system

SYNOPSIS
       pcs [-f file] [-h] [commands]...

DESCRIPTION
       Control and configure pacemaker and corosync.

OPTIONS
       -h, --help
              Display usage and exit.

       -f file
              Perform actions on file instead of active CIB.
              Commands supporting the option use the initial state of the specified file as their input and then overwrite the file with the state reflecting the requested operation(s).
              A few commands only use the specified file in read-only mode since their effect is not a CIB modification.

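              A minimal sketch of the -f workflow (the file, resource and node names are placeholders): save the CIB to a file, modify it offline with 'pcs -f', then push it back with 'pcs cluster cib-push':
              pcs cluster cib > cib.xml
              pcs -f cib.xml constraint location VirtualIP prefers node1
              pcs cluster cib-push cib.xml
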
       --debug
              Print all network traffic and external commands run.

       --version
              Print pcs version information. List pcs capabilities if --full is specified.

       --request-timeout=<timeout>
              Timeout for each outgoing request to another node in seconds. Default is 60s.

       Commands:
           cluster
                  Configure cluster options and nodes.

           resource
                  Manage cluster resources.

           stonith
                  Manage fence devices.

           constraint
                  Manage resource constraints.

           property
                  Manage pacemaker properties.

           acl
                  Manage pacemaker access control lists.

           qdevice
                  Manage quorum device provider on the local host.

           quorum
                  Manage cluster quorum settings.

           booth
                  Manage booth (cluster ticket manager).

           status
                  View cluster status.

           config
                  View and manage cluster configuration.

           pcsd
                  Manage pcs daemon.

           host
                  Manage hosts known to pcs/pcsd.

           node
                  Manage cluster nodes.

           alert
                  Manage pacemaker alerts.

           client
                  Manage pcsd client configuration.

       resource
           [status [--hide-inactive]]
                  Show status of all currently configured resources. If --hide-inactive is specified, only show active resources.

           config [<resource id>]...
                  Show options of all currently configured resources or, if resource ids are specified, show the options for the specified resource ids.

           list [filter] [--nodesc]
                  Show a list of all available resource agents (if filter is provided then only resource agents matching the filter will be shown). If --nodesc is used then descriptions of resource agents are not printed.

           describe [<standard>:[<provider>:]]<type> [--full]
                  Show options for the specified resource. If --full is specified, all options including advanced and deprecated ones are shown.

           create <resource id> [<standard>:[<provider>:]]<type> [resource options] [op <operation action> <operation options> [<operation action> <operation options>]...] [meta <meta options>...] [clone [<clone options>] | promotable [<promotable options>] | --group <group id> [--before <resource id> | --after <resource id>] | bundle <bundle id>] [--disabled] [--no-default-ops] [--wait[=n]]
                  Create the specified resource. If clone is used a clone resource is created. If promotable is used a promotable clone resource is created. If --group is specified the resource is added to the named group. You can use --before or --after to specify the position of the added resource relative to a resource already existing in the group. If bundle is specified, the resource will be created inside of the specified bundle. If --disabled is specified the resource is not started automatically. If --no-default-ops is specified, only monitor operations are created for the resource and all other operations use default settings. If --wait is specified, pcs will wait up to 'n' seconds for the resource to start and then return 0 if the resource is started, or 1 if the resource has not yet started. If 'n' is not specified it defaults to 60 minutes.

                  Example: Create a new resource called 'VirtualIP' with IP address 192.168.0.99, netmask of 32, monitored every 30 seconds, on eth2:
                  pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s

           delete <resource id|group id|bundle id|clone id>
                  Deletes the resource, group, bundle or clone (and all resources within the group/bundle/clone).

           remove <resource id|group id|bundle id|clone id>
                  Deletes the resource, group, bundle or clone (and all resources within the group/bundle/clone).

           enable <resource id>... [--wait[=n]]
                  Allow the cluster to start the resources. Depending on the rest of the configuration (constraints, options, failures, etc), the resources may remain stopped. If --wait is specified, pcs will wait up to 'n' seconds for the resources to start and then return 0 if the resources are started, or 1 if the resources have not yet started. If 'n' is not specified it defaults to 60 minutes.

           disable <resource id>... [--safe [--no-strict]] [--simulate] [--wait[=n]]
                  Attempt to stop the resources if they are running and forbid the cluster from starting them again. Depending on the rest of the configuration (constraints, options, failures, etc), the resources may remain started.
                  If --safe is specified, no changes to the cluster configuration will be made if resources other than the specified ones would be affected in any way.
                  If --no-strict is specified, no changes to the cluster configuration will be made if resources other than the specified ones would get stopped or demoted. Moving resources between nodes is allowed.
                  If --simulate is specified, no changes to the cluster configuration will be made and the effect of the changes will be printed instead.
                  If --wait is specified, pcs will wait up to 'n' seconds for the resources to stop and then return 0 if the resources are stopped or 1 if the resources have not stopped. If 'n' is not specified it defaults to 60 minutes.

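                  Example (illustrative; 'WebSite' stands for any configured resource id): preview the effect of stopping the resource, then stop it only if no other resources would be affected:
                  pcs resource disable WebSite --simulate
                  pcs resource disable WebSite --safe
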
           safe-disable <resource id>... [--no-strict] [--simulate] [--wait[=n]] [--force]
                  Attempt to stop the resources if they are running and forbid the cluster from starting them again. Depending on the rest of the configuration (constraints, options, failures, etc), the resources may remain started. No changes to the cluster configuration will be made if resources other than the specified ones would be affected in any way.
                  If --no-strict is specified, no changes to the cluster configuration will be made if resources other than the specified ones would get stopped or demoted. Moving resources between nodes is allowed.
                  If --simulate is specified, no changes to the cluster configuration will be made and the effect of the changes will be printed instead.
                  If --wait is specified, pcs will wait up to 'n' seconds for the resources to stop and then return 0 if the resources are stopped or 1 if the resources have not stopped. If 'n' is not specified it defaults to 60 minutes.
                  If --force is specified, checks for safe disable will be skipped.

           restart <resource id> [node] [--wait=n]
                  Restart the specified resource. If a node is specified and the resource is a clone or bundle, it will be restarted only on the specified node. If --wait is specified, pcs will wait up to 'n' seconds for the resource to be restarted and return 0 if the restart was successful or 1 if it was not.

           debug-start <resource id> [--full]
                  This command will force the specified resource to start on this node ignoring the cluster recommendations and print the output from starting the resource. Using --full will give more detailed output. This is mainly used for debugging resources that fail to start.

           debug-stop <resource id> [--full]
                  This command will force the specified resource to stop on this node ignoring the cluster recommendations and print the output from stopping the resource. Using --full will give more detailed output. This is mainly used for debugging resources that fail to stop.

           debug-promote <resource id> [--full]
                  This command will force the specified resource to be promoted on this node ignoring the cluster recommendations and print the output from promoting the resource. Using --full will give more detailed output. This is mainly used for debugging resources that fail to promote.

           debug-demote <resource id> [--full]
                  This command will force the specified resource to be demoted on this node ignoring the cluster recommendations and print the output from demoting the resource. Using --full will give more detailed output. This is mainly used for debugging resources that fail to demote.

           debug-monitor <resource id> [--full]
                  This command will force the specified resource to be monitored on this node ignoring the cluster recommendations and print the output from monitoring the resource. Using --full will give more detailed output. This is mainly used for debugging resources that fail to be monitored.

           move <resource id> [destination node] [--master] [lifetime=<lifetime>] [--wait[=n]]
                  Move the resource off the node it is currently running on by creating a -INFINITY location constraint to ban the node. If a destination node is specified the resource will be moved to that node by creating an INFINITY location constraint to prefer the destination node. If --master is used the scope of the command is limited to the master role and you must use the promotable clone id (instead of the resource id). If lifetime is specified then the constraint will expire after that time, otherwise it defaults to infinity and the constraint can be cleared manually with 'pcs resource clear' or 'pcs constraint delete'. If --wait is specified, pcs will wait up to 'n' seconds for the resource to move and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes. If you want the resource to preferably avoid running on some nodes but be able to fail over to them, use 'pcs constraint location avoids'.

           ban <resource id> [node] [--master] [lifetime=<lifetime>] [--wait[=n]]
                  Prevent the specified resource id from running on the node (or on the node it is currently running on if no node is specified) by creating a -INFINITY location constraint. If --master is used the scope of the command is limited to the master role and you must use the promotable clone id (instead of the resource id). If lifetime is specified then the constraint will expire after that time, otherwise it defaults to infinity and the constraint can be cleared manually with 'pcs resource clear' or 'pcs constraint delete'. If --wait is specified, pcs will wait up to 'n' seconds for the resource to move and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes. If you want the resource to preferably avoid running on some nodes but be able to fail over to them, use 'pcs constraint location avoids'.

           clear <resource id> [node] [--master] [--expired] [--wait[=n]]
                  Remove constraints created by move and/or ban on the specified resource (and node if specified). If --master is used the scope of the command is limited to the master role and you must use the master id (instead of the resource id). If --expired is specified, only constraints with expired lifetimes will be removed. If --wait is specified, pcs will wait up to 'n' seconds for the operation to finish (including starting and/or moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.

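                  Example (illustrative; 'VirtualIP' and 'node2' are placeholders for an existing resource and node): move the resource to node2, then later remove the location constraint the move created:
                  pcs resource move VirtualIP node2
                  pcs resource clear VirtualIP
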
           standards
                  List available resource agent standards supported by this installation (OCF, LSB, etc.).

           providers
                  List available OCF resource agent providers.

           agents [standard[:provider]]
                  List available agents optionally filtered by standard and provider.

           update <resource id> [resource options] [op [<operation action> <operation options>]...] [meta <meta operations>...] [--wait[=n]]
                  Add or change options of the specified resource, clone or multi-state resource. If an operation (op) is specified it will update the first found operation with the same action on the specified resource; if no operation with that action exists then a new operation will be created. (WARNING: all existing options on the updated operation will be reset if not specified.) If you want to create multiple monitor operations you should use the 'op add' & 'op remove' commands. If --wait is specified, pcs will wait up to 'n' seconds for the changes to take effect and then return 0 if the changes have been processed or 1 otherwise. If 'n' is not specified it defaults to 60 minutes.

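                  Example (illustrative, reusing the 'VirtualIP' resource from the create example above): change the address and the monitor interval in one call:
                  pcs resource update VirtualIP ip=192.168.0.100 op monitor interval=60s
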
           op add <resource id> <operation action> [operation properties]
                  Add operation for specified resource.

           op delete <resource id> <operation action> [<operation properties>...]
                  Remove specified operation (note: you must specify the exact operation properties to properly remove an existing operation).

           op delete <operation id>
                  Remove the specified operation id.

           op remove <resource id> <operation action> [<operation properties>...]
                  Remove specified operation (note: you must specify the exact operation properties to properly remove an existing operation).

           op remove <operation id>
                  Remove the specified operation id.

           op defaults [options]
                  Set default values for operations. If no options are passed, list currently configured defaults. Defaults do not apply to resources which override them with their own defined operations.

           meta <resource id | group id | clone id> <meta options> [--wait[=n]]
                  Add specified options to the specified resource, group or clone. Meta options should be in the format of name=value; options may be removed by setting an option without a value. If --wait is specified, pcs will wait up to 'n' seconds for the changes to take effect and then return 0 if the changes have been processed or 1 otherwise. If 'n' is not specified it defaults to 60 minutes.
                  Example: pcs resource meta TestResource failure-timeout=50 stickiness=

           group list
                  Show all currently configured resource groups and their resources.

           group add <group id> <resource id> [resource id] ... [resource id] [--before <resource id> | --after <resource id>] [--wait[=n]]
                  Add the specified resource to the group, creating the group if it does not exist. If the resource is present in another group it is moved to the new group. You can use --before or --after to specify the position of the added resources relative to a resource already existing in the group. By adding resources to a group they are already in and specifying --after or --before you can move the resources within the group. If --wait is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.

           group delete <group id> <resource id> [resource id] ... [resource id] [--wait[=n]]
                  Remove the specified resource(s) from the group, removing the group if no resources remain in it. If --wait is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.

           group remove <group id> <resource id> [resource id] ... [resource id] [--wait[=n]]
                  Remove the specified resource(s) from the group, removing the group if no resources remain in it. If --wait is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.

           ungroup <group id> [resource id] ... [resource id] [--wait[=n]]
                  Remove the group (note: this does not remove any resources from the cluster) or, if resources are specified, remove the specified resources from the group. If --wait is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.

           clone <resource id | group id> [clone options]... [--wait[=n]]
                  Set up the specified resource or group as a clone. If --wait is specified, pcs will wait up to 'n' seconds for the operation to finish (including starting clone instances if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.

           promotable <resource id | group id> [clone options]... [--wait[=n]]
                  Set up the specified resource or group as a promotable clone. This is an alias for 'pcs resource clone <resource id> promotable=true'.

           unclone <resource id | group id> [--wait[=n]]
                  Remove the clone which contains the specified group or resource (the resource or group will not be removed). If --wait is specified, pcs will wait up to 'n' seconds for the operation to finish (including stopping clone instances if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.

           bundle create <bundle id> container <container type> [<container options>] [network <network options>] [port-map <port options>]... [storage-map <storage options>]... [meta <meta options>] [--disabled] [--wait[=n]]
                  Create a new bundle encapsulating no resources. The bundle can be used either as it is or a resource may be put into it at any time. If --disabled is specified, the bundle is not started automatically. If --wait is specified, pcs will wait up to 'n' seconds for the bundle to start and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.

           bundle reset <bundle id> [container <container options>] [network <network options>] [port-map <port options>]... [storage-map <storage options>]... [meta <meta options>] [--disabled] [--wait[=n]]
                  Configure the specified bundle with the given options. Unlike bundle update, this command resets the bundle according to the given options - no previous options are kept. Resources inside the bundle are kept as they are. If --disabled is specified, the bundle is not started automatically. If --wait is specified, pcs will wait up to 'n' seconds for the bundle to start and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.

           bundle update <bundle id> [container <container options>] [network <network options>] [port-map (add <port options>) | (delete | remove <id>...)]... [storage-map (add <storage options>) | (delete | remove <id>...)]... [meta <meta options>] [--wait[=n]]
                  Add, remove or change options of the specified bundle. If you wish to update a resource encapsulated in the bundle, use the 'pcs resource update' command instead and specify the resource id. If --wait is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.

           manage <resource id>... [--monitor]
                  Set the listed resources to managed mode (default). If --monitor is specified, enable all monitor operations of the resources.

           unmanage <resource id>... [--monitor]
                  Set the listed resources to unmanaged mode. When a resource is in unmanaged mode, the cluster is not allowed to start nor stop the resource. If --monitor is specified, disable all monitor operations of the resources.

           defaults [options]
                  Set default values for resources. If no options are passed, list currently configured defaults. Defaults do not apply to resources which override them with their own defined values.

           cleanup [<resource id>] [node=<node>] [operation=<operation> [interval=<interval>]]
                  Make the cluster forget failed operations from the history of the resource and re-detect its current state. This can be useful to purge knowledge of past failures that have since been resolved. If a resource id is not specified then all resources / stonith devices will be cleaned up. If a node is not specified then resources / stonith devices on all nodes will be cleaned up.

           refresh [<resource id>] [node=<node>] [--full]
                  Make the cluster forget the complete operation history (including failures) of the resource and re-detect its current state. If you are interested in forgetting failed operations only, use the 'pcs resource cleanup' command. If a resource id is not specified then all resources / stonith devices will be refreshed. If a node is not specified then resources / stonith devices on all nodes will be refreshed. Use --full to refresh a resource on all nodes, otherwise only nodes where the resource's state is known will be considered.

           failcount show [<resource id>] [node=<node>] [operation=<operation> [interval=<interval>]] [--full]
                  Show the current failcount for resources, optionally filtered by a resource, node, operation and its interval. If --full is specified do not sum failcounts per resource and node. Use 'pcs resource cleanup' or 'pcs resource refresh' to reset failcounts.

           relocate dry-run [resource1] [resource2] ...
                  The same as 'relocate run' but has no effect on the cluster.

           relocate run [resource1] [resource2] ...
                  Relocate specified resources to their preferred nodes. If no resources are specified, relocate all resources. This command calculates the preferred node for each resource while ignoring resource stickiness. Then it creates location constraints which will cause the resources to move to their preferred nodes. Once the resources have been moved the constraints are deleted automatically. Note that the preferred node is calculated based on current cluster status, constraints, location of resources and other settings and thus it might change over time.

           relocate show
                  Display the current status of resources and their optimal node ignoring resource stickiness.

           relocate clear
                  Remove all constraints created by the 'relocate run' command.

           utilization [<resource id> [<name>=<value> ...]]
                  Add specified utilization options to the specified resource. If a resource is not specified, show utilization of all resources. If utilization options are not specified, show utilization of the specified resource. Utilization options should be in the format name=value; the value has to be an integer. Options may be removed by setting an option without a value.
                  Example: pcs resource utilization TestResource cpu= ram=20

           relations <resource id> [--full]
                  Display relations of the resource specified by its id with other resources in a tree structure. Supported types of resource relations are: ordering constraints, ordering set constraints, relations defined by resource hierarchy (clones, groups, bundles). If --full is used, more verbose output will be printed.

       cluster
           setup <cluster name> (<node name> [addr=<node address>]...)... [transport knet|udp|udpu [<transport options>] [link <link options>] [compression <compression options>] [crypto <crypto options>]] [totem <totem options>] [quorum <quorum options>] [--enable] [--start [--wait[=<n>]]] [--no-keys-sync]
                  Create a cluster from the listed nodes and synchronize cluster configuration files to them.
                  Nodes are specified by their names and optionally their addresses. If no addresses are specified for a node, pcs will configure corosync to communicate with that node using an address provided in the 'pcs host auth' command. Otherwise, pcs will configure corosync to communicate with the node using the specified addresses.

                  Transport knet:
                  This is the default transport. It allows configuring traffic encryption and compression as well as using multiple addresses (links) for nodes.
                  Transport options are: ip_version, knet_pmtud_interval, link_mode
                  Link options are: link_priority, linknumber, mcastport, ping_interval, ping_precision, ping_timeout, pong_count, transport (udp or sctp)
                  You can set link options for a subset of links using a linknumber.
                  Compression options are: level, model, threshold
                  Crypto options are: cipher, hash, model
                  By default, encryption is enabled with cipher=aes256 and hash=sha256. To disable encryption, set cipher=none and hash=none.

                  Transports udp and udpu:
                  These transports are limited to one address per node. They do not support traffic encryption nor compression.
                  Transport options are: ip_version, netmtu
                  Link options are: bindnetaddr, broadcast, mcastaddr, mcastport, ttl

                  Totem and quorum can be configured regardless of the used transport.
                  Totem options are: consensus, downcheck, fail_recv_const, heartbeat_failures_allowed, hold, join, max_messages, max_network_delay, merge, miss_count_const, send_join, seqno_unchanged_const, token, token_coefficient, token_retransmit, token_retransmits_before_loss_const, window_size
                  Quorum options are: auto_tie_breaker, last_man_standing, last_man_standing_window, wait_for_all

                  Transports and their options, link, compression, crypto and totem options are all documented in the corosync.conf(5) man page; knet link options are prefixed 'knet_' there, compression options are prefixed 'knet_compression_' and crypto options are prefixed 'crypto_'. Quorum options are documented in the votequorum(5) man page.

                  --enable will configure the cluster to start on node boot. --start will start the cluster right after creating it. --wait will wait up to 'n' seconds for the cluster to start. --no-keys-sync will skip creating and distributing the pcsd SSL certificate and key and the corosync and pacemaker authkey files. Use this if you provide your own certificates and keys.

                  Examples:
                  Create a cluster with default settings:
                  pcs cluster setup newcluster node1 node2
                  Create a cluster using two links:
                  pcs cluster setup newcluster node1 addr=10.0.1.11 addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12
                  Set link options for the second link only (the first link is link 0):
                  pcs cluster setup newcluster node1 addr=10.0.1.11 addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12 transport knet link linknumber=1 transport=sctp
                  Create a cluster using udp transport with a non-default port:
                  pcs cluster setup newcluster node1 node2 transport udp link mcastport=55405

           start [--all | <node>... ] [--wait[=<n>]] [--request-timeout=<seconds>]
                  Start a cluster on the specified node(s). If no nodes are specified then start a cluster on the local node. If --all is specified then start a cluster on all nodes. If the cluster has many nodes then the start request may time out. In that case you should consider setting --request-timeout to a suitable value. If --wait is specified, pcs waits up to 'n' seconds for the cluster to get ready to provide services after the cluster has successfully started.

           stop [--all | <node>... ] [--request-timeout=<seconds>]
                  Stop a cluster on the specified node(s). If no nodes are specified then stop a cluster on the local node. If --all is specified then stop a cluster on all nodes. If the cluster is running resources which take a long time to stop then the stop request may time out before the cluster actually stops. In that case you should consider setting --request-timeout to a suitable value.

           kill   Force the corosync and pacemaker daemons to stop on the local node (performs kill -9). Note that the init system (e.g. systemd) can detect that the cluster is not running and start it again. If you want to stop the cluster on a node, run 'pcs cluster stop' on that node.

           enable [--all | <node>... ]
                  Configure the cluster to run on node boot on the specified node(s). If no node is specified then the cluster is enabled on the local node. If --all is specified then the cluster is enabled on all nodes.

           disable [--all | <node>... ]
                  Configure the cluster to not run on node boot on the specified node(s). If no node is specified then the cluster is disabled on the local node. If --all is specified then the cluster is disabled on all nodes.

           auth [-u <username>] [-p <password>]
                  Authenticate pcs/pcsd to pcsd on nodes configured in the local cluster.

           status View the current cluster status (an alias of 'pcs status cluster').

           pcsd-status [<node>]...
                  Show the current status of pcsd on the specified nodes, or on all nodes configured in the local cluster if no nodes are specified.

           sync   Sync the cluster configuration (files which are supported by all subcommands of this command) to all cluster nodes.

           sync corosync
                  Sync the corosync configuration to all nodes found in the current corosync.conf file.

           cib [filename] [scope=<scope> | --config]
                  Get the raw xml from the CIB (Cluster Information Base). If a filename is provided, the CIB is saved to that file, otherwise the CIB is printed. Specify scope to get a specific section of the CIB. Valid values of the scope are: configuration, nodes, resources, constraints, crm_config, rsc_defaults, op_defaults, status. --config is the same as scope=configuration. Do not specify a scope if you want to edit the saved CIB using pcs (pcs -f <command>).

           cib-push <filename> [--wait[=<n>]] [diff-against=<filename_original> | scope=<scope> | --config]
                  Push the raw xml from <filename> to the CIB (Cluster Information Base). You can obtain the CIB by running the 'pcs cluster cib' command, which is the recommended first step when you want to perform the desired modifications (pcs -f <command>) for a one-off push. If diff-against is specified, pcs diffs the contents of filename against the contents of filename_original and pushes the result to the CIB. Specify scope to push a specific section of the CIB. Valid values of the scope are: configuration, nodes, resources, constraints, crm_config, rsc_defaults, op_defaults. --config is the same as scope=configuration. Use of --config is recommended. Do not specify a scope if you need to push the whole CIB or be warned in the case of an outdated CIB. If --wait is specified, wait up to 'n' seconds for the changes to be applied. WARNING: the selected scope of the CIB will be overwritten by the current content of the specified file.

                  Example:
                  pcs cluster cib > original.xml
                  cp original.xml new.xml
                  pcs -f new.xml constraint location apache prefers node2
                  pcs cluster cib-push new.xml diff-against=original.xml

           cib-upgrade
                  Upgrade the CIB to conform to the latest version of the document schema.

           edit [scope=<scope> | --config]
                  Edit the cib in the editor specified by the $EDITOR environment variable and push out any changes upon saving. Specify scope to edit a specific section of the CIB. Valid values of the scope are: configuration, nodes, resources, constraints, crm_config, rsc_defaults, op_defaults. --config is the same as scope=configuration. Use of --config is recommended. Do not specify a scope if you need to edit the whole CIB or be warned in the case of an outdated CIB.

           node add <node name> [addr=<node address>]... [watchdog=<watchdog path>] [device=<SBD device path>]... [--start [--wait[=<n>]]] [--enable] [--no-watchdog-validation]
                  Add the node to the cluster and synchronize all relevant configuration files to the new node. This command can only be run on an existing cluster node.

                  The new node is specified by its name and optionally its addresses. If no addresses are specified for the node, pcs will configure corosync to communicate with the node using an address provided in the 'pcs host auth' command. Otherwise, pcs will configure corosync to communicate with the node using the specified addresses.

                  Use 'watchdog' to specify a path to a watchdog on the new node when SBD is enabled in the cluster. If SBD is configured with shared storage, use 'device' to specify the path to the shared device(s) on the new node.

                  If --start is specified also start the cluster on the new node, if --wait is specified wait up to 'n' seconds for the new node to start. If --enable is specified configure the cluster to start on the new node on boot. If --no-watchdog-validation is specified, validation of the watchdog will be skipped.

                  WARNING: By default, it is tested whether the specified watchdog is supported. This may cause a restart of the system when a watchdog with the no-way-out feature enabled is present. Use --no-watchdog-validation to skip watchdog validation.

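                  Example (illustrative; the node name and address are placeholders): add a third node with an explicit corosync address, start it and enable it on boot:
                  pcs cluster node add node3 addr=10.0.1.13 --start --enable
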
           node delete <node name> [<node name>]...
                  Shutdown specified nodes and remove them from the cluster.

           node remove <node name> [<node name>]...
                  Shutdown specified nodes and remove them from the cluster.

           node add-remote <node name> [<node address>] [options] [op <operation action> <operation options> [<operation action> <operation options>]...] [meta <meta options>...] [--wait[=<n>]]
                  Add the node to the cluster as a remote node. Sync all relevant configuration files to the new node. Start the node and configure it to start the cluster on boot. Options are port and reconnect_interval. Operations and meta belong to an underlying connection resource (ocf:pacemaker:remote). If a node address is not specified for the node, pcs will configure pacemaker to communicate with the node using an address provided in the 'pcs host auth' command. Otherwise, pcs will configure pacemaker to communicate with the node using the specified addresses. If --wait is specified, wait up to 'n' seconds for the node to start.

           node delete-remote <node identifier>
                  Shutdown the specified remote node and remove it from the cluster. The node identifier can be the name of the node or the address of the node.

           node remove-remote <node identifier>
                  Shutdown the specified remote node and remove it from the cluster. The node identifier can be the name of the node or the address of the node.

           node add-guest <node name> <resource id> [options] [--wait[=<n>]]
                  Make the specified resource a guest node resource. Sync all relevant configuration files to the new node. Start the node and configure it to start the cluster on boot. Options are remote-addr, remote-port and remote-connect-timeout. If remote-addr is not specified for the node, pcs will configure pacemaker to communicate with the node using an address provided in the 'pcs host auth' command. Otherwise, pcs will configure pacemaker to communicate with the node using the specified addresses. If --wait is specified, wait up to 'n' seconds for the node to start.

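                  Example (illustrative; 'vm-node1' and 'vm-rsc' are placeholders for the guest node name and an existing resource, typically one managing a virtual machine): turn the resource into a guest node reachable at the given address:
                  pcs cluster node add-guest vm-node1 vm-rsc remote-addr=192.168.0.50
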
           node delete-guest <node identifier>
                  Shutdown specified guest node and remove it from the cluster. The node identifier can be the name of the node or the address of the node or the id of the resource that is used as the guest node.

           node remove-guest <node identifier>
                  Shutdown specified guest node and remove it from the cluster. The node identifier can be the name of the node or the address of the node or the id of the resource that is used as the guest node.

           node clear <node name>
                  Remove specified node from various cluster caches. Use this if a removed node is still considered by the cluster to be a member of the cluster.

           link add <node_name>=<node_address>... [options <link options>]
                  Add a corosync link. One address must be specified for each cluster node. If no linknumber is specified, pcs will use the lowest available linknumber.
                  Link options (documented in the corosync.conf(5) man page) are: link_priority, linknumber, mcastport, ping_interval, ping_precision, ping_timeout, pong_count, transport (udp or sctp)

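                  Example (illustrative; node names and addresses are placeholders): add a new link on a separate network and pin it to link number 1:
                  pcs cluster link add node1=10.0.2.11 node2=10.0.2.12 options linknumber=1
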
           link delete <linknumber> [<linknumber>]...
                  Remove specified corosync links.

           link remove <linknumber> [<linknumber>]...
                  Remove specified corosync links.

           link update <linknumber> [<node_name>=<node_address>...] [options <link options>]
                  Change node addresses / link options of an existing corosync link. Use this if you cannot add / remove links, which is the preferred way.
                  Link options (documented in the corosync.conf(5) man page) are:
                  for knet transport: link_priority, mcastport, ping_interval, ping_precision, ping_timeout, pong_count, transport (udp or sctp)
                  for udp and udpu transports: bindnetaddr, broadcast, mcastaddr, mcastport, ttl

           uidgid List the currently configured uids and gids of users allowed to connect to corosync.

           uidgid add [uid=<uid>] [gid=<gid>]
                  Add the specified uid and/or gid to the list of users/groups allowed to connect to corosync.

           uidgid delete [uid=<uid>] [gid=<gid>]
                  Remove the specified uid and/or gid from the list of users/groups allowed to connect to corosync.

           uidgid remove [uid=<uid>] [gid=<gid>]
                  Remove the specified uid and/or gid from the list of users/groups allowed to connect to corosync.

           corosync [node]
                  Get the corosync.conf from the specified node or from the current node if no node is specified.

           reload corosync
                  Reload the corosync configuration on the current node.

           destroy [--all]
                  Permanently destroy the cluster on the current node, killing all cluster processes and removing all cluster configuration files. Using --all will attempt to destroy the cluster on all nodes in the local cluster.

                  WARNING: This command permanently removes any cluster configuration that has been created. It is recommended to run 'pcs cluster stop' before destroying the cluster.

           verify [--full] [-f <filename>]
                  Checks the pacemaker configuration (CIB) for syntax and common conceptual errors. If no filename is specified the check is performed on the currently running cluster. If --full is used more verbose output will be printed.

           report [--from "YYYY-M-D H:M:S" [--to "YYYY-M-D H:M:S"]] <dest>
                  Create a tarball containing everything needed when reporting cluster problems. If --from and --to are not used, the report will include the past 24 hours.

       stonith
           [status [--hide-inactive]]
                  Show status of all currently configured stonith devices. If --hide-inactive is specified, only show active stonith devices.

           config [<stonith id>]...
                  Show options of all currently configured stonith devices or, if stonith ids are specified, show the options for the specified stonith device ids.

           list [filter] [--nodesc]
                  Show a list of all available stonith agents (if filter is provided then only stonith agents matching the filter will be shown). If --nodesc is used then descriptions of stonith agents are not printed.

           describe <stonith agent> [--full]
                  Show options for the specified stonith agent. If --full is specified, all options including advanced and deprecated ones are shown.

           create <stonith id> <stonith device type> [stonith device options] [op <operation action> <operation options> [<operation action> <operation options>]...] [meta <meta options>...] [--group <group id> [--before <stonith id> | --after <stonith id>]] [--disabled] [--wait[=n]]
                  Create a stonith device with the specified type and options. If --group is specified the stonith device is added to the named group. You can use --before or --after to specify the position of the added stonith device relative to a stonith device already existing in the group. If --disabled is specified the stonith device is not used. If --wait is specified, pcs will wait up to 'n' seconds for the stonith device to start and then return 0 if the stonith device is started, or 1 if the stonith device has not yet started. If 'n' is not specified it defaults to 60 minutes.

                  Example: Create a device for nodes node1 and node2
                  pcs stonith create MyFence fence_virt pcmk_host_list=node1,node2
                  Example: Use port p1 for node n1 and ports p2 and p3 for node n2
                  pcs stonith create MyFence fence_virt 'pcmk_host_map=n1:p1;n2:p2,p3'

           update <stonith id> [stonith device options]
                  Add/Change options of the specified stonith id.

           delete <stonith id>
                  Remove the stonith id from the configuration.

           remove <stonith id>
                  Remove the stonith id from the configuration.

           enable <stonith id>... [--wait[=n]]
                  Allow the cluster to use the stonith devices. If --wait is specified, pcs will wait up to 'n' seconds for the stonith devices to start and then return 0 if the stonith devices are started, or 1 if the stonith devices have not yet started. If 'n' is not specified it defaults to 60 minutes.

           disable <stonith id>... [--wait[=n]]
                  Attempt to stop the stonith devices if they are running and disallow the cluster from using them. If --wait is specified, pcs will wait up to 'n' seconds for the stonith devices to stop and then return 0 if the stonith devices are stopped or 1 if the stonith devices have not stopped. If 'n' is not specified it defaults to 60 minutes.

           cleanup [<stonith id>] [--node <node>]
                  Make the cluster forget failed operations from the history of the stonith device and re-detect its current state. This can be useful to purge knowledge of past failures that have since been resolved. If a stonith id is not specified then all resources / stonith devices will be cleaned up. If a node is not specified then resources / stonith devices on all nodes will be cleaned up.

           refresh [<stonith id>] [--node <node>] [--full]
                  Make the cluster forget the complete operation history (including failures) of the stonith device and re-detect its current state. If you are interested in forgetting failed operations only, use the 'pcs stonith cleanup' command. If a stonith id is not specified then all resources / stonith devices will be refreshed. If a node is not specified then resources / stonith devices on all nodes will be refreshed. Use --full to refresh a stonith device on all nodes, otherwise only nodes where the stonith device's state is known will be considered.

           level [config]
                  Lists all of the fencing levels currently configured.

           level add <level> <target> <stonith id> [stonith id]...
                  Add the fencing level for the specified target with the list of stonith devices to attempt for that target at that level. Fence levels are attempted in numerical order (starting with 1). If a level succeeds (meaning all devices are successfully fenced in that level) then no other levels are tried, and the target is considered fenced. Target may be a node name <node_name> or %<node_name> or node%<node_name>, a node name regular expression regexp%<node_pattern> or a node attribute value attrib%<name>=<value>.

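                  Example (illustrative; 'ipmi-node1' and 'apc-switch' are placeholders for stonith devices created beforehand): try the IPMI device for node1 first, and fall back to a power switch if it fails:
                  pcs stonith level add 1 node1 ipmi-node1
                  pcs stonith level add 2 node1 apc-switch
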
           level delete <level> [target] [stonith id]...
                  Removes the fence level for the level, target and/or devices specified. If no target or devices are specified then the fence level is removed. Target may be a node name <node_name> or %<node_name> or node%<node_name>, a node name regular expression regexp%<node_pattern> or a node attribute value attrib%<name>=<value>.

           level remove <level> [target] [stonith id]...
                  Removes the fence level for the level, target and/or devices specified. If no target or devices are specified then the fence level is removed. Target may be a node name <node_name> or %<node_name> or node%<node_name>, a node name regular expression regexp%<node_pattern> or a node attribute value attrib%<name>=<value>.

           level clear [target|stonith id(s)]
                  Clears the fence levels on the target (or stonith id) specified or clears all fence levels if a target/stonith id is not specified. If more than one stonith id is specified they must be separated by a comma and no spaces. Target may be a node name <node_name> or %<node_name> or node%<node_name>, a node name regular expression regexp%<node_pattern> or a node attribute value attrib%<name>=<value>. Example: pcs stonith level clear dev_a,dev_b

           level verify
                  Verifies that all fence devices and nodes specified in fence levels exist.

           fence <node> [--off]
                  Fence the node specified (if --off is specified, use the 'off' API call to stonith which will turn the node off instead of rebooting it).

           confirm <node> [--force]
                  Confirm to the cluster that the specified node is powered off. This allows the cluster to recover from a situation where no stonith device is able to fence the node. This command should ONLY be used after manually ensuring that the node is powered off and has no access to shared resources.

                  WARNING: If this node is not actually powered off or it does have access to shared resources, data corruption/cluster failure can occur. To prevent accidental running of this command, --force or an interactive user response is required in order to proceed.

                  NOTE: It is not checked if the specified node exists in the cluster in order to be able to work with nodes not visible from the local cluster partition.

           history [show [<node>]]
                  Show fencing history for the specified node or all nodes if no node is specified.

           history cleanup [<node>]
                  Cleanup fence history of the specified node or all nodes if no node is specified.

           history update
                  Update fence history from all nodes.

           sbd enable [watchdog=<path>[@<node>]]... [device=<path>[@<node>]]... [<SBD_OPTION>=<value>]... [--no-watchdog-validation]
                  Enable SBD in the cluster. The default path for the watchdog device is /dev/watchdog. Allowed SBD options: SBD_WATCHDOG_TIMEOUT (default: 5), SBD_DELAY_START (default: no), SBD_STARTMODE (default: always) and SBD_TIMEOUT_ACTION. It is possible to specify up to 3 devices per node. If --no-watchdog-validation is specified, validation of watchdogs will be skipped.

                  WARNING: Cluster has to be restarted in order to apply these changes.

                  WARNING: By default, it is tested whether the specified watchdog is supported. This may cause a restart of the system when a watchdog with the no-way-out feature enabled is present. Use --no-watchdog-validation to skip watchdog validation.

                  Example of enabling SBD in a cluster with watchdog /dev/watchdog2 on node1, /dev/watchdog1 on node2 and /dev/watchdog0 on all other nodes, device /dev/sdb on node1, device /dev/sda on all other nodes, and a watchdog timeout of 10 seconds:

                  pcs stonith sbd enable watchdog=/dev/watchdog2@node1 watchdog=/dev/watchdog1@node2 watchdog=/dev/watchdog0 device=/dev/sdb@node1 device=/dev/sda SBD_WATCHDOG_TIMEOUT=10

           sbd disable
                  Disable SBD in the cluster.

                  WARNING: Cluster has to be restarted in order to apply these changes.

           sbd device setup device=<path> [device=<path>]... [watchdog-timeout=<integer>] [allocate-timeout=<integer>] [loop-timeout=<integer>] [msgwait-timeout=<integer>]
                  Initialize SBD structures on the device(s) with the specified timeouts.

                  WARNING: All content on the device(s) will be overwritten.

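                  Example (illustrative; the device paths and timeout values are placeholders, and the listed devices will be wiped): initialize two shared devices with a 10 second watchdog timeout and a 20 second msgwait timeout:
                  pcs stonith sbd device setup device=/dev/sdb device=/dev/sdc watchdog-timeout=10 msgwait-timeout=20
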
           sbd device message <device-path> <node> <message-type>
                  Manually set a message of the specified type on the device for the node. Possible message types (they are documented in the sbd(8) man page): test, reset, off, crashdump, exit, clear

           sbd status [--full]
                  Show status of SBD services in the cluster and the local device(s) configured. If --full is specified, a dump of the SBD headers on the device(s) will also be shown.

           sbd config
                  Show SBD configuration in the cluster.

           sbd watchdog list
                  Show all available watchdog devices on the local node.

                  WARNING: Listing available watchdogs may cause a restart of the system when a watchdog with the no-way-out feature enabled is present.

           sbd watchdog test [<watchdog-path>]
                  This operation is expected to force-reboot the local system without following any shutdown procedures, using a watchdog. If no watchdog is specified, the available watchdog will be used, provided only one watchdog device is available on the local system.

       acl
           [show] List all current access control lists.

           enable Enable access control lists.

           disable
                  Disable access control lists.

           role create <role id> [description=<description>] [((read | write | deny) (xpath <query> | id <id>))...]
                  Create a role with the id and (optional) description specified. Each role can also have an unlimited number of permissions (read/write/deny) applied to either an xpath query or the id of a specific element in the cib.

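                  Example (illustrative; the role id and user name are placeholders): create a read-only role covering the whole CIB, grant it to a user, then turn ACLs on:
                  pcs acl role create read-only description="Read access to cluster" read xpath /cib
                  pcs acl user create rouser read-only
                  pcs acl enable
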
           role delete <role id>
                  Delete the role specified and remove it from any users/groups it was assigned to.

           role remove <role id>
                  Delete the role specified and remove it from any users/groups it was assigned to.

           role assign <role id> [to] [user|group] <username/group>
                  Assign a role to a user or group already created with 'pcs acl user/group create'. If a user and a group with the same id both exist and it is not specified which should be used, the user will be prioritized. In such cases, specify whether user or group should be used.

           role unassign <role id> [from] [user|group] <username/group>
                  Remove a role from the specified user. If a user and a group with the same id both exist and it is not specified which should be used, the user will be prioritized. In such cases, specify whether user or group should be used.

           user create <username> [<role id>]...
                  Create an ACL for the specified user and assign roles to the user.

           user delete <username>
                  Remove the specified user (and the roles assigned will be unassigned for the specified user).

           user remove <username>
                  Remove the specified user (and the roles assigned will be unassigned for the specified user).

           group create <group> [<role id>]...
                  Create an ACL for the specified group and assign roles to the group.

           group delete <group>
                  Remove the specified group (and the roles assigned will be unassigned for the specified group).

           group remove <group>
                  Remove the specified group (and the roles assigned will be unassigned for the specified group).

           permission add <role id> ((read | write | deny) (xpath <query> | id <id>))...
                  Add the listed permissions to the role specified.

           permission delete <permission id>
                  Remove the permission id specified (permission ids are listed in parentheses after permissions in 'pcs acl' output).

           permission remove <permission id>
                  Remove the permission id specified (permission ids are listed in parentheses after permissions in 'pcs acl' output).

       property
           [list|show [<property> | --all | --defaults]] | [--all | --defaults]
                  List property settings (default: lists configured properties). If --defaults is specified, all property defaults will be shown; if --all is specified, currently configured properties will be shown together with unset properties and their defaults. See the pacemaker-controld(7) and pacemaker-schedulerd(7) man pages for a description of the properties.

           set <property>=[<value>] ... [--force]
                  Set specific pacemaker properties (if the value is blank then the property is removed from the configuration). If a property is not recognized by pcs, the property will not be created unless --force is used. See the pacemaker-controld(7) and pacemaker-schedulerd(7) man pages for a description of the properties.

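                  Example (illustrative; 'no-quorum-policy' and the value 'ignore' are a commonly used pacemaker property and value, but check the man pages referenced above): set the property, then remove it again by setting a blank value:
                  pcs property set no-quorum-policy=ignore
                  pcs property set no-quorum-policy=
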
           unset <property> ...
                  Remove property from configuration. See the pacemaker-controld(7) and pacemaker-schedulerd(7) man pages for a description of the properties.

       constraint
           [list|show] [--full] [--all]
                  List all current constraints that are not expired. If --all is specified also show expired constraints. If --full is specified also list the constraint ids.

           location <resource> prefers <node>[=<score>] [<node>[=<score>]]...
                  Create a location constraint on a resource to prefer the specified node with score (default score: INFINITY). Resource may be either a resource id <resource_id> or %<resource_id> or resource%<resource_id>, or a resource name regular expression regexp%<resource_pattern>.

           location <resource> avoids <node>[=<score>] [<node>[=<score>]]...
                  Create a location constraint on a resource to avoid the specified node with score (default score: INFINITY). Resource may be either a resource id <resource_id> or %<resource_id> or resource%<resource_id>, or a resource name regular expression regexp%<resource_pattern>.

           location <resource> rule [id=<rule id>] [resource-discovery=<option>] [role=master|slave] [constraint-id=<id>] [score=<score> | score-attribute=<attribute>] <expression>
                  Creates a location constraint with a rule on the specified resource where expression looks like one of the following:
                    defined|not_defined <attribute>
                    <attribute> lt|gt|lte|gte|eq|ne [string|integer|version] <value>
                    date gt|lt <date>
                    date in_range <date> to <date>
                    date in_range <date> to duration <duration options>...
                    date-spec <date spec options>...
                    <expression> and|or <expression>
                    ( <expression> )
                  where duration options and date spec options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon. Resource may be either a resource id <resource_id> or %<resource_id> or resource%<resource_id>, or a resource name regular expression regexp%<resource_pattern>. If score is omitted it defaults to INFINITY. If id is omitted one is generated from the resource id. If resource-discovery is omitted it defaults to 'always'.

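                  Example (illustrative; 'WebSite' is a placeholder resource id and 'datacenter' a placeholder node attribute): keep the resource off nodes where the attribute is not defined:
                  pcs constraint location WebSite rule score=-INFINITY not_defined datacenter
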
1217 location [show [resources [<resource>...]] | [nodes [<node>...]]]
1218 [--full] [--all]
1219 List all the current location constraints that are not expired.
1220 If 'resources' is specified, location constraints are displayed
1221 per resource (default). If 'nodes' is specified, location con‐
1222 straints are displayed per node. If specific nodes or resources
1223 are specified then we only show information about them. Resource
1224 may be either a resource id <resource_id> or %<resource_id> or
1225 resource%<resource_id>, or a resource name regular expression
1226 regexp%<resource_pattern>. If --full is specified show the
1227 internal constraint id's as well. If --all is specified show the
1228 expired constraints.
1229
1230 location add <id> <resource> <node> <score> [resource-discov‐
1231 ery=<option>]
1232 Add a location constraint with the appropriate id for the speci‐
1233 fied resource, node name and score. Resource may be either a
1234 resource id <resource_id> or %<resource_id> or
1235 resource%<resource_id>, or a resource name regular expression
1236 regexp%<resource_pattern>.
1237
1238 location delete <id>
1239 Remove a location constraint with the appropriate id.
1240
1241 location remove <id>
1242 Remove a location constraint with the appropriate id.
1243
1244 order [show] [--full]
1245 List all current ordering constraints (if --full is specified
1246 show the internal constraint ids as well).
1247
1248 order [action] <resource id> then [action] <resource id> [options]
1249 Add an ordering constraint specifying actions (start, stop, pro‐
1250 mote, demote). If no action is specified, the default action
1251 will be start. Available options are kind=Optional/Manda‐
1252 tory/Serialize, symmetrical=true/false, require-all=true/false
1253 and id=<constraint-id>.
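Example (resource names are placeholders; Database is started
before WebServer):
pcs constraint order start Database then start WebServer kind=Mandatory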
1254
1255 order set <resource1> [resourceN]... [options] [set <resourceX> ...
1256 [options]] [setoptions [constraint_options]]
1257 Create an ordered set of resources. Available options are
1258 sequential=true/false, require-all=true/false and
1259 action=start/promote/demote/stop. Available constraint_options
1260 are id=<constraint-id>, kind=Optional/Mandatory/Serialize and
1261 symmetrical=true/false.
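Example (the resource ids D1 through D4 are placeholders):
pcs constraint order set D1 D2 set D3 D4 setoptions kind=Optional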
1262
1263 order delete <resource1> [resourceN]...
1264 Remove resource from any ordering constraint
1265
1266 order remove <resource1> [resourceN]...
1267 Remove resource from any ordering constraint
1268
1269 colocation [show] [--full]
1270 List all current colocation constraints (if --full is specified
1271 show the internal constraint ids as well).
1272
1273 colocation add [<role>] <source resource id> with [<role>] <target
1274 resource id> [score] [options] [id=constraint-id]
1275 Request <source resource> to run on the same node where pace‐
1276 maker has determined <target resource> should run. Positive
1277 values of score mean the resources should be run on the same
1278 node, negative values mean the resources should not be run on
1279 the same node. Specifying 'INFINITY' (or '-INFINITY') for the
1280 score forces <source resource> to run (or not run) with <target
1281 resource> (score defaults to "INFINITY"). A role can be: 'Mas‐
1282 ter', 'Slave', 'Started', 'Stopped' (if no role is specified, it
1283 defaults to 'Started').
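Example (resource names are placeholders; WebServer is kept on the
same node as VirtualIP):
pcs constraint colocation add WebServer with VirtualIP INFINITY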
1284
1285 colocation set <resource1> [resourceN]... [options] [set <resourceX>
1286 ... [options]] [setoptions [constraint_options]]
1287 Create a colocation constraint with a resource set. Available
1288 options are sequential=true/false and role=Stopped/Started/Mas‐
1289 ter/Slave. Available constraint_options are id and either of:
1290 score, score-attribute, score-attribute-mangle.
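Example (the resource ids D1, D2 and D3 are placeholders):
pcs constraint colocation set D1 D2 D3 setoptions score=INFINITY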
1291
1292 colocation delete <source resource id> <target resource id>
1293 Remove colocation constraints with specified resources.
1294
1295 colocation remove <source resource id> <target resource id>
1296 Remove colocation constraints with specified resources.
1297
1298 ticket [show] [--full]
1299 List all current ticket constraints (if --full is specified show
1300 the internal constraint ids as well).
1301
1302 ticket add <ticket> [<role>] <resource id> [<options>] [id=<con‐
1303 straint-id>]
1304 Create a ticket constraint for <resource id>. Available option
1305 is loss-policy=fence/stop/freeze/demote. A role can be master,
1306 slave, started or stopped.
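Example (the ticket name and resource name are placeholders):
pcs constraint ticket add ticketA WebServer loss-policy=stop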
1307
1308 ticket set <resource1> [<resourceN>]... [<options>] [set <resourceX>
1309 ... [<options>]] setoptions <constraint_options>
1310 Create a ticket constraint with a resource set. Available
1311 options are role=Stopped/Started/Master/Slave. Required con‐
1312 straint option is ticket=<ticket>. Optional constraint options
1313 are id=<constraint-id> and loss-policy=fence/stop/freeze/demote.
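Example (the resource ids and the ticket name are placeholders):
pcs constraint ticket set D1 D2 setoptions ticket=ticketA loss-policy=fence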
1314
1315 ticket delete <ticket> <resource id>
1316 Remove all ticket constraints with <ticket> from <resource id>.
1317
1318 ticket remove <ticket> <resource id>
1319 Remove all ticket constraints with <ticket> from <resource id>.
1320
1321 delete <constraint id>...
1322 Remove constraint(s) or constraint rules with the specified
1323 id(s).
1324
1325 remove <constraint id>...
1326 Remove constraint(s) or constraint rules with the specified
1327 id(s).
1328
1329 ref <resource>...
1330 List constraints referencing specified resource.
1331
1332 rule add <constraint id> [id=<rule id>] [role=master|slave]
1333 [score=<score>|score-attribute=<attribute>] <expression>
1334 Add a rule to a location constraint specified by 'constraint id'
1335 where the expression looks like one of the following:
1336 defined|not_defined <attribute>
1337 <attribute> lt|gt|lte|gte|eq|ne [string|integer|version]
1338 <value>
1339 date gt|lt <date>
1340 date in_range <date> to <date>
1341 date in_range <date> to duration <duration options>...
1342 date-spec <date spec options>...
1343 <expression> and|or <expression>
1344 ( <expression> )
1345 where duration options and date spec options are: hours, month‐
1346 days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1347 If score is omitted it defaults to INFINITY. If id is omitted
1348 one is generated from the constraint id.
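Example (assumes an existing location constraint with the placeholder
id 'loc-web-node1'):
pcs constraint rule add loc-web-node1 score=-INFINITY date-spec weekdays=6-7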
1349
1350 rule delete <rule id>
1351 Remove a rule from its location constraint and if it's the last
1352 rule, the constraint will also be removed.
1353
1354 rule remove <rule id>
1355 Remove a rule from its location constraint and if it's the last
1356 rule, the constraint will also be removed.
1357
1358 qdevice
1359 status <device model> [--full] [<cluster name>]
1360 Show runtime status of specified model of quorum device
1361 provider. Using --full will give more detailed output. If
1362 <cluster name> is specified, only information about the speci‐
1363 fied cluster will be displayed.
1364
1365 setup model <device model> [--enable] [--start]
1366 Configure the specified model of quorum device provider. The
1367 quorum device can then be added to clusters by running the "pcs
1368 quorum device add" command in a cluster. --start will also
1369 start the provider. --enable will configure the provider to
1370 start on boot.
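Example (configure, enable and start the 'net' model provider):
pcs qdevice setup model net --enable --start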
1371
1372 destroy <device model>
1373 Disable and stop specified model of quorum device provider and
1374 delete its configuration files.
1375
1376 start <device model>
1377 Start specified model of quorum device provider.
1378
1379 stop <device model>
1380 Stop specified model of quorum device provider.
1381
1382 kill <device model>
1383 Force the specified model of quorum device provider to stop
1384 (performs kill -9). Note that the init system (e.g. systemd)
1385 may detect that the qdevice is not running and start it again.
1386 To stop the qdevice cleanly, run the "pcs qdevice stop" command.
1387
1388 enable <device model>
1389 Configure specified model of quorum device provider to start on
1390 boot.
1391
1392 disable <device model>
1393 Configure specified model of quorum device provider to not start
1394 on boot.
1395
1396 quorum
1397 [config]
1398 Show quorum configuration.
1399
1400 status Show quorum runtime status.
1401
1402 device add [<generic options>] model <device model> [<model options>]
1403 [heuristics <heuristics options>]
1404 Add a quorum device to the cluster. The quorum device should be
1405 configured first with "pcs qdevice setup". It is not possible to
1406 use more than one quorum device in a cluster simultaneously.
1407 Currently the only supported model is 'net'. It requires model
1408 options 'algorithm' and 'host' to be specified. Options are doc‐
1409 umented in the corosync-qdevice(8) man page; generic options are
1410 'sync_timeout' and 'timeout', for model net options check the
1411 quorum.device.net section, for heuristics options see the quo‐
1412 rum.device.heuristics section. Pcs automatically creates and
1413 distributes TLS certificates and sets the 'tls' model option to
1414 the default value 'on'.
1415 Example: pcs quorum device add model net algorithm=lms
1416 host=qnetd.internal.example.com
1417
1418 device heuristics delete
1419 Remove all heuristics settings of the configured quorum device.
1420
1421 device heuristics remove
1422 Remove all heuristics settings of the configured quorum device.
1423
1424 device delete
1425 Remove a quorum device from the cluster.
1426
1427 device remove
1428 Remove a quorum device from the cluster.
1429
1430 device status [--full]
1431 Show quorum device runtime status. Using --full will give more
1432 detailed output.
1433
1434 device update [<generic options>] [model <model options>] [heuristics
1435 <heuristics options>]
1436 Add/Change quorum device options. Requires the cluster to be
1437 stopped. Model and options are all documented in the corosync-
1438 qdevice(8) man page; for heuristics options check the quo‐
1439 rum.device.heuristics subkey section, for model options check
1440 the quorum.device.<device model> subkey sections.
1441
1442 WARNING: If you want to change the "host" option of qdevice
1443 model net, use the "pcs quorum device remove" and "pcs quorum
1444 device add" commands to set up the configuration properly,
1445 unless the old and the new host are the same machine.
1446
1447 expected-votes <votes>
1448 Set expected votes in the live cluster to the specified value.
1449 This only affects the live cluster; it does not change any con‐
1450 figuration files.
1451
1452 unblock [--force]
1453 Cancel waiting for all nodes when establishing quorum. Useful
1454 in situations where you know the cluster is inquorate, but you
1455 are confident that the cluster should proceed with resource man‐
1456 agement regardless. This command should ONLY be used when nodes
1457 which the cluster is waiting for have been confirmed to be pow‐
1458 ered off and to have no access to shared resources.
1459
1460 WARNING: If the nodes are not actually powered off or they do
1461 have access to shared resources, data corruption/cluster failure
1462 can occur. To prevent accidental running of this command,
1463 --force or interactive user response is required in order to
1464 proceed.
1465
1466 update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]]
1467 [last_man_standing_window=[<time in ms>]] [wait_for_all=[0|1]]
1468 Add/Change quorum options. At least one option must be speci‐
1469 fied. Options are documented in corosync's votequorum(5) man
1470 page. Requires the cluster to be stopped.
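Example (enable auto tie breaker and wait for all; the cluster must
be stopped):
pcs quorum update auto_tie_breaker=1 wait_for_all=1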
1471
1472 booth
1473 setup sites <address> <address> [<address>...] [arbitrators <address>
1474 ...] [--force]
1475 Write a new booth configuration with the specified sites and
1476 arbitrators. The total number of peers (sites and arbitrators)
1477 must be odd. When the configuration file already exists, the
1478 command fails unless --force is specified.
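Example (the addresses are placeholders; two sites and one arbitrator
give an odd number of peers):
pcs booth setup sites 192.168.1.2 192.168.2.2 arbitrators 192.168.3.3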
1479
1480 destroy
1481 Remove booth configuration files.
1482
1483 ticket add <ticket> [<name>=<value> ...]
1484 Add a new ticket to the current configuration. Ticket options
1485 are described in the booth man page.
1486
1487 ticket delete <ticket>
1488 Remove the specified ticket from the current configuration.
1489
1490 ticket remove <ticket>
1491 Remove the specified ticket from the current configuration.
1492
1493 config [<node>]
1494 Show booth configuration from the specified node, or from the
1495 current node if no node is specified.
1496
1497 create ip <address>
1498 Make the cluster run the booth service on the specified ip
1499 address as a cluster resource. Typically this is used to run a
1500 booth site.
1501
1502 delete Remove booth resources created by the "pcs booth create" com‐
1503 mand.
1504
1505 remove Remove booth resources created by the "pcs booth create" com‐
1506 mand.
1507
1508 restart
1509 Restart booth resources created by the "pcs booth create" com‐
1510 mand.
1511
1512 ticket grant <ticket> [<site address>]
1513 Grant the ticket for the site specified by address. The site
1514 address specified with the 'pcs booth create' command is used
1515 if 'site address' is omitted. Specifying a site address is
1516 mandatory when running this command on an arbitrator.
1517
1518 ticket revoke <ticket> [<site address>]
1519 Revoke the ticket for the site specified by address. The site
1520 address specified with the 'pcs booth create' command is used
1521 if 'site address' is omitted. Specifying a site address is
1522 mandatory when running this command on an arbitrator.
1523
1524 status Print current status of booth on the local node.
1525
1526 pull <node>
1527 Pull booth configuration from the specified node.
1528
1529 sync [--skip-offline]
1530 Send booth configuration from the local node to all nodes in the
1531 cluster.
1532
1533 enable Enable booth arbitrator service.
1534
1535 disable
1536 Disable booth arbitrator service.
1537
1538 start Start booth arbitrator service.
1539
1540 stop Stop booth arbitrator service.
1541
1542 status
1543 [status] [--full | --hide-inactive]
1544 View all information about the cluster and resources (--full
1545 provides more details, --hide-inactive hides inactive
1546 resources).
1547
1548 resources [--hide-inactive]
1549 Show status of all currently configured resources. If
1550 --hide-inactive is specified, only show active resources.
1551
1552 cluster
1553 View current cluster status.
1554
1555 corosync
1556 View current membership information as seen by corosync.
1557
1558 quorum View current quorum status.
1559
1560 qdevice <device model> [--full] [<cluster name>]
1561 Show runtime status of specified model of quorum device
1562 provider. Using --full will give more detailed output. If
1563 <cluster name> is specified, only information about the speci‐
1564 fied cluster will be displayed.
1565
1566 booth Print current status of booth on the local node.
1567
1568 nodes [corosync | both | config]
1569 View current status of nodes from pacemaker. If 'corosync' is
1570 specified, view current status of nodes from corosync instead.
1571 If 'both' is specified, view current status of nodes from both
1572 corosync & pacemaker. If 'config' is specified, print nodes from
1573 corosync & pacemaker configuration.
1574
1575 pcsd [<node>]...
1576 Show current status of pcsd on nodes specified, or on all nodes
1577 configured in the local cluster if no nodes are specified.
1578
1579 xml View xml version of status (output from crm_mon -r -1 -X).
1580
1581 config
1582 [show] View full cluster configuration.
1583
1584 backup [filename]
1585 Creates a tarball containing the cluster configuration files.
1586 If filename is not specified, the standard output will be used.
1587
1588 restore [--local] [filename]
1589 Restores the cluster configuration files on all nodes from the
1590 backup. If filename is not specified the standard input will be
1591 used. If --local is specified only the files on the current
1592 node will be restored.
1593
1594 checkpoint
1595 List all available configuration checkpoints.
1596
1597 checkpoint view <checkpoint_number>
1598 Show specified configuration checkpoint.
1599
1600 checkpoint diff <checkpoint_number> <checkpoint_number>
1601 Show differences between the two specified checkpoints. Use
1602 checkpoint number 'live' to compare a checkpoint to the current
1603 live configuration.
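Example (compare checkpoint 2 to the live configuration):
pcs config checkpoint diff 2 live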
1604
1605 checkpoint restore <checkpoint_number>
1606 Restore cluster configuration to specified checkpoint.
1607
1608 import-cman output=<filename> [input=<filename>] [--interactive] [out‐
1609 put-format=corosync.conf] [dist=<dist>]
1610 Converts CMAN cluster configuration to Pacemaker cluster config‐
1611 uration. Converted configuration will be saved to 'output' file.
1612 To send the configuration to the cluster nodes the 'pcs config
1613 restore' command can be used. If --interactive is specified you
1614 will be prompted to solve incompatibilities manually. If no
1615 input is specified /etc/cluster/cluster.conf will be used.
1616 Optionally you can specify the output version by setting the
1617 'dist' option, e.g. redhat,7.3 or debian,7 or ubuntu,trusty.
1618 You can get the list of supported dist values by running the
1619 "clufter --list-dists" command. If 'dist' is not specified, it
1620 defaults to this node's version.
1621
1622 import-cman output=<filename> [input=<filename>] [--interactive] out‐
1623 put-format=pcs-commands|pcs-commands-verbose [dist=<dist>]
1624 Converts CMAN cluster configuration to a list of pcs commands
1625 which, when executed, recreate the same cluster as a Pacemaker
1626 cluster. Commands will be saved to 'output' file. For other
1627 options see above.
1628
1629 export pcs-commands|pcs-commands-verbose [output=<filename>]
1630 [dist=<dist>]
1631 Creates a list of pcs commands which, upon execution, recreate
1632 the current cluster running on this node. Commands will be
1633 saved to 'output' file or written to stdout if 'output' is not
1634 specified. Use pcs-commands to get a simple list of commands,
1635 whereas pcs-commands-verbose creates a list including comments
1636 and debug messages. Optionally specify the output version by
1637 setting the 'dist' option, e.g. redhat,7.3 or debian,7 or
1638 ubuntu,trusty. You can get the list of supported dist values by
1639 running the "clufter --list-dists" command. If 'dist' is not
1640 specified, it defaults to this node's version.
1641
1642 pcsd
1643 certkey <certificate file> <key file>
1644 Load custom certificate and key files for use in pcsd.
1645
1646 sync-certificates
1647 Sync pcsd certificates to all nodes in the local cluster.
1648
1649 deauth [<token>]...
1650 Delete locally stored authentication tokens used by remote sys‐
1651 tems to connect to the local pcsd instance. If no tokens are
1652 specified all tokens will be deleted. After this command is run
1653 other nodes will need to re-authenticate against this node to be
1654 able to connect to it.
1655
1656 host
1657 auth (<host name> [addr=<address>[:<port>]])... [-u <username>] [-p
1658 <password>]
1659 Authenticate local pcs/pcsd against pcsd on specified hosts. It
1660 is possible to specify an address and a port via which pcs/pcsd
1661 will communicate with each host. If an address is not specified
1662 a host name will be used. If a port is not specified 2224 will
1663 be used.
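Example (host names, addresses and the user name are placeholders):
pcs host auth node1 addr=192.168.1.11 node2 addr=192.168.1.12 -u hacluster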
1664
1665 deauth [<host name>]...
1666 Delete authentication tokens which allow pcs/pcsd on the current
1667 system to connect to remote pcsd instances on specified host
1668 names. If the current system is a member of a cluster, the
1669 tokens will be deleted from all nodes in the cluster. If no host
1670 names are specified all tokens will be deleted. After this com‐
1671 mand is run this node will need to re-authenticate against other
1672 nodes to be able to connect to them.
1673
1674 node
1675 attribute [[<node>] [--name <name>] | <node> <name>=<value> ...]
1676 Manage node attributes. If no parameters are specified, show
1677 attributes of all nodes. If one parameter is specified, show
1678 attributes of specified node. If --name is specified, show
1679 specified attribute's value from all nodes. If more parameters
1680 are specified, set attributes of specified node. Attributes can
1681 be removed by setting an attribute without a value.
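Example (the node name and attribute names are placeholders):
pcs node attribute node1 datacenter=east rack=2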
1682
1683 maintenance [--all | <node>...] [--wait[=n]]
1684 Put the specified node(s) into maintenance mode. If no nodes or
1685 options are specified, the current node will be put into main‐
1686 tenance mode; if --all is specified, all nodes will be put into
1687 maintenance mode. If --wait is specified, pcs will wait up to
1688 'n' seconds for the node(s) to be put into maintenance mode and
1689 then return 0 on success or 1 if the operation has not suc‐
1690 ceeded yet. If 'n' is not specified it defaults to 60 minutes.
1691
1692 unmaintenance [--all | <node>...] [--wait[=n]]
1693 Remove node(s) from maintenance mode. If no nodes or options
1694 are specified, the current node will be removed from mainte‐
1695 nance mode; if --all is specified, all nodes will be removed
1696 from maintenance mode. If --wait is specified, pcs will wait up
1697 to 'n' seconds for the node(s) to be removed from maintenance
1698 mode and then return 0 on success or 1 if the operation has not
1699 succeeded yet. If 'n' is not specified it defaults to 60 minutes.
1700
1701 standby [--all | <node>...] [--wait[=n]]
1702 Put the specified node(s) into standby mode (the nodes speci‐
1703 fied will no longer be able to host resources). If no nodes or
1704 options are specified, the current node will be put into
1705 standby mode; if --all is specified, all nodes will be put into
1706 standby mode. If --wait is specified, pcs will wait up to 'n'
1707 seconds for the node(s) to be put into standby mode and then
1708 return 0 on success or 1 if the operation has not succeeded
1709 yet. If 'n' is not specified it defaults to 60 minutes.
1710
1711 unstandby [--all | <node>...] [--wait[=n]]
1712 Remove node(s) from standby mode (the nodes specified will now
1713 be able to host resources). If no nodes or options are speci‐
1714 fied, the current node will be removed from standby mode; if
1715 --all is specified, all nodes will be removed from standby
1716 mode. If --wait is specified, pcs will wait up to 'n' seconds
1717 for the node(s) to be removed from standby mode and then return
1718 0 on success or 1 if the operation has not succeeded yet. If
1719 'n' is not specified it defaults to 60 minutes.
1720
1721 utilization [[<node>] [--name <name>] | <node> <name>=<value> ...]
1722 Add specified utilization options to the specified node. If no
1723 node is specified, shows utilization of all nodes. If --name is
1724 specified, shows the specified utilization value from all
1725 nodes. If utilization options are not specified, shows utiliza‐
1726 tion of the specified node. Utilization options should be in
1727 the format name=value and the value has to be an integer.
1728 Options may be removed by setting an option without a value.
1729 Example: pcs node utilization node1 cpu=4 ram=
1730
1731 alert
1732 [config|show]
1733 Show all configured alerts.
1734
1735 create path=<path> [id=<alert-id>] [description=<description>] [options
1736 [<option>=<value>]...] [meta [<meta-option>=<value>]...]
1737 Define an alert handler with the specified path. An id will be
1738 automatically generated if it is not specified.
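Example (the path and id are placeholders):
pcs alert create path=/usr/local/bin/notify.sh id=my-alert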
1739
1740 update <alert-id> [path=<path>] [description=<description>] [options
1741 [<option>=<value>]...] [meta [<meta-option>=<value>]...]
1742 Update an existing alert handler with the specified id.
1743
1744 delete <alert-id> ...
1745 Remove alert handlers with specified ids.
1746
1747 remove <alert-id> ...
1748 Remove alert handlers with specified ids.
1749
1750 recipient add <alert-id> value=<recipient-value> [id=<recipient-id>]
1751 [description=<description>] [options [<option>=<value>]...] [meta
1752 [<meta-option>=<value>]...]
1753 Add a new recipient to the specified alert handler.
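Example (assumes the placeholder alert handler 'my-alert'; the
recipient value and id are placeholders):
pcs alert recipient add my-alert value=admin@example.com id=my-alert-mail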
1754
1755 recipient update <recipient-id> [value=<recipient-value>] [descrip‐
1756 tion=<description>] [options [<option>=<value>]...] [meta
1757 [<meta-option>=<value>]...]
1758 Update an existing recipient identified by its id.
1759
1760 recipient delete <recipient-id> ...
1761 Remove specified recipients.
1762
1763 recipient remove <recipient-id> ...
1764 Remove specified recipients.
1765
1766 client
1767 local-auth [<pcsd-port>] [-u <username>] [-p <password>]
1768 Authenticate the current user to the local pcsd. This is
1769 required to run some pcs commands which may require permissions
1770 of the root user, such as 'pcs cluster start'.
1771
1772 EXAMPLES
1773 Show all resources
1774 # pcs resource config
1775
1776 Show options specific to the 'VirtualIP' resource
1777 # pcs resource config VirtualIP
1778
1779 Create a new resource called 'VirtualIP' with options
1780 # pcs resource create VirtualIP ocf:heartbeat:IPaddr2
1781 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
1782
1783 Create a new resource called 'VirtualIP' with options
1784 # pcs resource create VirtualIP IPaddr2 ip=192.168.0.99
1785 cidr_netmask=32 nic=eth2 op monitor interval=30s
1786
1787 Change the ip address of VirtualIP and remove the nic option
1788 # pcs resource update VirtualIP ip=192.168.0.98 nic=
1789
1790 Delete the VirtualIP resource
1791 # pcs resource delete VirtualIP
1792
1793 Create the MyStonith stonith fence_virt device which can fence host
1794 'f1'
1795 # pcs stonith create MyStonith fence_virt pcmk_host_list=f1
1796
1797 Set the stonith-enabled property to false on the cluster (which dis‐
1798 ables stonith)
1799 # pcs property set stonith-enabled=false
1800
1801 USING --FORCE IN PCS COMMANDS
1802 Various pcs commands accept the --force option. Its purpose is to over‐
1803 ride some of the checks that pcs performs or some of the errors that
1804 may occur when a pcs command is run. When such an error occurs, pcs
1805 prints the error with a note that it may be overridden. The exact
1806 behavior of the option is different for each pcs command. Using the
1807 --force option can lead to situations that would normally be prevented
1808 by the logic of pcs commands, and therefore its use is strongly dis‐
1809 couraged unless you know what you are doing.
1810
1811 ENVIRONMENT VARIABLES
1812 EDITOR
1813 Path to a plain-text editor. This is used when pcs is requested
1814 to present a text for the user to edit.
1815
1816 no_proxy, https_proxy, all_proxy, NO_PROXY, HTTPS_PROXY, ALL_PROXY
1817 These environment variables (listed according to their priori‐
1818 ties) control how pcs handles proxy servers when connecting to
1819 cluster nodes. See curl(1) man page for details.
1820
1821 CHANGES IN PCS-0.10
1822 This section summarizes the most important changes in commands done in
1823 pcs-0.10.x compared to pcs-0.9.x. For a detailed description of the
1824 current commands, see above.
1825
1826 cluster
1827 auth The 'pcs cluster auth' command only authenticates nodes in a
1828 local cluster and does not accept a node list. The new command
1829 for authentication is 'pcs host auth'. It allows specifying host
1830 names, addresses and pcsd ports.
1831
1832 node add
1833 Custom node names and Corosync 3.x with knet are fully supported
1834 now, therefore the syntax has been completely changed.
1835 The --device and --watchdog options have been replaced with
1836 'device' and 'watchdog' options, respectively.
1837
1838 quorum This command has been replaced with 'pcs quorum'.
1839
1840 remote-node add
1841 This command has been replaced with 'pcs cluster node
1842 add-guest'.
1843
1844 remote-node remove
1845 This command has been replaced with 'pcs cluster node
1846 delete-guest' and its alias 'pcs cluster node remove-guest'.
1847
1848 setup Custom node names and Corosync 3.x with knet are fully supported
1849 now, therefore the syntax has been completely changed.
1850 The --name option has been removed. The first parameter of the
1851 command is the cluster name now.
1852
1853 standby
1854 This command has been replaced with 'pcs node standby'.
1855
1856 uidgid rm
1857 This command has been deprecated, use 'pcs cluster uidgid
1858 delete' or 'pcs cluster uidgid remove' instead.
1859
1860 unstandby
1861 This command has been replaced with 'pcs node unstandby'.
1862
1863 verify The -V option has been replaced with --full.
1864 To specify a filename, use the -f option.
1865
1866 pcsd
1867 clear-auth
1868 This command has been replaced with 'pcs host deauth' and 'pcs
1869 pcsd deauth'.
1870
1871 property
1872 set The --node option is no longer supported. Use the 'pcs node
1873 attribute' command to set node attributes.
1874
1875 show The --node option is no longer supported. Use the 'pcs node
1876 attribute' command to view node attributes.
1877
1878 unset The --node option is no longer supported. Use the 'pcs node
1879 attribute' command to unset node attributes.
1880
1881 resource
1882 create The 'master' keyword has been changed to 'promotable'.
1883
1884 failcount reset
1885 The command has been removed as 'pcs resource cleanup' does
1886 exactly the same job.
1887
1888 master This command has been replaced with 'pcs resource promotable'.
1889
1890 show Previously, this command displayed either status or configura‐
1891 tion of resources depending on the parameters specified. This
1892 was confusing, therefore the command was replaced by several new
1893 commands. To display resources status, run 'pcs resource' or
1894 'pcs resource status'. To display resources configuration, run
1895 'pcs resource config' or 'pcs resource config <resource name>'.
1896 To display configured resource groups, run 'pcs resource group
1897 list'.
1898
1899 status
1900 groups This command has been replaced with 'pcs resource group list'.
1901
1902 stonith
1903 sbd device setup
1904 The --device option has been replaced with the 'device' option.
1905
1906 sbd enable
1907 The --device and --watchdog options have been replaced with
1908 'device' and 'watchdog' options, respectively.
1909
1910 show Previously, this command displayed either status or configura‐
1911 tion of stonith resources depending on the parameters specified.
1912 This was confusing, therefore the command was replaced by sev‐
1913 eral new commands. To display stonith resources status, run 'pcs
1914 stonith' or 'pcs stonith status'. To display stonith resources
1915 configuration, run 'pcs stonith config' or 'pcs stonith config
1916 <stonith name>'.
1917
1918 SEE ALSO
1919 http://clusterlabs.org/doc/
1920
1921 pcsd(8), pcs_snmp_agent(8)
1922
1923 corosync_overview(8), votequorum(5), corosync.conf(5), corosync-qde‐
1924 vice(8), corosync-qdevice-tool(8), corosync-qnetd(8),
1925 corosync-qnetd-tool(8)
1926
1927 pacemaker-controld(7), pacemaker-fenced(7), pacemaker-schedulerd(7),
1928 crm_mon(8), crm_report(8), crm_simulate(8)
1929
1930 boothd(8), sbd(8)
1931
1932 clufter(1)
1933
1934
1935
1936pcs 0.10.4 November 2019 PCS(8)