1PCS(8) System Administration Utilities PCS(8)
2
3
4
NAME
6 pcs - pacemaker/corosync configuration system
7
SYNOPSIS
9 pcs [-f file] [-h] [commands]...
10
DESCRIPTION
12 Control and configure pacemaker and corosync.
13
OPTIONS
15 -h, --help
16 Display usage and exit.
17
18 -f file
19 Perform actions on file instead of active CIB.
20
21 --debug
22 Print all network traffic and external commands run.
23
24 --version
25 Print pcs version information. List pcs capabilities if --full
26 is specified.
27
28 --request-timeout=<timeout>
29 Timeout for each outgoing request to another node in seconds.
30 Default is 60s.
31
32 Commands:
33 cluster
34 Configure cluster options and nodes.
35
36 resource
37 Manage cluster resources.
38
39 stonith
40 Manage fence devices.
41
42 constraint
43 Manage resource constraints.
44
45 property
46 Manage pacemaker properties.
47
48 acl
49 Manage pacemaker access control lists.
50
51 qdevice
52 Manage quorum device provider on the local host.
53
54 quorum
55 Manage cluster quorum settings.
56
57 booth
58 Manage booth (cluster ticket manager).
59
60 status
61 View cluster status.
62
63 config
64 View and manage cluster configuration.
65
66 pcsd
67 Manage pcs daemon.
68
69 node
70 Manage cluster nodes.
71
72 alert
73 Manage pacemaker alerts.
74
75 resource
76 [show [<resource id>] | --full | --groups | --hide-inactive]
77 Show all currently configured resources or if a resource is
78 specified show the options for the configured resource. If
79 --full is specified, all configured resource options will be
80 displayed. If --groups is specified, only show groups (and
81 their resources). If --hide-inactive is specified, only show
82 active resources.
83
84 list [filter] [--nodesc]
85 Show list of all available resource agents (if filter is pro‐
86 vided then only resource agents matching the filter will be
87 shown). If --nodesc is used then descriptions of resource agents
88 are not printed.
89
90 describe [<standard>:[<provider>:]]<type> [--full]
91 Show options for the specified resource. If --full is specified,
92 all options including advanced ones are shown.
93
94 create <resource id> [<standard>:[<provider>:]]<type> [resource
95 options] [op <operation action> <operation options> [<operation action>
96 <operation options>]...] [meta <meta options>...] [clone [<clone
97 options>] | master [<master options>] | --group <group id> [--before
98 <resource id> | --after <resource id>] | bundle <bundle id>] [--dis‐
99 abled] [--no-default-ops] [--wait[=n]]
100 Create specified resource. If clone is used a clone resource is
101 created. If master is specified a master/slave resource is cre‐
102 ated. If --group is specified the resource is added to the group
103 named. You can use --before or --after to specify the position
104 of the added resource relative to some resource already exist‐
105 ing in the group. If bundle is specified, resource will be cre‐
106 ated inside of the specified bundle. If --disabled is specified
107 the resource is not started automatically. If --no-default-ops
108 is specified, only monitor operations are created for the
109 resource and all other operations use default settings. If
110 --wait is specified, pcs will wait up to 'n' seconds for the
111 resource to start and then return 0 if the resource is started,
112 or 1 if the resource has not yet started. If 'n' is not speci‐
113 fied it defaults to 60 minutes.
114
115 Example: Create a new resource called 'VirtualIP' with IP
116 address 192.168.0.99, netmask of 32, monitored every 30
117 seconds, on eth2: pcs resource create VirtualIP ocf:heart‐
118 beat:IPaddr2 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor
119 interval=30s
120
121 delete <resource id|group id|master id|clone id>
122 Deletes the resource, group, master or clone (and all resources
123 within the group/master/clone).
124
125 enable <resource id>... [--wait[=n]]
126 Allow the cluster to start the resources. Depending on the rest
127 of the configuration (constraints, options, failures, etc), the
128 resources may remain stopped. If --wait is specified, pcs will
129 wait up to 'n' seconds for the resources to start and then
130 return 0 if the resources are started, or 1 if the resources
131 have not yet started. If 'n' is not specified it defaults to 60
132 minutes.
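
For example, assuming a resource named 'WebSite' already exists
(the name is illustrative), the following allows the cluster to
start it and waits up to 30 seconds for it to start:

    pcs resource enable WebSite --wait=30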
133
134 disable <resource id>... [--wait[=n]]
135 Attempt to stop the resources if they are running and forbid the
136 cluster from starting them again. Depending on the rest of the
137 configuration (constraints, options, failures, etc), the
138 resources may remain started. If --wait is specified, pcs will
139 wait up to 'n' seconds for the resources to stop and then return
140 0 if the resources are stopped or 1 if the resources have not
141 stopped. If 'n' is not specified it defaults to 60 minutes.
142
143 restart <resource id> [node] [--wait=n]
144 Restart the resource specified. If a node is specified and if
145 the resource is a clone or master/slave it will be restarted
146 only on the node specified. If --wait is specified, then we
147 will wait up to 'n' seconds for the resource to be restarted and
148 return 0 if the restart was successful or 1 if it was not.
149
150 debug-start <resource id> [--full]
151 This command will force the specified resource to start on this
152 node ignoring the cluster recommendations and print the output
153 from starting the resource. Using --full will give more
154 detailed output. This is mainly used for debugging resources
155 that fail to start.
156
157 debug-stop <resource id> [--full]
158 This command will force the specified resource to stop on this
159 node ignoring the cluster recommendations and print the output
160 from stopping the resource. Using --full will give more
161 detailed output. This is mainly used for debugging resources
162 that fail to stop.
163
164 debug-promote <resource id> [--full]
165 This command will force the specified resource to be promoted on
166 this node ignoring the cluster recommendations and print the
167 output from promoting the resource. Using --full will give more
168 detailed output. This is mainly used for debugging resources
169 that fail to promote.
170
171 debug-demote <resource id> [--full]
172 This command will force the specified resource to be demoted on
173 this node ignoring the cluster recommendations and print the
174 output from demoting the resource. Using --full will give more
175 detailed output. This is mainly used for debugging resources
176 that fail to demote.
177
178 debug-monitor <resource id> [--full]
179 This command will force the specified resource to be monitored
180 on this node ignoring the cluster recommendations and print the
181 output from monitoring the resource. Using --full will give
182 more detailed output. This is mainly used for debugging
183 resources that fail to be monitored.
184
185 move <resource id> [destination node] [--master] [lifetime=<lifetime>]
186 [--wait[=n]]
187 Move the resource off the node it is currently running on by
188 creating a -INFINITY location constraint to ban the node. If
189 destination node is specified the resource will be moved to that
190 node by creating an INFINITY location constraint to prefer the
191 destination node. If --master is used the scope of the command
192 is limited to the master role and you must use the master id
193 (instead of the resource id). If lifetime is specified then the
194 constraint will expire after that time, otherwise it defaults to
195 infinity and the constraint can be cleared manually with 'pcs
196 resource clear' or 'pcs constraint delete'. If --wait is speci‐
197 fied, pcs will wait up to 'n' seconds for the resource to move
198 and then return 0 on success or 1 on error. If 'n' is not spec‐
199 ified it defaults to 60 minutes. If you want the resource to
200 preferably avoid running on some nodes but be able to fail over
201 to them, use 'pcs constraint location avoids'.
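
For example, assuming a resource 'WebSite' and a node 'node2'
(illustrative names), the following moves the resource to node2;
the resulting constraint can later be removed with 'pcs resource
clear':

    pcs resource move WebSite node2
    pcs resource clear WebSite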
202
203 ban <resource id> [node] [--master] [lifetime=<lifetime>] [--wait[=n]]
204 Prevent the resource id specified from running on the node (or
205 on the current node it is running on if no node is specified) by
206 creating a -INFINITY location constraint. If --master is used
207 the scope of the command is limited to the master role and you
208 must use the master id (instead of the resource id). If life‐
209 time is specified then the constraint will expire after that
210 time, otherwise it defaults to infinity and the constraint can
211 be cleared manually with 'pcs resource clear' or 'pcs constraint
212 delete'. If --wait is specified, pcs will wait up to 'n' sec‐
213 onds for the resource to move and then return 0 on success or 1
214 on error. If 'n' is not specified it defaults to 60 minutes.
215 If you want the resource to preferably avoid running on some
216 nodes but be able to fail over to them, use 'pcs constraint location avoids'.
217
218 clear <resource id> [node] [--master] [--wait[=n]]
219 Remove constraints created by move and/or ban on the specified
220 resource (and node if specified). If --master is used the scope
221 of the command is limited to the master role and you must use
222 the master id (instead of the resource id). If --wait is speci‐
223 fied, pcs will wait up to 'n' seconds for the operation to fin‐
224 ish (including starting and/or moving resources if appropriate)
225 and then return 0 on success or 1 on error. If 'n' is not spec‐
226 ified it defaults to 60 minutes.
227
228 standards
229 List available resource agent standards supported by this
230 installation (OCF, LSB, etc.).
231
232 providers
233 List available OCF resource agent providers.
234
235 agents [standard[:provider]]
236 List available agents optionally filtered by standard and
237 provider.
238
239 update <resource id> [resource options] [op [<operation action> <opera‐
240 tion options>]...] [meta <meta operations>...] [--wait[=n]]
241 Add/Change options to specified resource, clone or multi-state
242 resource. If an operation (op) is specified it will update the
243 first found operation with the same action on the specified
244 resource, if no operation with that action exists then a new
245 operation will be created. (WARNING: all existing options on
246 the updated operation will be reset if not specified.) If you
247 want to create multiple monitor operations you should use the
248 'op add' & 'op remove' commands. If --wait is specified, pcs
249 will wait up to 'n' seconds for the changes to take effect and
250 then return 0 if the changes have been processed or 1 otherwise.
251 If 'n' is not specified it defaults to 60 minutes.
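
For example, building on the 'VirtualIP' resource created in the
example above, the following changes its IP address and its
monitor interval:

    pcs resource update VirtualIP ip=192.168.0.100 op monitor interval=60s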
252
253 op add <resource id> <operation action> [operation properties]
254 Add operation for specified resource.
255
256 op remove <resource id> <operation action> [<operation properties>...]
257 Remove specified operation (note: you must specify the exact
258 operation properties to properly remove an existing operation).
259
260 op remove <operation id>
261 Remove the specified operation id.
262
263 op defaults [options]
264 Set default values for operations. If no options are passed,
265 lists currently configured defaults. Defaults do not apply to
266 resources which override them with their own defined operations.
267
268 meta <resource id | group id | master id | clone id> <meta options>
269 [--wait[=n]]
270 Add specified options to the specified resource, group, mas‐
271 ter/slave or clone. Meta options should be in the format of
272 name=value, options may be removed by setting an option without
273 a value. If --wait is specified, pcs will wait up to 'n' sec‐
274 onds for the changes to take effect and then return 0 if the
275 changes have been processed or 1 otherwise. If 'n' is not spec‐
276 ified it defaults to 60 minutes. Example: pcs resource meta
277 TestResource failure-timeout=50 stickiness=
278
279 group add <group id> <resource id> [resource id] ... [resource id]
280 [--before <resource id> | --after <resource id>] [--wait[=n]]
281 Add the specified resource to the group, creating the group if
282 it does not exist. If the resource is present in another group
283 it is moved to the new group. You can use --before or --after
284 to specify the position of the added resources relative to
285 some resource already existing in the group. If --wait is spec‐
286 ified, pcs will wait up to 'n' seconds for the operation to fin‐
287 ish (including moving resources if appropriate) and then return
288 0 on success or 1 on error. If 'n' is not specified it defaults
289 to 60 minutes.
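
For example, assuming resources 'VirtualIP' and 'WebSite' exist
(illustrative names), the following puts them into a group named
'WebGroup', creating the group if it does not exist yet:

    pcs resource group add WebGroup VirtualIP WebSite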
290
291 group remove <group id> <resource id> [resource id] ... [resource id]
292 [--wait[=n]]
293 Remove the specified resource(s) from the group, removing the
294 group if no resources remain in it. If --wait is specified, pcs
295 will wait up to 'n' seconds for the operation to finish (includ‐
296 ing moving resources if appropriate) and then return 0 on suc‐
297 cess or 1 on error. If 'n' is not specified it defaults to 60
298 minutes.
299
300 ungroup <group id> [resource id] ... [resource id] [--wait[=n]]
301 Remove the group (note: this does not remove any resources from
302 the cluster) or if resources are specified, remove the specified
303 resources from the group. If --wait is specified, pcs will wait
304 up to 'n' seconds for the operation to finish (including moving
305 resources if appropriate) and then return 0 on success or 1 on
306 error. If 'n' is not specified it defaults to 60 minutes.
307
308 clone <resource id | group id> [clone options]... [--wait[=n]]
309 Set up the specified resource or group as a clone. If --wait is
310 specified, pcs will wait up to 'n' seconds for the operation to
311 finish (including starting clone instances if appropriate) and
312 then return 0 on success or 1 on error. If 'n' is not specified
313 it defaults to 60 minutes.
314
315 unclone <resource id | group id> [--wait[=n]]
316 Remove the clone which contains the specified group or resource
317 (the resource or group will not be removed). If --wait is spec‐
318 ified, pcs will wait up to 'n' seconds for the operation to fin‐
319 ish (including stopping clone instances if appropriate) and then
320 return 0 on success or 1 on error. If 'n' is not specified it
321 defaults to 60 minutes.
322
323 master [<master/slave id>] <resource id | group id> [options]
324 [--wait[=n]]
325 Configure a resource or group as a multi-state (master/slave)
326 resource. If --wait is specified, pcs will wait up to 'n' sec‐
327 onds for the operation to finish (including starting and promot‐
328 ing resource instances if appropriate) and then return 0 on suc‐
329 cess or 1 on error. If 'n' is not specified it defaults to 60
330 minutes. Note: to remove a master you must remove the
331 resource/group it contains.
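
For example, assuming a resource named 'DrbdData' exists (the
name is illustrative), the following turns it into a master/slave
resource allowing at most one master instance:

    pcs resource master DrbdDataClone DrbdData master-max=1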
332
333 bundle create <bundle id> container <container type> [<container
334 options>] [network <network options>] [port-map <port options>]...
335 [storage-map <storage options>]... [meta <meta options>] [--disabled]
336 [--wait[=n]]
337 Create a new bundle encapsulating no resources. The bundle can
338 be used either as it is or a resource may be put into it at any
339 time. If --disabled is specified, the bundle is not started
340 automatically. If --wait is specified, pcs will wait up to 'n'
341 seconds for the bundle to start and then return 0 on success or
342 1 on error. If 'n' is not specified it defaults to 60 minutes.
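
For example, the following sketch creates a disabled docker-based
bundle; the container type, image name and replica count are
illustrative and depend on your environment:

    pcs resource bundle create HttpdBundle container docker image=my-httpd:latest replicas=2 --disabled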
343
344 bundle update <bundle id> [container <container options>] [network
345 <network options>] [port-map (add <port options>) | (remove
346 <id>...)]... [storage-map (add <storage options>) | (remove
347 <id>...)]... [meta <meta options>] [--wait[=n]]
348 Add, remove or change options to specified bundle. If you wish
349 to update a resource encapsulated in the bundle, use the 'pcs
350 resource update' command instead and specify the resource id.
351 If --wait is specified, pcs will wait up to 'n' seconds for the
352 operation to finish (including moving resources if appropriate)
353 and then return 0 on success or 1 on error. If 'n' is not spec‐
354 ified it defaults to 60 minutes.
355
356 manage <resource id>... [--monitor]
357 Set resources listed to managed mode (default). If --monitor is
358 specified, enable all monitor operations of the resources.
359
360 unmanage <resource id>... [--monitor]
361 Set resources listed to unmanaged mode. When a resource is in
362 unmanaged mode, the cluster is not allowed to start nor stop the
363 resource. If --monitor is specified, disable all monitor opera‐
364 tions of the resources.
365
366 defaults [options]
367 Set default values for resources. If no options are passed,
368 lists currently configured defaults. Defaults do not apply to
369 resources which override them with their own defined values.
370
371 cleanup [<resource id>] [--node <node>]
372 Make the cluster forget failed operations from history of the
373 resource and re-detect its current state. This can be useful to
374 purge knowledge of past failures that have since been resolved.
375 If a resource id is not specified then all resources / stonith
376 devices will be cleaned up. If a node is not specified then
377 resources / stonith devices on all nodes will be cleaned up.
378
379 refresh [<resource id>] [--node <node>] [--full]
380 Make the cluster forget the complete operation history (includ‐
381 ing failures) of the resource and re-detect its current state.
382 If you are interested in forgetting failed operations only, use
383 the 'pcs resource cleanup' command. If a resource id is not
384 specified then all resources / stonith devices will be
385 refreshed. If a node is not specified then resources / stonith
386 devices on all nodes will be refreshed. Use --full to refresh a
387 resource on all nodes, otherwise only nodes where the resource's
388 state is known will be considered.
389
390 failcount show [<resource id> [<node> [<operation> [<interval>]]]]
391 [--full]
392 Show current failcount for resources, optionally filtered by a
393 resource, node, operation and its interval. If --full is speci‐
394 fied do not sum failcounts per resource and node. Operation,
395 interval and --full options require pacemaker-1.1.18 or newer.
396
397 failcount reset [<resource id> [<node> [<operation> [<interval>]]]]
398 Reset failcount for specified resource on all nodes or only on
399 specified node. This tells the cluster to forget how many times
400 a resource has failed in the past. This may allow the resource
401 to be started or moved to a more preferred location. Operation
402 and interval options require pacemaker-1.1.18 or newer.
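
For example, assuming a resource 'VirtualIP' and a node 'node1'
(illustrative names), the following resets the failcount of the
resource on that node only:

    pcs resource failcount reset VirtualIP node1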
403
404 relocate dry-run [resource1] [resource2] ...
405 The same as 'relocate run' but has no effect on the cluster.
406
407 relocate run [resource1] [resource2] ...
408 Relocate specified resources to their preferred nodes. If no
409 resources are specified, relocate all resources. This command
410 calculates the preferred node for each resource while ignoring
411 resource stickiness. Then it creates location constraints which
412 will cause the resources to move to their preferred nodes. Once
413 the resources have been moved the constraints are deleted auto‐
414 matically. Note that the preferred node is calculated based on
415 current cluster status, constraints, location of resources and
416 other settings and thus it might change over time.
417
418 relocate show
419 Display current status of resources and their optimal node
420 ignoring resource stickiness.
421
422 relocate clear
423 Remove all constraints created by the 'relocate run' command.
424
425 utilization [<resource id> [<name>=<value> ...]]
426 Add specified utilization options to specified resource. If
427 resource is not specified, shows utilization of all resources.
428 If utilization options are not specified, shows utilization of
429 specified resource. Utilization options should be in the format
430 name=value; the value must be an integer. Options may be removed by
431 setting an option without a value. Example: pcs resource uti‐
432 lization TestResource cpu= ram=20
433
434 cluster
435 auth [<node>[:<port>]] [...] [-u <username>] [-p <password>] [--force]
436 [--local]
437 Authenticate pcs to pcsd on nodes specified, or on all nodes
438 configured in the local cluster if no nodes are specified
439 (authorization tokens are stored in ~/.pcs/tokens or
440 /var/lib/pcsd/tokens for root). By default all nodes are also
441 authenticated to each other; using --local only authenticates
442 the local node (and does not authenticate the remote nodes with
443 each other). Using --force forces re-authentication to occur.
444
445 setup [--start [--wait[=<n>]]] [--local] [--enable] --name <cluster
446 name> <node1[,node1-altaddr]> [<node2[,node2-altaddr]>] [...] [--trans‐
447 port udpu|udp] [--rrpmode active|passive] [--addr0 <addr/net>
448 [[[--mcast0 <address>] [--mcastport0 <port>] [--ttl0 <ttl>]] |
449 [--broadcast0]] [--addr1 <addr/net> [[[--mcast1 <address>] [--mcast‐
450 port1 <port>] [--ttl1 <ttl>]] | [--broadcast1]]]]
451 [--wait_for_all=<0|1>] [--auto_tie_breaker=<0|1>] [--last_man_stand‐
452 ing=<0|1> [--last_man_standing_window=<time in ms>]] [--ipv6] [--token
453 <timeout>] [--token_coefficient <timeout>] [--join <timeout>] [--con‐
454 sensus <timeout>] [--netmtu <size>] [--miss_count_const <count>]
455 [--fail_recv_const <failures>] [--encryption 0|1]
456 Configure corosync and sync configuration out to listed nodes.
457 --local will only perform changes on the local node, --start
458 will also start the cluster on the specified nodes, --wait will
459 wait up to 'n' seconds for the nodes to start, --enable will
460 enable corosync and pacemaker on node startup, --transport
461 allows specification of corosync transport (default: udpu; udp
462 for RHEL 6 clusters), --rrpmode allows you to set the RRP mode
463 of the system. Currently only 'passive' is supported or tested
464 (using 'active' is not recommended). The --wait_for_all,
465 --auto_tie_breaker, --last_man_standing, --last_man_stand‐
466 ing_window options are all documented in corosync's votequo‐
467 rum(5) man page. These options are not supported on RHEL 6 clus‐
468 ters.
469
470 --ipv6 will configure corosync to use ipv6 (instead of ipv4).
471 This option is not supported on RHEL 6 clusters.
472
473 --token <timeout> sets time in milliseconds until a token loss
474 is declared after not receiving a token (default 1000 ms; 10000
475 ms for RHEL 6 clusters)
476
477 --token_coefficient <timeout> sets time in milliseconds used for
478 clusters with at least 3 nodes as a coefficient for real token
479 timeout calculation (token + (number_of_nodes - 2) * token_coef‐
480 ficient) (default 650 ms) This option is not supported on RHEL
481 6 clusters.
482
483 --join <timeout> sets time in milliseconds to wait for join mes‐
484 sages (default 50 ms)
485
486 --consensus <timeout> sets time in milliseconds to wait for con‐
487 sensus to be achieved before starting a new round of membership
488 configuration (default 1200 ms)
489
490 --netmtu <size> sets the network maximum transmit unit (default:
491 1500)
492
493 --miss_count_const <count> sets the maximum number of times on
494 receipt of a token a message is checked for retransmission
495 before a retransmission occurs (default 5 messages)
496
497 --fail_recv_const <failures> specifies how many rotations of the
498 token without receiving any messages when messages should be
499 received may occur before a new configuration is formed (default
500 2500 failures)
501
502 --encryption 0|1 disables (0) or enables (1) corosync communica‐
503 tion encryption (default 0)
504
505
506 Configuring Redundant Ring Protocol (RRP)
507
508 When using udpu, specify for each node the ring 0 address
509 first, followed by a ',' and then the ring 1 address.
510
511 Example: pcs cluster setup --name cname nodeA-0,nodeA-1
512 nodeB-0,nodeB-1
513
514 When using udp, using --addr0 and --addr1 will allow you to con‐
515 figure rrp mode for corosync. It's recommended to use a network
516 (instead of IP address) for --addr0 and --addr1 so the same
517 corosync.conf file can be used around the cluster. --mcast0
518 defaults to 239.255.1.1 and --mcast1 defaults to 239.255.2.1,
519 --mcastport0/1 default to 5405 and ttl defaults to 1. If
520 --broadcast is specified, --mcast0/1, --mcastport0/1 & --ttl0/1
521 are ignored.
522
523 start [--all | <node>... ] [--wait[=<n>]] [--request-timeout=<seconds>]
524 Start a cluster on specified node(s). If no nodes are specified
525 then start a cluster on the local node. If --all is specified
526 then start a cluster on all nodes. If the cluster has many nodes
527 then the start request may time out. In that case you should
528 consider setting --request-timeout to a suitable value. If
529 --wait is specified, pcs waits up to 'n' seconds for the cluster
530 to get ready to provide services after the cluster has success‐
531 fully started.
532
533 stop [--all | <node>... ] [--request-timeout=<seconds>]
534 Stop a cluster on specified node(s). If no nodes are specified
535 then stop a cluster on the local node. If --all is specified
536 then stop a cluster on all nodes. If the cluster is running
537 resources which take a long time to stop then the stop request may
538 time out before the cluster actually stops. In that case you
539 should consider setting --request-timeout to a suitable value.
540
541 kill Force corosync and pacemaker daemons to stop on the local node
542 (performs kill -9). Note that the init system (e.g. systemd) can
543 detect that the cluster is not running and start it again. If you
544 want to stop the cluster on a node, run 'pcs cluster stop' on that
545 node.
546
547 enable [--all | <node>... ]
548 Configure cluster to run on node boot on specified node(s). If
549 node is not specified then cluster is enabled on the local node.
550 If --all is specified then cluster is enabled on all nodes.
551
552 disable [--all | <node>... ]
553 Configure cluster to not run on node boot on specified node(s).
554 If node is not specified then cluster is disabled on the local
555 node. If --all is specified then cluster is disabled on all
556 nodes.
557
558 status View current cluster status (an alias of 'pcs status cluster').
559
560 pcsd-status [<node>]...
561 Show current status of pcsd on nodes specified, or on all nodes
562 configured in the local cluster if no nodes are specified.
563
564 sync Sync corosync configuration to all nodes found from current
565 corosync.conf file (cluster.conf on systems running Corosync
566 1.x).
567
568 cib [filename] [scope=<scope> | --config]
569 Get the raw xml from the CIB (Cluster Information Base). If a
570 filename is provided, we save the CIB to that file, otherwise
571 the CIB is printed. Specify scope to get a specific section of
572 the CIB. Valid values of the scope are: configuration, nodes,
573 resources, constraints, crm_config, rsc_defaults, op_defaults,
574 status. --config is the same as scope=configuration. Do not
575 specify a scope if you want to edit the saved CIB using pcs (pcs
576 -f <command>).
577
578 cib-push <filename> [--wait[=<n>]] [diff-against=<filename_original> |
579 scope=<scope> | --config]
580 Push the raw xml from <filename> to the CIB (Cluster Information
581 Base). You can obtain the CIB by running the 'pcs cluster cib'
582 command, which is the recommended first step when you want to per‐
583 form desired modifications (pcs -f <command>) for the one-off
584 push. If diff-against is specified, pcs diffs contents of file‐
585 name against contents of filename_original and pushes the result
586 to the CIB. Specify scope to push a specific section of the
587 CIB. Valid values of the scope are: configuration, nodes,
588 resources, constraints, crm_config, rsc_defaults, op_defaults.
589 --config is the same as scope=configuration. Use of --config is
590 recommended. Do not specify a scope if you need to push the
591 whole CIB or want to be warned when the CIB is outdated. If --wait
592 is specified wait up to 'n' seconds for changes to be applied.
593 WARNING: the selected scope of the CIB will be overwritten by
594 the current content of the specified file.
595
596 Example:
597 pcs cluster cib > original.xml
598 cp original.xml new.xml
599 pcs -f new.xml constraint location apache prefers node2
600 pcs cluster cib-push new.xml diff-against=original.xml
601
602 cib-upgrade
603 Upgrade the CIB to conform to the latest version of the document
604 schema.
605
606 edit [scope=<scope> | --config]
607 Edit the cib in the editor specified by the $EDITOR environment
608 variable and push out any changes upon saving. Specify scope to
609 edit a specific section of the CIB. Valid values of the scope
610 are: configuration, nodes, resources, constraints, crm_config,
611 rsc_defaults, op_defaults. --config is the same as scope=con‐
612 figuration. Use of --config is recommended. Do not specify a
613 scope if you need to edit the whole CIB or want to be warned when
614 the CIB is outdated.
615
616 node add <node[,node-altaddr]> [--start [--wait[=<n>]]] [--enable]
617 [--watchdog=<watchdog-path>] [--device=<path>] ... [--no-watchdog-vali‐
618 dation]
619 Add the node to the cluster and sync all relevant configuration
620 files to the new node. If --start is specified also start clus‐
621 ter on the new node, if --wait is specified wait up to 'n' sec‐
622 onds for the new node to start. If --enable is specified config‐
623 ure cluster to start on the new node on boot. When using Redun‐
624 dant Ring Protocol (RRP) with udpu transport, specify the ring 0
625 address first followed by a ',' and then the ring 1 address. Use
626 --watchdog to specify path to watchdog on newly added node, when
627 SBD is enabled in cluster. If SBD is configured with shared
628 storage, use --device to specify path to shared device on new
629 node. If --no-watchdog-validation is specified, validation of
630 watchdog will be skipped. This command can only be run on an
631 existing cluster node.
632
633 WARNING: By default, it is tested whether the specified watchdog
634 is supported. This may cause a restart of the system when a
635 watchdog with no-way-out-feature enabled is present. Use
636 --no-watchdog-validation to skip watchdog validation.
637
638 node remove <node>
639 Shutdown specified node and remove it from the cluster.
640
641 node add-remote <node host> [<node name>] [options] [op <operation
642 action> <operation options> [<operation action> <operation
643 options>]...] [meta <meta options>...] [--wait[=<n>]]
644 Add the node to the cluster as a remote node. Sync all relevant
645 configuration files to the new node. Start the node and config‐
646 ure it to start the cluster on boot. Options are port and recon‐
647 nect_interval. Operations and meta belong to an underlying con‐
648 nection resource (ocf:pacemaker:remote). If --wait is specified,
649 wait up to 'n' seconds for the node to start.
650
651 node remove-remote <node identifier>
652 Shutdown specified remote node and remove it from the cluster.
653 The node-identifier can be the name of the node or the address
654 of the node.
655
656 node add-guest <node host> <resource id> [options] [--wait[=<n>]]
657 Make the specified resource a guest node resource. Sync all rel‐
658 evant configuration files to the new node. Start the node and
659 configure it to start the cluster on boot. Options are
660 remote-addr, remote-port and remote-connect-timeout. If --wait
661 is specified, wait up to 'n' seconds for the node to start.
662
663 node remove-guest <node identifier>
664 Shutdown specified guest node and remove it from the cluster.
665 The node-identifier can be the name of the node or the address
666 of the node or id of the resource that is used as the guest
667 node.
668
669 node clear <node name>
670 Remove specified node from various cluster caches. Use this if a
671 removed node is still considered by the cluster to be a member
672 of the cluster.
673
674 uidgid List the currently configured uids and gids of users allowed to
675 connect to corosync.
676
677 uidgid add [uid=<uid>] [gid=<gid>]
678 Add the specified uid and/or gid to the list of users/groups
679 allowed to connect to corosync.
680
681 uidgid rm [uid=<uid>] [gid=<gid>]
682 Remove the specified uid and/or gid from the list of
683 users/groups allowed to connect to corosync.
684
685 corosync [node]
686 Get the corosync.conf from the specified node or from the cur‐
687 rent node if node not specified.
688
689 reload corosync
690 Reload the corosync configuration on the current node.
691
692 destroy [--all]
693 Permanently destroy the cluster on the current node, killing all
694 cluster processes and removing all cluster configuration files.
695 Using --all will attempt to destroy the cluster on all nodes in
696 the local cluster.
697
698 WARNING: This command permanently removes any cluster configura‐
699 tion that has been created. It is recommended to run 'pcs clus‐
700 ter stop' before destroying the cluster.
701
702 verify [-V] [filename]
703 Checks the pacemaker configuration (cib) for syntax and common
704 conceptual errors. If no filename is specified the check is
705 performed on the currently running cluster. If -V is used more
706 verbose output will be printed.
707
708 report [--from "YYYY-M-D H:M:S" [--to "YYYY-M-D H:M:S"]] <dest>
709 Create a tarball containing everything needed when reporting
710 cluster problems. If --from and --to are not used, the report
711 will include the past 24 hours.
712
713 stonith
714 [show [stonith id]] [--full]
715 Show all currently configured stonith devices or if a stonith id
716 is specified show the options for the configured stonith device.
717 If --full is specified all configured stonith options will be
718 displayed.
719
720 list [filter] [--nodesc]
721 Show list of all available stonith agents (if filter is provided
722 then only stonith agents matching the filter will be shown). If
723 --nodesc is used then descriptions of stonith agents are not
724 printed.
725
726 describe <stonith agent> [--full]
727 Show options for specified stonith agent. If --full is speci‐
728 fied, all options including advanced ones are shown.
729
730 create <stonith id> <stonith device type> [stonith device options] [op
731 <operation action> <operation options> [<operation action> <operation
732 options>]...] [meta <meta options>...] [--group <group id> [--before
733 <stonith id> | --after <stonith id>]] [--disabled] [--wait[=n]]
734 Create stonith device with specified type and options. If
735 --group is specified the stonith device is added to the group
736 named. You can use --before or --after to specify the position
737 of the added stonith device relative to some stonith device
738 already existing in the group. If --disabled is specified the
739 stonith device is not used. If --wait is specified, pcs will
740 wait up to 'n' seconds for the stonith device to start and then
741 return 0 if the stonith device is started, or 1 if the stonith
742 device has not yet started. If 'n' is not specified it defaults
743 to 60 minutes.
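
For example, the following sketch creates a fence device using
the fence_xvm agent; the agent and its options are illustrative,
run 'pcs stonith describe <agent>' to see the options your agent
actually supports:

    pcs stonith create MyFence fence_xvm pcmk_host_list="node1 node2"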
744
745
746 update <stonith id> [stonith device options]
747 Add/Change options to specified stonith id.
748
749 delete <stonith id>
750 Remove stonith id from configuration.
751
752 enable <stonith id>... [--wait[=n]]
753 Allow the cluster to use the stonith devices. If --wait is spec‐
754 ified, pcs will wait up to 'n' seconds for the stonith devices
755 to start and then return 0 if the stonith devices are started,
756 or 1 if the stonith devices have not yet started. If 'n' is not
757 specified it defaults to 60 minutes.
758
759 disable <stonith id>... [--wait[=n]]
760 Attempt to stop the stonith devices if they are running and dis‐
761 allow the cluster to use them. If --wait is specified, pcs will
762 wait up to 'n' seconds for the stonith devices to stop and then
763 return 0 if the stonith devices are stopped or 1 if the stonith
764 devices have not stopped. If 'n' is not specified it defaults to
765 60 minutes.
766
767 cleanup [<stonith id>] [--node <node>]
768 Make the cluster forget failed operations from history of the
769 stonith device and re-detect its current state. This can be use‐
770 ful to purge knowledge of past failures that have since been
771 resolved. If a stonith id is not specified then all resources /
772 stonith devices will be cleaned up. If a node is not specified
773 then resources / stonith devices on all nodes will be cleaned
774 up.
775
776 refresh [<stonith id>] [--node <node>] [--full]
777 Make the cluster forget the complete operation history (includ‐
778 ing failures) of the stonith device and re-detect its current
779 state. If you are interested in forgetting failed operations
780 only, use the 'pcs stonith cleanup' command. If a stonith id is
781 not specified then all resources / stonith devices will be
782 refreshed. If a node is not specified then resources / stonith
783 devices on all nodes will be refreshed. Use --full to refresh a
784 stonith device on all nodes, otherwise only nodes where the
785 stonith device's state is known will be considered.
786
787 level [config]
788 Lists all of the fencing levels currently configured.
789
790 level add <level> <target> <stonith id> [stonith id]...
791 Add the fencing level for the specified target with the list of
792 stonith devices to attempt for that target at that level. Fence
793 levels are attempted in numerical order (starting with 1). If a
794 level succeeds (meaning all devices are successfully fenced in
795 that level) then no other levels are tried, and the target is
796 considered fenced. Target may be a node name <node_name> or
797 %<node_name> or node%<node_name>, a node name regular expression
798 regexp%<node_pattern> or a node attribute value
799 attrib%<name>=<value>.
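
For example, assuming a node 'node1' and stonith devices 'MyIpmi'
and 'MyBackupFence' exist (illustrative names), the following
tries IPMI fencing first and falls back to the second device:

    pcs stonith level add 1 node1 MyIpmi
    pcs stonith level add 2 node1 MyBackupFence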
800
801 level remove <level> [target] [stonith id]...
802 Removes the fence level for the level, target and/or devices
803 specified. If no target or devices are specified then the fence
804 level is removed. Target may be a node name <node_name> or
805 %<node_name> or node%<node_name>, a node name regular expression
806 regexp%<node_pattern> or a node attribute value
807 attrib%<name>=<value>.
808
809 level clear [target|stonith id(s)]
810 Clears the fence levels on the target (or stonith id) specified
811 or clears all fence levels if a target/stonith id is not speci‐
812 fied. If more than one stonith id is specified they must be sep‐
813 arated by a comma and no spaces. Target may be a node name
814 <node_name> or %<node_name> or node%<node_name>, a node name
815 regular expression regexp%<node_pattern> or a node attribute
816 value attrib%<name>=<value>. Example: pcs stonith level clear
817 dev_a,dev_b
818
819 level verify
820 Verifies all fence devices and nodes specified in fence levels
821 exist.
822
823 fence <node> [--off]
824 Fence the node specified (if --off is specified, use the 'off'
825 API call to stonith which will turn the node off instead of
826 rebooting it).
827
828 confirm <node> [--force]
829 Confirm to the cluster that the specified node is powered off.
830 This allows the cluster to recover from a situation where no
831 stonith device is able to fence the node. This command should
832 ONLY be used after manually ensuring that the node is powered
833 off and has no access to shared resources.
834
835 WARNING: If this node is not actually powered off or it does
836 have access to shared resources, data corruption/cluster failure
837 can occur. To prevent accidental running of this command,
838 --force or interactive user response is required in order to
839 proceed.
840
841 NOTE: It is not checked whether the specified node exists in the
842 cluster, so that this command can be used with nodes not visible
843 from the local cluster partition.
844
845 sbd enable [--watchdog=<path>[@<node>]] ... [--device=<path>[@<node>]]
846 ... [<SBD_OPTION>=<value>] ... [--no-watchdog-validation]
847 Enable SBD in cluster. Default path for watchdog device is
848 /dev/watchdog. Allowed SBD options: SBD_WATCHDOG_TIMEOUT
849 (default: 5), SBD_DELAY_START (default: no) and SBD_STARTMODE
850 (default: always). It is possible to specify up to 3 devices per
851 node. If --no-watchdog-validation is specified, validation of
852 watchdogs will be skipped.
853
854 WARNING: Cluster has to be restarted in order to apply these
855 changes.
856
857 WARNING: By default, it is tested whether the specified watchdog
858 is supported. This may cause a restart of the system when a
859 watchdog with no-way-out-feature enabled is present. Use
860 --no-watchdog-validation to skip watchdog validation.
861
862 Example of enabling SBD in a cluster with watchdog /dev/watchdog2
863 on node1, /dev/watchdog1 on node2 and /dev/watchdog0 on all other
864 nodes, device /dev/sdb on node1, device /dev/sda on all other
865 nodes, and the watchdog timeout set to 10 seconds:
866
867 pcs stonith sbd enable --watchdog=/dev/watchdog2@node1 --watch‐
868 dog=/dev/watchdog1@node2 --watchdog=/dev/watchdog0
869 --device=/dev/sdb@node1 --device=/dev/sda SBD_WATCHDOG_TIME‐
870 OUT=10
871
872
873 sbd disable
874 Disable SBD in cluster.
875
876 WARNING: Cluster has to be restarted in order to apply these
877 changes.
878
879 sbd device setup --device=<path> [--device=<path>] ... [watchdog-time‐
880 out=<integer>] [allocate-timeout=<integer>] [loop-timeout=<integer>]
881 [msgwait-timeout=<integer>]
882 Initialize SBD structures on device(s) with specified timeouts.
883
884 WARNING: All content on device(s) will be overwritten.
885
886 sbd device message <device-path> <node> <message-type>
887 Manually set a message of the specified type on the device for
888 the node. Possible message types (they are documented in sbd(8)
889 man page): test, reset, off, crashdump, exit, clear
890
891 sbd status [--full]
892 Show status of SBD services in cluster and local device(s) con‐
893 figured. If --full is specified, also dump of SBD headers on
894 device(s) will be shown.
895
896 sbd config
897 Show SBD configuration in cluster.
898
899
900 sbd watchdog list
901 Show all available watchdog devices on the local node.
902
903 WARNING: Listing available watchdogs may cause a restart of the
904 system when a watchdog with no-way-out-feature enabled is
905 present.
906
907
908 sbd watchdog test [<watchdog-path>]
909 This operation is expected to force-reboot the local system
910 without following any shutdown procedures using a watchdog. If
911 no watchdog is specified and only one watchdog device is available
912 on the local system, that watchdog will be used.
913
914
915 acl
916 [show] List all current access control lists.
917
918 enable Enable access control lists.
919
920 disable
921 Disable access control lists.
922
923 role create <role id> [description=<description>] [((read | write |
924 deny) (xpath <query> | id <id>))...]
925 Create a role with the id and (optional) description specified.
926 Each role can also have an unlimited number of permissions
927 (read/write/deny) applied to either an xpath query or the id of
928 a specific element in the cib.
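
For example, the following sketch creates a role granting read
access to the whole CIB (the role name and description are
illustrative):

    pcs acl role create ReadOnly description="Read-only access" read xpath /cib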
929
930 role delete <role id>
931 Delete the role specified and remove it from any users/groups it
932 was assigned to.
933
934 role assign <role id> [to] [user|group] <username/group>
935 Assign a role to a user or group already created with 'pcs acl
936 user/group create'. If a user and a group with the same id exist
937 and it is not specified which should be used, the user will be
938 prioritized. In such cases, specify whether the user or the group
939 should be used.
940
941 role unassign <role id> [from] [user|group] <username/group>
942 Remove a role from the specified user or group. If a user and a
943 group with the same id exist and it is not specified which should
944 be used, the user will be prioritized. In such cases, specify
945 whether the user or the group should be used.
946
947 user create <username> [<role id>]...
948 Create an ACL for the user specified and assign roles to the
949 user.
950
951 user delete <username>
952 Remove the user specified (and roles assigned will be unassigned
953 for the specified user).
954
955 group create <group> [<role id>]...
956 Create an ACL for the group specified and assign roles to the
957 group.
958
959 group delete <group>
960 Remove the group specified (and roles assigned will be unas‐
961 signed for the specified group).
962
963 permission add <role id> ((read | write | deny) (xpath <query> | id
964 <id>))...
965 Add the listed permissions to the role specified.
966
967 permission delete <permission id>
968 Remove the permission id specified (permission ids are listed
969 in parenthesis after permissions in 'pcs acl' output).
970
971 property
972 [list|show [<property> | --all | --defaults]] | [--all | --defaults]
973 List property settings (default: lists configured properties).
974 If --defaults is specified, all property defaults will be shown.
975 If --all is specified, currently configured properties will be
976 shown together with unset properties and their defaults. Run 'man pengine' and
977 'man crmd' to get a description of the properties.
978
979 set [--force | --node <nodename>] <property>=[<value>] [<prop‐
980 erty>=[<value>] ...]
981 Set specific pacemaker properties (if the value is blank then
982 the property is removed from the configuration). If a property
983 is not recognized by pcs the property will not be created unless
984 --force is used. If --node is used a node attribute is set
985 on the specified node. Run 'man pengine' and 'man crmd' to get
986 a description of the properties.
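
For example, the following puts the cluster into maintenance mode
and later removes the property again:

    pcs property set maintenance-mode=true
    pcs property unset maintenance-mode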
987
988 unset [--node <nodename>] <property>
989 Remove property from configuration (or remove attribute from
990 specified node if --node is used). Run 'man pengine' and 'man
991 crmd' to get a description of the properties.
992
993 constraint
994 [list|show] [--full]
995 List all current constraints. If --full is specified, also list
996 the constraint ids.
997
998 location <resource> prefers <node>[=<score>] [<node>[=<score>]]...
999 Create a location constraint on a resource to prefer the speci‐
1000 fied node with score (default score: INFINITY). Resource may be
1001 either a resource id <resource_id> or %<resource_id> or
1002 resource%<resource_id>, or a resource name regular expression
1003 regexp%<resource_pattern>.
1004
1005 location <resource> avoids <node>[=<score>] [<node>[=<score>]]...
1006 Create a location constraint on a resource to avoid the speci‐
1007 fied node with score (default score: INFINITY). Resource may be
1008 either a resource id <resource_id> or %<resource_id> or
1009 resource%<resource_id>, or a resource name regular expression
1010 regexp%<resource_pattern>.
1011
1012 location <resource> rule [id=<rule id>] [resource-discovery=<option>]
1013 [role=master|slave] [constraint-id=<id>] [score=<score> |
1014 score-attribute=<attribute>] <expression>
1015 Creates a location rule on the specified resource where the
1016 expression looks like one of the following:
1017 defined|not_defined <attribute>
1018 <attribute> lt|gt|lte|gte|eq|ne [string|integer|version]
1019 <value>
1020 date gt|lt <date>
1021 date in_range <date> to <date>
1022 date in_range <date> to duration <duration options>...
1023 date-spec <date spec options>...
1024 <expression> and|or <expression>
1025 ( <expression> )
1026 where duration options and date spec options are: hours, month‐
1027 days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1028 Resource may be either a resource id <resource_id> or
1029 %<resource_id> or resource%<resource_id>, or a resource name
1030 regular expression regexp%<resource_pattern>. If score is omit‐
1031 ted it defaults to INFINITY. If id is omitted one is generated
1032 from the resource id. If resource-discovery is omitted it
1033 defaults to 'always'.
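
For example, assuming a node attribute named 'datacenter' is set
on the cluster nodes (the attribute and values are illustrative),
the following keeps the resource 'WebSite' off nodes that are not
in datacenter 'dc1':

    pcs constraint location WebSite rule score=-INFINITY datacenter ne dc1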
1034
1035 location [show [resources|nodes [<node>|<resource>]...] [--full]]
1036 List all the current location constraints. If 'resources' is
1037 specified, location constraints are displayed per resource
1038 (default). If 'nodes' is specified, location constraints are
1039 displayed per node. If specific nodes or resources are specified
1040 then we only show information about them. Resource may be either
1041 a resource id <resource_id> or %<resource_id> or
1042 resource%<resource_id>, or a resource name regular expression
1043 regexp%<resource_pattern>. If --full is specified show the
1044 internal constraint ids as well.
1045
1046 location add <id> <resource> <node> <score> [resource-discov‐
1047 ery=<option>]
1048 Add a location constraint with the appropriate id for the speci‐
1049 fied resource, node name and score. Resource may be either a
1050 resource id <resource_id> or %<resource_id> or
1051 resource%<resource_id>, or a resource name regular expression
1052 regexp%<resource_pattern>.
1053
1054 location remove <id>
1055 Remove a location constraint with the appropriate id.
1056
1057 order [show] [--full]
1058 List all current ordering constraints (if --full is specified
1059 show the internal constraint ids as well).
1060
1061 order [action] <resource id> then [action] <resource id> [options]
1062 Add an ordering constraint specifying actions (start, stop, pro‐
1063 mote, demote) and if no action is specified the default action
1064 will be start. Available options are kind=Optional/Manda‐
1065 tory/Serialize, symmetrical=true/false, require-all=true/false
1066 and id=<constraint-id>.
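
For example, assuming resources 'VirtualIP' and 'WebSite' exist
(illustrative names), the following ensures the IP address is
started before the web server:

    pcs constraint order start VirtualIP then start WebSite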
1067
1068 order set <resource1> [resourceN]... [options] [set <resourceX> ...
1069 [options]] [setoptions [constraint_options]]
1070 Create an ordered set of resources. Available options are
1071 sequential=true/false, require-all=true/false and
1072 action=start/promote/demote/stop. Available constraint_options
1073 are id=<constraint-id>, kind=Optional/Mandatory/Serialize and
1074 symmetrical=true/false.
1075
1076 order remove <resource1> [resourceN]...
1077 Remove resource from any ordering constraint
1078
1079 colocation [show] [--full]
1080 List all current colocation constraints (if --full is specified
1081 show the internal constraint ids as well).
1082
1083 colocation add [master|slave] <source resource id> with [master|slave]
1084 <target resource id> [score] [options] [id=constraint-id]
1085 Request <source resource> to run on the same node where pace‐
1086 maker has determined <target resource> should run. Positive
1087 values of score mean the resources should be run on the same
1088 node, negative values mean the resources should not be run on
1089 the same node. Specifying 'INFINITY' (or '-INFINITY') for the
1090 score forces <source resource> to run (or not run) with <target
1091 resource> (score defaults to "INFINITY"). A role can be master
1092 or slave (if no role is specified, it defaults to 'started').
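
For example, assuming resources 'WebSite' and 'VirtualIP' exist
(illustrative names), the following forces them to run on the
same node:

    pcs constraint colocation add WebSite with VirtualIP INFINITY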
1093
1094 colocation set <resource1> [resourceN]... [options] [set <resourceX>
1095 ... [options]] [setoptions [constraint_options]]
1096 Create a colocation constraint with a resource set. Available
1097 options are sequential=true/false and role=Stopped/Started/Mas‐
1098 ter/Slave. Available constraint_options are id and either of:
1099 score, score-attribute, score-attribute-mangle.
1100
1101 colocation remove <source resource id> <target resource id>
1102 Remove colocation constraints with specified resources.
1103
1104 ticket [show] [--full]
1105 List all current ticket constraints (if --full is specified show
1106 the internal constraint ids as well).
1107
1108 ticket add <ticket> [<role>] <resource id> [<options>] [id=<con‐
1109 straint-id>]
1110 Create a ticket constraint for <resource id>. Available option
1111 is loss-policy=fence/stop/freeze/demote. A role can be master,
1112 slave, started or stopped.
1113
1114 ticket set <resource1> [<resourceN>]... [<options>] [set <resourceX>
1115 ... [<options>]] setoptions <constraint_options>
1116 Create a ticket constraint with a resource set. Available
1117 options are role=Stopped/Started/Master/Slave. Required con‐
1118 straint option is ticket=<ticket>. Optional constraint options
1119 are id=<constraint-id> and loss-policy=fence/stop/freeze/demote.
1120
1121 ticket remove <ticket> <resource id>
1122 Remove all ticket constraints with <ticket> from <resource id>.
1123
1124 remove <constraint id>...
1125 Remove constraint(s) or constraint rules with the specified
1126 id(s).
1127
1128 ref <resource>...
1129 List constraints referencing specified resource.
1130
1131 rule add <constraint id> [id=<rule id>] [role=master|slave]
1132 [score=<score>|score-attribute=<attribute>] <expression>
1133 Add a rule to a constraint where the expression looks like one
1134 of the following:
1135 defined|not_defined <attribute>
1136 <attribute> lt|gt|lte|gte|eq|ne [string|integer|version]
1137 <value>
1138 date gt|lt <date>
1139 date in_range <date> to <date>
1140 date in_range <date> to duration <duration options>...
1141 date-spec <date spec options>...
1142 <expression> and|or <expression>
1143 ( <expression> )
1144 where duration options and date spec options are: hours, month‐
1145 days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1146 If score is omitted it defaults to INFINITY. If id is omitted
1147 one is generated from the constraint id.
1148
1149 rule remove <rule id>
1150 Remove the rule with the specified id. If the rule is the last
1151 rule in its constraint, the constraint will be removed as well.
1152
1153 qdevice
1154 status <device model> [--full] [<cluster name>]
1155 Show runtime status of specified model of quorum device
1156 provider. Using --full will give more detailed output. If
1157 <cluster name> is specified, only information about the speci‐
1158 fied cluster will be displayed.
1159
1160 setup model <device model> [--enable] [--start]
1161 Configure specified model of quorum device provider. Quorum
1162 device then can be added to clusters by running "pcs quorum
1163 device add" command in a cluster. --start will also start the
1164 provider. --enable will configure the provider to start on
1165 boot.
1166
1167 destroy <device model>
1168 Disable and stop specified model of quorum device provider and
1169 delete its configuration files.
1170
1171 start <device model>
1172 Start specified model of quorum device provider.
1173
1174 stop <device model>
1175 Stop specified model of quorum device provider.
1176
1177 kill <device model>
1178 Force specified model of quorum device provider to stop (per‐
1179 forms kill -9). Note that init system (e.g. systemd) can detect
1180 that the qdevice is not running and start it again. If you want
1181 to stop the qdevice, run "pcs qdevice stop" command.
1182
1183 enable <device model>
1184 Configure specified model of quorum device provider to start on
1185 boot.
1186
1187 disable <device model>
1188 Configure specified model of quorum device provider to not start
1189 on boot.
1190
1191 quorum
1192 [config]
1193 Show quorum configuration.
1194
1195 status Show quorum runtime status.
1196
1197 device add [<generic options>] model <device model> [<model options>]
1198 [heuristics <heuristics options>]
1199 Add a quorum device to the cluster. Quorum device should be con‐
1200 figured first with "pcs qdevice setup". It is not possible to
1201 use more than one quorum device in a cluster simultaneously.
1202 Currently the only supported model is 'net'. It requires model
1203 options 'algorithm' and 'host' to be specified. Options are doc‐
1204 umented in corosync-qdevice(8) man page; generic options are
1205 'sync_timeout' and 'timeout', for model net options check the
1206 quorum.device.net section, for heuristics options see the quo‐
1207 rum.device.heuristics section. Pcs automatically creates and
1208 distributes TLS certificates and sets the 'tls' model option to
1209 the default value 'on'.
1210 Example: pcs quorum device add model net algorithm=lms
1211 host=qnetd.internal.example.com
1212
1213 device heuristics remove
1214 Remove all heuristics settings of the configured quorum device.
1215
1216 device remove
1217 Remove a quorum device from the cluster.
1218
1219 device status [--full]
1220 Show quorum device runtime status. Using --full will give more
1221 detailed output.
1222
1223 device update [<generic options>] [model <model options>] [heuristics
1224 <heuristics options>]
1225 Add/Change quorum device options. Requires the cluster to be
1226 stopped. Model and options are all documented in corosync-qde‐
1227 vice(8) man page; for heuristics options check the quo‐
1228 rum.device.heuristics subkey section, for model options check
1229 the quorum.device.<device model> subkey sections.
1230
1231 WARNING: If you want to change "host" option of qdevice model
1232 net, use "pcs quorum device remove" and "pcs quorum device add"
1233 commands to set up the configuration properly unless the old and
1234 new host are the same machine.
1235
1236 expected-votes <votes>
1237              Set expected votes in the live cluster to specified value. This
1238              only affects the live cluster; it does not change any
1239              configuration files.
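              For instance, to set the number of expected votes in the
              running cluster to a hypothetical value of 3:
              Example: pcs quorum expected-votes 3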
1240
1241 unblock [--force]
1242 Cancel waiting for all nodes when establishing quorum. Useful
1243 in situations where you know the cluster is inquorate, but you
1244 are confident that the cluster should proceed with resource man‐
1245 agement regardless. This command should ONLY be used when nodes
1246 which the cluster is waiting for have been confirmed to be pow‐
1247 ered off and to have no access to shared resources.
1248
1249 WARNING: If the nodes are not actually powered off or they do
1250 have access to shared resources, data corruption/cluster failure
1251 can occur. To prevent accidental running of this command,
1252 --force or interactive user response is required in order to
1253 proceed.
1254
1255 update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]]
1256 [last_man_standing_window=[<time in ms>]] [wait_for_all=[0|1]]
1257 Add/Change quorum options. At least one option must be speci‐
1258 fied. Options are documented in corosync's votequorum(5) man
1259 page. Requires the cluster to be stopped.
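              For instance, with the cluster stopped, the following sketch
              enables two of the votequorum options:
              Example: pcs quorum update wait_for_all=1 auto_tie_breaker=1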
1260
1261 booth
1262 setup sites <address> <address> [<address>...] [arbitrators <address>
1263 ...] [--force]
1264 Write new booth configuration with specified sites and arbitra‐
1265 tors. Total number of peers (sites and arbitrators) must be
1266              odd. If the configuration file already exists, the command
1267              fails unless --force is specified.
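              For instance, using hypothetical addresses, two sites plus one
              arbitrator keep the total number of peers odd:
              Example: pcs booth setup sites 192.0.2.1 192.0.2.2 arbitrators
              192.0.2.3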
1268
1269 destroy
1270 Remove booth configuration files.
1271
1272 ticket add <ticket> [<name>=<value> ...]
1273              Add a new ticket to the current configuration. Ticket options
1274              are described in the booth man page.
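              For instance, adding a hypothetical ticket named 'ticketA' with
              an expiry ('expire' is one of the options described in the
              booth man page):
              Example: pcs booth ticket add ticketA expire=600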
1275
1277 ticket remove <ticket>
1278 Remove the specified ticket from the current configuration.
1279
1280 config [<node>]
1281 Show booth configuration from the specified node or from the
1282 current node if node not specified.
1283
1284 create ip <address>
1285              Make the cluster run the booth service on the specified IP
1286              address as a cluster resource. Typically this is used to run a
1287              booth site.
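              For instance, using a hypothetical address reserved for booth:
              Example: pcs booth create ip 192.0.2.10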
1288
1289 remove Remove booth resources created by the "pcs booth create" com‐
1290 mand.
1291
1292 restart
1293 Restart booth resources created by the "pcs booth create" com‐
1294 mand.
1295
1296 ticket grant <ticket> [<site address>]
1297              Grant the ticket for the site specified by address. If 'site
1298              address' is omitted, the site address specified with the 'pcs
1299              booth create' command is used. Specifying the site address is
1300              mandatory when running this command on an arbitrator.
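              For instance, granting a hypothetical ticket to a specific site
              (required when running on an arbitrator):
              Example: pcs booth ticket grant ticketA 192.0.2.1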
1301
1302 ticket revoke <ticket> [<site address>]
1303              Revoke the ticket for the site specified by address. If 'site
1304              address' is omitted, the site address specified with the 'pcs
1305              booth create' command is used. Specifying the site address is
1306              mandatory when running this command on an arbitrator.
1307
1308 status Print current status of booth on the local node.
1309
1310 pull <node>
1311 Pull booth configuration from the specified node.
1312
1313 sync [--skip-offline]
1314 Send booth configuration from the local node to all nodes in the
1315 cluster.
1316
1317 enable Enable booth arbitrator service.
1318
1319 disable
1320 Disable booth arbitrator service.
1321
1322 start Start booth arbitrator service.
1323
1324 stop Stop booth arbitrator service.
1325
1326 status
1327 [status] [--full | --hide-inactive]
1328 View all information about the cluster and resources (--full
1329 provides more details, --hide-inactive hides inactive
1330 resources).
1331
1332 resources [<resource id> | --full | --groups | --hide-inactive]
1333 Show all currently configured resources or if a resource is
1334 specified show the options for the configured resource. If
1335 --full is specified, all configured resource options will be
1336 displayed. If --groups is specified, only show groups (and
1337 their resources). If --hide-inactive is specified, only show
1338 active resources.
1339
1340 groups View currently configured groups and their resources.
1341
1342 cluster
1343 View current cluster status.
1344
1345 corosync
1346 View current membership information as seen by corosync.
1347
1348 quorum View current quorum status.
1349
1350 qdevice <device model> [--full] [<cluster name>]
1351 Show runtime status of specified model of quorum device
1352 provider. Using --full will give more detailed output. If
1353 <cluster name> is specified, only information about the speci‐
1354 fied cluster will be displayed.
1355
1356 booth Print current status of booth on the local node.
1357
1358 nodes [corosync | both | config]
1359 View current status of nodes from pacemaker. If 'corosync' is
1360 specified, view current status of nodes from corosync instead.
1361 If 'both' is specified, view current status of nodes from both
1362 corosync & pacemaker. If 'config' is specified, print nodes from
1363 corosync & pacemaker configuration.
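              For instance, to compare node status as reported by corosync
              and pacemaker:
              Example: pcs status nodes both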
1364
1365 pcsd [<node>]...
1366 Show current status of pcsd on nodes specified, or on all nodes
1367 configured in the local cluster if no nodes are specified.
1368
1369 xml View xml version of status (output from crm_mon -r -1 -X).
1370
1371 config
1372 [show] View full cluster configuration.
1373
1374 backup [filename]
1375              Creates a tarball containing the cluster configuration files.
1376              If filename is not specified, the standard output will be used.
1377
1378 restore [--local] [filename]
1379 Restores the cluster configuration files on all nodes from the
1380 backup. If filename is not specified the standard input will be
1381 used. If --local is specified only the files on the current
1382 node will be restored.
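              As a sketch, the configuration can be saved to a file and later
              pushed back to all nodes (the file name is arbitrary):
              Example: pcs config backup > /root/cluster-backup.tar.bz2
              Example: pcs config restore /root/cluster-backup.tar.bz2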
1383
1384 checkpoint
1385 List all available configuration checkpoints.
1386
1387 checkpoint view <checkpoint_number>
1388 Show specified configuration checkpoint.
1389
1390 checkpoint restore <checkpoint_number>
1391 Restore cluster configuration to specified checkpoint.
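              For instance, to inspect a hypothetical checkpoint number
              before rolling back to it:
              Example: pcs config checkpoint view 5
              Example: pcs config checkpoint restore 5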
1392
1393 import-cman output=<filename> [input=<filename>] [--interactive] [out‐
1394 put-format=corosync.conf|cluster.conf] [dist=<dist>]
1395 Converts RHEL 6 (CMAN) cluster configuration to Pacemaker clus‐
1396 ter configuration. Converted configuration will be saved to
1397 'output' file. To send the configuration to the cluster nodes
1398 the 'pcs config restore' command can be used. If --interactive
1399 is specified you will be prompted to solve incompatibilities
1400 manually. If no input is specified /etc/cluster/cluster.conf
1401              will be used. You can force creation of output containing
1402              either cluster.conf or corosync.conf using the output-format
1403              option. Optionally, you can specify the output version by
1404              setting the 'dist' option, e.g. rhel,6.8 or redhat,7.3 or debian,7 or
1405 ubuntu,trusty. You can get the list of supported dist values by
1406 running the "clufter --list-dists" command. If 'dist' is not
1407 specified, it defaults to this node's version if that matches
1408 output-format, otherwise redhat,6.7 is used for cluster.conf and
1409 redhat,7.1 is used for corosync.conf.
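              For instance, a sketch converting a CMAN configuration for a
              RHEL 7.3 target (the output path is illustrative):
              Example: pcs config import-cman output=/root/corosync.conf.new
              input=/etc/cluster/cluster.conf output-format=corosync.conf
              dist=redhat,7.3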
1410
1411 import-cman output=<filename> [input=<filename>] [--interactive] out‐
1412 put-format=pcs-commands|pcs-commands-verbose [dist=<dist>]
1413 Converts RHEL 6 (CMAN) cluster configuration to a list of pcs
1414 commands which recreates the same cluster as Pacemaker cluster
1415 when executed. Commands will be saved to 'output' file. For
1416 other options see above.
1417
1418 export pcs-commands|pcs-commands-verbose [output=<filename>]
1419 [dist=<dist>]
1420 Creates a list of pcs commands which upon execution recreates
1421 the current cluster running on this node. Commands will be
1422 saved to 'output' file or written to stdout if 'output' is not
1423 specified. Use pcs-commands to get a simple list of commands,
1424 whereas pcs-commands-verbose creates a list including comments
1425              and debug messages. Optionally specify the output version by
1426              setting the 'dist' option, e.g. rhel,6.8 or redhat,7.3 or debian,7 or
1427 ubuntu,trusty. You can get the list of supported dist values by
1428 running the "clufter --list-dists" command. If 'dist' is not
1429 specified, it defaults to this node's version.
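              For instance, to dump the running cluster as a reusable list of
              pcs commands (the output file name is illustrative):
              Example: pcs config export pcs-commands
              output=/root/recreate-cluster.sh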
1430
1431 pcsd
1432 certkey <certificate file> <key file>
1433 Load custom certificate and key files for use in pcsd.
1434
1435 sync-certificates
1436 Sync pcsd certificates to all nodes in the local cluster. WARN‐
1437 ING: This will restart pcsd daemon on the nodes.
1438
1439 clear-auth [--local] [--remote]
1440 Removes all system tokens which allow pcs/pcsd on the current
1441 system to authenticate with remote pcs/pcsd instances and
1442 vice-versa. After this command is run this node will need to be
1443 re-authenticated with other nodes (using 'pcs cluster auth').
1444 Using --local only removes tokens used by local pcs (and pcsd if
1445 root) to connect to other pcsd instances, using --remote clears
1446 authentication tokens used by remote systems to connect to the
1447 local pcsd instance.
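              For instance, to drop only the tokens this node uses to connect
              to other pcsd instances:
              Example: pcs pcsd clear-auth --local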
1448
1449 node
1450 attribute [[<node>] [--name <name>] | <node> <name>=<value> ...]
1451 Manage node attributes. If no parameters are specified, show
1452 attributes of all nodes. If one parameter is specified, show
1453 attributes of specified node. If --name is specified, show
1454 specified attribute's value from all nodes. If more parameters
1455 are specified, set attributes of specified node. Attributes can
1456 be removed by setting an attribute without a value.
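              For instance, using a hypothetical node and attribute name, set
              an attribute, show it on all nodes, and then remove it:
              Example: pcs node attribute node1 site=east
              Example: pcs node attribute --name site
              Example: pcs node attribute node1 site=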
1457
1458 maintenance [--all | <node>...] [--wait[=n]]
1459              Put specified node(s) into maintenance mode. If no nodes or
1460              options are specified, the current node will be put into
1461              maintenance mode; if --all is specified, all nodes will be put
1462              into maintenance mode. If --wait is specified, pcs will wait up
1463              to 'n' seconds for the node(s) to be put into maintenance mode
1464              and then return 0 on success or 1 if the operation has not
1465              succeeded yet. If 'n' is not specified it defaults to 60 minutes.
1466
1467 unmaintenance [--all | <node>...] [--wait[=n]]
1468              Remove node(s) from maintenance mode. If no nodes or options
1469              are specified, the current node will be removed from maintenance
1470              mode; if --all is specified, all nodes will be removed from
1471              maintenance mode. If --wait is specified, pcs will wait up to
1472              'n' seconds for the node(s) to be removed from maintenance mode
1473              and then return 0 on success or 1 if the operation has not
1474              succeeded yet. If 'n' is not specified it defaults to 60 minutes.
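              For instance, to put a hypothetical node into maintenance mode,
              waiting up to two minutes, and later take it out again:
              Example: pcs node maintenance node1 --wait=120
              Example: pcs node unmaintenance node1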
1475
1476 standby [--all | <node>...] [--wait[=n]]
1477              Put specified node(s) into standby mode (the node specified
1478              will no longer be able to host resources). If no nodes or
1479              options are specified, the current node will be put into
1480              standby mode; if --all is specified, all nodes will be put into
1481              standby mode. If --wait is specified, pcs will wait up to 'n'
1482              seconds for the node(s) to be put into standby mode and then
1483              return 0 on success or 1 if the operation has not succeeded
1484              yet. If 'n' is not specified it defaults to 60 minutes.
1485
1486 unstandby [--all | <node>...] [--wait[=n]]
1487              Remove node(s) from standby mode (the node specified will now
1488              be able to host resources). If no nodes or options are
1489              specified, the current node will be removed from standby mode;
1490              if --all is specified, all nodes will be removed from standby
1491              mode. If --wait is specified, pcs will wait up to 'n' seconds
1492              for the node(s) to be removed from standby mode and then return
1493              0 on success or 1 if the operation has not succeeded yet. If
1494              'n' is not specified it defaults to 60 minutes.
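              For instance, to stop a hypothetical node from hosting
              resources and later allow it again:
              Example: pcs node standby node2
              Example: pcs node unstandby node2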
1495
1496 utilization [[<node>] [--name <name>] | <node> <name>=<value> ...]
1497 Add specified utilization options to specified node. If node is
1498 not specified, shows utilization of all nodes. If --name is
1499 specified, shows specified utilization value from all nodes. If
1500 utilization options are not specified, shows utilization of
1501              specified node. Utilization options should be in the format
1502              name=value, and the value has to be an integer. Options may be
1503              removed by setting an option without a value. Example: pcs
1504              node utilization node1 cpu=4 ram=
1505
1506 alert
1507 [config|show]
1508 Show all configured alerts.
1509
1510 create path=<path> [id=<alert-id>] [description=<description>] [options
1511 [<option>=<value>]...] [meta [<meta-option>=<value>]...]
1512 Define an alert handler with specified path. Id will be automat‐
1513 ically generated if it is not specified.
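              For instance, a sketch using a hypothetical handler script; the
              'options' entry is an arbitrary instance attribute handed to
              the script, while 'timeout' is a pacemaker alert meta option:
              Example: pcs alert create path=/usr/local/bin/alert_mail.sh
              id=mail-alert options hostname=smtp.example.com meta timeout=15s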
1514
1515 update <alert-id> [path=<path>] [description=<description>] [options
1516 [<option>=<value>]...] [meta [<meta-option>=<value>]...]
1517 Update an existing alert handler with specified id.
1518
1519 remove <alert-id> ...
1520 Remove alert handlers with specified ids.
1521
1522 recipient add <alert-id> value=<recipient-value> [id=<recipient-id>]
1523 [description=<description>] [options [<option>=<value>]...] [meta
1524 [<meta-option>=<value>]...]
1525 Add new recipient to specified alert handler.
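              For instance, attaching a hypothetical e-mail address as a
              recipient of the alert defined above:
              Example: pcs alert recipient add mail-alert
              value=admin@example.com id=mail-alert-admin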
1526
1527 recipient update <recipient-id> [value=<recipient-value>] [descrip‐
1528 tion=<description>] [options [<option>=<value>]...] [meta
1529 [<meta-option>=<value>]...]
1530 Update an existing recipient identified by its id.
1531
1532 recipient remove <recipient-id> ...
1533 Remove specified recipients.
1534
1535EXAMPLES
1536       Show all resources
1537 # pcs resource show
1538
1539 Show options specific to the 'VirtualIP' resource
1540 # pcs resource show VirtualIP
1541
1542 Create a new resource called 'VirtualIP' with options
1543 # pcs resource create VirtualIP ocf:heartbeat:IPaddr2
1544 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
1545
1546 Create a new resource called 'VirtualIP' with options
1547 # pcs resource create VirtualIP IPaddr2 ip=192.168.0.99
1548 cidr_netmask=32 nic=eth2 op monitor interval=30s
1549
1550 Change the ip address of VirtualIP and remove the nic option
1551 # pcs resource update VirtualIP ip=192.168.0.98 nic=
1552
1553 Delete the VirtualIP resource
1554 # pcs resource delete VirtualIP
1555
1556 Create the MyStonith stonith fence_virt device which can fence host
1557 'f1'
1558 # pcs stonith create MyStonith fence_virt pcmk_host_list=f1
1559
1560 Set the stonith-enabled property to false on the cluster (which dis‐
1561 ables stonith)
1562 # pcs property set stonith-enabled=false
1563
1564USING --FORCE IN PCS COMMANDS
1565       Various pcs commands accept the --force option. Its purpose is to
1566       override some of the checks that pcs performs, or some of the errors
1567       that may occur when a pcs command is run. When such an error occurs,
1568       pcs will print the error with a note that it may be overridden. The
1569       exact behavior of the option is different for each pcs command. Using
1570       the --force option can lead to situations that would normally be
1571       prevented by the logic of pcs commands, and therefore its use is
1572       strongly discouraged unless you know what you are doing.
1573
1574ENVIRONMENT VARIABLES
1575       EDITOR
1576 Path to a plain-text editor. This is used when pcs is requested
1577 to present a text for the user to edit.
1578
1579 no_proxy, https_proxy, all_proxy, NO_PROXY, HTTPS_PROXY, ALL_PROXY
1580 These environment variables (listed according to their priori‐
1581 ties) control how pcs handles proxy servers when connecting to
1582 cluster nodes. See curl(1) man page for details.
1583
1584SEE ALSO
1585       http://clusterlabs.org/doc/
1586
1587       pcsd(8), pcs_snmp_agent(8)
1588
1589 corosync_overview(8), votequorum(5), corosync.conf(5), corosync-qde‐
1590 vice(8), corosync-qdevice-tool(8), corosync-qnetd(8),
1591 corosync-qnetd-tool(8)
1592
1593 crmd(7), pengine(7), stonithd(7), crm_mon(8), crm_report(8), crm_simu‐
1594 late(8)
1595
1596       boothd(8), sbd(8)
1597
1598 clufter(1)
1599
1600
1601
1602pcs 0.9.165 June 2018 PCS(8)