1PCS(8)                  System Administration Utilities                 PCS(8)
2
3
4

NAME

6       pcs - pacemaker/corosync configuration system
7

SYNOPSIS

9       pcs [-f file] [-h] [commands]...
10

DESCRIPTION

12       Control and configure pacemaker and corosync.
13

OPTIONS

15       -h, --help
16              Display usage and exit.
17
18       -f file
19              Perform actions on file instead of active CIB.
20              Commands  supporting  the  option  use  the initial state of the
21              specified file as their input and then overwrite the  file  with
22              the state reflecting the requested operation(s).
23              A  few  commands  only  use the specified file in read-only mode
24              since their effect is not a CIB modification.
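
              For example (the file and resource names below are only place‐
              holders), the CIB can be saved to a file, modified offline with
              -f, and pushed back:

                  pcs cluster cib > cluster.xml
                  pcs -f cluster.xml resource create Dummy1 ocf:pacemaker:Dummy
                  pcs cluster cib-push cluster.xml --config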
25
26       --debug
27              Print all network traffic and external commands run.
28
29       --version
30              Print pcs version information. List pcs capabilities  if  --full
31              is specified.
32
33       --request-timeout=<timeout>
34              Timeout  for  each  outgoing request to another node in seconds.
35              Default is 60s.
36
37   Commands:
38       cluster
39               Configure cluster options and nodes.
40
41       resource
42               Manage cluster resources.
43
44       stonith
45               Manage fence devices.
46
47       constraint
48               Manage resource constraints.
49
50       property
51               Manage pacemaker properties.
52
53       acl
54               Manage pacemaker access control lists.
55
56       qdevice
57               Manage quorum device provider on the local host.
58
59       quorum
60               Manage cluster quorum settings.
61
62       booth
63               Manage booth (cluster ticket manager).
64
65       status
66               View cluster status.
67
68       config
69               View and manage cluster configuration.
70
71       pcsd
72               Manage pcs daemon.
73
74       host
75               Manage hosts known to pcs/pcsd.
76
77       node
78               Manage cluster nodes.
79
80       alert
81               Manage pacemaker alerts.
82
83       client
84               Manage pcsd client configuration.
85
86       dr
87               Manage disaster recovery configuration.
88
89       tag
90               Manage pacemaker tags.
91
92   resource
93       [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
94              Show status of all currently configured resources. If --hide-in‐
95              active  is  specified, only show active resources. If a resource
96              or tag id is specified, only show status of  the  specified  re‐
97              source  or resources in the specified tag. If node is specified,
98              only show status of resources configured for the specified node.
99
100       config [--output-format text|cmd|json] [<resource id>]...
101              Show options of all currently configured  resources  or  if  re‐
102              source  ids are specified show the options for the specified re‐
103              source ids. There are 3  formats  of  output  available:  'cmd',
104              'json'  and  'text', default is 'text'. Format 'text' is a human
105              friendly output. Format 'cmd' prints pcs commands which  can  be
106              used  to recreate the same configuration. Format 'json' is a ma‐
107              chine oriented output of the configuration.
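
              For example, to print pcs commands that would recreate a hypo‐
              thetical resource 'VirtualIP':

                  pcs resource config --output-format cmd VirtualIP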
108
109       list [filter] [--nodesc]
110              Show list of all available resource agents (if  filter  is  pro‐
111              vided  then  only  resource  agents  matching the filter will be
112              shown). If --nodesc is used then descriptions of resource agents
113              are not printed.
114
115       describe [<standard>:[<provider>:]]<type> [--full]
116              Show options for the specified resource. If --full is specified,
117              all options including advanced and deprecated ones are shown.
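
              For example, to show all options of the IPaddr2 agent used  in
              the examples below:

                  pcs resource describe ocf:heartbeat:IPaddr2 --full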
118
119       create <resource  id>  [<standard>:[<provider>:]]<type>  [resource  op‐
120       tions]  [op  <operation action> <operation options> [<operation action>
121       <operation options>]...] [meta <meta options>...] [clone  [<clone  id>]
122       [<clone  options>]  |  promotable [<clone id>] [<promotable options>] |
123       --group <group id> [--before <resource id> | --after <resource  id>]  |
124       bundle <bundle id>] [--disabled] [--no-default-ops] [--wait[=n]]
125              Create  specified resource. If clone is used a clone resource is
126              created. If promotable is used a promotable  clone  resource  is
127              created.  If  --group  is specified the resource is added to the
128              group named. You can use --before or --after to specify the  po‐
129              sition of the added resource relative to some  resource  already
130              existing in the group. If bundle is specified, resource will  be
131              created  inside of the specified bundle. If --disabled is speci‐
132              fied the resource is  not  started  automatically.  If  --no-de‐
133              fault-ops  is specified, only monitor operations are created for
134              the resource and all other operations use default  settings.  If
135              --wait is specified, pcs will wait up to 'n' seconds for the re‐
136              source to start and then return 0 if the resource is started, or
137              1  if  the resource has not yet started. If 'n' is not specified
138              it defaults to 60 minutes.
139
140              Example: Create a new resource called 'VirtualIP'  with  IP  ad‐
141              dress  192.168.0.99, netmask of 32, monitored  every  30  sec‐
142              onds,  on  eth2:  pcs  resource  create   VirtualIP   ocf:heart‐
143              beat:IPaddr2 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor
144              interval=30s
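
              A further example (resource and group names are  placeholders):
              create a dummy resource and add it to a group in one step:

                  pcs resource create Dummy1 ocf:pacemaker:Dummy --group grp1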
145
146       delete <resource id|group id|bundle id|clone id>
147              Deletes the resource, group, bundle or clone (and all  resources
148              within the group/bundle/clone).
149
150       remove <resource id|group id|bundle id|clone id>
151              Deletes  the resource, group, bundle or clone (and all resources
152              within the group/bundle/clone).
153
154       enable <resource id | tag id>... [--wait[=n]]
155              Allow the cluster to start the resources. Depending on the  rest
156              of  the configuration (constraints, options, failures, etc), the
157              resources may remain stopped. If --wait is specified,  pcs  will
158              wait  up  to 'n' seconds for the resources to start and then re‐
159              turn 0 if the resources are started, or 1 if the resources  have
160              not  yet started. If 'n' is not specified it defaults to 60 min‐
161              utes.
162
163       disable <resource id |  tag  id>...  [--safe  [--brief]  [--no-strict]]
164       [--simulate [--brief]] [--wait[=n]]
165              Attempt to stop the resources if they are running and forbid the
166              cluster from starting them again. Depending on the rest  of  the
167              configuration  (constraints,  options,  failures,  etc), the re‐
168              sources may remain started.
169              If --safe is specified, no changes to the cluster  configuration
170              will be made if other than specified resources would be affected
171              in any way. If  --brief  is  also  specified,  only  errors  are
172              printed.
173              If  --no-strict is specified, no changes to the cluster configu‐
174              ration will be made if other than specified resources would  get
175              stopped or demoted. Moving resources between nodes is allowed.
176              If --simulate is specified, no changes to the cluster configura‐
177              tion will be made and the effect of the changes will be  printed
178              instead.  If  --brief is also specified, only a list of affected
179              resources will be printed.
180              If --wait is specified, pcs will wait up to 'n' seconds for  the
181              resources to stop and then return 0 if the resources are stopped
182              or 1 if the resources have not stopped. If 'n' is not  specified
183              it defaults to 60 minutes.
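
              For example, to preview what would happen when disabling a hy‐
              pothetical resource, without changing the cluster:

                  pcs resource disable Dummy1 --simulate --brief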
184
185       safe-disable <resource id | tag id>... [--brief] [--no-strict] [--simu‐
186       late [--brief]] [--wait[=n]] [--force]
187              Attempt to stop the resources if they are running and forbid the
188              cluster  from  starting them again. Depending on the rest of the
189              configuration (constraints, options,  failures,  etc),  the  re‐
190              sources may remain started. No changes to the cluster configura‐
191              tion will be made if other than specified resources would be af‐
192              fected in any way.
193              If --brief is specified, only errors are printed.
194              If  --no-strict is specified, no changes to the cluster configu‐
195              ration will be made if other than specified resources would  get
196              stopped or demoted. Moving resources between nodes is allowed.
197              If --simulate is specified, no changes to the cluster configura‐
198              tion will be made and the effect of the changes will be  printed
199              instead.  If  --brief is also specified, only a list of affected
200              resources will be printed.
201              If --wait is specified, pcs will wait up to 'n' seconds for  the
202              resources to stop and then return 0 if the resources are stopped
203              or 1 if the resources have not stopped. If 'n' is not  specified
204              it defaults to 60 minutes.
205              If  --force  is  specified,  checks  for  safe  disable  will be
206              skipped.
207
208       restart <resource id> [node] [--wait=n]
209              Restart the resource specified. If a node is  specified  and  if
210              the  resource  is a clone or bundle it will be restarted only on
211              the node specified. If --wait is specified, then we will wait up
212              to  'n' seconds for the resource to be restarted and return 0 if
213              the restart was successful or 1 if it was not.
214
215       debug-start <resource id> [--full]
216              This command will force the specified resource to start on  this
217              node  ignoring  the cluster recommendations and print the output
218              from starting the resource.  Using --full  will  give  more  de‐
219              tailed output.  This is mainly used for debugging resources that
220              fail to start.
221
222       debug-stop <resource id> [--full]
223              This command will force the specified resource to stop  on  this
224              node  ignoring  the cluster recommendations and print the output
225              from stopping the resource.  Using --full  will  give  more  de‐
226              tailed output.  This is mainly used for debugging resources that
227              fail to stop.
228
229       debug-promote <resource id> [--full]
230              This command will force the specified resource to be promoted on
231              this  node  ignoring  the  cluster recommendations and print the
232              output from promoting the resource.  Using --full will give more
233              detailed  output.   This  is mainly used for debugging resources
234              that fail to promote.
235
236       debug-demote <resource id> [--full]
237              This command will force the specified resource to be demoted  on
238              this  node  ignoring  the  cluster recommendations and print the
239              output from demoting the resource.  Using --full will give  more
240              detailed  output.   This  is mainly used for debugging resources
241              that fail to demote.
242
243       debug-monitor <resource id> [--full]
244              This command will force the specified resource to  be  monitored
245              on  this node ignoring the cluster recommendations and print the
246              output from monitoring the resource.   Using  --full  will  give
247              more  detailed  output.   This  is mainly used for debugging re‐
248              sources that fail to be monitored.
249
250       move  <resource  id>   [destination   node]   [--promoted]   [--strict]
251       [--wait[=n]]
252              Move  the resource off the node it is currently running on. This
253              is achieved by creating a -INFINITY location constraint  to  ban
254              the node.  If destination node is specified the resource will be
255              moved to that node by creating an INFINITY  location  constraint
256              to prefer the destination node. The constraint needed for moving
257              the resource will be automatically removed once the resource  is
258              running  on  its new location. The command will fail in case  it
259              is not possible to verify that the resource will  not  be  moved
260              back after deleting the constraint.
261
262              If  --strict  is  specified, the command will also fail if other
263              resources would be affected.
264
265              If --promoted is used the scope of the command is limited to the
266              Promoted  role  and promotable clone id must be used (instead of
267              the resource id).
268
269              If --wait is specified, pcs will wait up to 'n' seconds for  the
270              resource  to move and then return 0 on success or 1 on error. If
271              'n' is not specified it defaults to 60 minutes.
272
273              NOTE: This command has been changed in pcs-0.11. It  is  equiva‐
274              lent  to command 'resource move <resource id> --autodelete' from
275              pcs-0.10.9. Legacy functionality of the 'resource move'  command
276              is  still  available as 'resource move-with-constraint <resource
277              id>'.
278
279              If you want the resource to preferably  avoid  running  on  some
280              nodes  but be able to failover to them use 'pcs constraint loca‐
281              tion avoids'.
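
              For example (resource and node names are placeholders), move a
              resource off its current node, or to a specific node:

                  pcs resource move Dummy1
                  pcs resource move Dummy1 node2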
282
283       move-with-constraint <resource id> [destination node]  [lifetime=<life‐
284       time>] [--promoted] [--wait[=n]]
285              Move  the  resource  off  the node it is currently running on by
286              creating a -INFINITY location constraint to  ban  the  node.  If
287              destination node is specified the resource will be moved to that
288              node by creating an INFINITY location constraint to  prefer  the
289              destination node.
290
291              If  lifetime  is specified then the constraint will expire after
292              that time, otherwise it defaults to infinity and the  constraint
293              can  be  cleared manually with 'pcs resource clear' or 'pcs con‐
294              straint delete'. Lifetime is expected to  be  specified  as  ISO
295              8601  duration (see https://en.wikipedia.org/wiki/ISO_8601#Dura‐
296              tions).
297
298              If --promoted is used the scope of the command is limited to the
299              Promoted  role  and promotable clone id must be used (instead of
300              the resource id).
301
302              If --wait is specified, pcs will wait up to 'n' seconds for  the
303              resource  to move and then return 0 on success or 1 on error. If
304              'n' is not specified it defaults to 60 minutes.
305
306              If you want the resource to preferably  avoid  running  on  some
307              nodes  but be able to failover to them use 'pcs constraint loca‐
308              tion avoids'.
309
310       ban   <resource   id>   [node]    [--promoted]    [lifetime=<lifetime>]
311       [--wait[=n]]
312              Prevent  the  resource id specified from running on the node (or
313              on the current node it is running on if no node is specified) by
314              creating a -INFINITY location constraint.
315
316              If --promoted is used the scope of the command is limited to the
317              Promoted role and promotable clone id must be used  (instead  of
318              the resource id).
319
320              If  lifetime  is specified then the constraint will expire after
321              that time, otherwise it defaults to infinity and the  constraint
322              can  be  cleared manually with 'pcs resource clear' or 'pcs con‐
323              straint delete'. Lifetime is expected to  be  specified  as  ISO
324              8601  duration (see https://en.wikipedia.org/wiki/ISO_8601#Dura‐
325              tions).
326
327              If --wait is specified, pcs will wait up to 'n' seconds for  the
328              resource  to move and then return 0 on success or 1 on error. If
329              'n' is not specified it defaults to 60 minutes.
330
331              If you want the resource to preferably  avoid  running  on  some
332              nodes  but be able to failover to them use 'pcs constraint loca‐
333              tion avoids'.
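
              For example, to ban a hypothetical resource from node2 for one
              hour (ISO 8601 duration PT1H):

                  pcs resource ban Dummy1 node2 lifetime=PT1H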
334
335       clear <resource id> [node] [--promoted] [--expired] [--wait[=n]]
336              Remove constraints created by move and/or ban on  the  specified
337              resource (and node if specified).
338
339              If --promoted is used the scope of the command is limited to the
340              Promoted role and promotable clone id must be used  (instead  of
341              the resource id).
342
343              If  --expired  is specified, only constraints with expired life‐
344              times will be removed.
345
346              If --wait is specified, pcs will wait up to 'n' seconds for  the
347              operation  to finish (including starting and/or moving resources
348              if appropriate) and then return 0 on success or 1 on  error.  If
349              'n' is not specified it defaults to 60 minutes.
350
351       standards
352              List  available  resource  agent standards supported by this in‐
353              stallation (OCF, LSB, etc.).
354
355       providers
356              List available OCF resource agent providers.
357
358       agents [standard[:provider]]
359              List  available  agents  optionally  filtered  by  standard  and
360              provider.
361
362       update <resource id> [resource options] [op [<operation action> <opera‐
363       tion options>]...] [meta <meta options>...] [--wait[=n]]
364              Add, remove or change options of specified  resource,  clone  or
365              multi-state  resource.  Unspecified  options  will  be  kept un‐
366              changed. If you wish to remove an option, set it to empty value,
367              i.e. 'option_name='.
368
369              If an operation (op) is specified it will update the first found
370              operation with the same action on the specified resource. If  no
371              operation  with  that action exists then a new operation will be
372              created. (WARNING: all existing options on the updated operation
373              will  be reset if not specified.) If you want to create multiple
374              monitor operations you should use the 'op  add'  &  'op  remove'
375              commands.
376
377              If  --wait is specified, pcs will wait up to 'n' seconds for the
378              changes to take effect and then return 0  if  the  changes  have
379              been  processed  or  1 otherwise. If 'n' is not specified it de‐
380              faults to 60 minutes.
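
              For example (the values are placeholders), change the  IP  and
              the monitor interval of the 'VirtualIP'  resource  shown  above
              and remove its nic option by setting it to an empty value:

                  pcs resource update VirtualIP ip=192.168.0.100 nic=
              op monitor interval=60s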
381
382       op add <resource id> <operation action> [operation properties]
383              Add operation for specified resource.
384
385       op delete <resource id> <operation action> [<operation properties>...]
386              Remove specified operation (note: you must specify the exact op‐
387              eration properties to properly remove an existing operation).
388
389       op delete <operation id>
390              Remove the specified operation id.
391
392       op remove <resource id> <operation action> [<operation properties>...]
393              Remove specified operation (note: you must specify the exact op‐
394              eration properties to properly remove an existing operation).
395
396       op remove <operation id>
397              Remove the specified operation id.
398
399       op defaults [config] [--all] [--full] [--no-expire-check]
400              List currently configured  default  values  for  operations.  If
401              --all  is specified, also list expired sets of values. If --full
402              is specified, also list ids. If --no-expire-check is  specified,
403              do not evaluate whether sets of values are expired.
404
405       op defaults <name>=<value>...
406              Set default values for operations.
407              NOTE: Defaults do not apply to resources / stonith devices which
408              override them with their own defined values.
409
410       op defaults set create [<set options>] [meta [<name>=<value>]...] [rule
411       [<expression>]]
412              Create a new set of default values for resource / stonith device
413              operations. You  may  specify  a  rule  describing  resources  /
414              stonith devices and / or operations to which the set applies.
415
416              Set options are: id, score
417
418              Expression looks like one of the following:
419                op <operation name> [interval=<interval>]
420                resource [<standard>]:[<provider>]:[<type>]
421                defined|not_defined <node attribute>
422                <node   attribute>   lt|gt|lte|gte|eq|ne  [string|integer|num‐
423              ber|version] <value>
424                date gt|lt <date>
425                date in_range [<date>] to <date>
426                date in_range <date> to duration <duration options>
427                date-spec <date-spec options>
428                <expression> and|or <expression>
429                (<expression>)
430
431              You may specify all or any of 'standard', 'provider' and  'type'
432              in  a resource expression. For example: 'resource ocf::' matches
433              all  resources  of  'ocf'  standard,  while  'resource  ::Dummy'
434              matches  all resources of 'Dummy' type regardless of their stan‐
435              dard and provider.
436
437              Dates are expected to conform to ISO 8601 format.
438
439              Duration options are:  hours,  monthdays,  weekdays,  yearsdays,
440              months,  weeks,  years, weekyears, moon. Value for these options
441              is an integer.
442
443              Date-spec options are: hours,  monthdays,  weekdays,  yearsdays,
444              months,  weeks,  years, weekyears, moon. Value for these options
445              is an integer or a range written as integer-integer.
446
447              NOTE: Defaults do not apply to resources / stonith devices which
448              override them with their own defined values.
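
              For example (the id and values are only  illustrative),  apply
              a default operation timeout only to monitor operations of  re‐
              sources of the 'ocf' standard:

                  pcs resource op defaults set create id=opset1 meta timeout=60s
              rule resource ocf:: and op monitor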
449
450       op defaults set delete [<set id>]...
451              Delete specified options sets.
452
453       op defaults set remove [<set id>]...
454              Delete specified options sets.
455
456       op defaults set update <set id> [meta [<name>=<value>]...]
457              Add,  remove or change values in specified set of default values
458              for resource / stonith device  operations.  Unspecified  options
459              will  be kept unchanged. If you wish to remove an option, set it
460              to empty value, i.e. 'option_name='.
461
462              NOTE: Defaults do not apply to resources / stonith devices which
463              override them with their own defined values.
464
465       op defaults update <name>=<value>...
466              Add,  remove  or change default values for operations. This is a
467              simplified command useful for cases when you only manage one set
468              of  default  values. Unspecified options will be kept unchanged.
469              If you wish to remove an option, set it  to  empty  value,  i.e.
470              'option_name='.
471
472              NOTE: Defaults do not apply to resources / stonith devices which
473              override them with their own defined values.
474
475       meta <resource id | group id | clone id> <meta options> [--wait[=n]]
476              Add specified options to the specified resource, group or clone.
477              Meta  options should be in the format of name=value, options may
478              be removed by setting an option without a value.  If  --wait  is
479              specified,  pcs  will  wait up to 'n' seconds for the changes to
480              take effect and then return 0 if the changes have been processed
481              or  1  otherwise. If 'n' is not specified it defaults to 60 min‐
482              utes.
483              Example: pcs resource meta TestResource  failure-timeout=50  re‐
484              source-stickiness=
485
486       group list
487              Show  all  currently  configured  resource  groups and their re‐
488              sources.
489
490       group add <group id> <resource id>  [resource  id]  ...  [resource  id]
491       [--before <resource id> | --after <resource id>] [--wait[=n]]
492              Add  the  specified resource to the group, creating the group if
493              it does not exist. If the resource is present in  another  group
494              it  is  moved to the new group. If the group remains empty after
495              move, it is deleted (for cloned groups, the clone is deleted  as
496              well). The delete operation may fail in case the group is refer‐
497              enced within the configuration, e.g.  by  constraints.  In  that
498              case, use 'pcs resource ungroup' command prior to moving all re‐
499              sources out of the group.
500
501              You can use --before or --after to specify the position  of  the
502              added resources relative to some resource  already  existing  in
503              the group. By adding resources to a group they  are  already  in
504              and specifying --after or --before you can move the resources in
505              the group.
506
507              If --wait is specified, pcs will wait up to 'n' seconds for  the
508              operation  to finish (including moving resources if appropriate)
509              and then return 0 on success or 1 on error. If 'n' is not speci‐
510              fied it defaults to 60 minutes.
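
              For example (names are placeholders), add two existing resources
              to a group, placing them after a current member:

                  pcs resource group add grp1 Dummy1 Dummy2 --after VirtualIP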
511
512       group delete <group id> [resource id]... [--wait[=n]]
513              Remove  the group (note: this does not remove any resources from
514              the cluster) or if resources are specified, remove the specified
515              resources from the group.  If --wait is specified, pcs will wait
516              up to 'n' seconds for the operation to finish (including  moving
517              resources  if  appropriate)  and then return 0 on success or 1 on
518              error.  If 'n' is not specified it defaults to 60 minutes.
519
520       group remove <group id> [resource id]... [--wait[=n]]
521              Remove the group (note: this does not remove any resources  from
522              the cluster) or if resources are specified, remove the specified
523              resources from the group.  If --wait is specified, pcs will wait
524              up  to 'n' seconds for the operation to finish (including moving
525              resources if appropriate) and then return 0 on success or  1  on
526              error.  If 'n' is not specified it defaults to 60 minutes.
527
528       ungroup <group id> [resource id]... [--wait[=n]]
529              Remove  the group (note: this does not remove any resources from
530              the cluster) or if resources are specified, remove the specified
531              resources from the group.  If --wait is specified, pcs will wait
532              up to 'n' seconds for the operation to finish (including  moving
533              resources  if  appropriate)  and then return 0 on success or 1 on
534              error.  If 'n' is not specified it defaults to 60 minutes.
535
536       clone  <resource  id  |  group  id>  [<clone  id>]  [clone  options]...
537       [--wait[=n]]
538              Set  up the specified resource or group as a clone. If --wait is
539              specified, pcs will wait up to 'n' seconds for the operation  to
540              finish  (including  starting clone instances if appropriate) and
541              then return 0 on success or 1 on error. If 'n' is not  specified
542              it defaults to 60 minutes.
543
544       promotable  <resource  id  |  group id> [<clone id>] [clone options]...
545       [--wait[=n]]
546              Set up the specified resource or group as  a  promotable  clone.
547              This  is  an  alias  for  'pcs resource clone <resource id> pro‐
548              motable=true'.
549
550       unclone <clone id | resource id | group id> [--wait[=n]]
551              Remove the specified clone or the clone which contains the spec‐
552              ified  group  or resource (the resource or group will not be re‐
553              moved). If --wait is specified, pcs will wait up to 'n'  seconds
554              for  the operation to finish (including stopping clone instances
555              if appropriate) and then return 0 on success or 1 on  error.  If
556              'n' is not specified it defaults to 60 minutes.
557
558       bundle  create  <bundle  id> container <container type> [<container op‐
559       tions>] [network <network options>] [port-map <port options>]... [stor‐
560       age-map   <storage  options>]...  [meta  <meta  options>]  [--disabled]
561       [--wait[=n]]
562              Create a new bundle encapsulating no resources. The  bundle  can
563              be  used either as it is or a resource may be put into it at any
564              time. If --disabled is specified, the bundle is not started  au‐
565              tomatically.  If  --wait  is  specified, pcs will wait up to 'n'
566              seconds for the bundle to start and then return 0 on success  or
567              1 on error. If 'n' is not specified it defaults to 60 minutes.
568
569       bundle reset <bundle id> [container <container options>] [network <net‐
570       work options>] [port-map <port options>]... [storage-map  <storage  op‐
571       tions>]... [meta <meta options>] [--disabled] [--wait[=n]]
572              Configure specified bundle with given options. Unlike bundle up‐
573              date, this command resets the bundle according to given options -
574              no  previous  options  are kept. Resources inside the bundle are
575              kept as they are. If --disabled is specified, the bundle is  not
576              started  automatically. If --wait is specified, pcs will wait up
577              to 'n' seconds for the bundle to start and then return 0 on suc‐
578              cess  or  1  on error. If 'n' is not specified it defaults to 60
579              minutes.
580
581       bundle update <bundle  id>  [container  <container  options>]  [network
582       <network  options>]  [port-map  (add <port options>) | (delete | remove
583       <id>...)]... [storage-map (add <storage options>) |  (delete  |  remove
584       <id>...)]... [meta <meta options>] [--wait[=n]]
585              Add,  remove  or change options of specified bundle. Unspecified
586              options will be kept unchanged. If you wish to remove an option,
587              set it to empty value, i.e. 'option_name='.
588
589              If you wish to update a resource encapsulated in the bundle, use
590              the 'pcs resource update' command instead and  specify  the  re‐
591              source id.
592
593              If  --wait is specified, pcs will wait up to 'n' seconds for the
594              operation to finish (including moving resources if  appropriate)
595              and then return 0 on success or 1 on error. If 'n' is not speci‐
596              fied it defaults to 60 minutes.
597
598       manage <resource id | tag id>... [--monitor]
599              Set resources listed to managed mode (default). If --monitor  is
600              specified, enable all monitor operations of the resources.
601
602       unmanage <resource id | tag id>... [--monitor]
603              Set  resources  listed  to unmanaged mode. When a resource is in
604              unmanaged mode, the cluster is not allowed to start or stop  the
605              resource.  If --monitor is specified, disable all monitor opera‐
606              tions of the resources.
607
608       defaults [config] [--all] [--full] [--no-expire-check]
609              List currently configured default values for resources / stonith
610              devices.  If  --all is specified, also list expired sets of val‐
611              ues. If --full is specified, also list ids. If --no-expire-check
612              is  specified,  do  not  evaluate whether sets of values are ex‐
613              pired.
614
615       defaults <name>=<value>...
616              Set default values for resources / stonith devices.
617              NOTE: Defaults do not apply to resources / stonith devices which
618              override them with their own defined values.
619
620       defaults  set  create  [<set options>] [meta [<name>=<value>]...] [rule
621       [<expression>]]
622              Create a new set of default values for resources /  stonith  de‐
623              vices. You may specify a rule describing resources / stonith de‐
624              vices to which the set applies.
625
626              Set options are: id, score
627
628              Expression looks like one of the following:
629                resource [<standard>]:[<provider>]:[<type>]
630                date gt|lt <date>
631                date in_range [<date>] to <date>
632                date in_range <date> to duration <duration options>
633                date-spec <date-spec options>
634                <expression> and|or <expression>
635                (<expression>)
636
637              You may specify all or any of 'standard', 'provider' and  'type'
638              in  a resource expression. For example: 'resource ocf::' matches
639              all  resources  of  'ocf'  standard,  while  'resource  ::Dummy'
640              matches  all resources of 'Dummy' type regardless of their stan‐
641              dard and provider.
642
643              Dates are expected to conform to ISO 8601 format.
644
645              Duration options are:  hours,  monthdays,  weekdays,  yearsdays,
646              months,  weeks,  years, weekyears, moon. Value for these options
647              is an integer.
648
649              Date-spec options are: hours,  monthdays,  weekdays,  yearsdays,
650              months,  weeks,  years, weekyears, moon. Value for these options
651              is an integer or a range written as integer-integer.
652
653              NOTE: Defaults do not apply to resources / stonith devices which
654              override them with their own defined values.
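
              For example (the stickiness value is only  illustrative),  set
              a default resource-stickiness for all ocf:heartbeat resources:

                  pcs resource defaults set create meta resource-stickiness=100
              rule resource ocf:heartbeat: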
655
656       defaults set delete [<set id>]...
657              Delete specified options sets.
658
659       defaults set remove [<set id>]...
660              Delete specified options sets.
661
662       defaults set update <set id> [meta [<name>=<value>]...]
663              Add,  remove or change values in specified set of default values
664              for resources / stonith devices.  Unspecified  options  will  be
665              kept unchanged. If you wish to remove an option, set it to empty
666              value, i.e. 'option_name='.
667
668              NOTE: Defaults do not apply to resources / stonith devices which
669              override them with their own defined values.
670
671       defaults update <name>=<value>...
672              Add, remove or change default values for resources / stonith de‐
673              vices. This is a simplified command useful for  cases  when  you
674              only  manage one set of default values. Unspecified options will
675              be kept unchanged. If you wish to remove an option,  set  it  to
676              empty value, i.e. 'option_name='.
677
678              NOTE: Defaults do not apply to resources / stonith devices which
679              override them with their own defined values.
680
681       cleanup [<resource id | stonith id>]  [node=<node>]  [operation=<opera‐
682       tion> [interval=<interval>]] [--strict]
683              Make  the  cluster  forget failed operations from history of the
684              resource / stonith device and re-detect its current state.  This
685              can  be  useful  to  purge  knowledge of past failures that have
686              since been resolved.
687
688              If the named resource is part of a group, or  one  numbered  in‐
689              stance  of  a clone or bundled resource, the clean-up applies to
690              the whole collective resource unless --strict is given.
691
692              If a resource id / stonith id is  not  specified  then  all  re‐
693              sources / stonith devices will be cleaned up.
694
695              If  a  node is not specified then resources / stonith devices on
696              all nodes will be cleaned up.
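
              For example (names are placeholders),  forget  failed  monitor
              operations of one resource on one node:

                  pcs resource cleanup Dummy1 node=node1 operation=monitor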
697
698       refresh [<resource id | stonith id>] [node=<node>] [--strict]
699              Make the cluster forget the complete operation history  (includ‐
700              ing failures) of the resource / stonith device and re-detect its
701              current state. If you are interested in forgetting failed opera‐
702              tions only, use the 'pcs resource cleanup' command.
703
704              If  the  named  resource is part of a group, or one numbered in‐
705              stance of a clone or bundled resource, the  refresh  applies  to
706              the whole collective resource unless --strict is given.
707
708              If  a  resource  id  /  stonith id is not specified then all re‐
709              sources / stonith devices will be refreshed.
710
711              If a node is not specified then resources / stonith  devices  on
712              all nodes will be refreshed.
713
714       failcount  [show  [<resource  id  |  stonith id>] [node=<node>] [opera‐
715       tion=<operation> [interval=<interval>]]] [--full]
716              Show current failcount for resources and  stonith  devices,  op‐
717              tionally  filtered  by a resource / stonith device, node, opera‐
718              tion and its interval. If --full is specified do not  sum  fail‐
719              counts per resource / stonith device and node. Use 'pcs resource
720              cleanup' or 'pcs resource refresh' to reset failcounts.
721
722       relocate dry-run [resource1] [resource2] ...
723              The same as 'relocate run' but has no effect on the cluster.
724
725       relocate run [resource1] [resource2] ...
726              Relocate specified resources to their preferred  nodes.   If  no
727              resources  are  specified, relocate all resources.  This command
728              calculates the preferred node for each resource  while  ignoring
729              resource stickiness.  Then it creates location constraints which
730              will cause the resources to move to their preferred nodes.  Once
731              the  resources have been moved the constraints are deleted auto‐
732              matically.  Note that the preferred node is calculated based  on
733              current  cluster  status, constraints, location of resources and
734              other settings and thus it might change over time.
735
736       relocate show
737              Display current status of resources and their optimal  node  ig‐
738              noring resource stickiness.
739
740       relocate clear
741              Remove all constraints created by the 'relocate run' command.
742
743       utilization [<resource id> [<name>=<value> ...]]
744              Add  specified utilization options to specified resource. If re‐
745              source is not specified, shows utilization of all resources.  If
746              utilization  options  are  not  specified,  shows utilization of
747              specified resource.  Utilization  option  should  be  in  format
748              name=value,  value  has to be integer. Options may be removed by
749              setting an option without a value. Example:  pcs  resource  uti‐
750              lization TestResource cpu= ram=20
751
752       relations <resource id> [--full]
753              Display  relations  of a resource specified by its id with other
754              resources in a tree structure. Supported types of resource rela‐
755              tions are: ordering constraints, ordering set constraints, rela‐
756              tions defined by resource hierarchy (clones,  groups,  bundles).
757              If --full is used, more verbose output will be printed.
758
759   cluster
760       setup  <cluster name> (<node name> [addr=<node address>]...)... [trans‐
761       port knet|udp|udpu [<transport options>] [link <link options>]... [com‐
762       pression  <compression  options>]  [crypto  <crypto  options>]]  [totem
763       <totem options>] [quorum <quorum options>] [--no-cluster-uuid]  ([--en‐
764       able]  [--start  [--wait[=<n>]]]  [--no-keys-sync])  | [--corosync_conf
765       <path>]
766              Create a cluster from the listed nodes and  synchronize  cluster
767              configuration files to them. If --corosync_conf is specified, do
768              not connect to other nodes and save corosync.conf to the  speci‐
769              fied path; see 'Local only mode' below for details.
770
771              Nodes  are  specified  by  their  names and optionally their ad‐
772              dresses. If no addresses are specified for a node, pcs will con‐
773              figure  corosync  to communicate with that node using an address
774              provided in 'pcs host auth' command. Otherwise, pcs will config‐
775              ure  corosync  to  communicate with the node using the specified
776              addresses.
777
778              Transport knet:
779              This is the default transport. It allows configuring traffic en‐
780              cryption  and  compression  as  well as using multiple addresses
781              (links) for nodes.
782              Transport   options   are:   ip_version,    knet_pmtud_interval,
783              link_mode
784              Link options are: link_priority, linknumber, mcastport, ping_in‐
785              terval, ping_precision, ping_timeout, pong_count, transport (udp
786              or sctp)
787              Each 'link' followed by options sets options for one link in the
788              order the links are defined by nodes'  addresses.  You  can  set
789              link options for a subset of links using a linknumber. See exam‐
790              ples below.
791              Compression options are: level, model, threshold
792              Crypto options are: cipher, hash, model
793              By  default,  encryption  is  enabled  with  cipher=aes256   and
794              hash=sha256.   To   disable   encryption,  set  cipher=none  and
795              hash=none.
796
797              Transports udp and udpu:
798              These transports are limited to one address per  node.  They  do
799              not support traffic encryption or compression.
800              Transport options are: ip_version, netmtu
801              Link  options are: bindnetaddr, broadcast, mcastaddr, mcastport,
802              ttl
803
804              Totem and quorum can be configured regardless of used transport.
805              Totem options  are:  block_unlisted_ips,  consensus,  downcheck,
806              fail_recv_const,    heartbeat_failures_allowed,    hold,   join,
807              max_messages,   max_network_delay,   merge,    miss_count_const,
808              send_join,  seqno_unchanged_const, token, token_coefficient, to‐
809              ken_retransmit, token_retransmits_before_loss_const, window_size
810              Quorum   options   are:   auto_tie_breaker,   last_man_standing,
811              last_man_standing_window, wait_for_all
812
813              Transports  and  their  options,  link,  compression, crypto and
814              totem options are all documented in corosync.conf(5)  man  page;
815              knet  link  options  are prefixed 'knet_' there, compression op‐
816              tions are prefixed 'knet_compression_' and  crypto  options  are
817              prefixed  'crypto_'.  Quorum  options are documented in votequo‐
818              rum(5) man page.
819
820              --no-cluster-uuid will not generate a unique ID for the cluster.
821              --enable will configure the cluster to start when  nodes  boot.
822              --start will start the cluster right after creating  it.  --wait
823              will   wait  up  to  'n'  seconds  for  the  cluster  to  start.
824              --no-keys-sync will skip creating and distributing pcsd SSL cer‐
825              tificate  and  key and corosync and pacemaker authkey files. Use
826              this if you provide your own certificates and keys.
827
828              Local only mode:
829              By default, pcs connects to all specified nodes to  verify  they
830              can be used in the new cluster and to send cluster configuration
831              files  to  them.  If  this  is  not  what  you   want,   specify
832              --corosync_conf  option  followed  by a file path. Pcs will save
833              corosync.conf to the specified file  and  will  not  connect  to
834              cluster nodes. These are the tasks that pcs skips in that case:
835              *  make  sure  the  nodes are not running or configured to run a
836              cluster already
837              * make sure cluster packages are  installed  on  all  nodes  and
838              their versions are compatible
839              * make sure there are no cluster configuration files on any node
840              (run 'pcs cluster destroy' and remove pcs_settings.conf file  on
841              all nodes)
842              * synchronize corosync and pacemaker authkeys, /etc/corosync/au‐
843              thkey   and   /etc/pacemaker/authkey   respectively,   and   the
844              corosync.conf file
845              *  authenticate the cluster nodes against each other ('pcs clus‐
846              ter auth' or 'pcs host auth' command)
847              * synchronize pcsd certificates (so that pcs web UI can be  used
848              in an HA mode)
849
850              Examples:
851              Create a cluster with default settings:
852                  pcs cluster setup newcluster node1 node2
853              Create a cluster using two links:
854                  pcs    cluster   setup   newcluster   node1   addr=10.0.1.11
855              addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12
856              Set link options for all links. Link options are matched to  the
857              links  in order. The first link (link 0) has sctp transport, the
858              second link (link 1) has mcastport 55405:
859                  pcs   cluster   setup   newcluster   node1    addr=10.0.1.11
860              addr=10.0.2.11  node2  addr=10.0.1.12  addr=10.0.2.12  transport
861              knet link transport=sctp link mcastport=55405
862              Set link options for the second and fourth links only. Link  op‐
863              tions  are  matched  to the links based on the linknumber option
864              (the first link is link 0):
865                  pcs   cluster   setup   newcluster   node1    addr=10.0.1.11
866              addr=10.0.2.11      addr=10.0.3.11      addr=10.0.4.11     node2
867              addr=10.0.1.12  addr=10.0.2.12   addr=10.0.3.12   addr=10.0.4.12
868              transport  knet  link linknumber=3 mcastport=55405 link linknum‐
869              ber=1 transport=sctp
870              Create a cluster using udp transport with a non-default port:
871                  pcs cluster setup newcluster node1 node2 transport udp  link
872              mcastport=55405
873
874       config [show] [--output-format cmd|json|text] [--corosync_conf <path>]
875              Show cluster configuration. There are 3 formats of output avail‐
876              able: 'cmd', 'json' and 'text', default is 'text'. Format 'text'
877              is  a  human  friendly  output. Format 'cmd' prints pcs commands
878              which can be used to recreate  the  same  configuration.  Format
879              'json'  is  a  machine  oriented output of the configuration. If
880              --corosync_conf is specified, configuration  file  specified  by
881              <path> is used instead of the current cluster configuration.
882
883       config update [transport <transport options>] [compression <compression
884       options>]   [crypto   <crypto   options>]   [totem   <totem   options>]
885       [--corosync_conf <path>]
886              Update  cluster  configuration. Unspecified options will be kept
887              unchanged. If you wish to remove an  option,  set  it  to  empty
888              value, i.e. 'option_name='.
889
890              If --corosync_conf is specified, update cluster configuration in
891              a file specified by <path>.
892
893              All options are documented in corosync.conf(5) man  page.  There
894              are different transport options for transport types. Compression
895              and crypto options are only available for knet transport.  Totem
896              options can be set regardless of the transport type.
897              Transport  options  for knet transport are: ip_version, knet_pm‐
898              tud_interval, link_mode
899              Transport options for udp and udpu transports  are:  ip_version,
900              netmtu
901              Compression options are: level, model, threshold
902              Crypto options are: cipher, hash, model
903              Totem  options  are:  block_unlisted_ips,  consensus, downcheck,
904              fail_recv_const,   heartbeat_failures_allowed,    hold,    join,
905              max_messages,    max_network_delay,   merge,   miss_count_const,
906              send_join, seqno_unchanged_const, token, token_coefficient,  to‐
907              ken_retransmit, token_retransmits_before_loss_const, window_size
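
              For example (the values shown are only  illustrative),  adjust
              the totem token timeout and the crypto settings:

                  pcs cluster config update totem token=10000 crypto
              cipher=aes256 hash=sha256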
908
909       config uuid generate [--corosync_conf <path>] [--force]
910              Generate  a  new  cluster  UUID and distribute it to all cluster
911              nodes. Cluster UUID is not used by the cluster stack in any way;
912              it  is  provided to easily distinguish between multiple clusters
913              in a multi-cluster environment since the cluster name  does  not
914              have to be unique.
915
916              If --corosync_conf is specified, update cluster configuration in
917              file specified by <path>.
918
919              If --force is specified, existing UUID will be overwritten.
920
921       authkey corosync [<path>]
922              Generate a new corosync authkey and distribute it to all cluster
923              nodes. If <path> is specified, do not generate a key and use key
924              from the file.
925
926       start [--all | <node>... ] [--wait[=<n>]] [--request-timeout=<seconds>]
927              Start a cluster on specified node(s). If no nodes are  specified
928              then  start  a  cluster on the local node. If --all is specified
929              then start a cluster on all nodes. If the cluster has many nodes
930              then  the  start  request  may time out. In that case you should
931              consider setting  --request-timeout  to  a  suitable  value.  If
932              --wait is specified, pcs waits up to 'n' seconds for the cluster
933              to get ready to provide services after the cluster has  success‐
934              fully started.
935
936       stop [--all | <node>... ] [--request-timeout=<seconds>]
937              Stop  a  cluster on specified node(s). If no nodes are specified
938              then stop a cluster on the local node.  If  --all  is  specified
939              then  stop a cluster on all nodes. If the cluster is running re‐
940              sources which take long time to stop then the stop  request  may
941              time  out  before  the  cluster actually stops. In that case you
942              should consider setting --request-timeout to a suitable value.
943
944       kill   Force corosync and pacemaker daemons to stop on the  local  node
945              (performs kill -9). Note that the init  system  (e.g.  systemd)
946              can detect that the cluster is not running and start it  again.
947              To stop the cluster on a node, run pcs cluster stop on that node.
948
949       enable [--all | <node>... ]
950              Configure  cluster  to run on node boot on specified node(s). If
951              node is not specified then cluster is enabled on the local node.
952              If --all is specified then cluster is enabled on all nodes.
953
954       disable [--all | <node>... ]
955              Configure  cluster to not run on node boot on specified node(s).
956              If node is not specified then cluster is disabled on  the  local
957              node.  If  --all  is  specified  then cluster is disabled on all
958              nodes.
959
960       auth [-u <username>] [-p <password>]
961              Authenticate pcs/pcsd to pcsd on nodes configured in  the  local
962              cluster.
963
964       status View current cluster status (an alias of 'pcs status cluster').
965
966       sync   Sync  cluster  configuration  (files  which are supported by all
967              subcommands of this command) to all cluster nodes.
968
969       sync corosync
970              Sync corosync configuration to  all  nodes  found  from  current
971              corosync.conf file.
972
973       cib [filename] [scope=<scope> | --config]
974              Get  the  raw  xml from the CIB (Cluster Information Base). If a
975              filename is provided, we save the CIB to  that  file,  otherwise
976              the  CIB  is printed. Specify scope to get a specific section of
977              the CIB. Valid values of the scope are: acls, alerts, configura‐
978              tion,  constraints,  crm_config, fencing-topology, nodes, op_de‐
979              faults, resources, rsc_defaults, tags. --config is the  same  as
980              scope=configuration.  Do not specify a scope if you want to edit
981              the saved CIB using pcs (pcs -f <command>).
982
983       cib-push <filename> [--wait[=<n>]] [diff-against=<filename_original>  |
984       scope=<scope> | --config]
985              Push the raw xml from <filename> to the CIB (Cluster Information
986              Base). You can obtain the CIB by running the 'pcs  cluster  cib'
987              command, which is the recommended first step when you want  to
988              perform the desired modifications (pcs -f <command>) for a one-
989              off push.
990              If  diff-against  is  specified,  pcs diffs contents of filename
991              against contents of filename_original and pushes the  result  to
992              the CIB.
993              Specify  scope to push a specific section of the CIB. Valid val‐
994              ues of the scope are: acls, alerts, configuration,  constraints,
995              crm_config,  fencing-topology,  nodes,  op_defaults,  resources,
996              rsc_defaults, tags. --config is the same as scope=configuration.
997              Use  of  --config  is recommended. Do not specify a scope if you
998              need to push the whole CIB or be warned in the case of  outdated
999              CIB.
1000              If  --wait is specified wait up to 'n' seconds for changes to be
1001              applied.
1002              WARNING: the selected scope of the CIB will  be  overwritten  by
1003              the current content of the specified file.
1004
1005              Example:
1006                  pcs cluster cib > original.xml
1007                  cp original.xml new.xml
1008                  pcs -f new.xml constraint location apache prefers node2
1009                  pcs cluster cib-push new.xml diff-against=original.xml
1010
1011       cib-upgrade
1012              Upgrade the CIB to conform to the latest version of the document
1013              schema.
1014
1015       edit [scope=<scope> | --config]
1016              Edit the cib in the editor specified by the $EDITOR  environment
1017              variable  and push out any changes upon saving. Specify scope to
1018              edit a specific section of the CIB. Valid values  of  the  scope
1019              are: acls, alerts, configuration, constraints, crm_config, fenc‐
1020              ing-topology, nodes, op_defaults, resources, rsc_defaults, tags.
1021              --config  is the same as scope=configuration. Use of --config is
1022              recommended. Do not specify a scope if  you  need  to  edit  the
1023              whole CIB or be warned in the case of outdated CIB.
1024
1025       node  add  <node  name>  [addr=<node  address>]...  [watchdog=<watchdog
1026       path>] [device=<SBD device path>]...  [--start  [--wait[=<n>]]]  [--en‐
1027       able] [--no-watchdog-validation]
1028              Add the node to the cluster and synchronize all relevant config‐
1029              uration files to the new node. This command can only be  run  on
1030              an existing cluster node.
1031
1032              The  new  node  is  specified by its name and optionally its ad‐
1033              dresses. If no addresses are specified for the  node,  pcs  will
1034              configure corosync to communicate with the node using an address
1035              provided in 'pcs host auth' command. Otherwise, pcs will config‐
1036              ure  corosync  to  communicate with the node using the specified
1037              addresses.
1038
1039              Use 'watchdog' to specify a path to a watchdog on the new  node,
1040              when  SBD  is  enabled in the cluster. If SBD is configured with
1041              shared storage, use 'device' to specify path to shared device(s)
1042              on the new node.
1043
1044              If  --start  is specified also start cluster on the new node, if
1045              --wait is specified wait up to 'n' seconds for the new  node  to
1046              start.  If  --enable  is specified configure cluster to start on
1047              the new node on boot. If --no-watchdog-validation is  specified,
1048              validation of watchdog will be skipped.
1049
1050              WARNING: By default, it is tested whether the specified watchdog
1051              is supported. This may cause a restart  of  the  system  when  a
1052              watchdog   with   no-way-out-feature  enabled  is  present.  Use
1053              --no-watchdog-validation to skip watchdog validation.
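
              Example (illustrative; the node name and address are
              placeholders): add node 'node3' at address 192.168.122.13,
              start the cluster on it and enable it on boot:
                  pcs cluster node add node3 addr=192.168.122.13 --start --enable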
1054
1055       node delete <node name> [<node name>]...
1056              Shutdown specified nodes and remove them from the cluster.
1057
1058       node remove <node name> [<node name>]...
1059              Shutdown specified nodes and remove them from the cluster.
1060
1061       node add-remote <node name> [<node address>] [options]  [op  <operation
1062       action>   <operation   options>   [<operation  action>  <operation  op‐
1063       tions>]...] [meta <meta options>...] [--wait[=<n>]]
1064              Add the node to the cluster as a remote node. Sync all  relevant
1065              configuration  files to the new node. Start the node and config‐
1066              ure it to start the cluster on boot. Options are port and recon‐
1067              nect_interval.  Operations and meta belong to an underlying con‐
1068              nection resource (ocf:pacemaker:remote). If node address is  not
1069              specified for the node, pcs will configure pacemaker to communi‐
1070              cate with the node using an address provided in 'pcs host  auth'
1071              command.  Otherwise, pcs will configure pacemaker to communicate
1072              with the node using the specified addresses. If --wait is speci‐
1073              fied, wait up to 'n' seconds for the node to start.
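
              Example (illustrative; the node name and address are
              placeholders):
                  pcs cluster node add-remote remote1 192.168.122.21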
1074
1075       node delete-remote <node identifier>
1076              Shutdown  specified  remote node and remove it from the cluster.
1077              The node-identifier can be the name of the node or  the  address
1078              of the node.
1079
1080       node remove-remote <node identifier>
1081              Shutdown  specified  remote node and remove it from the cluster.
1082              The node-identifier can be the name of the node or  the  address
1083              of the node.
1084
1085       node add-guest <node name> <resource id> [options] [--wait[=<n>]]
1086              Make the specified resource a guest node resource. Sync all rel‐
1087              evant configuration files to the new node. Start  the  node  and
1088              configure  it  to  start  the  cluster  on boot. Options are re‐
1089              mote-addr,  remote-port  and  remote-connect-timeout.   If   re‐
1090              mote-addr  is  not  specified  for  the node, pcs will configure
1091              pacemaker to communicate with the node using an address provided
1092              in  'pcs host auth' command. Otherwise, pcs will configure pace‐
1093              maker to communicate with  the  node  using  the  specified  ad‐
1094              dresses.  If --wait is specified, wait up to 'n' seconds for the
1095              node to start.
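
              Example (illustrative; 'guest1' and the 'vm-guest1' resource id
              are placeholders for an existing virtual machine resource):
                  pcs cluster node add-guest guest1 vm-guest1 remote-addr=192.168.122.31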
1096
1097       node delete-guest <node identifier>
1098              Shutdown specified guest node and remove it  from  the  cluster.
1099              The  node-identifier  can be the name of the node or the address
1100              of the node or id of the resource that  is  used  as  the  guest
1101              node.
1102
1103       node remove-guest <node identifier>
1104              Shutdown  specified  guest  node and remove it from the cluster.
1105              The node-identifier can be the name of the node or  the  address
1106              of  the  node  or  id  of the resource that is used as the guest
1107              node.
1108
1109       node clear <node name>
1110              Remove specified node from various cluster caches. Use this if a
1111              removed  node  is still considered by the cluster to be a member
1112              of the cluster.
1113
1114       link add <node_name>=<node_address>... [options <link options>]
1115              Add a corosync link. One address  must  be  specified  for  each
1116              cluster  node.  If  no linknumber is specified, pcs will use the
1117              lowest available linknumber.
1118              Link options (documented  in  corosync.conf(5)  man  page)  are:
1119              link_priority, linknumber, mcastport, ping_interval, ping_preci‐
1120              sion, ping_timeout, pong_count, transport (udp or sctp)
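
              Example (illustrative; node names and addresses are
              placeholders): add a second link using linknumber 1:
                  pcs cluster link add node1=10.0.1.1 node2=10.0.1.2 options linknumber=1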
1121
1122       link delete <linknumber> [<linknumber>]...
1123              Remove specified corosync links.
1124
1125       link remove <linknumber> [<linknumber>]...
1126              Remove specified corosync links.
1127
1128       link update <linknumber> [<node_name>=<node_address>...] [options <link
1129       options>]
1130              Add, remove or change node addresses / link options of an exist‐
1131              ing corosync link. Adding or removing links is the preferred way;
1132              use this command only if that is not possible. Unspecified options
1133              will be kept unchanged. If you wish to remove an option, set it to
1134              empty value, i.e. 'option_name='.
1135              Link options (documented in corosync.conf(5) man page) are:
1136              for  knet  transport:  link_priority,  mcastport, ping_interval,
1137              ping_precision,  ping_timeout,  pong_count,  transport  (udp  or
1138              sctp)
1139              for  udp and udpu transports: bindnetaddr, broadcast, mcastaddr,
1140              mcastport, ttl
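
              Example (illustrative; the address and values are placeholders):
              change the address of node2 on link 1 and set its ping_timeout:
                  pcs cluster link update 1 node2=10.0.1.12 options ping_timeout=1500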
1141
1142       uidgid List the current configured uids and gids of  users  allowed  to
1143              connect to corosync.
1144
1145       uidgid add [uid=<uid>] [gid=<gid>]
1146              Add the specified uid and/or gid to the list of users/groups al‐
1147              lowed to connect to corosync.
1148
1149       uidgid delete [uid=<uid>] [gid=<gid>]
1150              Remove  the  specified  uid  and/or  gid  from   the   list   of
1151              users/groups allowed to connect to corosync.
1152
1153       uidgid remove [uid=<uid>] [gid=<gid>]
1154              Remove   the   specified   uid  and/or  gid  from  the  list  of
1155              users/groups allowed to connect to corosync.
1156
1157       corosync [node]
1158              Get the corosync.conf from the specified node or from  the  cur‐
1159              rent node if node not specified.
1160
1161       reload corosync
1162              Reload the corosync configuration on the current node.
1163
1164       destroy [--all] [--force]
1165              Permanently destroy the cluster on the current node, killing all
1166              cluster processes and removing all cluster configuration  files.
1167              Using  --all will attempt to destroy the cluster on all nodes in
1168              the local cluster.
1169
1170              WARNING: This command permanently removes any cluster configura‐
1171              tion  that has been created. It is recommended to run 'pcs clus‐
1172              ter stop' before destroying the cluster. To  prevent  accidental
1173              running of this command, --force or interactive user response is
1174              required in order to proceed.
1175
1176       verify [--full] [-f <filename>]
1177              Checks the pacemaker configuration (CIB) for syntax  and  common
1178              conceptual errors. If no filename is specified the check is per‐
1179              formed on the currently running cluster. If --full is used  more
1180              verbose output will be printed.
1181
1182       report [--from "YYYY-M-D H:M:S" [--to "YYYY-M-D H:M:S"]] <dest>
1183              Create  a  tarball  containing  everything needed when reporting
1184              cluster problems.  If --from and --to are not used,  the  report
1185              will include the past 24 hours.
1186
1187   stonith
1188       [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
1189              Show  status  of  all  currently  configured stonith devices. If
1190              --hide-inactive is specified, only show active stonith  devices.
1191              If  a  resource  or tag id is specified, only show status of the
1192              specified resource or resources in the specified tag. If node is
1193              specified,  only  show  status  of  resources configured for the
1194              specified node.
1195
1196       config [--output-format text|cmd|json] [<stonith id>]...
1197              Show options of all currently configured stonith devices  or  if
1198              stonith device ids are specified show the options for the speci‐
1199              fied stonith device ids. There are 3 formats  of  output  avail‐
1200              able: 'cmd', 'json' and 'text', default is 'text'. Format 'text'
1201              is a human friendly output. Format  'cmd'  prints  pcs  commands
1202              which  can  be  used  to recreate the same configuration. Format
1203              'json' is a machine oriented output of the configuration.
1204
1205       list [filter] [--nodesc]
1206              Show list of all available stonith agents (if filter is provided
1207              then  only stonith agents matching the filter will be shown). If
1208              --nodesc is used then descriptions of  stonith  agents  are  not
1209              printed.
1210
1211       describe <stonith agent> [--full]
1212              Show  options  for  specified stonith agent. If --full is speci‐
1213              fied, all options including advanced  and  deprecated  ones  are
1214              shown.
1215
1216       create  <stonith id> <stonith device type> [stonith device options] [op
1217       <operation action> <operation options> [<operation  action>  <operation
1218       options>]...]  [meta  <meta  options>...] [--group <group id> [--before
1219       <stonith id> | --after <stonith id>]] [--disabled] [--wait[=n]]
1220              Create stonith  device  with  specified  type  and  options.  If
1221              --group  is  specified  the stonith device is added to the group
1222              named. You can use --before or --after to specify  the  position
1223              of  the  added  stonith device relatively to some stonith device
1224              already existing in the group. If --disabled is specified  the
1225              stonith  device  is  not  used. If --wait is specified, pcs will
1226              wait up to 'n' seconds for the stonith device to start and  then
1227              return  0  if the stonith device is started, or 1 if the stonith
1228              device has not yet started. If 'n' is not specified it  defaults
1229              to 60 minutes.
1230
1231              Example: Create a device for nodes node1 and node2
1232              pcs stonith create MyFence fence_virt pcmk_host_list=node1,node2
1233              Example: Use port p1 for node n1 and ports p2 and p3 for node n2
1234              pcs        stonith        create        MyFence       fence_virt
1235              'pcmk_host_map=n1:p1;n2:p2,p3'
1236
1237       update <stonith id> [stonith options] [op [<operation  action>  <opera‐
1238       tion options>]...] [meta <meta operations>...] [--wait[=n]]
1239              Add,  remove  or change options of specified stonith device. Un‐
1240              specified options will be kept unchanged. If you wish to  remove
1241              an option, set it to empty value, i.e. 'option_name='.
1242
1243              If an operation (op) is specified it will update the first found
1244              operation with the same action on the specified stonith  device.
1245              If  no  operation  with  that action exists then a new operation
1246              will be created. (WARNING: all existing options on  the  updated
1247              operation will be reset if not specified.) If you want to create
1248              multiple monitor operations you should use the 'op  add'  &  'op
1249              remove' commands.
1250
1251              If  --wait is specified, pcs will wait up to 'n' seconds for the
1252              changes to take effect and then return 0  if  the  changes  have
1253              been  processed  or  1 otherwise. If 'n' is not specified it de‐
1254              faults to 60 minutes.
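
              Example (illustrative; assumes a stonith device 'MyFence' like
              the one created above):
                  pcs stonith update MyFence pcmk_host_list=node1,node2,node3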
1255
1256       update-scsi-devices <stonith id> (set <device-path> [<device-path>...])
1257       |  (add  <device-path>  [<device-path>...]  delete|remove <device-path>
1258       [<device-path>...] )
1259              Update scsi fencing devices without affecting  other  resources.
1260              You must specify either a list of devices to set, or at least
1261              one device to add or delete/remove. The stonith resource must
1262              be running on one cluster node. Each device will be unfenced on
1263              each cluster node running the cluster. Supported fence agents:
1264              fence_scsi, fence_mpath.
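
              Example (illustrative; the stonith id and device paths are
              placeholders):
                  pcs stonith update-scsi-devices scsi-fence set /dev/sda /dev/sdb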
1265
1266       delete <stonith id>
1267              Remove stonith id from configuration.
1268
1269       remove <stonith id>
1270              Remove stonith id from configuration.
1271
1272       op add <stonith id> <operation action> [operation properties]
1273              Add operation for specified stonith device.
1274
1275       op delete <stonith id> <operation action> [<operation properties>...]
1276              Remove specified operation (note: you must specify the exact op‐
1277              eration properties to properly remove an existing operation).
1278
1279       op delete <operation id>
1280              Remove the specified operation id.
1281
1282       op remove <stonith id> <operation action> [<operation properties>...]
1283              Remove specified operation (note: you must specify the exact op‐
1284              eration properties to properly remove an existing operation).
1285
1286       op remove <operation id>
1287              Remove the specified operation id.
1288
1289       op defaults [config] [--all] [--full] [--no-expire-check]
1290              This command is an alias of 'resource op defaults [config]' com‐
1291              mand.
1292
1293              List currently configured  default  values  for  operations.  If
1294              --all  is specified, also list expired sets of values. If --full
1295              is specified, also list ids. If --no-expire-check is  specified,
1296              do not evaluate whether sets of values are expired.
1297
1298       op defaults <name>=<value>...
1299              This command is an alias of 'resource op defaults' command.
1300
1301              Set default values for operations.
1302              NOTE: Defaults do not apply to resources / stonith devices which
1303              override them with their own defined values.
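
              Example (illustrative): set a default operation timeout:
                  pcs stonith op defaults timeout=240s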
1304
1305       op defaults set create [<set options>] [meta [<name>=<value>]...] [rule
1306       [<expression>]]
1307              This  command  is  an alias of 'resource op defaults set create'
1308              command.
1309
1310              Create a new set of default values for resource / stonith device
1311              operations.  You  may  specify  a  rule  describing  resources /
1312              stonith devices and / or operations to which the set applies.
1313
1314              Set options are: id, score
1315
1316              Expression looks like one of the following:
1317                op <operation name> [interval=<interval>]
1318                resource [<standard>]:[<provider>]:[<type>]
1319                defined|not_defined <node attribute>
1320                <node  attribute>   lt|gt|lte|gte|eq|ne   [string|integer|num‐
1321              ber|version] <value>
1322                date gt|lt <date>
1323                date in_range [<date>] to <date>
1324                date in_range <date> to duration <duration options>
1325                date-spec <date-spec options>
1326                <expression> and|or <expression>
1327                (<expression>)
1328
1329              You  may specify all or any of 'standard', 'provider' and 'type'
1330              in a resource expression. For example: 'resource ocf::'  matches
1331              all  resources  of  'ocf'  standard,  while  'resource  ::Dummy'
1332              matches all resources of 'Dummy' type regardless of their  stan‐
1333              dard and provider.
1334
1335              Dates are expected to conform to ISO 8601 format.
1336
1337              Duration  options  are:  hours,  monthdays, weekdays, yeardays,
1338              months, weeks, years, weekyears, moon. Value for  these  options
1339              is an integer.
1340
1341              Date-spec  options  are:  hours, monthdays, weekdays, yeardays,
1342              months, weeks, years, weekyears, moon. Value for  these  options
1343              is an integer or a range written as integer-integer.
1344
1345              NOTE: Defaults do not apply to resources / stonith devices which
1346              override them with their own defined values.
1347
1348       op defaults set delete [<set id>]...
1349              This command is an alias of 'resource op  defaults  set  delete'
1350              command.
1351
1352              Delete specified options sets.
1353
1354       op defaults set remove [<set id>]...
1355              This  command  is  an alias of 'resource op defaults set delete'
1356              command.
1357
1358              Delete specified options sets.
1359
1360       op defaults set update <set id> [meta [<name>=<value>]...]
1361              This command is an alias of 'resource op  defaults  set  update'
1362              command.
1363
1364              Add,  remove or change values in specified set of default values
1365              for resource / stonith device  operations.  Unspecified  options
1366              will  be kept unchanged. If you wish to remove an option, set it
1367              to empty value, i.e. 'option_name='.
1368
1369              NOTE: Defaults do not apply to resources / stonith devices which
1370              override them with their own defined values.
1371
1372       op defaults update <name>=<value>...
1373              This  command  is an alias of 'resource op defaults update' com‐
1374              mand.
1375
1376              Add, remove or change default values for operations. This  is  a
1377              simplified command useful for cases when you only manage one set
1378              of default values. Unspecified options will be  kept  unchanged.
1379              If  you  wish  to  remove an option, set it to empty value, i.e.
1380              'option_name='.
1381
1382              NOTE: Defaults do not apply to resources / stonith devices which
1383              override them with their own defined values.
1384
1385       meta <stonith id> <meta options> [--wait[=n]]
1386              Add  specified options to the specified stonith device. Meta op‐
1387              tions should be in the format of name=value, options may be  re‐
1388              moved  by setting an option without a value. If --wait is speci‐
1389              fied, pcs will wait up to 'n' seconds for the  changes  to  take
1390              effect and then return 0 if the changes have been processed or 1
1391              otherwise. If 'n' is not specified it defaults to 60 minutes.
1392
1393              Example: pcs stonith meta  test_stonith  failure-timeout=50  re‐
1394              source-stickiness=
1395
1396       defaults [config] [--all] [--full] [--no-expire-check]
1397              This  command  is  an alias of 'resource defaults [config]' com‐
1398              mand.
1399
1400              List currently configured default values for resources / stonith
1401              devices.  If  --all is specified, also list expired sets of val‐
1402              ues. If --full is specified, also list ids. If --no-expire-check
1403              is  specified,  do  not  evaluate whether sets of values are ex‐
1404              pired.
1405
1406       defaults <name>=<value>...
1407              This command is an alias of 'resource defaults' command.
1408
1409              Set default values for resources / stonith devices.
1410              NOTE: Defaults do not apply to resources / stonith devices which
1411              override them with their own defined values.
1412
1413       defaults  set  create  [<set options>] [meta [<name>=<value>]...] [rule
1414       [<expression>]]
1415              This command is an alias of 'resource defaults set create'  com‐
1416              mand.
1417
1418              Create  a  new set of default values for resources / stonith de‐
1419              vices. You may specify a rule describing resources / stonith de‐
1420              vices to which the set applies.
1421
1422              Set options are: id, score
1423
1424              Expression looks like one of the following:
1425                resource [<standard>]:[<provider>]:[<type>]
1426                date gt|lt <date>
1427                date in_range [<date>] to <date>
1428                date in_range <date> to duration <duration options>
1429                date-spec <date-spec options>
1430                <expression> and|or <expression>
1431                (<expression>)
1432
1433              You  may specify all or any of 'standard', 'provider' and 'type'
1434              in a resource expression. For example: 'resource ocf::'  matches
1435              all  resources  of  'ocf'  standard,  while  'resource  ::Dummy'
1436              matches all resources of 'Dummy' type regardless of their  stan‐
1437              dard and provider.
1438
1439              Dates are expected to conform to ISO 8601 format.
1440
1441              Duration  options  are:  hours,  monthdays, weekdays, yeardays,
1442              months, weeks, years, weekyears, moon. Value for  these  options
1443              is an integer.
1444
1445              Date-spec  options  are:  hours, monthdays, weekdays, yeardays,
1446              months, weeks, years, weekyears, moon. Value for  these  options
1447              is an integer or a range written as integer-integer.
1448
1449              NOTE: Defaults do not apply to resources / stonith devices which
1450              override them with their own defined values.
1451
1452       defaults set delete [<set id>]...
1453              This command is an alias of 'resource defaults set delete'  com‐
1454              mand.
1455
1456              Delete specified options sets.
1457
1458       defaults set remove [<set id>]...
1459              This  command is an alias of 'resource defaults set delete' com‐
1460              mand.
1461
1462              Delete specified options sets.
1463
1464       defaults set update <set id> [meta [<name>=<value>]...]
1465              This command is an alias of 'resource defaults set update'  com‐
1466              mand.
1467
1468              Add,  remove or change values in specified set of default values
1469              for resources / stonith devices.  Unspecified  options  will  be
1470              kept unchanged. If you wish to remove an option, set it to empty
1471              value, i.e. 'option_name='.
1472
1473              NOTE: Defaults do not apply to resources / stonith devices which
1474              override them with their own defined values.
1475
1476       defaults update <name>=<value>...
1477              This command is an alias of 'resource defaults update' command.
1478
1479              Add, remove or change default values for resources / stonith de‐
1480              vices. This is a simplified command useful for  cases  when  you
1481              only  manage one set of default values. Unspecified options will
1482              be kept unchanged. If you wish to remove an option,  set  it  to
1483              empty value, i.e. 'option_name='.
1484
1485              NOTE: Defaults do not apply to resources / stonith devices which
1486              override them with their own defined values.
1487
1488       cleanup [<resource id | stonith id>]  [node=<node>]  [operation=<opera‐
1489       tion> [interval=<interval>]] [--strict]
1490              This command is an alias of 'resource cleanup' command.
1491
1492              Make  the  cluster  forget failed operations from history of the
1493              resource / stonith device and re-detect its current state.  This
1494              can  be  useful  to  purge  knowledge of past failures that have
1495              since been resolved.
1496
1497              If the named resource is part of a group, or  one  numbered  in‐
1498              stance  of  a clone or bundled resource, the clean-up applies to
1499              the whole collective resource unless --strict is given.
1500
1501              If a resource id / stonith id is  not  specified  then  all  re‐
1502              sources / stonith devices will be cleaned up.
1503
1504              If  a  node is not specified then resources / stonith devices on
1505              all nodes will be cleaned up.
1506
1507       refresh [<resource id | stonith id>] [node=<node>] [--strict]
1508              This command is an alias of 'resource refresh' command.
1509
1510              Make the cluster forget the complete operation history  (includ‐
1511              ing failures) of the resource / stonith device and re-detect its
1512              current state. If you are interested in forgetting failed opera‐
1513              tions only, use the 'pcs resource cleanup' command.
1514
1515              If  the  named  resource is part of a group, or one numbered in‐
1516              stance of a clone or bundled resource, the  refresh  applies  to
1517              the whole collective resource unless --strict is given.
1518
1519              If  a  resource  id  /  stonith id is not specified then all re‐
1520              sources / stonith devices will be refreshed.
1521
1522              If a node is not specified then resources / stonith  devices  on
1523              all nodes will be refreshed.
1524
1525       failcount  [show  [<resource  id  |  stonith id>] [node=<node>] [opera‐
1526       tion=<operation> [interval=<interval>]]] [--full]
1527              This command is an alias of 'resource failcount show' command.
1528
1529              Show current failcount for resources and  stonith  devices,  op‐
1530              tionally  filtered  by a resource / stonith device, node, opera‐
1531              tion and its interval. If --full is specified do not  sum  fail‐
1532              counts per resource / stonith device and node. Use 'pcs resource
1533              cleanup' or 'pcs resource refresh' to reset failcounts.
1534
1535       enable <stonith id>... [--wait[=n]]
1536              Allow the cluster to use the stonith devices. If --wait is spec‐
1537              ified,  pcs  will wait up to 'n' seconds for the stonith devices
1538              to start and then return 0 if the stonith devices  are  started,
1539              or  1 if the stonith devices have not yet started. If 'n' is not
1540              specified it defaults to 60 minutes.
1541
1542       disable <stonith id>... [--wait[=n]]
1543              Attempt to stop the stonith devices if they are running and dis‐
1544              allow  the cluster to use them. If --wait is specified, pcs will
1545              wait up to 'n' seconds for the stonith devices to stop and  then
1546              return  0 if the stonith devices are stopped or 1 if the stonith
1547              devices have not stopped. If 'n' is not specified it defaults to
1548              60 minutes.
1549
1550       level [config]
1551              Lists all of the fencing levels currently configured.
1552
1553       level add <level> <target> <stonith id> [stonith id]...
1554              Add  the fencing level for the specified target with the list of
1555              stonith devices to attempt for that target at that level.  Fence
1556              levels  are attempted in numerical order (starting with 1). If a
1557              level succeeds (meaning all devices are successfully  fenced  in
1558              that  level)  then  no other levels are tried, and the target is
1559              considered fenced. Target may be  a  node  name  <node_name>  or
1560              %<node_name> or node%<node_name>, a node name regular expression
1561              regexp%<node_pattern>   or   a   node   attribute   value    at‐
1562              trib%<name>=<value>.
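
              Example (illustrative; the node name and stonith ids are
              placeholders): for node1, attempt device fence1_n1 first, then
              fence2_n1:
                  pcs stonith level add 1 node1 fence1_n1
                  pcs stonith level add 2 node1 fence2_n1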
1563
1564       level delete <level> [target <target>] [stonith <stonith id>...]
1565              Removes  the  fence  level  for the level, target and/or devices
1566              specified. If no target or devices are specified then the  fence
1567              level  is  removed.  Target  may  be  a node name <node_name> or
1568              %<node_name> or node%<node_name>, a node name regular expression
1569              regexp%<node_pattern>    or   a   node   attribute   value   at‐
1570              trib%<name>=<value>.
1571
1572       level remove <level> [target <target>] [stonith <stonith id>...]
1573              Removes the fence level for the  level,  target  and/or  devices
1574              specified.  If no target or devices are specified then the fence
1575              level is removed. Target may  be  a  node  name  <node_name>  or
1576              %<node_name> or node%<node_name>, a node name regular expression
1577              regexp%<node_pattern>   or   a   node   attribute   value    at‐
1578              trib%<name>=<value>.
1579
1580       level clear [target <target> | stonith <stonith id>...]
1581              Clears  the fence levels on the target (or stonith id) specified
1582              or clears all fence levels if a target/stonith id is not  speci‐
1583              fied.  Target  may be a node name <node_name> or %<node_name> or
1584              node%<node_name>,  a   node   name   regular   expression   reg‐
1585              exp%<node_pattern>    or    a    node    attribute   value   at‐
1586              trib%<name>=<value>. Example: pcs stonith  level  clear  stonith
1587              dev_a dev_b
1588
1589       level verify
1590              Verifies  all  fence devices and nodes specified in fence levels
1591              exist.
1592
1593       fence <node> [--off]
1594              Fence the node specified (if --off is specified, use  the  'off'
1595              API  call to stonith which will turn the node off instead of re‐
1596              booting it).
1597
1598       confirm <node> [--force]
1599              Confirm to the cluster that the specified node is  powered  off.
1600              This  allows  the  cluster  to recover from a situation where no
1601              stonith device is able to fence the node.  This  command  should
1602              ONLY  be  used  after manually ensuring that the node is powered
1603              off and has no access to shared resources.
1604
1605              WARNING: If this node is not actually powered  off  or  it  does
1606              have access to shared resources, data corruption/cluster failure
1607              can occur.  To  prevent  accidental  running  of  this  command,
1608              --force  or  interactive  user  response is required in order to
1609              proceed.
1610
1611              NOTE: It is not checked whether the specified node exists in
1612              the cluster, so that this command can also be used with nodes
1613              not visible from the local cluster partition.
1614
1615       history [show [<node>]]
1616              Show fencing history for the specified node or all nodes  if  no
1617              node specified.
1618
1619       history cleanup [<node>]
1620              Cleanup  fence  history of the specified node or all nodes if no
1621              node specified.
1622
1623       history update
1624              Update fence history from all nodes.
1625
1626       sbd  enable  [watchdog=<path>[@<node>]]...  [device=<path>[@<node>]]...
1627       [<SBD_OPTION>=<value>]... [--no-watchdog-validation]
1628              Enable  SBD  in  cluster.  Default  path  for watchdog device is
1629              /dev/watchdog. Allowed SBD  options:  SBD_WATCHDOG_TIMEOUT  (de‐
1630              fault:  5),  SBD_DELAY_START  (default:  no), SBD_STARTMODE (de‐
1631              fault: always) and SBD_TIMEOUT_ACTION.  SBD  options  are  docu‐
1632              mented in sbd(8) man page. It is possible to specify up to 3 de‐
1633              vices per node. If --no-watchdog-validation is specified,  vali‐
1634              dation of watchdogs will be skipped.
1635
1636              WARNING:  Cluster  has  to  be restarted in order to apply these
1637              changes.
1638
1639              WARNING: By default, it is tested whether the specified watchdog
1640              is  supported.  This  may  cause  a restart of the system when a
1641              watchdog  with  no-way-out-feature  enabled  is   present.   Use
1642              --no-watchdog-validation to skip watchdog validation.
1643
1644              Example of enabling SBD in a cluster with watchdog /dev/watchdog2
1645              on node1, /dev/watchdog1 on node2 and /dev/watchdog0 on all other
1646              nodes, device /dev/sdb on node1 and /dev/sda on all other nodes,
1647              and the watchdog timeout set to 10 seconds:
1648
1649              pcs  stonith  sbd  enable  watchdog=/dev/watchdog2@node1  watch‐
1650              dog=/dev/watchdog1@node2       watchdog=/dev/watchdog0       de‐
1651              vice=/dev/sdb@node1 device=/dev/sda SBD_WATCHDOG_TIMEOUT=10
1652
1653
1654       sbd disable
1655              Disable SBD in cluster.
1656
1657              WARNING: Cluster has to be restarted in  order  to  apply  these
1658              changes.
1659
1660       sbd   device  setup  device=<path>  [device=<path>]...  [watchdog-time‐
1661       out=<integer>]  [allocate-timeout=<integer>]   [loop-timeout=<integer>]
1662       [msgwait-timeout=<integer>]
1663              Initialize SBD structures on device(s) with specified timeouts.
1664
1665              WARNING: All content on device(s) will be overwritten.
1666
1667       sbd device message <device-path> <node> <message-type>
1668              Manually  set  a message of the specified type on the device for
1669              the node. Possible message types (they are documented in  sbd(8)
1670              man page): test, reset, off, crashdump, exit, clear
1671
1672       sbd status [--full]
1673              Show  status of SBD services in cluster and local device(s) con‐
1674              figured. If --full is specified, a dump of the SBD headers on
1675              the device(s) will also be shown.
1676
1677       sbd config
1678              Show SBD configuration in cluster.
1679
1680
1681       sbd watchdog list
1682              Show all available watchdog devices on the local node.
1683
1684              WARNING:  Listing available watchdogs may cause a restart of the
1685              system  when  a  watchdog  with  no-way-out-feature  enabled  is
1686              present.
1687
1688
1689       sbd watchdog test [<watchdog-path>]
1690              This operation is expected to force-reboot the local system using
1691              a watchdog, without following any shutdown procedures. If no
1692              watchdog is specified, the available watchdog will be used if
1693              only one watchdog device is available on the local system.
1694
1695
1696   acl
1697       [config]
1698              List all current access control lists.
1699
1700       enable Enable access control lists.
1701
1702       disable
1703              Disable access control lists.
1704
1705       role create <role id> [description=<description>]  [((read  |  write  |
1706       deny) (xpath <query> | id <id>))...]
1707              Create  a role with the id and (optional) description specified.
1708              Each role can also  have  an  unlimited  number  of  permissions
1709              (read/write/deny)  applied to either an xpath query or the id of
1710              a specific element in the cib.
1711              Permissions are applied to the selected XML element's entire XML
1712              subtree  (all  elements  enclosed  within  it). Write permission
1713              grants the ability to create, modify, or remove the element  and
1714              its  subtree,  and  also the ability to create any "scaffolding"
1715              elements (enclosing elements that do not have  attributes  other
1716              than  an ID). Permissions for more specific matches (more deeply
1717              nested elements) take precedence over more general ones. If mul‐
1718              tiple  permissions  are configured for the same match (for exam‐
1719              ple, in different roles applied to the same user), any deny per‐
1720              mission takes precedence, then write, then lastly read.
1721              An xpath may include an attribute expression to select only ele‐
1722              ments that match the expression, but the  permission  still  ap‐
1723              plies to the entire element (and its subtree), not to the attri‐
1724              bute alone. For example, using the xpath  "//*[@name]"  to  give
1725              write permission would allow changes to the entirety of all ele‐
1726              ments that have a "name" attribute and  everything  enclosed  by
1727              those  elements.  There  is no way currently to give permissions
1728              for just one attribute of an element. That is to  say,  you  can
1729              not  define  an ACL that allows someone to read just the dc-uuid
1730              attribute of the cib tag - that would select the cib element and
1731              give read access to the entire CIB.
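
              Example (illustrative; the role id is a placeholder): create a
              role granting read access to the entire CIB:
                  pcs acl role create readonly description="Read only" read xpath /cib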
1732
1733       role delete <role id>
1734              Delete the role specified and remove it from any users/groups it
1735              was assigned to.
1736
1737       role remove <role id>
1738              Delete the role specified and remove it from any users/groups it
1739              was assigned to.
1740
1741       role assign <role id> [to] [user|group] <username/group>
1742              Assign  a  role to a user or group already created with 'pcs acl
1743              user/group create'. If there is a user and a group with the same
1744              id and it is not specified which should be used, the user will
1745              be prioritized. In cases like this, specify whether the user or
1746              group should be used.
1747
1748       role unassign <role id> [from] [user|group] <username/group>
1749              Remove a role from the specified user or group. If there is a
1750              user and a group with the same id and it is not specified which
1751              should be used, the user will be prioritized. In cases like
1752              this, specify whether the user or group should be used.
1753
1754       user create <username> [<role id>]...
1755              Create an ACL for the user specified and  assign  roles  to  the
1756              user.
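
              Example (illustrative; assumes the 'readonly' role from the
              example above and a placeholder user name):
                  pcs acl user create alice readonly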
1757
1758       user delete <username>
1759              Remove the user specified (and roles assigned will be unassigned
1760              for the specified user).
1761
1762       user remove <username>
1763              Remove the user specified (and roles assigned will be unassigned
1764              for the specified user).
1765
1766       group create <group> [<role id>]...
1767              Create  an  ACL  for the group specified and assign roles to the
1768              group.
1769
1770       group delete <group>
1771              Remove the group specified (and roles  assigned  will  be  unas‐
1772              signed for the specified group).
1773
1774       group remove <group>
1775              Remove  the  group  specified  (and roles assigned will be unas‐
1776              signed for the specified group).
1777
1778       permission add <role id> ((read | write | deny)  (xpath  <query>  |  id
1779       <id>))...
1780              Add  the  listed  permissions to the role specified. Permissions
1781              are applied to either an xpath query or the id of a specific el‐
1782              ement in the CIB.
1783              Permissions are applied to the selected XML element's entire XML
1784              subtree (all elements  enclosed  within  it).  Write  permission
1785              grants  the ability to create, modify, or remove the element and
1786              its subtree, and also the ability to  create  any  "scaffolding"
1787              elements  (enclosing  elements that do not have attributes other
1788              than an ID). Permissions for more specific matches (more  deeply
1789              nested elements) take precedence over more general ones. If mul‐
1790              tiple permissions are configured for the same match  (for  exam‐
1791              ple, in different roles applied to the same user), any deny per‐
1792              mission takes precedence, then write, then lastly read.
1793              An xpath may include an attribute expression to select only ele‐
1794              ments  that  match  the expression, but the permission still ap‐
1795              plies to the entire element (and its subtree), not to the attri‐
1796              bute  alone.  For  example, using the xpath "//*[@name]" to give
1797              write permission would allow changes to the entirety of all ele‐
1798              ments  that  have  a "name" attribute and everything enclosed by
1799              those elements. There is no way currently  to  give  permissions
1800              for  just  one  attribute of an element. That is to say, you can
1801              not define an ACL that allows someone to read just  the  dc-uuid
1802              attribute of the cib tag - that would select the cib element and
1803              give read access to the entire CIB.
1804
1805       permission delete <permission id>
1806              Remove the permission id specified (permission id's  are  listed
1807              in parentheses after permissions in 'pcs acl' output).
1808
1809       permission remove <permission id>
1810              Remove  the  permission id specified (permission id's are listed
1811              in parentheses after permissions in 'pcs acl' output).
1812
1813   property
1814       [config [<property> | --all | --defaults]] | [--all | --defaults]
1815              List property settings (default: lists  configured  properties).
1816              If --defaults is specified, all property defaults will be shown;
1817              if --all is specified, currently configured properties will be
1818              shown with unset properties and their defaults. See pacemaker-con‐
1819              trold(7) and pacemaker-schedulerd(7) man pages for a description
1820              of the properties.
1821
1822       set <property>=[<value>] ... [--force]
1823              Set  specific  pacemaker  properties (if the value is blank then
1824              the property is removed from the configuration).  If a  property
1825              is not recognized by pcs the property will not be created unless
1826              the --force is used.  See pacemaker-controld(7)  and  pacemaker-
1827              schedulerd(7) man pages for a description of the properties.
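
              Example (illustrative): set the no-quorum-policy cluster
              property:
                  pcs property set no-quorum-policy=freeze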
1828
1829       unset <property> ...
1830              Remove  property  from configuration.  See pacemaker-controld(7)
1831              and pacemaker-schedulerd(7) man pages for a description  of  the
1832              properties.
1833
1834   constraint
1835       [config] [--full] [--all]
1836              List  all  current constraints that are not expired. If --all is
1837              specified also show expired constraints. If --full is  specified
1838              also list the constraint ids.
1839
1840       location <resource> prefers <node>[=<score>] [<node>[=<score>]]...
1841              Create  a location constraint on a resource to prefer the speci‐
1842              fied node with score (default score: INFINITY). Resource may  be
1843              either  a  resource  id  <resource_id>  or %<resource_id> or re‐
1844              source%<resource_id>, or a resource name regular expression reg‐
1845              exp%<resource_pattern>.
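
              Example (illustrative; resource and node names are
              placeholders): prefer node1 strongly and node2 slightly:
                  pcs constraint location WebSite prefers node1=200 node2=50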
1846
1847       location <resource> avoids <node>[=<score>] [<node>[=<score>]]...
1848              Create  a  location constraint on a resource to avoid the speci‐
1849              fied node with score (default score: INFINITY). Resource may  be
1850              either  a  resource  id  <resource_id>  or %<resource_id> or re‐
1851              source%<resource_id>, or a resource name regular expression reg‐
1852              exp%<resource_pattern>.
1853
1854       location  <resource>  rule [id=<rule id>] [resource-discovery=<option>]
1855       [role=Promoted|Unpromoted]   [constraint-id=<id>]   [score=<score>    |
1856       score-attribute=<attribute>] <expression>
1857              Creates  a  location constraint with a rule on the specified re‐
1858              source where expression looks like one of the following:
1859                defined|not_defined <node attribute>
1860                <node  attribute>   lt|gt|lte|gte|eq|ne   [string|integer|num‐
1861              ber|version] <value>
1862                date gt|lt <date>
1863                date in_range <date> to <date>
1864                date in_range <date> to duration <duration options>...
1865                date-spec <date spec options>...
1866                <expression> and|or <expression>
1867                ( <expression> )
1868              where  duration options and date spec options are: hours, month‐
1869              days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1870              Resource  may  be  either  a  resource id <resource_id> or %<re‐
1871              source_id> or resource%<resource_id>, or a resource name regular
1872              expression regexp%<resource_pattern>. If score is omitted it de‐
1873              faults to INFINITY. If id is omitted one is generated  from  the
1874              resource  id.  If  resource-discovery  is omitted it defaults to
1875              'always'.
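
              Example (illustrative; the resource name and the 'datacenter'
              node attribute are placeholders):
                  pcs constraint location WebSite rule score=500 datacenter eq east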
1876
1877       location [config [resources  [<resource>...]]  |  [nodes  [<node>...]]]
1878       [--full] [--all]
1879              List  all the current location constraints that are not expired.
1880              If 'resources' is specified, location constraints are  displayed
1881              per  resource  (default). If 'nodes' is specified, location con‐
1882              straints are displayed per node. If specific nodes or  resources
1883              are specified then we only show information about them. Resource
1884              may be either a resource id <resource_id> or  %<resource_id>  or
1885              resource%<resource_id>,  or  a  resource name regular expression
1886              regexp%<resource_pattern>. If --full is specified show  the  in‐
1887              ternal  constraint  id's as well. If --all is specified show the
1888              expired constraints.
1889
1890       location add <id> <resource>  <node>  <score>  [resource-discovery=<op‐
1891       tion>]
1892              Add a location constraint with the appropriate id for the speci‐
1893              fied resource, node name and score. Resource may be either a re‐
1894              source  id  <resource_id>  or  %<resource_id>  or  resource%<re‐
1895              source_id>, or a resource name  regular  expression  regexp%<re‐
1896              source_pattern>.
1897
1898       location delete <id>
1899              Remove a location constraint with the appropriate id.
1900
1901       location remove <id>
1902              Remove a location constraint with the appropriate id.
1903
1904       order [config] [--full]
1905              List  all  current  ordering constraints (if --full is specified
1906              show the internal constraint id's as well).
1907
1908       order [action] <resource id> then [action] <resource id> [options]
1909              Add an ordering constraint specifying actions (start, stop, pro‐
1910              mote,  demote)  and if no action is specified the default action
1911              will  be  start.   Available  options  are  kind=Optional/Manda‐
1912              tory/Serialize,  symmetrical=true/false,  require-all=true/false
1913              and id=<constraint-id>.
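
              Example (illustrative; resource ids are placeholders): start
              Database before WebSite:
                  pcs constraint order start Database then start WebSite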
1914
1915       order set <resource1> [resourceN]...  [options]  [set  <resourceX>  ...
1916       [options]] [setoptions [constraint_options]]
1917              Create  an  ordered  set of resources. Available options are se‐
1918              quential=true/false,     require-all=true/false     and      ac‐
1919              tion=start/promote/demote/stop. Available constraint_options are
1920              id=<constraint-id>, kind=Optional/Mandatory/Serialize  and  sym‐
1921              metrical=true/false.
1922
1923       order delete <resource1> [resourceN]...
1924              Remove resource from any ordering constraint.
1925
1926       order remove <resource1> [resourceN]...
1927              Remove resource from any ordering constraint.
1928
1929       colocation [config] [--full]
1930              List  all current colocation constraints (if --full is specified
1931              show the internal constraint id's as well).
1932
1933       colocation add [<role>] <source resource id> with [<role>] <target  re‐
1934       source id> [score] [options] [id=constraint-id]
1935              Request  <source  resource>  to run on the same node where pace‐
1936              maker has determined <target  resource>  should  run.   Positive
1937              values  of  score  mean  the resources should be run on the same
1938              node, negative values mean the resources should not  be  run  on
1939              the  same  node.  Specifying 'INFINITY' (or '-INFINITY') for the
1940              score forces <source resource> to run (or not run) with  <target
1941              resource>  (score  defaults to "INFINITY"). A role can be: 'Pro‐
1942              moted', 'Unpromoted', 'Started', 'Stopped' (if no role is speci‐
1943              fied, it defaults to 'Started').
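
              Example (illustrative; resource ids are placeholders): keep
              WebSite on the same node as VirtualIP:
                  pcs constraint colocation add WebSite with VirtualIP INFINITY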
1944
1945       colocation  set  <resource1>  [resourceN]... [options] [set <resourceX>
1946       ... [options]] [setoptions [constraint_options]]
1947              Create a colocation constraint with a  resource  set.  Available
1948              options  are sequential=true/false and role=Stopped/Started/Pro‐
1949              moted/Unpromoted. Available constraint_options are id and either
1950              of: score, score-attribute, score-attribute-mangle.
1951
1952       colocation delete <source resource id> <target resource id>
1953              Remove colocation constraints with specified resources.
1954
1955       colocation remove <source resource id> <target resource id>
1956              Remove colocation constraints with specified resources.
1957
1958       ticket [config] [--full]
1959              List all current ticket constraints (if --full is specified show
1960              the internal constraint id's as well).
1961
1962       ticket  add  <ticket>  [<role>]  <resource  id>  [<options>]  [id=<con‐
1963       straint-id>]
1964              Create  a  ticket constraint for <resource id>. Available option
1965              is loss-policy=fence/stop/freeze/demote. A role can be Promoted,
1966              Unpromoted, Started or Stopped.
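
              Example (illustrative; the ticket name and resource id are
              placeholders):
                  pcs constraint ticket add drticket WebSite loss-policy=stop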
1967
1968       ticket  set  <resource1>  [<resourceN>]... [<options>] [set <resourceX>
1969       ... [<options>]] setoptions <constraint_options>
1970              Create a ticket constraint with a resource  set.  Available  op‐
1971              tions   are  role=Stopped/Started/Promoted/Unpromoted.  Required
1972              constraint option is ticket=<ticket>.  Optional  constraint  op‐
1973              tions       are       id=<constraint-id>      and      loss-pol‐
1974              icy=fence/stop/freeze/demote.
1975
1976       ticket delete <ticket> <resource id>
1977              Remove all ticket constraints with <ticket> from <resource id>.
1978
1979       ticket remove <ticket> <resource id>
1980              Remove all ticket constraints with <ticket> from <resource id>.
1981
1982       delete <constraint id>...
1983              Remove constraint(s) or  constraint  rules  with  the  specified
1984              id(s).
1985
1986       remove <constraint id>...
1987              Remove  constraint(s)  or  constraint  rules  with the specified
1988              id(s).
1989
1990       ref <resource>...
1991              List constraints referencing specified resource.
1992
1993       rule add  <constraint  id>  [id=<rule  id>]  [role=Promoted|Unpromoted]
1994       [score=<score>|score-attribute=<attribute>] <expression>
1995              Add a rule to a location constraint specified by 'constraint id'
1996              where the expression looks like one of the following:
1997                defined|not_defined <node attribute>
1998                <node  attribute>   lt|gt|lte|gte|eq|ne   [string|integer|num‐
1999              ber|version] <value>
2000                date gt|lt <date>
2001                date in_range <date> to <date>
2002                date in_range <date> to duration <duration options>...
2003                date-spec <date spec options>...
2004                <expression> and|or <expression>
2005                ( <expression> )
2006              where  duration options and date spec options are: hours, month‐
2007              days, weekdays, yeardays, months, weeks, years, weekyears, moon.
2008              If  score  is  omitted it defaults to INFINITY. If id is omitted
2009              one is generated from the constraint id.
2010
2011       rule delete <rule id>
2012              Remove a rule from its location constraint and if it's the  last
2013              rule, the constraint will also be removed.
2014
2015       rule remove <rule id>
2016              Remove a rule from its location constraint; if it is the  last
2017              rule, the constraint will also be removed.
2018
2019   qdevice
2020       status <device model> [--full] [<cluster name>]
2021              Show  runtime  status  of  specified  model  of  quorum   device
2022              provider.   Using  --full  will  give  more detailed output.  If
2023              <cluster name> is specified, only information about  the  speci‐
2024              fied cluster will be displayed.
2025
2026       setup model <device model> [--enable] [--start]
2027              Configure specified model of quorum device provider. The quorum
2028              device can then be added to clusters by running the "pcs  quorum
2029              device add" command in a cluster.  --start will also  start  the
2030              provider.  --enable will configure  the  provider  to  start  on
2031              boot.
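                  For example, the 'net' model provider could be configured,
                  enabled and started with:
                    pcs qdevice setup model net --enable --start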
2032
2033       destroy <device model>
2034              Disable  and  stop specified model of quorum device provider and
2035              delete its configuration files.
2036
2037       start <device model>
2038              Start specified model of quorum device provider.
2039
2040       stop <device model>
2041              Stop specified model of quorum device provider.
2042
2043       kill <device model>
2044              Force specified model of quorum device provider  to  stop  (per‐
2045              forms kill -9).  Note that the init system (e.g.  systemd)  can
2046              detect that the qdevice is not running and start it  again.  If
2047              you want to stop the qdevice, run the "pcs qdevice stop" command.
2048
2049       enable <device model>
2050              Configure  specified model of quorum device provider to start on
2051              boot.
2052
2053       disable <device model>
2054              Configure specified model of quorum device provider to not start
2055              on boot.
2056
2057   quorum
2058       [config]
2059              Show quorum configuration.
2060
2061       status Show quorum runtime status.
2062
2063       device  add  [<generic options>] model <device model> [<model options>]
2064       [heuristics <heuristics options>]
2065              Add a quorum device to the cluster. The quorum device should be
2066              configured first with "pcs qdevice setup". It is not possible to
2067              use more than one quorum device in a cluster simultaneously.
2068              Currently the only supported model is 'net'. It  requires  model
2069              options 'algorithm' and 'host' to be specified. Options are doc‐
2070              umented in corosync-qdevice(8) man  page;  generic  options  are
2071              'sync_timeout' and 'timeout'; for model net options  check  the
2072              quorum.device.net section; for heuristics options see  the  quo‐
2073              rum.device.heuristics  section.  Pcs  automatically  creates and
2074              distributes TLS certificates and sets the 'tls' model option  to
2075              the default value 'on'.
2076              Example:   pcs   quorum   device  add  model  net  algorithm=lms
2077              host=qnetd.internal.example.com
2078
2079       device heuristics delete
2080              Remove all heuristics settings of the configured quorum device.
2081
2082       device heuristics remove
2083              Remove all heuristics settings of the configured quorum device.
2084
2085       device delete
2086              Remove a quorum device from the cluster.
2087
2088       device remove
2089              Remove a quorum device from the cluster.
2090
2091       device status [--full]
2092              Show quorum device runtime status.  Using --full will give  more
2093              detailed output.
2094
2095       device  update  [<generic options>] [model <model options>] [heuristics
2096       <heuristics options>]
2097              Add, remove or change quorum device options. Unspecified options
2098              will be kept unchanged. If you wish to remove an option, set it
2099              to an empty value, i.e. 'option_name='. Requires the cluster  to
2100              stopped.  Model  and options are all documented in corosync-qde‐
2101              vice(8) man page; for heuristics options  check  the  quorum.de‐
2102              vice.heuristics subkey section, for model options check the quo‐
2103              rum.device.<device model> subkey sections.
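                  For example, the generic 'sync_timeout' option could be set
                  (the value is illustrative) and the 'timeout' option removed
                  with:
                    pcs quorum device update timeout= sync_timeout=30000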
2104
2105              WARNING: If you want to change the "host" option  of  qdevice
2106              model net, use "pcs quorum device remove" and "pcs quorum  de‐
2107              vice add" commands to set up configuration properly unless the
2108              old and new host are the same machine.
2109
2110       expected-votes <votes>
2111              Set expected votes in the live cluster to specified value.  This
2112              only affects the live cluster; it does not change any configura‐
2113              tion files.
2114
2115       unblock [--force]
2116              Cancel  waiting  for all nodes when establishing quorum.  Useful
2117              in situations where you know the cluster is inquorate,  but  you
2118              are confident that the cluster should proceed with resource man‐
2119              agement regardless.  This command should ONLY be used when nodes
2120              which  the cluster is waiting for have been confirmed to be pow‐
2121              ered off and to have no access to shared resources.
2122
2123              WARNING: If the nodes are not actually powered off  or  they  do
2124              have access to shared resources, data corruption/cluster failure
2125              can occur.  To  prevent  accidental  running  of  this  command,
2126              --force  or  interactive  user  response is required in order to
2127              proceed.
2128
2129       update        [auto_tie_breaker=[0|1]]        [last_man_standing=[0|1]]
2130       [last_man_standing_window=[<time in ms>]] [wait_for_all=[0|1]]
2131              Add,  remove  or change quorum options. At least one option must
2132              be specified. Unspecified options will be kept unchanged. If you
2133              wish to remove an option, set it to an empty value,  i.e.  'op‐
2134              tion_name='. Options are documented in corosync's  votequorum(5)
2135              man page. Requires the cluster to be stopped.
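                  For example, to enable the wait_for_all and auto_tie_breaker
                  options:
                    pcs quorum update wait_for_all=1 auto_tie_breaker=1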
2136
2137   booth
2138       setup  sites  <address> <address> [<address>...] [arbitrators <address>
2139       ...] [--force]
2140              Write new booth configuration with specified sites and  arbitra‐
2141              tors.  The total number of peers (sites and arbitrators) must be
2142              odd.  When the configuration file already exists,  the  command
2143              fails unless --force is specified.
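                  For example, a configuration with two sites and one arbitrator
                  (addresses are illustrative) could be written with:
                    pcs booth setup sites 192.0.2.1 192.0.2.2 arbitrators 192.0.2.3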
2144
2145       destroy
2146              Remove booth configuration files.
2147
2148       ticket add <ticket> [<name>=<value> ...]
2149              Add a new ticket to the current configuration. Ticket options are
2150              described in the booth man page.
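                  For example, an illustrative ticket with an expire time (see
                  the booth man page for ticket options) could be added with:
                    pcs booth ticket add ticketA expire=600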
2151
2152       ticket delete <ticket>
2153              Remove the specified ticket from the current configuration.
2154
2155       ticket remove <ticket>
2156              Remove the specified ticket from the current configuration.
2157
2158       config [<node>]
2159              Show booth configuration from the specified  node  or  from  the
2160              current node if no node is specified.
2161
2162       create ip <address>
2163              Make the cluster run the booth service on the specified ip  ad‐
2164              dress as a cluster resource.  Typically this is used to  run  a
2165              booth site.
2166
2167       clean-enable-authfile
2168              Remove the 'enable-authfile' option from  booth  configuration.
2169              This is useful when upgrading from a booth version that required
2170              the option to a newer version that does not tolerate the option.
2171
2172       delete Remove  booth  resources  created by the "pcs booth create" com‐
2173              mand.
2174
2175       remove Remove booth resources created by the "pcs  booth  create"  com‐
2176              mand.
2177
2178       restart
2179              Restart  booth  resources created by the "pcs booth create" com‐
2180              mand.
2181
2182       ticket grant <ticket> [<site address>]
2183              Grant the ticket to the site specified by the address, and hence
2184              to the booth formation this site is a member of. If the  address
2185              is omitted, the site address that was specified with  the  'pcs
2186              booth create' command is used. Specifying the site  address  is
2187              therefore mandatory when running this command on a host  in  an
2188              arbitrator role.
2189              Note that the ticket must not already be granted in  the  given
2190              booth formation. Barring direct intervention at the  sites,  the
2191              ticket has to be revoked first before it can be granted  at  an‐
2192              other site; an ad-hoc change of this preference is not atomic and
2193              may, in the worst case, be abrupt.
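                  For example, to grant an illustrative ticket 'ticketA' to the
                  site at address 192.0.2.1:
                    pcs booth ticket grant ticketA 192.0.2.1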
2194
2195       ticket revoke <ticket> [<site address>]
2196              Revoke the ticket in the booth formation identified by  one  of
2197              its member sites, specified by the address. If  the  address  is
2198              omitted, the site address that was specified with a  prior  'pcs
2199              booth create' command is used. Specifying the  site  address  is
2200              therefore mandatory when running this command on a host  in  an
2201              arbitrator role.
2202
2203       status Print current status of booth on the local node.
2204
2205       pull <node>
2206              Pull booth configuration from the specified node.
2207
2208       sync [--skip-offline]
2209              Send booth configuration from the local node to all nodes in the
2210              cluster.
2211
2212       enable Enable booth arbitrator service.
2213
2214       disable
2215              Disable booth arbitrator service.
2216
2217       start  Start booth arbitrator service.
2218
2219       stop   Stop booth arbitrator service.
2220
2221   status
2222       [status] [--full] [--hide-inactive]
2223              View  all  information  about  the cluster and resources (--full
2224              provides  more  details,  --hide-inactive  hides  inactive   re‐
2225              sources).
2226
2227       resources [<resource id | tag id>] [node=<node>] [--hide-inactive]
2228              Show status of all currently configured resources. If --hide-in‐
2229              active is specified, only show active resources. If  a  resource
2230              or  tag  id  is specified, only show status of the specified re‐
2231              source or resources in the specified tag. If node is  specified,
2232              only show status of resources configured for the specified node.
2233
2234       cluster
2235              View current cluster status.
2236
2237       corosync
2238              View current membership information as seen by corosync.
2239
2240       quorum View current quorum status.
2241
2242       qdevice <device model> [--full] [<cluster name>]
2243              Show   runtime  status  of  specified  model  of  quorum  device
2244              provider.  Using --full will  give  more  detailed  output.   If
2245              <cluster  name>  is specified, only information about the speci‐
2246              fied cluster will be displayed.
2247
2248       booth  Print current status of booth on the local node.
2249
2250       nodes [corosync | both | config]
2251              View current status of nodes from pacemaker.  If  'corosync'  is
2252              specified,  view  current status of nodes from corosync instead.
2253              If 'both' is specified, view current status of nodes  from  both
2254              corosync & pacemaker. If 'config' is specified, print nodes from
2255              corosync & pacemaker configuration.
2256
2257       pcsd [<node>]...
2258              Show current status of pcsd on nodes specified, or on all  nodes
2259              configured in the local cluster if no nodes are specified.
2260
2261       xml    View xml version of status (output from crm_mon -r -1 -X).
2262
2263   config
2264       [show] View full cluster configuration.
2265
2266       backup [filename]
2267              Create a tarball containing the cluster  configuration  files.
2268              If a filename is not specified, the standard output will be used.
2269
2270       restore [--local] [filename]
2271              Restore the cluster configuration files on all nodes  from  the
2272              backup. If a filename is not specified, the standard input  will
2273              be used. If --local is specified, only the files on the  current
2274              node will be restored.
2275
2276       checkpoint
2277              List all available configuration checkpoints.
2278
2279       checkpoint view <checkpoint_number>
2280              Show specified configuration checkpoint.
2281
2282       checkpoint diff <checkpoint_number> <checkpoint_number>
2283              Show  differences  between  the  two  specified checkpoints. Use
2284              checkpoint number 'live' to compare a checkpoint to the  current
2285              live configuration.
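                  For example, differences between an illustrative checkpoint 5
                  and the live configuration could be shown with:
                    pcs config checkpoint diff 5 live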
2286
2287       checkpoint restore <checkpoint_number>
2288              Restore cluster configuration to specified checkpoint.
2289
2290   pcsd
2291       certkey <certificate file> <key file>
2292              Load custom certificate and key files for use in pcsd.
2293
2294       status [<node>]...
2295              Show  current status of pcsd on nodes specified, or on all nodes
2296              configured in the local cluster if no nodes are specified.
2297
2298       sync-certificates
2299              Sync pcsd certificates to all nodes in the local cluster.
2300
2301       deauth [<token>]...
2302              Delete locally stored authentication tokens used by remote  sys‐
2303              tems  to  connect  to  the local pcsd instance. If no tokens are
2304              specified all tokens will be deleted. After this command is  run
2305              other nodes will need to re-authenticate against this node to be
2306              able to connect to it.
2307
2308   host
2309       auth (<host name>  [addr=<address>[:<port>]])...  [-u  <username>]  [-p
2310       <password>]
2311              Authenticate  local pcs/pcsd against pcsd on specified hosts. It
2312              is possible to specify an address and a port via which  pcs/pcsd
2313              will communicate with each host. If an address is not specified,
2314              the host name will be used. If a port is  not  specified,  2224
2315              will be used.
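                  For example, two illustrative hosts could be authenticated,
                  one of them via an explicit address and port, with:
                    pcs host auth node1 node2 addr=192.0.2.12:2224 -u hacluster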
2316
2317       deauth [<host name>]...
2318              Delete authentication tokens which allow pcs/pcsd on the current
2319              system to connect to remote pcsd  instances  on  specified  host
2320              names.  If  the current system is a member of a cluster, the to‐
2321              kens will be deleted from all nodes in the cluster. If  no  host
2322              names  are specified all tokens will be deleted. After this com‐
2323              mand is run this node will need to re-authenticate against other
2324              nodes to be able to connect to them.
2325
2326   node
2327       attribute [[<node>] [--name <name>] | <node> <name>=<value> ...]
2328              Manage  node  attributes.   If no parameters are specified, show
2329              attributes of all nodes.  If one parameter  is  specified,  show
2330              attributes  of  specified  node.   If  --name is specified, show
2331              specified attribute's value from all nodes.  If more  parameters
2332              are specified, set attributes of specified node.  Attributes can
2333              be removed by setting an attribute without a value.
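                  For example, an illustrative attribute could be set on and
                  later removed from node 'node1' with:
                    pcs node attribute node1 datacenter=dc1
                    pcs node attribute node1 datacenter=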
2334
2335       maintenance [--all | <node>...] [--wait[=n]]
2336              Put specified node(s) into maintenance mode. If no nodes or  op‐
2337              tions are specified, the current node will be put  into  mainte‐
2338              nance mode; if --all is specified, all nodes will be  put  into
2339              maintenance mode. If --wait is specified, pcs will wait  up  to
2340              'n' seconds for the node(s) to be put into maintenance mode  and
2341              then return 0 on success or 1 if the operation has not succeeded
2342              yet. If 'n' is not specified it defaults to 60 minutes.
2343
2344       unmaintenance [--all | <node>...] [--wait[=n]]
2345              Remove node(s) from maintenance mode. If no nodes or options are
2346              specified, the current node will be  removed  from  maintenance
2347              mode; if --all is specified, all nodes will be removed from main‐
2348              tenance mode. If --wait is specified, pcs will wait up  to  'n'
2349              seconds for the node(s) to be removed from maintenance mode  and
2350              then return 0 on success or 1 if the operation has not succeeded
2351              yet. If 'n' is not specified it defaults to 60 minutes.
2352
2353       standby [--all | <node>...] [--wait[=n]]
2354              Put specified node(s) into standby mode (the node specified will
2355              no longer be able to host resources). If no nodes or options are
2356              specified, the current node will be put into  standby  mode;  if
2357              --all is specified, all nodes will be put into standby mode.  If
2358              --wait is specified, pcs will wait up to  'n'  seconds  for  the
2359              node(s) to be put into standby mode and then return 0 on success
2360              or 1 if the operation has not succeeded yet. If 'n' is not speci‐
2361              fied it defaults to 60 minutes.
2362
2363       unstandby [--all | <node>...] [--wait[=n]]
2364              Remove node(s) from standby mode (the node specified will now be
2365              able to host resources). If no nodes or  options  are  specified,
2366              the current node will be removed from standby mode; if --all  is
2367              specified, all nodes will be removed from standby mode. If --wait
2368              is specified, pcs will wait up to 'n' seconds for the node(s) to
2369              be removed from standby mode and then return 0 on success  or  1
2370              if the operation has not succeeded yet. If 'n' is not  specified
2371              it defaults to 60 minutes.
2372
2373       utilization [[<node>] [--name <name>] | <node> <name>=<value> ...]
2374              Add specified utilization options to specified node.  If node is
2375              not  specified,  shows  utilization  of all nodes.  If --name is
2376              specified, shows specified utilization value from all nodes.  If
2377              utilization  options  are  not  specified,  shows utilization of
2378              specified node.  Utilization options should be in  the  format
2379              name=value; the value must be an integer.  Options may  be  re‐
2380              moved by setting an option without a value.  Example: pcs  node
2381              utilization node1 cpu=4 ram=
2382
2383   alert
2384       [config]
2385              Show all configured alerts.
2386
2387       create path=<path> [id=<alert-id>] [description=<description>] [options
2388       [<option>=<value>]...] [meta [<meta-option>=<value>]...]
2389              Define an alert handler with specified path. Id will be automat‐
2390              ically generated if it is not specified.
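                  For example, an alert handler calling an illustrative script
                  could be defined with:
                    pcs alert create path=/usr/local/bin/alert_mail.sh id=my-alert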
2391
2392       update  <alert-id>  [path=<path>]  [description=<description>] [options
2393       [<option>=<value>]...] [meta [<meta-option>=<value>]...]
2394              Update an existing alert handler with specified id.  Unspecified
2395              options will be kept unchanged. If you wish to remove an option,
2396              set it to an empty value, i.e. 'option_name='.
2397
2398       delete <alert-id> ...
2399              Remove alert handlers with specified ids.
2400
2401       remove <alert-id> ...
2402              Remove alert handlers with specified ids.
2403
2404       recipient add  <alert-id>  value=<recipient-value>  [id=<recipient-id>]
2405       [description=<description>]   [options   [<option>=<value>]...]   [meta
2406       [<meta-option>=<value>]...]
2407              Add new recipient to specified alert handler.
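                  For example, an illustrative recipient could be added to an
                  alert handler with id 'my-alert' with:
                    pcs alert recipient add my-alert value=admin@example.com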
2408
2409       recipient  update  <recipient-id>  [value=<recipient-value>]  [descrip‐
2410       tion=<description>]  [options  [<option>=<value>]...]  [meta [<meta-op‐
2411       tion>=<value>]...]
2412              Update an existing recipient identified by its  id.  Unspecified
2413              options will be kept unchanged. If you wish to remove an option,
2414              set it to an empty value, i.e. 'option_name='.
2415
2416       recipient delete <recipient-id> ...
2417              Remove specified recipients.
2418
2419       recipient remove <recipient-id> ...
2420              Remove specified recipients.
2421
2422   client
2423       local-auth [<pcsd-port>] [-u <username>] [-p <password>]
2424              Authenticate current user to local pcsd. This is required to run
2425              some pcs commands which may require permissions  of  the  root
2426              user, such as 'pcs cluster start'.
2427
2428   dr
2429       config Display disaster-recovery configuration from the local node.
2430
2431       status [--full] [--hide-inactive]
2432              Display status of the local and the remote site cluster  (--full
2433              provides   more  details,  --hide-inactive  hides  inactive  re‐
2434              sources).
2435
2436       set-recovery-site <recovery site node>
2437              Set up disaster-recovery with the local cluster being  the  pri‐
2438              mary site. The recovery site is defined by the name of  one  of
2439              its nodes.
2440
2441       destroy
2442              Permanently  destroy  disaster-recovery  configuration  on   all
2443              sites.
2444
2445   tag
2446       [config|list [<tag id>...]]
2447              Display configured tags.
2448
2449       create <tag id> <id> [<id>]...
2450              Create a tag containing the specified ids.
2451
2452       delete <tag id>...
2453              Delete specified tags.
2454
2455       remove <tag id>...
2456              Delete specified tags.
2457
2458       update  <tag  id>  [add  <id> [<id>]... [--before <id> | --after <id>]]
2459       [remove <id> [<id>]...]
2460              Update a tag using the specified ids. Ids can be added,  removed
2461              or moved within a tag. You can use --before or --after to  spec‐
2462              ify the position of the added ids relative to an id  already  in
2463              the tag. By adding ids the tag already contains and  specifying
2464              --after or --before, you can move the ids within the tag.
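                  For example, assuming a tag 'my-tag' containing illustrative
                  ids 'd1', 'd2' and 'd3', 'd3' could be moved before 'd1' and
                  'd2' removed with:
                    pcs tag update my-tag add d3 --before d1 remove d2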
2466

EXAMPLES

2468       Show all resources
2469              # pcs resource config
2470
2471       Show options specific to the 'VirtualIP' resource
2472              # pcs resource config VirtualIP
2473
2474       Create a new resource called 'VirtualIP' with options
2475              #    pcs   resource   create   VirtualIP   ocf:heartbeat:IPaddr2
2476              ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
2477
2478       Create the same 'VirtualIP' resource using the short agent name
2479              #  pcs  resource  create   VirtualIP   IPaddr2   ip=192.168.0.99
2480              cidr_netmask=32 nic=eth2 op monitor interval=30s
2481
2482       Change the ip address of VirtualIP and remove the nic option
2483              # pcs resource update VirtualIP ip=192.168.0.98 nic=
2484
2485       Delete the VirtualIP resource
2486              # pcs resource delete VirtualIP
2487
2488       Create  the  MyStonith  stonith  fence_virt device which can fence host
2489       'f1'
2490              # pcs stonith create MyStonith fence_virt pcmk_host_list=f1
2491
2492       Set the stonith-enabled property to false on the  cluster  (which  dis‐
2493       ables stonith)
2494              # pcs property set stonith-enabled=false
2495

USING --FORCE IN PCS COMMANDS

2497       Various pcs commands accept the --force option. Its purpose is to over‐
2498       ride some of the checks that pcs performs or some of the  errors  that
2499       may occur when a pcs command is run. When such an error occurs, pcs will
2500       print the error with a note that it may be overridden. The exact behav‐
2501       ior of the option is different for each pcs command. Using  the  --force
2502       option can lead to situations that would normally be prevented  by  the
2503       logic of pcs commands and therefore its use is strongly discouraged un‐
2504       less you know what you are doing.
2505

ENVIRONMENT VARIABLES

2507       EDITOR
2508               Path to a plain-text editor. This is used when pcs is requested
2509              to present text for the user to edit.
2510
2511       no_proxy, https_proxy, all_proxy, NO_PROXY, HTTPS_PROXY, ALL_PROXY
2512               These  environment variables (listed according to their priori‐
2513              ties) control how pcs handles proxy servers when  connecting  to
2514              cluster nodes. See the curl(1) man page for details.
2515

CHANGES IN PCS-0.11

2517       This section summarizes the most important changes made to commands  in
2518       pcs-0.11.x compared to pcs-0.10.x. For a detailed description  of  cur‐
2519       rent commands, see above.
2520
2521   Legacy role names
2522       Roles  'Master'  and 'Slave' are deprecated and should not be used any‐
2523       more. Instead use 'Promoted' and 'Unpromoted' respectively.  Similarly,
2524       --master has been deprecated and replaced with --promoted.
2525
2526   cluster
2527       uidgid rm
2528              This  command has been replaced with 'pcs cluster uidgid delete'
2529              and 'pcs cluster uidgid remove'.
2530
2531   resource
2532       move   The 'pcs resource move' command now automatically  removes  the
2533              location constraint used for moving a resource. It is the equiv‐
2534              alent of 'pcs resource move --autodelete' from pcs-0.10.9. Legacy
2535              functionality of the 'resource move' command is still  available
2536              as 'resource move-with-constraint <resource id>'.
2537
2538       show --full
2539              This command has been replaced with 'pcs resource config'.
2540
2541       show --groups
2542              This command has been replaced with 'pcs resource group list'.
2543
2544       show   This command has been replaced with 'pcs resource status'.
2545
2546   stonith
2547       show --full
2548              This command has been replaced with 'pcs stonith config'.
2549
2550       show   This command has been replaced with 'pcs stonith status'.
2551

CHANGES IN PCS-0.10

2553       This section summarizes the most important changes made to commands  in
2554       pcs-0.10.x compared to pcs-0.9.x. For a detailed description of current
2555       commands, see above.
2556
2557   acl
2558       show   The 'pcs acl show' command has been deprecated and will  be  re‐
2559              moved.  Please  use  'pcs  acl  config'  instead.  Applicable in
2560              pcs-0.10.9 and newer.
2561
2562   alert
2563       show   The 'pcs alert show' command has been deprecated and will be re‐
2564              moved.  Please  use  'pcs  alert  config' instead. Applicable in
2565              pcs-0.10.9 and newer.
2566
2567   cluster
2568       auth   The 'pcs cluster auth' command only authenticates nodes in a lo‐
2569              cal cluster and does not accept a node list. The new command for
2570              authentication is 'pcs host auth'. It allows one to specify host
2571              names, addresses and pcsd ports.
2572
2573       node add
2574              Custom node names and Corosync 3.x with knet are fully supported
2575              now; therefore, the syntax has been completely changed.
2576              The --device and --watchdog options have been replaced with 'de‐
2577              vice' and 'watchdog' options, respectively.
2578
2579       pcsd-status
2580              The  'pcs  cluster  pcsd-status' command has been deprecated and
2581              will be removed. Please use 'pcs pcsd  status'  or  'pcs  status
2582              pcsd' instead. Applicable in pcs-0.10.9 and newer.
2583
2584       quorum This command has been replaced with 'pcs quorum'.
2585
2586       remote-node add
2587              This   command   has   been  replaced  with  'pcs  cluster  node
2588              add-guest'.
2589
2590       remote-node remove
2591              This  command  has  been  replaced  with   'pcs   cluster   node
2592              delete-guest' and its alias 'pcs cluster node remove-guest'.
2593
2594       setup  Custom node names and Corosync 3.x with knet are fully supported
2595              now; therefore, the syntax has been completely changed.
2596              The --name option has been removed. The first parameter  of  the
2597              command is the cluster name now.
2598              The  --local  option  has  been  replaced  with  --corosync_conf
2599              <path>.
2600
2601       standby
2602              This command has been replaced with 'pcs node standby'.
2603
2604       uidgid rm
2605              This command  has  been  deprecated,  use  'pcs  cluster  uidgid
2606              delete' or 'pcs cluster uidgid remove' instead.
2607
2608       unstandby
2609              This command has been replaced with 'pcs node unstandby'.
2610
2611       verify The -V option has been replaced with --full.
2612              To specify a filename, use the -f option.
2613
2614   constraint
2615       list   The  'pcs constraint list' command, as well as its variants 'pcs
2616              constraint [location | colocation | order | ticket]  list',  has
2617              been  deprecated and will be removed. Please use 'pcs constraint
2618              [location | colocation | order | ticket] config' instead. Appli‐
2619              cable in pcs-0.10.9 and newer.
2620
2621       show   The  'pcs constraint show' command, as well as its variants 'pcs
2622              constraint [location | colocation | order | ticket]  show',  has
2623              been  deprecated and will be removed. Please use 'pcs constraint
2624              [location | colocation | order | ticket] config' instead. Appli‐
2625              cable in pcs-0.10.9 and newer.
2626
2627   pcsd
2628       clear-auth
2629              This  command  has been replaced with 'pcs host deauth' and 'pcs
2630              pcsd deauth'.
2631
2632   property
2633       list   The 'pcs property list' command has been deprecated and will  be
2634              removed. Please use 'pcs property config' instead. Applicable in
2635              pcs-0.10.9 and newer.
2636
2637       set    The --node option is no longer supported. Use the 'pcs node  at‐
2638              tribute' command to set node attributes.
2639
2640       show   The  --node option is no longer supported. Use the 'pcs node at‐
2641              tribute' command to view node attributes.
2642              The 'pcs property show' command has been deprecated and will  be
2643              removed. Please use 'pcs property config' instead. Applicable in
2644              pcs-0.10.9 and newer.
2645
2646       unset  The --node option is no longer supported. Use the 'pcs node  at‐
2647              tribute' command to unset node attributes.
2648
2649   resource
2650       create The 'master' keyword has been changed to 'promotable'.
2651
2652       failcount reset
2653              The command has been removed as 'pcs resource cleanup' does  ex‐
2654              actly the same job.
2655
2656       master This command has been replaced with 'pcs resource promotable'.
2657
2658       show   Previously, this command displayed either status  or  configura‐
2659              tion  of  resources  depending on the parameters specified. This
2660              was confusing, therefore the command was replaced by several new
2661              commands.  To  display  resources  status, run 'pcs resource' or
2662              'pcs resource status'. To display resources  configuration,  run
2663              'pcs  resource config' or 'pcs resource config <resource name>'.
2664              To display configured resource groups, run 'pcs  resource  group
2665              list'.
2666
2667   status
2668       groups This command has been replaced with 'pcs resource group list'.
2669
2670   stonith
2671       level add | clear | delete | remove
2672              Delimiting stonith devices with a comma is deprecated; use  a
2673              space instead. Applicable in pcs-0.10.9 and newer.
2674
2675       level clear
2676              Syntax of the command has been fixed so that it is not ambiguous
2677              any  more.  New syntax is 'pcs stonith level clear [target <tar‐
2678              get> | stonith <stonith id>...]'. Old syntax 'pcs stonith  level
2679              clear  [<target> | <stonith ids>]' is deprecated but still func‐
2680              tional in pcs-0.10.x. Applicable in pcs-0.10.9 and newer.
2681
2682       level delete | remove
2683              Syntax of the command has been fixed so that it is not ambiguous
2684              any more. New syntax is 'pcs stonith level delete | remove [tar‐
2685              get  <target>]  [stonith  <stonith  id>]...'.  Old  syntax  'pcs
2686              stonith  level  delete | remove [<target>] [<stonith id>]...' is
2687              deprecated but still functional  in  pcs-0.10.x.  Applicable  in
2688              pcs-0.10.9 and newer.
2689
2690       sbd device setup
2691              The --device option has been replaced with the 'device' option.
2692
2693       sbd enable
2694              The --device and --watchdog options have been replaced with 'de‐
2695              vice' and 'watchdog' options, respectively.
2696
2697       show   Previously, this command displayed either status  or  configura‐
2698              tion of stonith resources depending on the parameters specified.
2699              This was confusing, therefore the command was replaced  by  sev‐
2700              eral new commands. To display stonith resources status, run 'pcs
2701              stonith' or 'pcs stonith status'. To display  stonith  resources
2702              configuration,  run  'pcs stonith config' or 'pcs stonith config
2703              <stonith name>'.
2704
2705   tag
2706       list   The 'pcs tag list' command has been deprecated and will  be  re‐
2707              moved.  Please  use  'pcs  tag  config'  instead.  Applicable in
2708              pcs-0.10.9 and newer.
2709

SEE ALSO

2711       http://clusterlabs.org/doc/
2712
2713       pcsd(8), pcs_snmp_agent(8)
2714
2715       corosync_overview(8),  votequorum(5),  corosync.conf(5),  corosync-qde‐
2716       vice(8),          corosync-qdevice-tool(8),          corosync-qnetd(8),
2717       corosync-qnetd-tool(8)
2718
2719       pacemaker-controld(7),  pacemaker-fenced(7),   pacemaker-schedulerd(7),
2720       crm_mon(8), crm_report(8), crm_simulate(8)
2721
2722       boothd(8), sbd(8)
2723
2724
2725
2726pcs 0.11.4.15-f7301               2022-12-08                            PCS(8)