PCS(8)                  System Administration Utilities                 PCS(8)

NAME
       pcs - pacemaker/corosync configuration system

SYNOPSIS
       pcs [-f file] [-h] [commands]...

DESCRIPTION
       Control and configure pacemaker and corosync.

OPTIONS
       -h, --help
              Display usage and exit.

       -f file
              Perform actions on file instead of active CIB.
              Commands supporting the option use the initial state of the
              specified file as their input and then overwrite the file
              with the state reflecting the requested operation(s).
              A few commands only use the specified file in read-only mode
              since their effect is not a CIB modification.

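              Example: A typical offline editing session saves the CIB to
              a file, modifies it with one or more pcs commands, and
              pushes it back (the file name and the resource id 'dummy'
              are illustrative):
                  pcs cluster cib > cib.xml
                  pcs -f cib.xml resource create dummy ocf:heartbeat:Dummy
                  pcs cluster cib-push cib.xml --config
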
       --debug
              Print all network traffic and external commands run.

       --version
              Print pcs version information. List pcs capabilities if
              --full is specified.

       --request-timeout=<timeout>
              Timeout for each outgoing request to another node in
              seconds. Default is 60s.

   Commands:
       cluster
               Configure cluster options and nodes.

       resource
               Manage cluster resources.

       stonith
               Manage fence devices.

       constraint
               Manage resource constraints.

       property
               Manage pacemaker properties.

       acl
               Manage pacemaker access control lists.

       qdevice
               Manage quorum device provider on the local host.

       quorum
               Manage cluster quorum settings.

       booth
               Manage booth (cluster ticket manager).

       status
               View cluster status.

       config
               View and manage cluster configuration.

       pcsd
               Manage pcs daemon.

       host
               Manage hosts known to pcs/pcsd.

       node
               Manage cluster nodes.

       alert
               Manage pacemaker alerts.

       client
               Manage pcsd client configuration.

       dr
               Manage disaster recovery configuration.

       tag
               Manage pacemaker tags.

   resource
       [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
              Show status of all currently configured resources. If
              --hide-inactive is specified, only show active resources. If
              a resource or tag id is specified, only show status of the
              specified resource or resources in the specified tag. If
              node is specified, only show status of resources configured
              for the specified node.

       config [--output-format text|cmd|json] [<resource id>]...
              Show options of all currently configured resources or, if
              resource ids are specified, show the options for the
              specified resource ids. There are 3 formats of output
              available: 'cmd', 'json' and 'text', default is 'text'.
              Format 'text' is a human friendly output. Format 'cmd'
              prints pcs commands which can be used to recreate the same
              configuration. Format 'json' is a machine oriented output of
              the configuration.

       list [filter] [--nodesc]
              Show a list of all available resource agents (if filter is
              provided then only resource agents matching the filter will
              be shown). If --nodesc is used then descriptions of resource
              agents are not printed.

       describe [<standard>:[<provider>:]]<type> [--full]
              Show options for the specified resource. If --full is
              specified, all options including advanced and deprecated
              ones are shown.

       create <resource id> [<standard>:[<provider>:]]<type> [<resource
       options>] [op <operation action> <operation options> [<operation
       action> <operation options>]...] [meta <meta options>] [clone
       [<clone id>] [<clone meta options>] | promotable [<clone id>]
       [<promotable meta options>] | --group <group id> [--before
       <resource id> | --after <resource id>] | bundle <bundle id>]
       [--disabled] [--agent-validation] [--no-default-ops] [--wait[=n]]
       create --future <resource id> [<standard>:[<provider>:]]<type>
       [<resource options>] [op <operation action> <operation options>
       [<operation action> <operation options>]...] [meta <meta options>]
       [clone [<clone id>] [meta <clone meta options>] | promotable
       [<clone id>] [meta <promotable meta options>] | --group <group id>
       [--before <resource id> | --after <resource id>] | bundle <bundle
       id>] [--disabled] [--agent-validation] [--no-default-ops]
       [--wait[=n]]
              Create specified resource.
              If clone is used, a clone resource is created. If promotable
              is used, a promotable clone resource is created.
              If --group is specified, the resource is added to the named
              group. You can use --before or --after to specify the
              position of the added resource relative to some resource
              already existing in the group.
              If bundle is specified, the resource will be created inside
              the specified bundle.
              If --disabled is specified, the resource is not started
              automatically.
              If --agent-validation is specified, the resource agent's
              validate-all action will be used to validate resource
              options.
              If --no-default-ops is specified, only monitor operations
              are created for the resource and all other operations use
              default settings.
              If --wait is specified, pcs will wait up to 'n' seconds for
              the resource to start and then return 0 if the resource is
              started, or 1 if the resource has not yet started. If 'n' is
              not specified it defaults to 60 minutes.
              Specifying --future switches to the new command format,
              which changes the way clone and promotable meta options are
              expected to be specified.

              Example: Create a new resource called 'VirtualIP' with IP
              address 192.168.0.99, netmask of 32, monitored every 30
              seconds, on eth2:
              pcs resource create VirtualIP ocf:heartbeat:IPaddr2
              ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor
              interval=30s

       delete <resource id|group id|bundle id|clone id>
              Deletes the resource, group, bundle or clone (and all
              resources within the group/bundle/clone).

       remove <resource id|group id|bundle id|clone id>
              Deletes the resource, group, bundle or clone (and all
              resources within the group/bundle/clone).

       enable <resource id | tag id>... [--wait[=n]]
              Allow the cluster to start the resources. Depending on the
              rest of the configuration (constraints, options, failures,
              etc), the resources may remain stopped. If --wait is
              specified, pcs will wait up to 'n' seconds for the resources
              to start and then return 0 if the resources are started, or
              1 if the resources have not yet started. If 'n' is not
              specified it defaults to 60 minutes.

       disable <resource id | tag id>... [--safe [--brief] [--no-strict]]
       [--simulate [--brief]] [--wait[=n]]
              Attempt to stop the resources if they are running and forbid
              the cluster from starting them again. Depending on the rest
              of the configuration (constraints, options, failures, etc),
              the resources may remain started.
              If --safe is specified, no changes to the cluster
              configuration will be made if any resources other than the
              ones specified would be affected in any way. If --brief is
              also specified, only errors are printed.
              If --no-strict is specified, no changes to the cluster
              configuration will be made if any resources other than the
              ones specified would get stopped or demoted. Moving
              resources between nodes is allowed.
              If --simulate is specified, no changes to the cluster
              configuration will be made and the effect of the changes
              will be printed instead. If --brief is also specified, only
              a list of affected resources not specified in the command
              will be printed.
              If --wait is specified, pcs will wait up to 'n' seconds for
              the resources to stop and then return 0 if the resources are
              stopped or 1 if the resources have not stopped. If 'n' is
              not specified it defaults to 60 minutes.

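              Example: Stop the resource 'dummy' only if no other resource
              is affected, and wait for it to stop (the resource id is
              illustrative):
                  pcs resource disable dummy --safe --wait
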
       safe-disable <resource id | tag id>... [--brief] [--no-strict]
       [--simulate [--brief]] [--wait[=n]] [--force]
              Attempt to stop the resources if they are running and forbid
              the cluster from starting them again. Depending on the rest
              of the configuration (constraints, options, failures, etc),
              the resources may remain started. No changes to the cluster
              configuration will be made if any resources other than the
              ones specified would be affected in any way.
              If --brief is specified, only errors are printed.
              If --no-strict is specified, no changes to the cluster
              configuration will be made if any resources other than the
              ones specified would get stopped or demoted. Moving
              resources between nodes is allowed.
              If --simulate is specified, no changes to the cluster
              configuration will be made and the effect of the changes
              will be printed instead. If --brief is also specified, only
              a list of affected resources not specified in the command
              will be printed.
              If --wait is specified, pcs will wait up to 'n' seconds for
              the resources to stop and then return 0 if the resources are
              stopped or 1 if the resources have not stopped. If 'n' is
              not specified it defaults to 60 minutes.
              If --force is specified, checks for safe disable will be
              skipped.

       restart <resource id> [node] [--wait=n]
              Restart the resource specified. If a node is specified and
              the resource is a clone or bundle, it will be restarted only
              on the node specified. If --wait is specified, then we will
              wait up to 'n' seconds for the resource to be restarted and
              return 0 if the restart was successful or 1 if it was not.

       debug-start <resource id> [--full]
              This command will force the specified resource to start on
              this node, ignoring the cluster recommendations, and print
              the output from starting the resource. Using --full will
              give more detailed output. This is mainly used for debugging
              resources that fail to start.

       debug-stop <resource id> [--full]
              This command will force the specified resource to stop on
              this node, ignoring the cluster recommendations, and print
              the output from stopping the resource. Using --full will
              give more detailed output. This is mainly used for debugging
              resources that fail to stop.

       debug-promote <resource id> [--full]
              This command will force the specified resource to be
              promoted on this node, ignoring the cluster recommendations,
              and print the output from promoting the resource. Using
              --full will give more detailed output. This is mainly used
              for debugging resources that fail to promote.

       debug-demote <resource id> [--full]
              This command will force the specified resource to be demoted
              on this node, ignoring the cluster recommendations, and
              print the output from demoting the resource. Using --full
              will give more detailed output. This is mainly used for
              debugging resources that fail to demote.

       debug-monitor <resource id> [--full]
              This command will force the specified resource to be
              monitored on this node, ignoring the cluster
              recommendations, and print the output from monitoring the
              resource. Using --full will give more detailed output. This
              is mainly used for debugging resources that fail to be
              monitored.

       move <resource id> [destination node] [--promoted] [--strict]
       [--wait[=n]]
              Move the resource off the node it is currently running on.
              This is achieved by creating a -INFINITY location constraint
              to ban the node. If a destination node is specified, the
              resource will be moved to that node by creating an INFINITY
              location constraint to prefer the destination node. The
              constraint needed for moving the resource will be
              automatically removed once the resource is running on its
              new location. The command will fail in case it is not
              possible to verify that the resource will not be moved back
              after deleting the constraint.

              If --strict is specified, the command will also fail if
              other resources would be affected.

              If --promoted is used, the scope of the command is limited
              to the Promoted role and the promotable clone id must be
              used (instead of the resource id).

              If --wait is specified, pcs will wait up to 'n' seconds for
              the resource to move and then return 0 on success or 1 on
              error. If 'n' is not specified it defaults to 60 minutes.

              NOTE: This command has been changed in pcs-0.11. It is
              equivalent to the command 'resource move <resource id>
              --autodelete' from pcs-0.10.9. Legacy functionality of the
              'resource move' command is still available as 'resource
              move-with-constraint <resource id>'.

              If you want the resource to preferably avoid running on some
              nodes but be able to fail over to them, use 'pcs constraint
              location avoids'.

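              Example: Move the resource 'dummy' to node 'node2' (the
              resource and node names are illustrative):
                  pcs resource move dummy node2
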
       move-with-constraint <resource id> [destination node]
       [lifetime=<lifetime>] [--promoted] [--wait[=n]]
              Move the resource off the node it is currently running on by
              creating a -INFINITY location constraint to ban the node. If
              a destination node is specified, the resource will be moved
              to that node by creating an INFINITY location constraint to
              prefer the destination node.

              If lifetime is specified then the constraint will expire
              after that time, otherwise it defaults to infinity and the
              constraint can be cleared manually with 'pcs resource clear'
              or 'pcs constraint delete'. Lifetime is expected to be
              specified as an ISO 8601 duration (see
              https://en.wikipedia.org/wiki/ISO_8601#Durations).

              If --promoted is used, the scope of the command is limited
              to the Promoted role and the promotable clone id must be
              used (instead of the resource id).

              If --wait is specified, pcs will wait up to 'n' seconds for
              the resource to move and then return 0 on success or 1 on
              error. If 'n' is not specified it defaults to 60 minutes.

              If you want the resource to preferably avoid running on some
              nodes but be able to fail over to them, use 'pcs constraint
              location avoids'.

       ban <resource id> [node] [--promoted] [lifetime=<lifetime>]
       [--wait[=n]]
              Prevent the resource id specified from running on the node
              (or on the current node it is running on if no node is
              specified) by creating a -INFINITY location constraint.

              If --promoted is used, the scope of the command is limited
              to the Promoted role and the promotable clone id must be
              used (instead of the resource id).

              If lifetime is specified then the constraint will expire
              after that time, otherwise it defaults to infinity and the
              constraint can be cleared manually with 'pcs resource clear'
              or 'pcs constraint delete'. Lifetime is expected to be
              specified as an ISO 8601 duration (see
              https://en.wikipedia.org/wiki/ISO_8601#Durations).

              If --wait is specified, pcs will wait up to 'n' seconds for
              the resource to move and then return 0 on success or 1 on
              error. If 'n' is not specified it defaults to 60 minutes.

              If you want the resource to preferably avoid running on some
              nodes but be able to fail over to them, use 'pcs constraint
              location avoids'.

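              Example: Ban the resource 'dummy' from node 'node1' for one
              hour, expressed as an ISO 8601 duration (the resource and
              node names are illustrative):
                  pcs resource ban dummy node1 lifetime=PT1H
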
       clear <resource id> [node] [--promoted] [--expired] [--wait[=n]]
              Remove constraints created by move and/or ban on the
              specified resource (and node if specified).

              If --promoted is used, the scope of the command is limited
              to the Promoted role and the promotable clone id must be
              used (instead of the resource id).

              If --expired is specified, only constraints with expired
              lifetimes will be removed.

              If --wait is specified, pcs will wait up to 'n' seconds for
              the operation to finish (including starting and/or moving
              resources if appropriate) and then return 0 on success or 1
              on error. If 'n' is not specified it defaults to 60 minutes.

       standards
              List available resource agent standards supported by this
              installation (OCF, LSB, etc.).

       providers
              List available OCF resource agent providers.

       agents [standard[:provider]]
              List available agents optionally filtered by standard and
              provider.

       update <resource id> [resource options] [op [<operation action>
       <operation options>]...] [meta <meta options>]
       [--agent-validation] [--wait[=n]]
              Add, remove or change options of the specified resource,
              clone or multi-state resource. Unspecified options will be
              kept unchanged. If you wish to remove an option, set it to
              an empty value, i.e. 'option_name='.

              If an operation (op) is specified, it will update the first
              found operation with the same action on the specified
              resource. If no operation with that action exists then a new
              operation will be created. (WARNING: all existing options on
              the updated operation will be reset if not specified.) If
              you want to create multiple monitor operations you should
              use the 'op add' & 'op remove' commands.

              If --agent-validation is specified, the resource agent's
              validate-all action will be used to validate resource
              options.

              If --wait is specified, pcs will wait up to 'n' seconds for
              the changes to take effect and then return 0 if the changes
              have been processed or 1 otherwise. If 'n' is not specified
              it defaults to 60 minutes.

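              Example: Change the address of the 'VirtualIP' resource
              created in the 'resource create' example above and set its
              monitor interval to 60 seconds:
                  pcs resource update VirtualIP ip=192.168.0.100 op
              monitor interval=60s
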
       op add <resource id> <operation action> [operation properties]
              Add operation for specified resource.

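              Example: Add a second monitor operation with a different
              interval to the 'VirtualIP' resource created above:
                  pcs resource op add VirtualIP monitor interval=60s
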
       op delete <resource id> <operation action> [<operation properties>...]
              Remove the specified operation (note: you must specify the
              exact operation properties to properly remove an existing
              operation).

       op delete <operation id>
              Remove the specified operation id.

       op remove <resource id> <operation action> [<operation properties>...]
              Remove the specified operation (note: you must specify the
              exact operation properties to properly remove an existing
              operation).

       op remove <operation id>
              Remove the specified operation id.

       op defaults [config] [--all] [--full] [--no-expire-check]
              List currently configured default values for operations. If
              --all is specified, also list expired sets of values. If
              --full is specified, also list ids. If --no-expire-check is
              specified, do not evaluate whether sets of values are
              expired.

       op defaults <name>=<value>...
              Set default values for operations.
              NOTE: Defaults do not apply to resources / stonith devices
              which override them with their own defined values.

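              Example: Set a default timeout of 240 seconds for all
              operations (the value is illustrative):
                  pcs resource op defaults timeout=240s
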
       op defaults set create [<set options>] [meta [<name>=<value>]...]
       [rule [<expression>]]
              Create a new set of default values for resource / stonith
              device operations. You may specify a rule describing
              resources / stonith devices and / or operations to which the
              set applies.

              Set options are: id, score

              Expression looks like one of the following:
                op <operation name> [interval=<interval>]
                resource [<standard>]:[<provider>]:[<type>]
                defined|not_defined <node attribute>
                <node attribute> lt|gt|lte|gte|eq|ne
              [string|integer|number|version] <value>
                date gt|lt <date>
                date in_range [<date>] to <date>
                date in_range <date> to duration <duration options>
                date-spec <date-spec options>
                <expression> and|or <expression>
                (<expression>)

              You may specify all or any of 'standard', 'provider' and
              'type' in a resource expression. For example: 'resource
              ocf::' matches all resources of 'ocf' standard, while
              'resource ::Dummy' matches all resources of 'Dummy' type
              regardless of their standard and provider.

              Dates are expected to conform to ISO 8601 format.

              Duration options are: hours, monthdays, weekdays, yeardays,
              months, weeks, years, weekyears, moon. Value for these
              options is an integer.

              Date-spec options are: hours, monthdays, weekdays, yeardays,
              months, weeks, years, weekyears, moon. Value for these
              options is an integer or a range written as
              integer-integer.

              NOTE: Defaults do not apply to resources / stonith devices
              which override them with their own defined values.

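              Example: Create a set of defaults applying only to start
              operations (the set id and timeout are illustrative):
                  pcs resource op defaults set create id=start-defaults
              meta timeout=90s rule op start
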
       op defaults set delete [<set id>]...
              Delete specified options sets.

       op defaults set remove [<set id>]...
              Delete specified options sets.

       op defaults set update <set id> [meta [<name>=<value>]...]
              Add, remove or change values in the specified set of default
              values for resource / stonith device operations. Unspecified
              options will be kept unchanged. If you wish to remove an
              option, set it to an empty value, i.e. 'option_name='.

              NOTE: Defaults do not apply to resources / stonith devices
              which override them with their own defined values.

       op defaults update <name>=<value>...
              Add, remove or change default values for operations. This is
              a simplified command useful for cases when you only manage
              one set of default values. Unspecified options will be kept
              unchanged. If you wish to remove an option, set it to an
              empty value, i.e. 'option_name='.

              NOTE: Defaults do not apply to resources / stonith devices
              which override them with their own defined values.

       meta <resource id | group id | clone id> <meta options>
       [--wait[=n]]
              Add specified options to the specified resource, group or
              clone. Meta options should be in the format of name=value;
              options may be removed by setting an option without a value.
              If --wait is specified, pcs will wait up to 'n' seconds for
              the changes to take effect and then return 0 if the changes
              have been processed or 1 otherwise. If 'n' is not specified
              it defaults to 60 minutes.
              Example: pcs resource meta TestResource failure-timeout=50
              resource-stickiness=

       group list
              Show all currently configured resource groups and their
              resources.

       group add <group id> <resource id> [resource id] ... [resource id]
       [--before <resource id> | --after <resource id>] [--wait[=n]]
              Add the specified resource to the group, creating the group
              if it does not exist. If the resource is present in another
              group it is moved to the new group. If the group remains
              empty after the move, it is deleted (for cloned groups, the
              clone is deleted as well). The delete operation may fail in
              case the group is referenced within the configuration, e.g.
              by constraints. In that case, use the 'pcs resource ungroup'
              command prior to moving all resources out of the group.

              You can use --before or --after to specify the position of
              the added resources relative to some resource already
              existing in the group. By adding resources to a group they
              are already in and specifying --after or --before you can
              move the resources within the group.

              If --wait is specified, pcs will wait up to 'n' seconds for
              the operation to finish (including moving resources if
              appropriate) and then return 0 on success or 1 on error. If
              'n' is not specified it defaults to 60 minutes.

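              Example: Add resources 'dummy1' and 'dummy2' to the group
              'mygroup', placing them before 'dummy0' (all ids are
              illustrative):
                  pcs resource group add mygroup dummy1 dummy2 --before
              dummy0
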
       group delete <group id> [resource id]... [--wait[=n]]
              Remove the group (note: this does not remove any resources
              from the cluster) or, if resources are specified, remove the
              specified resources from the group. If --wait is specified,
              pcs will wait up to 'n' seconds for the operation to finish
              (including moving resources if appropriate) and then return
              0 on success or 1 on error. If 'n' is not specified it
              defaults to 60 minutes.

       group remove <group id> [resource id]... [--wait[=n]]
              Remove the group (note: this does not remove any resources
              from the cluster) or, if resources are specified, remove the
              specified resources from the group. If --wait is specified,
              pcs will wait up to 'n' seconds for the operation to finish
              (including moving resources if appropriate) and then return
              0 on success or 1 on error. If 'n' is not specified it
              defaults to 60 minutes.

       ungroup <group id> [resource id]... [--wait[=n]]
              Remove the group (note: this does not remove any resources
              from the cluster) or, if resources are specified, remove the
              specified resources from the group. If --wait is specified,
              pcs will wait up to 'n' seconds for the operation to finish
              (including moving resources if appropriate) and then return
              0 on success or 1 on error. If 'n' is not specified it
              defaults to 60 minutes.

       clone <resource id | group id> [<clone id>] [meta <clone meta
       options>] [--wait[=n]]
              Set up the specified resource or group as a clone. If --wait
              is specified, pcs will wait up to 'n' seconds for the
              operation to finish (including starting clone instances if
              appropriate) and then return 0 on success or 1 on error. If
              'n' is not specified it defaults to 60 minutes.

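              Example: Clone the resource 'dummy', limiting the number of
              instances with the clone meta option clone-max (the resource
              id and value are illustrative):
                  pcs resource clone dummy meta clone-max=2
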
       promotable <resource id | group id> [<clone id>] [meta <clone meta
       options>] [--wait[=n]]
              Set up the specified resource or group as a promotable
              clone. This is an alias for 'pcs resource clone <resource
              id> promotable=true'.

       unclone <clone id | resource id | group id> [--wait[=n]]
              Remove the specified clone or the clone which contains the
              specified group or resource (the resource or group will not
              be removed). If --wait is specified, pcs will wait up to 'n'
              seconds for the operation to finish (including stopping
              clone instances if appropriate) and then return 0 on success
              or 1 on error. If 'n' is not specified it defaults to 60
              minutes.

       bundle create <bundle id> container <container type> [<container
       options>] [network <network options>] [port-map <port
       options>]... [storage-map <storage options>]... [meta <meta
       options>] [--disabled] [--wait[=n]]
              Create a new bundle encapsulating no resources. The bundle
              can be used either as it is or a resource may be put into it
              at any time. If --disabled is specified, the bundle is not
              started automatically. If --wait is specified, pcs will wait
              up to 'n' seconds for the bundle to start and then return 0
              on success or 1 on error. If 'n' is not specified it
              defaults to 60 minutes.

       bundle reset <bundle id> [container <container options>] [network
       <network options>] [port-map <port options>]... [storage-map
       <storage options>]... [meta <meta options>] [--disabled]
       [--wait[=n]]
              Configure the specified bundle with the given options.
              Unlike bundle update, this command resets the bundle
              according to the given options - no previous options are
              kept. Resources inside the bundle are kept as they are. If
              --disabled is specified, the bundle is not started
              automatically. If --wait is specified, pcs will wait up to
              'n' seconds for the bundle to start and then return 0 on
              success or 1 on error. If 'n' is not specified it defaults
              to 60 minutes.

       bundle update <bundle id> [container <container options>] [network
       <network options>] [port-map (add <port options>) | (delete |
       remove <id>...)]... [storage-map (add <storage options>) | (delete
       | remove <id>...)]... [meta <meta options>] [--wait[=n]]
              Add, remove or change options of the specified bundle.
              Unspecified options will be kept unchanged. If you wish to
              remove an option, set it to an empty value, i.e.
              'option_name='.

              If you wish to update a resource encapsulated in the bundle,
              use the 'pcs resource update' command instead and specify
              the resource id.

              If --wait is specified, pcs will wait up to 'n' seconds for
              the operation to finish (including moving resources if
              appropriate) and then return 0 on success or 1 on error. If
              'n' is not specified it defaults to 60 minutes.

       manage <resource id | tag id>... [--monitor]
              Set the listed resources to managed mode (default). If
              --monitor is specified, enable all monitor operations of the
              resources.

       unmanage <resource id | tag id>... [--monitor]
              Set the listed resources to unmanaged mode. When a resource
              is in unmanaged mode, the cluster is not allowed to start
              nor stop the resource. If --monitor is specified, disable
              all monitor operations of the resources.

       defaults [config] [--all] [--full] [--no-expire-check]
              List currently configured default values for resources /
              stonith devices. If --all is specified, also list expired
              sets of values. If --full is specified, also list ids. If
              --no-expire-check is specified, do not evaluate whether sets
              of values are expired.

       defaults <name>=<value>...
              Set default values for resources / stonith devices.
              NOTE: Defaults do not apply to resources / stonith devices
              which override them with their own defined values.

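              Example: Make resources prefer to stay on their current node
              by setting a default resource-stickiness (the value is
              illustrative):
                  pcs resource defaults resource-stickiness=100
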
       defaults set create [<set options>] [meta [<name>=<value>]...]
       [rule [<expression>]]
              Create a new set of default values for resources / stonith
              devices. You may specify a rule describing resources /
              stonith devices to which the set applies.

              Set options are: id, score

              Expression looks like one of the following:
                resource [<standard>]:[<provider>]:[<type>]
                date gt|lt <date>
                date in_range [<date>] to <date>
                date in_range <date> to duration <duration options>
                date-spec <date-spec options>
                <expression> and|or <expression>
                (<expression>)

              You may specify all or any of 'standard', 'provider' and
              'type' in a resource expression. For example: 'resource
              ocf::' matches all resources of 'ocf' standard, while
              'resource ::Dummy' matches all resources of 'Dummy' type
              regardless of their standard and provider.

              Dates are expected to conform to ISO 8601 format.

              Duration options are: hours, monthdays, weekdays, yeardays,
              months, weeks, years, weekyears, moon. Value for these
              options is an integer.

              Date-spec options are: hours, monthdays, weekdays, yeardays,
              months, weeks, years, weekyears, moon. Value for these
              options is an integer or a range written as
              integer-integer.

              NOTE: Defaults do not apply to resources / stonith devices
              which override them with their own defined values.

       defaults set delete [<set id>]...
              Delete specified options sets.

       defaults set remove [<set id>]...
              Delete specified options sets.

       defaults set update <set id> [meta [<name>=<value>]...]
              Add, remove or change values in the specified set of default
              values for resources / stonith devices. Unspecified options
              will be kept unchanged. If you wish to remove an option, set
              it to an empty value, i.e. 'option_name='.

              NOTE: Defaults do not apply to resources / stonith devices
              which override them with their own defined values.

       defaults update <name>=<value>...
              Add, remove or change default values for resources / stonith
              devices. This is a simplified command useful for cases when
              you only manage one set of default values. Unspecified
              options will be kept unchanged. If you wish to remove an
              option, set it to an empty value, i.e. 'option_name='.

              NOTE: Defaults do not apply to resources / stonith devices
              which override them with their own defined values.

       cleanup [<resource id | stonith id>] [node=<node>]
       [operation=<operation> [interval=<interval>]] [--strict]
              Make the cluster forget failed operations from the history
              of the resource / stonith device and re-detect its current
              state. This can be useful to purge knowledge of past
              failures that have since been resolved.

              If the named resource is part of a group, or one numbered
              instance of a clone or bundled resource, the clean-up
              applies to the whole collective resource unless --strict is
              given.

              If a resource id / stonith id is not specified then all
              resources / stonith devices will be cleaned up.

              If a node is not specified then resources / stonith devices
              on all nodes will be cleaned up.

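              Example: Forget failed operations of the resource 'dummy' on
              node 'node1' (the names are illustrative):
                  pcs resource cleanup dummy node=node1
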
       refresh [<resource id | stonith id>] [node=<node>] [--strict]
              Make the cluster forget the complete operation history
              (including failures) of the resource / stonith device and
              re-detect its current state. If you are interested in
              forgetting failed operations only, use the 'pcs resource
              cleanup' command.

              If the named resource is part of a group, or one numbered
              instance of a clone or bundled resource, the refresh applies
              to the whole collective resource unless --strict is given.

              If a resource id / stonith id is not specified then all
              resources / stonith devices will be refreshed.

              If a node is not specified then resources / stonith devices
              on all nodes will be refreshed.

       failcount [show [<resource id | stonith id>] [node=<node>]
       [operation=<operation> [interval=<interval>]]] [--full]
              Show current failcount for resources and stonith devices,
              optionally filtered by a resource / stonith device, node,
              operation and its interval. If --full is specified do not
              sum failcounts per resource / stonith device and node. Use
              'pcs resource cleanup' or 'pcs resource refresh' to reset
              failcounts.

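              Example: Show fail counts of the resource 'dummy', summed
              per node (the resource id is illustrative):
                  pcs resource failcount show dummy
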
       relocate dry-run [resource1] [resource2] ...
              The same as 'relocate run' but has no effect on the cluster.

       relocate run [resource1] [resource2] ...
              Relocate specified resources to their preferred nodes. If no
              resources are specified, relocate all resources. This
              command calculates the preferred node for each resource
              while ignoring resource stickiness. Then it creates location
              constraints which will cause the resources to move to their
              preferred nodes. Once the resources have been moved the
              constraints are deleted automatically. Note that the
              preferred node is calculated based on current cluster
              status, constraints, location of resources and other
              settings and thus it might change over time.

       relocate show
              Display current status of resources and their optimal node
              ignoring resource stickiness.

       relocate clear
              Remove all constraints created by the 'relocate run'
              command.

       utilization [<resource id> [<name>=<value> ...]]
              Add specified utilization options to the specified resource.
              If a resource is not specified, shows utilization of all
              resources. If utilization options are not specified, shows
              utilization of the specified resource. A utilization option
              should be in the format name=value; the value has to be an
              integer. Options may be removed by setting an option without
              a value.
              Example: pcs resource utilization TestResource cpu= ram=20
              For the utilization configuration to be in effect, the
              cluster property 'placement-strategy' must be configured
              accordingly.

       relations <resource id> [--full]
              Display relations of a resource specified by its id with
              other resources in a tree structure. Supported types of
              resource relations are: ordering constraints, ordering set
              constraints, relations defined by resource hierarchy
              (clones, groups, bundles). If --full is used, more verbose
              output will be printed.

   cluster
       setup <cluster name> (<node name> [addr=<node address>]...)...
       [transport knet|udp|udpu [<transport options>] [link <link
       options>]... [compression <compression options>] [crypto <crypto
       options>]] [totem <totem options>] [quorum <quorum options>]
       [--no-cluster-uuid] ([--enable] [--start [--wait[=<n>]]]
       [--no-keys-sync]) | [--corosync_conf <path>]
              Create a cluster from the listed nodes and synchronize
              cluster configuration files to them. If --corosync_conf is
              specified, do not connect to other nodes and save
              corosync.conf to the specified path; see 'Local only mode'
              below for details.

              Nodes are specified by their names and optionally their
              addresses. If no addresses are specified for a node, pcs
              will configure corosync to communicate with that node using
              an address provided in the 'pcs host auth' command.
              Otherwise, pcs will configure corosync to communicate with
              the node using the specified addresses.

              Transport knet:
              This is the default transport. It allows configuring traffic
              encryption and compression as well as using multiple
              addresses (links) for nodes.
              Transport options are: ip_version, knet_pmtud_interval,
              link_mode
              Link options are: link_priority, linknumber, mcastport,
              ping_interval, ping_precision, ping_timeout, pong_count,
              transport (udp or sctp)
              Each 'link' followed by options sets options for one link in
              the order the links are defined by nodes' addresses. You can
              set link options for a subset of links using a linknumber.
              See examples below.
              Compression options are: level, model, threshold
              Crypto options are: cipher, hash, model
              By default, encryption is enabled with cipher=aes256 and
              hash=sha256. To disable encryption, set cipher=none and
              hash=none.

              Transports udp and udpu:
              These transports are limited to one address per node. They
              do not support traffic encryption nor compression.
              Transport options are: ip_version, netmtu
              Link options are: bindnetaddr, broadcast, mcastaddr,
              mcastport, ttl

              Totem and quorum can be configured regardless of the used
              transport.
              Totem options are: block_unlisted_ips, consensus, downcheck,
              fail_recv_const, heartbeat_failures_allowed, hold, join,
              max_messages, max_network_delay, merge, miss_count_const,
              send_join, seqno_unchanged_const, token, token_coefficient,
              token_retransmit, token_retransmits_before_loss_const,
              window_size
              Quorum options are: auto_tie_breaker, last_man_standing,
              last_man_standing_window, wait_for_all

              Transports and their options, link, compression, crypto and
              totem options are all documented in the corosync.conf(5) man
              page; knet link options are prefixed 'knet_' there,
              compression options are prefixed 'knet_compression_' and
              crypto options are prefixed 'crypto_'. Quorum options are
              documented in the votequorum(5) man page.

              --no-cluster-uuid will not generate a unique ID for the
              cluster. --enable will configure the cluster to start when
              the nodes boot. --start will start the cluster right after
              creating it. --wait will wait up to 'n' seconds for the
              cluster to start. --no-keys-sync will skip creating and
              distributing the pcsd SSL certificate and key and the
              corosync and pacemaker authkey files. Use this if you
              provide your own certificates and keys.

              Local only mode:
              By default, pcs connects to all specified nodes to verify
              they can be used in the new cluster and to send cluster
              configuration files to them. If this is not what you want,
              specify the --corosync_conf option followed by a file path.
              Pcs will save corosync.conf to the specified file and will
              not connect to cluster nodes. These are the tasks that pcs
              skips in that case:
              * make sure the nodes are not running or configured to run a
              cluster already
              * make sure cluster packages are installed on all nodes and
              their versions are compatible
              * make sure there are no cluster configuration files on any
              node (run 'pcs cluster destroy' and remove the
              pcs_settings.conf file on all nodes)
              * synchronize corosync and pacemaker authkeys,
              /etc/corosync/authkey and /etc/pacemaker/authkey
              respectively, and the corosync.conf file
              * authenticate the cluster nodes against each other ('pcs
              cluster auth' or 'pcs host auth' command)
              * synchronize pcsd certificates (so that the pcs web UI can
              be used in an HA mode)

              Examples:
              Create a cluster with default settings:
                  pcs cluster setup newcluster node1 node2
              Create a cluster using two links:
                  pcs cluster setup newcluster node1 addr=10.0.1.11
              addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12
              Set link options for all links. Link options are matched to
              the links in order. The first link (link 0) has sctp
              transport, the second link (link 1) has mcastport 55405:
                  pcs cluster setup newcluster node1 addr=10.0.1.11
              addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12
              transport knet link transport=sctp link mcastport=55405
              Set link options for the second and fourth links only. Link
              options are matched to the links based on the linknumber
              option (the first link is link 0):
                  pcs cluster setup newcluster node1 addr=10.0.1.11
              addr=10.0.2.11 addr=10.0.3.11 addr=10.0.4.11 node2
              addr=10.0.1.12 addr=10.0.2.12 addr=10.0.3.12 addr=10.0.4.12
              transport knet link linknumber=3 mcastport=55405 link
              linknumber=1 transport=sctp
              Create a cluster using udp transport with a non-default
              port:
                  pcs cluster setup newcluster node1 node2 transport udp
              link mcastport=55405

       config [show] [--output-format text|cmd|json] [--corosync_conf
       <path>]
              Show cluster configuration. There are 3 formats of output
              available: 'cmd', 'json' and 'text', default is 'text'.
              Format 'text' is a human friendly output. Format 'cmd'
              prints pcs commands which can be used to recreate the same
              configuration. Format 'json' is a machine oriented output of
              the configuration. If --corosync_conf is specified, the
              configuration file specified by <path> is used instead of
              the current cluster configuration.

       config update [transport <transport options>] [compression
       <compression options>] [crypto <crypto options>] [totem <totem
       options>] [--corosync_conf <path>]
              Update cluster configuration. Unspecified options will be
              kept unchanged. If you wish to remove an option, set it to
              an empty value, i.e. 'option_name='.

              If --corosync_conf is specified, update the cluster
              configuration in a file specified by <path>.

              All options are documented in the corosync.conf(5) man page.
              There are different transport options for transport types.
              Compression and crypto options are only available for knet
              transport. Totem options can be set regardless of the
              transport type.
              Transport options for knet transport are: ip_version,
              knet_pmtud_interval, link_mode
              Transport options for udp and udpu transports are:
              ip_version, netmtu
              Compression options are: level, model, threshold
              Crypto options are: cipher, hash, model
              Totem options are: block_unlisted_ips, consensus, downcheck,
              fail_recv_const, heartbeat_failures_allowed, hold, join,
              max_messages, max_network_delay, merge, miss_count_const,
              send_join, seqno_unchanged_const, token, token_coefficient,
              token_retransmit, token_retransmits_before_loss_const,
              window_size

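              Example: Raise the totem token timeout while keeping all
              other options unchanged (the value is illustrative):
                  pcs cluster config update totem token=10000
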
       config uuid generate [--corosync_conf <path>] [--force]
              Generate a new cluster UUID and distribute it to all cluster
              nodes. A cluster UUID is not used by the cluster stack in
              any way; it is provided to easily distinguish between
              multiple clusters in a multi-cluster environment, since the
              cluster name does not have to be unique.

              If --corosync_conf is specified, update the cluster
              configuration in the file specified by <path>.

              If --force is specified, an existing UUID will be
              overwritten.

       authkey corosync [<path>]
              Generate a new corosync authkey and distribute it to all
              cluster nodes. If <path> is specified, do not generate a key
              and use the key from the file.

       start [--all | <node>... ] [--wait[=<n>]]
       [--request-timeout=<seconds>]
              Start a cluster on the specified node(s). If no nodes are
              specified then start a cluster on the local node. If --all
              is specified then start a cluster on all nodes. If the
              cluster has many nodes then the start request may time out.
              In that case you should consider setting --request-timeout
              to a suitable value. If --wait is specified, pcs waits up to
              'n' seconds for the cluster to get ready to provide services
              after the cluster has successfully started.

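              Example: Start the cluster on all nodes and wait up to 60
              seconds for it to get ready to provide services:
                  pcs cluster start --all --wait=60
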
959       stop [--all | <node>... ] [--request-timeout=<seconds>]
960              Stop a cluster on specified node(s). If no nodes  are  specified
961              then  stop  a  cluster  on the local node. If --all is specified
962              then stop a cluster on all nodes. If the cluster is running  re‐
963              sources  which  take long time to stop then the stop request may
964              time out before the cluster actually stops.  In  that  case  you
965              should consider setting --request-timeout to a suitable value.
966
967       kill   Force  corosync  and pacemaker daemons to stop on the local node
968              (performs kill -9). Note that init system (e.g. systemd) can de‐
969              tect that cluster is not running and start it again. If you want
970              to stop cluster on a node, run pcs cluster stop on that node.
971
972       enable [--all | <node>... ]
973              Configure cluster to run on node boot on specified  node(s).  If
974              node is not specified then cluster is enabled on the local node.
975              If --all is specified then cluster is enabled on all nodes.
976
977       disable [--all | <node>... ]
978              Configure cluster to not run on node boot on specified  node(s).
979              If  node  is not specified then cluster is disabled on the local
980              node. If --all is specified then  cluster  is  disabled  on  all
981              nodes.
982
983       auth [-u <username>] [-p <password>]
984              Authenticate  pcs/pcsd  to pcsd on nodes configured in the local
985              cluster.
986
987       status View current cluster status (an alias of 'pcs status cluster').
988
989       sync   Sync cluster configuration (files which  are  supported  by  all
990              subcommands of this command) to all cluster nodes.
991
992       sync corosync
993              Sync  corosync  configuration  to  all  nodes found from current
994              corosync.conf file.
995
996       cib [filename] [scope=<scope> | --config]
997              Get the raw xml from the CIB (Cluster Information  Base).  If  a
998              filename  is  provided,  we save the CIB to that file, otherwise
999              the CIB is printed. Specify scope to get a specific  section  of
1000              the CIB. Valid values of the scope are: acls, alerts, configura‐
1001              tion, constraints, crm_config, fencing-topology,  nodes,  op_de‐
1002              faults,  resources,  rsc_defaults, tags. --config is the same as
1003              scope=configuration. Do not specify a scope if you want to  edit
1004              the saved CIB using pcs (pcs -f <command>).
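
              Example (file names are illustrative): save only the
              constraints section to one file, and the whole CIB to another
              file suitable for offline editing with 'pcs -f':
                  pcs cluster cib constraints.xml scope=constraints
                  pcs cluster cib full.xml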
1005
1006       cib-push  <filename> [--wait[=<n>]] [diff-against=<filename_original> |
1007       scope=<scope> | --config]
              Push the raw xml from <filename> to the CIB (Cluster Information
              Base). You can obtain the CIB by running the 'pcs cluster cib'
              command, which is the recommended first step when you want to
              perform modifications (pcs -f <command>) for a one-off push.
1013              If diff-against is specified, pcs  diffs  contents  of  filename
1014              against  contents  of filename_original and pushes the result to
1015              the CIB.
1016              Specify scope to push a specific section of the CIB. Valid  val‐
1017              ues  of the scope are: acls, alerts, configuration, constraints,
1018              crm_config,  fencing-topology,  nodes,  op_defaults,  resources,
1019              rsc_defaults, tags. --config is the same as scope=configuration.
              Use of --config is recommended. Do not specify a scope if you
              need to push the whole CIB or want to be warned in case of an
              outdated CIB.
1023              If --wait is specified wait up to 'n' seconds for changes to  be
1024              applied.
1025              WARNING:  the  selected  scope of the CIB will be overwritten by
1026              the current content of the specified file.
1027
1028              Example:
1029                  pcs cluster cib > original.xml
1030                  cp original.xml new.xml
1031                  pcs -f new.xml constraint location apache prefers node2
1032                  pcs cluster cib-push new.xml diff-against=original.xml
1033
1034       cib-upgrade
1035              Upgrade the CIB to conform to the latest version of the document
1036              schema.
1037
1038       edit [scope=<scope> | --config]
              Edit the CIB in the editor specified by the $EDITOR environment
              variable and push out any changes upon saving. Specify scope to
              edit a specific section of the CIB. Valid values of the scope
              are: acls, alerts, configuration, constraints, crm_config,
              fencing-topology, nodes, op_defaults, resources, rsc_defaults,
              tags. --config is the same as scope=configuration. Use of
              --config is recommended. Do not specify a scope if you need to
              edit the whole CIB or want to be warned in case of an outdated
              CIB.
1047
1048       node  add  <node  name>  [addr=<node  address>]...  [watchdog=<watchdog
1049       path>]  [device=<SBD  device  path>]... [--start [--wait[=<n>]]] [--en‐
1050       able] [--no-watchdog-validation]
1051              Add the node to the cluster and synchronize all relevant config‐
1052              uration  files  to the new node. This command can only be run on
1053              an existing cluster node.
1054
1055              The new node is specified by its name  and  optionally  its  ad‐
1056              dresses.  If  no  addresses are specified for the node, pcs will
1057              configure corosync to communicate with the node using an address
1058              provided in 'pcs host auth' command. Otherwise, pcs will config‐
1059              ure corosync to communicate with the node  using  the  specified
1060              addresses.
1061
1062              Use  'watchdog' to specify a path to a watchdog on the new node,
1063              when SBD is enabled in the cluster. If SBD  is  configured  with
1064              shared storage, use 'device' to specify path to shared device(s)
1065              on the new node.
1066
              If --start is specified, also start the cluster on the new
              node; if --wait is specified, wait up to 'n' seconds for the
              new node to start. If --enable is specified, configure the
              cluster to start on the new node on boot. If
              --no-watchdog-validation is specified, validation of the
              watchdog will be skipped.
1072
1073              WARNING: By default, it is tested whether the specified watchdog
1074              is  supported.  This  may  cause  a restart of the system when a
1075              watchdog  with  no-way-out-feature  enabled  is   present.   Use
1076              --no-watchdog-validation to skip watchdog validation.
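
              Example (node name and addresses are illustrative): add a node
              with two link addresses, start it immediately and enable it on
              boot:
                  pcs cluster node add node3 addr=192.168.1.3 addr=10.0.0.3 --start --enable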
1077
1078       node delete <node name> [<node name>]...
1079              Shutdown specified nodes and remove them from the cluster.
1080
1081       node remove <node name> [<node name>]...
1082              Shutdown specified nodes and remove them from the cluster.
1083
1084       node  add-remote  <node name> [<node address>] [options] [op <operation
1085       action>  <operation  options>  [<operation   action>   <operation   op‐
1086       tions>]...] [meta <meta options>] [--wait[=<n>]]
1087              Add  the node to the cluster as a remote node. Sync all relevant
1088              configuration files to the new node. Start the node and  config‐
              ure it to start the cluster on boot. Options are port and
              reconnect_interval. Operations and meta belong to an underlying
              connection resource (ocf:pacemaker:remote). If node address is not
1092              specified for the node, pcs will configure pacemaker to communi‐
1093              cate  with the node using an address provided in 'pcs host auth'
1094              command. Otherwise, pcs will configure pacemaker to  communicate
1095              with the node using the specified addresses. If --wait is speci‐
1096              fied, wait up to 'n' seconds for the node to start.
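
              Example (node name and address are illustrative): add a remote
              node and wait up to 30 seconds for it to start:
                  pcs cluster node add-remote rnode1 192.168.1.10 --wait=30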
1097
1098       node delete-remote <node identifier>
1099              Shutdown specified remote node and remove it from  the  cluster.
1100              The  node-identifier  can be the name of the node or the address
1101              of the node.
1102
1103       node remove-remote <node identifier>
1104              Shutdown specified remote node and remove it from  the  cluster.
1105              The  node-identifier  can be the name of the node or the address
1106              of the node.
1107
1108       node add-guest <node name> <resource id> [options] [--wait[=<n>]]
1109              Make the specified resource a guest node resource. Sync all rel‐
1110              evant  configuration  files  to the new node. Start the node and
              configure it to start the cluster on boot. Options are
              remote-addr, remote-port and remote-connect-timeout. If
              remote-addr is not specified for the node, pcs will configure
1114              pacemaker to communicate with the node using an address provided
1115              in 'pcs host auth' command. Otherwise, pcs will configure  pace‐
1116              maker  to  communicate  with  the  node  using the specified ad‐
1117              dresses. If --wait is specified, wait up to 'n' seconds for  the
1118              node to start.
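
              Example (resource name, node name and address are
              illustrative): turn the existing resource vm-node1 into a guest
              node named guest1 with an explicit address:
                  pcs cluster node add-guest guest1 vm-node1 remote-addr=192.168.1.20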
1119
1120       node delete-guest <node identifier>
1121              Shutdown  specified  guest  node and remove it from the cluster.
1122              The node-identifier can be the name of the node or  the  address
1123              of  the  node  or  id  of the resource that is used as the guest
1124              node.
1125
1126       node remove-guest <node identifier>
1127              Shutdown specified guest node and remove it  from  the  cluster.
1128              The  node-identifier  can be the name of the node or the address
1129              of the node or id of the resource that  is  used  as  the  guest
1130              node.
1131
1132       node clear <node name>
1133              Remove specified node from various cluster caches. Use this if a
1134              removed node is still considered by the cluster to be  a  member
1135              of the cluster.
1136
1137       link add <node_name>=<node_address>... [options <link options>]
1138              Add  a  corosync  link.  One  address must be specified for each
1139              cluster node. If no linknumber is specified, pcs  will  use  the
1140              lowest available linknumber.
              Link options (documented in the corosync.conf(5) man page) are:
              link_priority, linknumber, mcastport, ping_interval,
              ping_precision, ping_timeout, pong_count, transport (udp or
              sctp)
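
              Example (node names and addresses are illustrative): add a
              second link using udp transport:
                  pcs cluster link add node1=10.0.1.1 node2=10.0.1.2 options linknumber=1 transport=udp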
1144
1145       link delete <linknumber> [<linknumber>]...
1146              Remove specified corosync links.
1147
1148       link remove <linknumber> [<linknumber>]...
1149              Remove specified corosync links.
1150
1151       link update <linknumber> [<node_name>=<node_address>...] [options <link
1152       options>]
              Add, remove or change node addresses / link options of an
              existing corosync link. Adding and removing links is the
              preferred way to change link configuration; use this command
              only when that is not possible. Unspecified options will be
              kept unchanged. If you wish to remove an option, set it to an
              empty value, i.e. 'option_name='.
1158              Link options (documented in corosync.conf(5) man page) are:
1159              for knet  transport:  link_priority,  mcastport,  ping_interval,
1160              ping_precision,  ping_timeout,  pong_count,  transport  (udp  or
1161              sctp)
1162              for udp and udpu transports: bindnetaddr, broadcast,  mcastaddr,
1163              mcastport, ttl
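
              Example (address is illustrative): change the address of node1
              on link 1 and unset the link's link_priority option:
                  pcs cluster link update 1 node1=10.0.2.1 options link_priority=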
1164
       uidgid List the currently configured uids and gids of users allowed
              to connect to corosync.
1167
1168       uidgid add [uid=<uid>] [gid=<gid>]
1169              Add the specified uid and/or gid to the list of users/groups al‐
1170              lowed to connect to corosync.
1171
1172       uidgid delete [uid=<uid>] [gid=<gid>]
1173              Remove   the   specified   uid  and/or  gid  from  the  list  of
1174              users/groups allowed to connect to corosync.
1175
1176       uidgid remove [uid=<uid>] [gid=<gid>]
1177              Remove  the  specified  uid  and/or  gid  from   the   list   of
1178              users/groups allowed to connect to corosync.
1179
1180       corosync [node]
1181              Get  the  corosync.conf from the specified node or from the cur‐
1182              rent node if node not specified.
1183
1184       reload corosync
1185              Reload the corosync configuration on the current node.
1186
1187       destroy [--all] [--force]
1188              Permanently destroy the cluster on the current node, killing all
1189              cluster  processes and removing all cluster configuration files.
1190              Using --all will attempt to destroy the cluster on all nodes  in
1191              the local cluster.
1192
1193              WARNING: This command permanently removes any cluster configura‐
1194              tion that has been created. It is recommended to run 'pcs  clus‐
1195              ter  stop'  before destroying the cluster. To prevent accidental
1196              running of this command, --force or interactive user response is
1197              required in order to proceed.
1198
1199       verify [--full] [-f <filename>]
1200              Checks  the  pacemaker configuration (CIB) for syntax and common
1201              conceptual errors. If no filename is specified the check is per‐
1202              formed  on the currently running cluster. If --full is used more
1203              verbose output will be printed.
1204
1205       report [--from "YYYY-M-D H:M:S" [--to "YYYY-M-D H:M:S"]] <dest>
1206              Create a tarball containing  everything  needed  when  reporting
1207              cluster  problems.   If --from and --to are not used, the report
1208              will include the past 24 hours.
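
              Example (the time span and destination are illustrative):
                  pcs cluster report --from "2023-01-01 00:00:00" --to "2023-01-02 00:00:00" /tmp/cluster-report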
1209
1210   stonith
1211       [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
1212              Show status of all  currently  configured  stonith  devices.  If
1213              --hide-inactive  is specified, only show active stonith devices.
1214              If a resource or tag id is specified, only show  status  of  the
1215              specified resource or resources in the specified tag. If node is
1216              specified, only show status  of  resources  configured  for  the
1217              specified node.
1218
1219       config [--output-format text|cmd|json] [<stonith id>]...
1220              Show  options  of all currently configured stonith devices or if
1221              stonith device ids are specified show the options for the speci‐
1222              fied  stonith  device  ids. There are 3 formats of output avail‐
1223              able: 'cmd', 'json' and 'text', default is 'text'. Format 'text'
1224              is  a  human  friendly  output. Format 'cmd' prints pcs commands
1225              which can be used to recreate  the  same  configuration.  Format
1226              'json' is a machine oriented output of the configuration.
1227
1228       list [filter] [--nodesc]
1229              Show list of all available stonith agents (if filter is provided
1230              then only stonith agents matching the filter will be shown).  If
1231              --nodesc  is  used  then  descriptions of stonith agents are not
1232              printed.
1233
1234       describe <stonith agent> [--full]
1235              Show options for specified stonith agent. If  --full  is  speci‐
1236              fied,  all  options  including  advanced and deprecated ones are
1237              shown.
1238
1239       create <stonith id> <stonith device type> [stonith device options]  [op
1240       <operation  action>  <operation options> [<operation action> <operation
1241       options>]...] [meta  <meta  options>]  [--group  <group  id>  [--before
1242       <stonith id> | --after <stonith id>]] [--disabled] [--agent-validation]
1243       [--wait[=n]]
1244              Create stonith  device  with  specified  type  and  options.  If
1245              --group  is  specified  the stonith device is added to the group
1246              named. You can use --before or --after to specify  the  position
1247              of  the  added  stonith device relatively to some stonith device
              already existing in the group. If --disabled is specified the
1249              stonith  device is not used. If --agent-validation is specified,
1250              stonith agent validate-all  action  will  be  used  to  validate
1251              stonith device options. If --wait is specified, pcs will wait up
1252              to 'n' seconds for the stonith device to start and then return 0
1253              if the stonith device is started, or 1 if the stonith device has
1254              not yet started. If 'n' is not specified it defaults to 60  min‐
1255              utes.
1256
1257              Example: Create a device for nodes node1 and node2
1258              pcs stonith create MyFence fence_virt pcmk_host_list=node1,node2
1259              Example: Use port p1 for node n1 and ports p2 and p3 for node n2
              pcs stonith create MyFence fence_virt 'pcmk_host_map=n1:p1;n2:p2,p3'
1262
1263       update <stonith id> [stonith options] [op [<operation  action>  <opera‐
1264       tion   options>]...]   [meta   <meta   options>]   [--agent-validation]
1265       [--wait[=n]]
1266              Add, remove or change options of specified stonith  device.  Un‐
1267              specified  options will be kept unchanged. If you wish to remove
              an option, set it to an empty value, i.e. 'option_name='.
1269
1270              If an operation (op) is specified it will update the first found
1271              operation  with the same action on the specified stonith device.
1272              If no operation with that action exists  then  a  new  operation
1273              will  be  created. (WARNING: all existing options on the updated
1274              operation will be reset if not specified.) If you want to create
1275              multiple  monitor  operations  you should use the 'op add' & 'op
1276              remove' commands.
1277
1278              If --agent-validation is specified, stonith  agent  validate-all
1279              action will be used to validate stonith device options.
1280
1281              If  --wait is specified, pcs will wait up to 'n' seconds for the
1282              changes to take effect and then return 0  if  the  changes  have
1283              been  processed  or  1 otherwise. If 'n' is not specified it de‐
1284              faults to 60 minutes.
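
              Example (device name and option values are illustrative; note
              that unspecified options of the updated monitor operation are
              reset): change a device option and the monitor interval:
                  pcs stonith update MyFence pcmk_delay_max=10s op monitor interval=120s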
1285
1286       update-scsi-devices <stonith id> (set <device-path> [<device-path>...])
1287       |  (add  <device-path>  [<device-path>...]  delete|remove <device-path>
1288       [<device-path>...] )
              Update SCSI fencing devices without affecting other resources.
              You must specify either a list of devices to set, or at least
              one device to add or delete/remove. The stonith resource must
              be running on one cluster node. Each device will be unfenced
              on every cluster node where the cluster is running. Supported
              fence agents: fence_scsi, fence_mpath.
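
              Example (stonith id and device paths are illustrative): replace
              the device list and then add one more device:
                  pcs stonith update-scsi-devices scsi-fence set /dev/sda /dev/sdb
                  pcs stonith update-scsi-devices scsi-fence add /dev/sdc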
1295
1296       delete <stonith id>
1297              Remove stonith id from configuration.
1298
1299       remove <stonith id>
1300              Remove stonith id from configuration.
1301
1302       op add <stonith id> <operation action> [operation properties]
1303              Add operation for specified stonith device.
1304
1305       op delete <stonith id> <operation action> [<operation properties>...]
1306              Remove specified operation (note: you must specify the exact op‐
1307              eration properties to properly remove an existing operation).
1308
1309       op delete <operation id>
1310              Remove the specified operation id.
1311
1312       op remove <stonith id> <operation action> [<operation properties>...]
1313              Remove specified operation (note: you must specify the exact op‐
1314              eration properties to properly remove an existing operation).
1315
1316       op remove <operation id>
1317              Remove the specified operation id.
1318
       op defaults [config] [--all] [--full] [--no-expire-check]
1320              This command is an alias of 'resource op defaults [config]' com‐
1321              mand.
1322
1323              List currently configured  default  values  for  operations.  If
1324              --all  is specified, also list expired sets of values. If --full
1325              is specified, also list ids. If --no-expire-check is  specified,
1326              do not evaluate whether sets of values are expired.
1327
1328       op defaults <name>=<value>...
1329              This command is an alias of 'resource op defaults' command.
1330
1331              Set default values for operations.
1332              NOTE: Defaults do not apply to resources / stonith devices which
1333              override them with their own defined values.
1334
1335       op defaults set create [<set options>] [meta [<name>=<value>]...] [rule
1336       [<expression>]]
1337              This  command  is  an alias of 'resource op defaults set create'
1338              command.
1339
1340              Create a new set of default values for resource / stonith device
1341              operations.  You  may  specify  a  rule  describing  resources /
1342              stonith devices and / or operations to which the set applies.
1343
1344              Set options are: id, score
1345
1346              Expression looks like one of the following:
1347                op <operation name> [interval=<interval>]
1348                resource [<standard>]:[<provider>]:[<type>]
1349                defined|not_defined <node attribute>
                <node attribute> lt|gt|lte|gte|eq|ne [string|integer|number|version] <value>
1352                date gt|lt <date>
1353                date in_range [<date>] to <date>
1354                date in_range <date> to duration <duration options>
1355                date-spec <date-spec options>
1356                <expression> and|or <expression>
1357                (<expression>)
1358
1359              You  may specify all or any of 'standard', 'provider' and 'type'
1360              in a resource expression. For example: 'resource ocf::'  matches
1361              all  resources  of  'ocf'  standard,  while  'resource  ::Dummy'
1362              matches all resources of 'Dummy' type regardless of their  stan‐
1363              dard and provider.
1364
1365              Dates are expected to conform to ISO 8601 format.
1366
              Duration options are: hours, monthdays, weekdays, yeardays,
              months, weeks, years, weekyears, moon. Value for these options
              is an integer.
1370
              Date-spec options are: hours, monthdays, weekdays, yeardays,
              months, weeks, years, weekyears, moon. Value for these options
              is an integer or a range written as integer-integer.
1374
1375              NOTE: Defaults do not apply to resources / stonith devices which
1376              override them with their own defined values.
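
              Example (the id and timeout value are illustrative): create a
              set of operation defaults applying only to monitor operations:
                  pcs stonith op defaults set create id=op-set-1 meta timeout=90s rule op monitor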
1377
1378       op defaults set delete [<set id>]...
1379              This command is an alias of 'resource op  defaults  set  delete'
1380              command.
1381
1382              Delete specified options sets.
1383
1384       op defaults set remove [<set id>]...
1385              This  command  is  an alias of 'resource op defaults set delete'
1386              command.
1387
1388              Delete specified options sets.
1389
1390       op defaults set update <set id> [meta [<name>=<value>]...]
1391              This command is an alias of 'resource op  defaults  set  update'
1392              command.
1393
1394              Add,  remove or change values in specified set of default values
1395              for resource / stonith device  operations.  Unspecified  options
              will be kept unchanged. If you wish to remove an option, set it
              to an empty value, i.e. 'option_name='.
1398
1399              NOTE: Defaults do not apply to resources / stonith devices which
1400              override them with their own defined values.
1401
1402       op defaults update <name>=<value>...
1403              This  command  is an alias of 'resource op defaults update' com‐
1404              mand.
1405
1406              Add, remove or change default values for operations. This  is  a
1407              simplified command useful for cases when you only manage one set
1408              of default values. Unspecified options will be  kept  unchanged.
              If you wish to remove an option, set it to an empty value, i.e.
              'option_name='.
1411
1412              NOTE: Defaults do not apply to resources / stonith devices which
1413              override them with their own defined values.
1414
1415       meta <stonith id> <meta options> [--wait[=n]]
              Add the specified meta options to the specified stonith device.
              Meta options should be in the format name=value; an option may
              be removed by setting it without a value. If --wait is
              specified, pcs will wait up to 'n' seconds for the changes to
              take effect and then return 0 if the changes have been
              processed or 1 otherwise. If 'n' is not specified it defaults
              to 60 minutes.

              Example:
                  pcs stonith meta test_stonith failure-timeout=50 resource-stickiness=
1425
       defaults [config] [--all] [--full] [--no-expire-check]
1427              This  command  is  an alias of 'resource defaults [config]' com‐
1428              mand.
1429
1430              List currently configured default values for resources / stonith
1431              devices.  If  --all is specified, also list expired sets of val‐
1432              ues. If --full is specified, also list ids. If --no-expire-check
1433              is  specified,  do  not  evaluate whether sets of values are ex‐
1434              pired.
1435
1436       defaults <name>=<value>...
1437              This command is an alias of 'resource defaults' command.
1438
1439              Set default values for resources / stonith devices.
1440              NOTE: Defaults do not apply to resources / stonith devices which
1441              override them with their own defined values.
1442
1443       defaults  set  create  [<set options>] [meta [<name>=<value>]...] [rule
1444       [<expression>]]
1445              This command is an alias of 'resource defaults set create'  com‐
1446              mand.
1447
1448              Create  a  new set of default values for resources / stonith de‐
1449              vices. You may specify a rule describing resources / stonith de‐
1450              vices to which the set applies.
1451
1452              Set options are: id, score
1453
1454              Expression looks like one of the following:
1455                resource [<standard>]:[<provider>]:[<type>]
1456                date gt|lt <date>
1457                date in_range [<date>] to <date>
1458                date in_range <date> to duration <duration options>
1459                date-spec <date-spec options>
1460                <expression> and|or <expression>
1461                (<expression>)
1462
1463              You  may specify all or any of 'standard', 'provider' and 'type'
1464              in a resource expression. For example: 'resource ocf::'  matches
1465              all  resources  of  'ocf'  standard,  while  'resource  ::Dummy'
1466              matches all resources of 'Dummy' type regardless of their  stan‐
1467              dard and provider.
1468
1469              Dates are expected to conform to ISO 8601 format.
1470
              Duration options are: hours, monthdays, weekdays, yeardays,
              months, weeks, years, weekyears, moon. Value for these options
              is an integer.
1474
              Date-spec options are: hours, monthdays, weekdays, yeardays,
              months, weeks, years, weekyears, moon. Value for these options
              is an integer or a range written as integer-integer.
1478
1479              NOTE: Defaults do not apply to resources / stonith devices which
1480              override them with their own defined values.
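
              Example (the dates and value are illustrative): create defaults
              that apply only during the given date range:
                  pcs stonith defaults set create meta failure-timeout=120s rule date in_range 2023-01-01 to 2023-06-30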
1481
1482       defaults set delete [<set id>]...
1483              This command is an alias of 'resource defaults set delete'  com‐
1484              mand.
1485
1486              Delete specified options sets.
1487
1488       defaults set remove [<set id>]...
1489              This  command is an alias of 'resource defaults set delete' com‐
1490              mand.
1491
1492              Delete specified options sets.
1493
1494       defaults set update <set id> [meta [<name>=<value>]...]
1495              This command is an alias of 'resource defaults set update'  com‐
1496              mand.
1497
1498              Add,  remove or change values in specified set of default values
1499              for resources / stonith devices.  Unspecified  options  will  be
              kept unchanged. If you wish to remove an option, set it to an
              empty value, i.e. 'option_name='.
1502
1503              NOTE: Defaults do not apply to resources / stonith devices which
1504              override them with their own defined values.
1505
1506       defaults update <name>=<value>...
1507              This command is an alias of 'resource defaults update' command.
1508
1509              Add, remove or change default values for resources / stonith de‐
1510              vices. This is a simplified command useful for  cases  when  you
1511              only  manage one set of default values. Unspecified options will
              be kept unchanged. If you wish to remove an option, set it to
              an empty value, i.e. 'option_name='.
1514
1515              NOTE: Defaults do not apply to resources / stonith devices which
1516              override them with their own defined values.
1517
1518       cleanup [<resource id | stonith id>]  [node=<node>]  [operation=<opera‐
1519       tion> [interval=<interval>]] [--strict]
1520              This command is an alias of 'resource cleanup' command.
1521
1522              Make  the  cluster  forget failed operations from history of the
1523              resource / stonith device and re-detect its current state.  This
1524              can  be  useful  to  purge  knowledge of past failures that have
1525              since been resolved.
1526
1527              If the named resource is part of a group, or  one  numbered  in‐
1528              stance  of  a clone or bundled resource, the clean-up applies to
1529              the whole collective resource unless --strict is given.
1530
1531              If a resource id / stonith id is  not  specified  then  all  re‐
1532              sources / stonith devices will be cleaned up.
1533
1534              If  a  node is not specified then resources / stonith devices on
1535              all nodes will be cleaned up.
1536
1537       refresh [<resource id | stonith id>] [node=<node>] [--strict]
1538              This command is an alias of 'resource refresh' command.
1539
1540              Make the cluster forget the complete operation history  (includ‐
1541              ing failures) of the resource / stonith device and re-detect its
1542              current state. If you are interested in forgetting failed opera‐
1543              tions only, use the 'pcs resource cleanup' command.
1544
1545              If  the  named  resource is part of a group, or one numbered in‐
1546              stance of a clone or bundled resource, the  refresh  applies  to
1547              the whole collective resource unless --strict is given.
1548
1549              If  a  resource  id  /  stonith id is not specified then all re‐
1550              sources / stonith devices will be refreshed.
1551
1552              If a node is not specified then resources / stonith  devices  on
1553              all nodes will be refreshed.
1554
1555       failcount  [show  [<resource  id  |  stonith id>] [node=<node>] [opera‐
1556       tion=<operation> [interval=<interval>]]] [--full]
1557              This command is an alias of 'resource failcount show' command.
1558
1559              Show current failcount for resources and  stonith  devices,  op‐
1560              tionally  filtered  by a resource / stonith device, node, opera‐
1561              tion and its interval. If --full is specified do not  sum  fail‐
1562              counts per resource / stonith device and node. Use 'pcs resource
1563              cleanup' or 'pcs resource refresh' to reset failcounts.
1564
1565       enable <stonith id>... [--wait[=n]]
1566              Allow the cluster to use the stonith devices. If --wait is spec‐
1567              ified,  pcs  will wait up to 'n' seconds for the stonith devices
1568              to start and then return 0 if the stonith devices  are  started,
1569              or  1 if the stonith devices have not yet started. If 'n' is not
1570              specified it defaults to 60 minutes.
1571
1572       disable <stonith id>... [--wait[=n]]
1573              Attempt to stop the stonith devices if they are running and dis‐
1574              allow  the cluster to use them. If --wait is specified, pcs will
1575              wait up to 'n' seconds for the stonith devices to stop and  then
1576              return  0 if the stonith devices are stopped or 1 if the stonith
1577              devices have not stopped. If 'n' is not specified it defaults to
1578              60 minutes.
1579
1580       level [config]
1581              Lists all of the fencing levels currently configured.
1582
1583       level add <level> <target> <stonith id> [stonith id]...
1584              Add  the fencing level for the specified target with the list of
1585              stonith devices to attempt for that target at that level.  Fence
1586              levels  are attempted in numerical order (starting with 1). If a
1587              level succeeds (meaning all devices are successfully  fenced  in
1588              that  level)  then  no other levels are tried, and the target is
1589              considered fenced. Target may be  a  node  name  <node_name>  or
1590              %<node_name> or node%<node_name>, a node name regular expression
1591              regexp%<node_pattern>   or   a   node   attribute   value    at‐
1592              trib%<name>=<value>.
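
              Example (node and stonith ids are illustrative): try an IPMI
              device first and fall back to a power switch:
                  pcs stonith level add 1 node1 fence_ipmi_node1
                  pcs stonith level add 2 node1 fence_psu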
1593
1594       level delete <level> [target <target>] [stonith <stonith id>...]
1595              Removes  the  fence  level  for the level, target and/or devices
1596              specified. If no target or devices are specified then the  fence
1597              level  is  removed.  Target  may  be  a node name <node_name> or
1598              %<node_name> or node%<node_name>, a node name regular expression
1599              regexp%<node_pattern>    or   a   node   attribute   value   at‐
1600              trib%<name>=<value>.
1601
1602       level remove <level> [target <target>] [stonith <stonith id>...]
1603              Removes the fence level for the  level,  target  and/or  devices
1604              specified.  If no target or devices are specified then the fence
1605              level is removed. Target may  be  a  node  name  <node_name>  or
1606              %<node_name> or node%<node_name>, a node name regular expression
1607              regexp%<node_pattern>   or   a   node   attribute   value    at‐
1608              trib%<name>=<value>.
1609
1610       level clear [target <target> | stonith <stonith id>...]
1611              Clears  the fence levels on the target (or stonith id) specified
1612              or clears all fence levels if a target/stonith id is not  speci‐
1613              fied.  Target  may be a node name <node_name> or %<node_name> or
1614              node%<node_name>,  a   node   name   regular   expression   reg‐
1615              exp%<node_pattern>    or    a    node    attribute   value   at‐
1616              trib%<name>=<value>. Example: pcs stonith  level  clear  stonith
1617              dev_a dev_b
1618
1619       level verify
1620              Verifies  all  fence devices and nodes specified in fence levels
1621              exist.
1622
1623       fence <node> [--off]
1624              Fence the node specified (if --off is specified, use  the  'off'
1625              API  call to stonith which will turn the node off instead of re‐
1626              booting it).
1627
1628       confirm <node> [--force]
1629              Confirm to the cluster that the specified node is  powered  off.
1630              This  allows  the  cluster  to recover from a situation where no
1631              stonith device is able to fence the node.  This  command  should
1632              ONLY  be  used  after manually ensuring that the node is powered
1633              off and has no access to shared resources.
1634
1635              WARNING: If this node is not actually powered  off  or  it  does
1636              have access to shared resources, data corruption/cluster failure
1637              can occur.  To  prevent  accidental  running  of  this  command,
1638              --force  or  interactive  user  response is required in order to
1639              proceed.
1640
              NOTE: The command does not check whether the specified node
              exists in the cluster; this allows it to work with nodes not
              visible from the local cluster partition.
1644
1645       history [show [<node>]]
1646              Show fencing history for the specified node or all nodes  if  no
1647              node specified.
1648
1649       history cleanup [<node>]
1650              Cleanup  fence  history of the specified node or all nodes if no
1651              node specified.
1652
1653       history update
1654              Update fence history from all nodes.
1655
1656       sbd  enable  [watchdog=<path>[@<node>]]...  [device=<path>[@<node>]]...
1657       [<SBD_OPTION>=<value>]... [--no-watchdog-validation]
1658              Enable  SBD  in  cluster.  Default  path  for watchdog device is
1659              /dev/watchdog. Allowed SBD  options:  SBD_WATCHDOG_TIMEOUT  (de‐
1660              fault:  5),  SBD_DELAY_START  (default:  no), SBD_STARTMODE (de‐
1661              fault: always) and SBD_TIMEOUT_ACTION.  SBD  options  are  docu‐
1662              mented in sbd(8) man page. It is possible to specify up to 3 de‐
1663              vices per node. If --no-watchdog-validation is specified,  vali‐
1664              dation of watchdogs will be skipped.
1665
1666              WARNING:  Cluster  has  to  be restarted in order to apply these
1667              changes.
1668
1669              WARNING: By default, it is tested whether the specified watchdog
1670              is  supported.  This  may  cause  a restart of the system when a
1671              watchdog  with  no-way-out-feature  enabled  is   present.   Use
1672              --no-watchdog-validation to skip watchdog validation.
1673
              Example: Enable SBD in the cluster with watchdog /dev/watchdog2
              on node1, /dev/watchdog1 on node2 and /dev/watchdog0 on all
              other nodes, with device /dev/sdb on node1 and /dev/sda on all
              other nodes, and with the watchdog timeout set to 10 seconds:

                  pcs stonith sbd enable watchdog=/dev/watchdog2@node1 \
                      watchdog=/dev/watchdog1@node2 watchdog=/dev/watchdog0 \
                      device=/dev/sdb@node1 device=/dev/sda \
                      SBD_WATCHDOG_TIMEOUT=10
1682
1684       sbd disable
1685              Disable SBD in cluster.
1686
1687              WARNING: Cluster has to be restarted in  order  to  apply  these
1688              changes.
1689
1690       sbd   device  setup  device=<path>  [device=<path>]...  [watchdog-time‐
1691       out=<integer>]  [allocate-timeout=<integer>]   [loop-timeout=<integer>]
1692       [msgwait-timeout=<integer>]
1693              Initialize SBD structures on device(s) with specified timeouts.
1694
1695              WARNING: All content on device(s) will be overwritten.
1696
1697       sbd device message <device-path> <node> <message-type>
1698              Manually  set  a message of the specified type on the device for
1699              the node. Possible message types (they are documented in  sbd(8)
1700              man page): test, reset, off, crashdump, exit, clear
1701
1702       sbd status [--full]
              Show status of SBD services in the cluster and of the local
              device(s) configured. If --full is specified, a dump of the SBD
              headers on the device(s) will also be shown.
1706
1707       sbd config
1708              Show SBD configuration in cluster.
1709
1711       sbd watchdog list
1712              Show all available watchdog devices on the local node.
1713
1714              WARNING:  Listing available watchdogs may cause a restart of the
1715              system  when  a  watchdog  with  no-way-out-feature  enabled  is
1716              present.
1717
1719       sbd watchdog test [<watchdog-path>]
              This operation is expected to force-reboot the local system
              without following any shutdown procedures, using a watchdog. If
              no watchdog is specified and exactly one watchdog device is
              available on the local system, that watchdog will be used.
1724
1726   acl
1727       [config]
1728              List all current access control lists.
1729
1730       enable Enable access control lists.
1731
1732       disable
1733              Disable access control lists.
1734
1735       role create <role id> [description=<description>]  [((read  |  write  |
1736       deny) (xpath <query> | id <id>))...]
1737              Create  a role with the id and (optional) description specified.
1738              Each role can also  have  an  unlimited  number  of  permissions
1739              (read/write/deny)  applied to either an xpath query or the id of
1740              a specific element in the cib.
1741              Permissions are applied to the selected XML element's entire XML
1742              subtree  (all  elements  enclosed  within  it). Write permission
1743              grants the ability to create, modify, or remove the element  and
1744              its  subtree,  and  also the ability to create any "scaffolding"
1745              elements (enclosing elements that do not have  attributes  other
1746              than  an ID). Permissions for more specific matches (more deeply
1747              nested elements) take precedence over more general ones. If mul‐
1748              tiple  permissions  are configured for the same match (for exam‐
1749              ple, in different roles applied to the same user), any deny per‐
1750              mission takes precedence, then write, then lastly read.
1751              An xpath may include an attribute expression to select only ele‐
1752              ments that match the expression, but the  permission  still  ap‐
1753              plies to the entire element (and its subtree), not to the attri‐
1754              bute alone. For example, using the xpath  "//*[@name]"  to  give
1755              write permission would allow changes to the entirety of all ele‐
1756              ments that have a "name" attribute and  everything  enclosed  by
1757              those  elements.  There  is no way currently to give permissions
1758              for just one attribute of an element. That is to  say,  you  can
1759              not  define  an ACL that allows someone to read just the dc-uuid
1760              attribute of the cib tag - that would select the cib element and
1761              give read access to the entire CIB.
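
              Example (role id, description and user name are illustrative):
              create a read-only role and assign it to a new ACL user:
                  pcs acl role create readers description="read-only access" read xpath /cib
                  pcs acl user create alice readers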
1762
1763       role delete <role id>
1764              Delete the role specified and remove it from any users/groups it
1765              was assigned to.
1766
1767       role remove <role id>
1768              Delete the role specified and remove it from any users/groups it
1769              was assigned to.
1770
1771       role assign <role id> [to] [user|group] <username/group>
1772              Assign  a  role to a user or group already created with 'pcs acl
              user/group create'. If a user and a group exist with the same
              id and it is not specified which should be used, the user will
              be prioritized. In such cases, specify whether the user or the
              group should be used.
1777
1778       role unassign <role id> [from] [user|group] <username/group>
              Remove a role from the specified user or group. If a user and a
              group exist with the same id and it is not specified which
              should be used, the user will be prioritized. In such cases,
              specify whether the user or the group should be used.
1783
1784       user create <username> [<role id>]...
1785              Create an ACL for the user specified and  assign  roles  to  the
1786              user.
1787
1788       user delete <username>
1789              Remove the user specified (and roles assigned will be unassigned
1790              for the specified user).
1791
1792       user remove <username>
1793              Remove the user specified (and roles assigned will be unassigned
1794              for the specified user).
1795
1796       group create <group> [<role id>]...
1797              Create  an  ACL  for the group specified and assign roles to the
1798              group.
1799
1800       group delete <group>
1801              Remove the group specified (and roles  assigned  will  be  unas‐
1802              signed for the specified group).
1803
1804       group remove <group>
1805              Remove  the  group  specified  (and roles assigned will be unas‐
1806              signed for the specified group).
1807
1808       permission add <role id> ((read | write | deny)  (xpath  <query>  |  id
1809       <id>))...
1810              Add  the  listed  permissions to the role specified. Permissions
1811              are applied to either an xpath query or the id of a specific el‐
1812              ement in the CIB.
1813              Permissions are applied to the selected XML element's entire XML
1814              subtree (all elements  enclosed  within  it).  Write  permission
1815              grants  the ability to create, modify, or remove the element and
1816              its subtree, and also the ability to  create  any  "scaffolding"
1817              elements  (enclosing  elements that do not have attributes other
1818              than an ID). Permissions for more specific matches (more  deeply
1819              nested elements) take precedence over more general ones. If mul‐
1820              tiple permissions are configured for the same match  (for  exam‐
1821              ple, in different roles applied to the same user), any deny per‐
1822              mission takes precedence, then write, then lastly read.
1823              An xpath may include an attribute expression to select only ele‐
1824              ments  that  match  the expression, but the permission still ap‐
1825              plies to the entire element (and its subtree), not to the attri‐
1826              bute  alone.  For  example, using the xpath "//*[@name]" to give
1827              write permission would allow changes to the entirety of all ele‐
1828              ments  that  have  a "name" attribute and everything enclosed by
1829              those elements. There is no way currently  to  give  permissions
1830              for  just  one  attribute of an element. That is to say, you can
1831              not define an ACL that allows someone to read just  the  dc-uuid
1832              attribute of the cib tag - that would select the cib element and
1833              give read access to the entire CIB.
1834
1835       permission delete <permission id>
              Remove the permission id specified (permission ids are listed
              in parentheses after permissions in 'pcs acl' output).
1838
1839       permission remove <permission id>
              Remove the permission id specified (permission ids are listed
              in parentheses after permissions in 'pcs acl' output).
1842
1843   property
1844       [config  [<property>...  |  --all  |  --defaults   |   (--output-format
1845       text|cmd|json)]]    |    [--all   |   --defaults   |   (--output-format
1846       text|cmd|json)]
              List property settings (default: lists configured properties).
              If --defaults is specified, all property defaults are shown. If
              --all is specified, currently configured properties are shown
              together with unset properties and their defaults. See 'pcs
              property describe' for a description of the properties. There
              are 3 formats of output available: 'cmd', 'json' and 'text',
              default is 'text'. Format 'text' is a human friendly output.
              Format 'cmd' prints pcs commands which can be used to recreate
              the same configuration. Format 'json' is a machine oriented
              output of the configuration.
1857
1858       defaults [<property>...] | [--full]
              List all property defaults or only defaults for specified
              properties. If --full is specified, all property defaults,
              including advanced ones, are shown.
1862
1864       describe [<property>...] [--output-format text|json] [--full]
1865              Show  cluster  properties.  There are 2 formats of output avail‐
1866              able: 'json' and 'text', default is 'text'. Format 'text'  is  a
1867              human  friendly output. Format 'json' is a machine oriented out‐
              put of the configuration. If --full is specified, all property
              descriptions, including advanced ones, are shown.
1870
1871       set <property>=[<value>] ... [--force]
1872              Set  specific  pacemaker  properties (if the value is blank then
1873              the property is removed from the configuration).  If a  property
1874              is not recognized by pcs the property will not be created unless
1875              the --force is used.  See 'pcs property describe' for a descrip‐
1876              tion of the properties.
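
              Example (maintenance-mode is a standard pacemaker property):
              enable maintenance mode, then remove the property again by
              setting it to a blank value:
                  pcs property set maintenance-mode=true
                  pcs property set maintenance-mode=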
1877
1878       unset <property> ...
1879              Remove property from configuration.  See 'pcs property describe'
1880              for a description of the properties.
1881
1882   constraint
1883       [config] [--full] [--all] [--output-format text|cmd|json]
1884              List all current constraints that are not expired. If  --all  is
1885              specified  also show expired constraints. If --full is specified
1886              also list the constraint ids. There  are  3  formats  of  output
1887              available:  'cmd',  'json' and 'text', default is 'text'. Format
1888              'text' is a human friendly output. Format 'cmd' prints pcs  com‐
1889              mands which can be used to recreate the same configuration. For‐
1890              mat 'json' is a machine oriented output of the configuration.
1891
1892       location <resource> prefers <node>[=<score>] [<node>[=<score>]]...
1893              Create a location constraint on a resource to prefer the  speci‐
1894              fied  node with score (default score: INFINITY). Resource may be
1895              either a resource id  <resource_id>  or  %<resource_id>  or  re‐
1896              source%<resource_id>, or a resource name regular expression reg‐
1897              exp%<resource_pattern>.
1898
1899       location <resource> avoids <node>[=<score>] [<node>[=<score>]]...
1900              Create a location constraint on a resource to avoid  the  speci‐
1901              fied  node with score (default score: INFINITY). Resource may be
1902              either a resource id  <resource_id>  or  %<resource_id>  or  re‐
1903              source%<resource_id>, or a resource name regular expression reg‐
1904              exp%<resource_pattern>.
1905
1906       location <resource> rule [id=<rule  id>]  [resource-discovery=<option>]
1907       [role=Promoted|Unpromoted]    [constraint-id=<id>]   [score=<score>   |
1908       score-attribute=<attribute>] <expression>
1909              Creates a location constraint with a rule on the  specified  re‐
1910              source where expression looks like one of the following:
1911                defined|not_defined <node attribute>
                <node attribute> lt|gt|lte|gte|eq|ne [string|integer|number|version] <value>
1914                date gt|lt <date>
1915                date in_range <date> to <date>
1916                date in_range <date> to duration <duration options>...
1917                date-spec <date spec options>...
1918                <expression> and|or <expression>
1919                ( <expression> )
1920              where duration options and date spec options are: hours,  month‐
1921              days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1922              Resource may be either a  resource  id  <resource_id>  or  %<re‐
1923              source_id> or resource%<resource_id>, or a resource name regular
1924              expression regexp%<resource_pattern>. If score is omitted it de‐
1925              faults  to  INFINITY. If id is omitted one is generated from the
1926              resource id. If resource-discovery is  omitted  it  defaults  to
1927              'always'.
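
              Example (resource and node attribute names are illustrative):
              keep the resource off nodes where the attribute 'datacenter' is
              not defined:
                  pcs constraint location WebServer rule score=-INFINITY not_defined datacenter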
1928
1929       location   [config   [resources  [<resource  reference>...]]  |  [nodes
1930       [<node>...]]] [--full] [--all] [--output-format text|cmd|json]
1931              List all the current location constraints that are not  expired.
1932              If  'resources' is specified, location constraints are displayed
1933              per resource. If 'nodes' is specified, location constraints  are
1934              displayed  per  node.  If  specific nodes, resources or resource
1935              name regular expressions are specified,  only  constraints  con‐
1936              taining  those will be shown. Resource reference may be either a
1937              resource id <resource_id>  or  %<resource_id>  or  resource%<re‐
1938              source_id>,  or  a  resource name regular expression regexp%<re‐
1939              source_pattern>. If --full is specified show the  internal  con‐
1940              straint  id's  as  well.  If --all is specified show the expired
1941              constraints. There are 3 formats  of  output  available:  'cmd',
1942              'json'  and  'text', default is 'text'. Format 'text' is a human
1943              friendly output. Format 'cmd' prints pcs commands which  can  be
1944              used  to recreate the same configuration. Format 'json' is a ma‐
1945              chine oriented output of the configuration.
1946
1947       location add <id> <resource>  <node>  <score>  [resource-discovery=<op‐
1948       tion>]
1949              Add a location constraint with the appropriate id for the speci‐
1950              fied resource, node name and score. Resource may be either a re‐
1951              source  id  <resource_id>  or  %<resource_id>  or  resource%<re‐
1952              source_id>, or a resource name  regular  expression  regexp%<re‐
1953              source_pattern>.
1954
1955       location delete <id>
1956              Remove a location constraint with the appropriate id.
1957
1958       location remove <id>
1959              Remove a location constraint with the appropriate id.
1960
1961       order [config] [--full] [--output-format text|cmd|json]
1962              List  all  current  ordering constraints (if --full is specified
1963              show the internal constraint id's as well). There are 3  formats
1964              of  output  available:  'cmd',  'json'  and  'text',  default is
1965              'text'. Format 'text' is a human friendly output.  Format  'cmd'
1966              prints  pcs commands which can be used to recreate the same con‐
1967              figuration. Format 'json' is a machine oriented  output  of  the
1968              configuration.
1969
1970       order [action] <resource id> then [action] <resource id> [options]
              Add an ordering constraint specifying actions (start, stop,
              promote, demote); if no action is specified, the default action
              is start. Available options are
              kind=Optional/Mandatory/Serialize, symmetrical=true/false,
              require-all=true/false and id=<constraint-id>.
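
              Example (resource names are illustrative): start VirtualIP
              before WebSite, as an advisory (optional) ordering:
                  pcs constraint order start VirtualIP then start WebSite kind=Optional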
1976
1977       order  set  <resource1>  [resourceN]...  [options] [set <resourceX> ...
1978       [options]] [setoptions [constraint_options]]
1979              Create an ordered set of resources. Available  options  are  se‐
1980              quential=true/false,      require-all=true/false     and     ac‐
1981              tion=start/promote/demote/stop. Available constraint_options are
1982              id=<constraint-id>,  kind=Optional/Mandatory/Serialize  and sym‐
1983              metrical=true/false.
1984
1985       order delete <resource1> [resourceN]...
              Remove the resource from any ordering constraint.
1987
1988       order remove <resource1> [resourceN]...
              Remove the resource from any ordering constraint.

       colocation [config] [--full] [--output-format text|cmd|json]
              List all current colocation constraints (if --full is  specified
              show  the internal constraint ids as well). There are 3  formats
              of output  available:  'cmd',  'json'  and  'text',  default  is
              'text'.  Format  'text' is a human friendly output. Format 'cmd'
              prints pcs commands which can be used to recreate the same  con‐
              figuration.  Format  'json'  is a machine oriented output of the
              configuration.

       colocation add [<role>] <source resource id> with [<role>] <target  re‐
       source id> [score] [options] [id=constraint-id]
              Request  <source  resource>  to run on the same node where pace‐
              maker has determined <target  resource>  should  run.   Positive
              values  of  score  mean  the resources should be run on the same
              node, negative values mean the resources should not  be  run  on
              the  same  node.  Specifying 'INFINITY' (or '-INFINITY') for the
              score forces <source resource> to run (or not run) with  <target
              resource>  (score  defaults to "INFINITY"). A role can be: 'Pro‐
              moted', 'Unpromoted', 'Started', 'Stopped' (if no role is speci‐
              fied, it defaults to 'Started').
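              For example, to keep 'WebServer' on the same node as 'VirtualIP'
              (resource ids are illustrative):
              pcs constraint colocation add WebServer with VirtualIP INFINITY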

       colocation  set  <resource1>  [resourceN]... [options] [set <resourceX>
       ... [options]] [setoptions [constraint_options]]
              Create a colocation constraint with a  resource  set.  Available
              options  are sequential=true/false and role=Stopped/Started/Pro‐
              moted/Unpromoted. Available constraint_options are id and score.
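              For example (resource ids are illustrative):
              pcs constraint colocation set rscA rscB sequential=false
              setoptions score=INFINITY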

       colocation delete <source resource id> <target resource id>
              Remove colocation constraints with specified resources.

       colocation remove <source resource id> <target resource id>
              Remove colocation constraints with specified resources.

       ticket [config] [--full] [--output-format text|cmd|json]
              List all current ticket constraints (if --full is specified show
              the internal constraint ids as well). There are  3  formats  of
              output available: 'cmd', 'json' and 'text', default  is  'text'.
              Format  'text'  is  a human friendly output. Format 'cmd' prints
              pcs commands which can be used to recreate the  same  configura‐
              tion. Format 'json' is a machine oriented output of the configu‐
              ration.

       ticket  add  <ticket>  [<role>]  <resource  id>  [<options>]  [id=<con‐
       straint-id>]
              Create  a  ticket constraint for <resource id>. Available option
              is loss-policy=fence/stop/freeze/demote. A role can be Promoted,
              Unpromoted, Started or Stopped.
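              For example (ticket and resource id are illustrative):
              pcs constraint ticket add ticketA Promoted WebSite
              loss-policy=demote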

       ticket  set  <resource1>  [<resourceN>]... [<options>] [set <resourceX>
       ... [<options>]] setoptions <constraint_options>
              Create a ticket constraint with a resource  set.  Available  op‐
              tions   are  role=Stopped/Started/Promoted/Unpromoted.  Required
              constraint option is ticket=<ticket>.  Optional  constraint  op‐
              tions       are       id=<constraint-id>      and      loss-pol‐
              icy=fence/stop/freeze/demote.
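              For example (ticket and resource ids are illustrative):
              pcs constraint ticket set rscA rscB setoptions ticket=ticketA
              loss-policy=stop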

       ticket delete <ticket> <resource id>
              Remove all ticket constraints with <ticket> from <resource id>.

       ticket remove <ticket> <resource id>
              Remove all ticket constraints with <ticket> from <resource id>.

       delete <constraint id>...
              Remove constraint(s) or  constraint  rules  with  the  specified
              id(s).

       remove <constraint id>...
              Remove  constraint(s)  or  constraint  rules  with the specified
              id(s).

       ref <resource>...
              List constraints referencing specified resource.

       rule add  <constraint  id>  [id=<rule  id>]  [role=Promoted|Unpromoted]
       [score=<score>|score-attribute=<attribute>] <expression>
              Add a rule to a location constraint specified by 'constraint id'
              where the expression looks like one of the following:
                defined|not_defined <node attribute>
                <node  attribute>   lt|gt|lte|gte|eq|ne   [string|integer|num‐
              ber|version] <value>
                date gt|lt <date>
                date in_range <date> to <date>
                date in_range <date> to duration <duration options>...
                date-spec <date spec options>...
                <expression> and|or <expression>
                ( <expression> )
              where  duration options and date spec options are: hours, month‐
              days, weekdays, yeardays, months, weeks, years, weekyears, moon.
              If  score  is  omitted it defaults to INFINITY. If id is omitted
              one is generated from the constraint id.
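              For example, to limit a location constraint to nodes whose
              'datacenter' attribute equals 'dc1' (constraint id and attribute
              name are illustrative):
              pcs constraint rule add loc-db-dc1 score=INFINITY datacenter eq
              dc1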

       rule delete <rule id>
              Remove a rule from its location constraint and if it's the  last
              rule, the constraint will also be removed.

       rule remove <rule id>
              Remove  a rule from its location constraint and if it's the last
              rule, the constraint will also be removed.

   qdevice
       status <device model> [--full] [<cluster name>]
              Show  runtime  status  of  specified  model  of  quorum   device
              provider.   Using  --full  will  give  more detailed output.  If
              <cluster name> is specified, only information about  the  speci‐
              fied cluster will be displayed.

       setup model <device model> [--enable] [--start]
              Configure specified model of quorum device provider. The quorum
              device can then be added to clusters by running the "pcs quorum
              device add" command in a cluster.  --start  will also start the
              provider.  --enable will configure  the  provider  to  start  on
              boot.
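              For example, to configure the 'net' model provider and have it
              start now and on boot:
              pcs qdevice setup model net --enable --start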

       destroy <device model>
              Disable  and  stop specified model of quorum device provider and
              delete its configuration files.

       start <device model>
              Start specified model of quorum device provider.

       stop <device model>
              Stop specified model of quorum device provider.

       kill <device model>
              Force specified model of quorum device provider  to  stop  (per‐
              forms kill -9).  Note that the init system (e.g.  systemd)  can
              detect that the qdevice is not running and start it  again.  If
              you want to stop the qdevice, run the "pcs qdevice stop" command.

       enable <device model>
              Configure  specified model of quorum device provider to start on
              boot.

       disable <device model>
              Configure specified model of quorum device provider to not start
              on boot.

   quorum
       [config]
              Show quorum configuration.

       status Show quorum runtime status.

       device  add  [<generic options>] model <device model> [<model options>]
       [heuristics <heuristics options>]
              Add a quorum device to the cluster. The quorum device should  be
              configured first with "pcs qdevice setup". It is not possible to
              use more than one quorum device in a cluster simultaneously.
              Currently the only supported model is 'net'. It  requires  model
              options 'algorithm' and 'host' to be specified. Options are doc‐
              umented in the corosync-qdevice(8) man page; generic options are
              'sync_timeout'  and  'timeout',  for model net options check the
              quorum.device.net section, for heuristics options see  the  quo‐
              rum.device.heuristics  section.  Pcs  automatically  creates and
              distributes TLS certificates and sets the 'tls' model option  to
              the default value 'on'.
              Example: pcs quorum device add model net algorithm=lms
              host=qnetd.internal.example.com

       device heuristics delete
              Remove all heuristics settings of the configured quorum device.

       device heuristics remove
              Remove all heuristics settings of the configured quorum device.

       device delete
              Remove a quorum device from the cluster.

       device remove
              Remove a quorum device from the cluster.

       device status [--full]
              Show quorum device runtime status.  Using --full will give  more
              detailed output.

       device  update  [<generic options>] [model <model options>] [heuristics
       <heuristics options>]
              Add, remove or change quorum device options. Unspecified options
              will be kept unchanged. If you wish to remove an option, set  it
              to an empty value, i.e. 'option_name='. Requires the cluster  to
              be stopped. Model and options are all documented in the corosync-
              qdevice(8) man page; for heuristics options check the quorum.de‐
              vice.heuristics subkey section, for model options check the quo‐
              rum.device.<device model> subkey sections.

              WARNING: If you want to change the "host" option of the qdevice
              model net, use the "pcs quorum device remove" and  "pcs  quorum
              device add" commands to set up the  configuration  properly,
              unless the old and the new host are the same machine.
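              For example, to change the algorithm of a configured model net
              qdevice:
              pcs quorum device update model algorithm=lms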

       expected-votes <votes>
              Set expected votes in the live cluster to specified value.  This
              only affects the live cluster; it does not change any configura‐
              tion files.

       unblock [--force]
              Cancel  waiting  for all nodes when establishing quorum.  Useful
              in situations where you know the cluster is inquorate,  but  you
              are confident that the cluster should proceed with resource man‐
              agement regardless.  This command should ONLY be used when nodes
              which  the cluster is waiting for have been confirmed to be pow‐
              ered off and to have no access to shared resources.

              WARNING: If the nodes are not actually powered off  or  they  do
              have access to shared resources, data corruption/cluster failure
              can occur.  To  prevent  accidental  running  of  this  command,
              --force  or  interactive  user  response is required in order to
              proceed.

       update        [auto_tie_breaker=[0|1]]        [last_man_standing=[0|1]]
       [last_man_standing_window=[<time in ms>]] [wait_for_all=[0|1]]
              Add,  remove  or change quorum options. At least one option must
              be specified. Unspecified options will be kept unchanged. If you
              wish to remove an option, set it to an empty  value,  i.e.  'op‐
              tion_name='. Options are documented in corosync's  votequorum(5)
              man page. Requires the cluster to be stopped.
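              For example:
              pcs quorum update wait_for_all=1 auto_tie_breaker=1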

   booth
       setup  sites  <address> <address> [<address>...] [arbitrators <address>
       ...] [--force]
              Write new booth configuration with specified sites and  arbitra‐
              tors.  The total number of peers (sites and arbitrators) must be
              odd. When the configuration file already  exists,  the  command
              fails unless --force is specified.
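              For example, with two sites and one arbitrator (addresses are
              illustrative):
              pcs booth setup sites 192.168.1.11 192.168.2.11 arbitrators
              192.168.3.11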

       destroy
              Remove booth configuration files.

       ticket add <ticket> [<name>=<value> ...]
              Add a new ticket to the current configuration.  Ticket  options
              are described in the booth manpage.
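              For example (the ticket name and option are illustrative; see
              the booth manpage for valid options):
              pcs booth ticket add ticketA expire=600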

       ticket delete <ticket>
              Remove the specified ticket from the current configuration.

       ticket remove <ticket>
              Remove the specified ticket from the current configuration.

       config [<node>]
              Show booth configuration from the specified  node  or  from  the
              current node if no node is specified.

       create ip <address>
              Make  the  cluster run booth service on the specified ip address
              as a cluster resource.  Typically this is used to run  a  booth
              site.

       delete Remove  booth  resources  created by the "pcs booth create" com‐
              mand.

       remove Remove booth resources created by the "pcs  booth  create"  com‐
              mand.

       restart
              Restart  booth  resources created by the "pcs booth create" com‐
              mand.

       ticket grant <ticket> [<site address>]
              Grant the ticket to the site specified by the address, and hence
              to the booth formation this site is a member of. When  the  ad‐
              dress is omitted, the site address that has been specified with
              the 'pcs booth create' command is used. Specifying a  site  ad‐
              dress is therefore mandatory when running this command on a host
              in an arbitrator role.
              Note that the ticket must not be already granted in  the  given
              booth formation. To move the ticket to another site without di‐
              rect intervention at the sites, revoke it first and  only  then
              grant it at the other site; such an ad-hoc change is not atomic
              and may, in the worst case, be abrupt.

       ticket revoke <ticket> [<site address>]
              Revoke the ticket in the booth formation identified by  the  ad‐
              dress of one of its member sites. When the address is  omitted,
              the site address that has been specified with a prior 'pcs booth
              create' command is used. Specifying a site address is therefore
              mandatory when running this command on a host in an  arbitrator
              role.

       status Print current status of booth on the local node.

       pull <node>
              Pull booth configuration from the specified node.

       sync [--skip-offline]
              Send booth configuration from the local node to all nodes in the
              cluster.

       enable Enable booth arbitrator service.

       disable
              Disable booth arbitrator service.

       start  Start booth arbitrator service.

       stop   Stop booth arbitrator service.

   status
       [status] [--full] [--hide-inactive]
              View  all  information  about  the cluster and resources (--full
              provides  more  details,  --hide-inactive  hides  inactive   re‐
              sources).

       resources [<resource id | tag id>] [node=<node>] [--hide-inactive]
              Show status of all currently configured resources. If --hide-in‐
              active is specified, only show active resources. If  a  resource
              or  tag  id  is specified, only show status of the specified re‐
              source or resources in the specified tag. If node is  specified,
              only show status of resources configured for the specified node.

       cluster
              View current cluster status.

       corosync
              View current membership information as seen by corosync.

       quorum View current quorum status.

       qdevice <device model> [--full] [<cluster name>]
              Show   runtime  status  of  specified  model  of  quorum  device
              provider.  Using --full will  give  more  detailed  output.   If
              <cluster  name>  is specified, only information about the speci‐
              fied cluster will be displayed.

       booth  Print current status of booth on the local node.

       nodes [corosync | both | config]
              View current status of nodes from pacemaker.  If  'corosync'  is
              specified,  view  current status of nodes from corosync instead.
              If 'both' is specified, view current status of nodes  from  both
              corosync & pacemaker. If 'config' is specified, print nodes from
              corosync & pacemaker configuration.

       pcsd [<node>]...
              Show current status of pcsd on nodes specified, or on all  nodes
              configured in the local cluster if no nodes are specified.

       xml    View xml version of status (output from crm_mon -r -1 -X).

   config
       [show] View full cluster configuration.

       backup [filename]
              Creates a tarball containing the cluster  configuration  files.
              If filename is not specified the standard output will be used.

       restore [--local] [filename]
              Restores the cluster configuration files on all nodes  from  the
              backup.  If filename is not specified the standard input will be
              used.  If --local is specified only the  files  on  the  current
              node will be restored.

       checkpoint
              List all available configuration checkpoints.

       checkpoint view <checkpoint_number>
              Show specified configuration checkpoint.

       checkpoint diff <checkpoint_number> <checkpoint_number>
              Show  differences  between  the  two  specified checkpoints. Use
              checkpoint number 'live' to compare a checkpoint to the  current
              live configuration.
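              For example, to compare checkpoint 1 to the live configuration
              (the checkpoint number is illustrative):
              pcs config checkpoint diff 1 live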

       checkpoint restore <checkpoint_number>
              Restore cluster configuration to specified checkpoint.

   pcsd
       certkey <certificate file> <key file>
              Load custom certificate and key files for use in pcsd.

       status [<node>]...
              Show  current status of pcsd on nodes specified, or on all nodes
              configured in the local cluster if no nodes are specified.

       sync-certificates
              Sync pcsd certificates to all nodes in the local cluster.

       deauth [<token>]...
              Delete locally stored authentication tokens used by remote  sys‐
              tems  to  connect  to  the local pcsd instance. If no tokens are
              specified all tokens will be deleted. After this command is  run
              other nodes will need to re-authenticate against this node to be
              able to connect to it.

   host
       auth (<host name>  [addr=<address>[:<port>]])...  [-u  <username>]  [-p
       <password>]
              Authenticate  local pcs/pcsd against pcsd on specified hosts. It
              is possible to specify an address and a port via which  pcs/pcsd
              will  communicate with each host. If an address is not specified
              a host name will be used. If a port is not specified  2224  will
              be used.
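              For example (host name, address and user name are illustrative):
              pcs host auth node1 addr=10.0.0.11:2224 -u hacluster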

       deauth [<host name>]...
              Delete authentication tokens which allow pcs/pcsd on the current
              system to connect to remote pcsd  instances  on  specified  host
              names.  If  the current system is a member of a cluster, the to‐
              kens will be deleted from all nodes in the cluster. If  no  host
              names  are specified all tokens will be deleted. After this com‐
              mand is run this node will need to re-authenticate against other
              nodes to be able to connect to them.

   node
       attribute [[<node>] [--name <name>] | <node> <name>=<value> ...]
              Manage  node  attributes.   If no parameters are specified, show
              attributes of all nodes.  If one parameter  is  specified,  show
              attributes  of  specified  node.   If  --name is specified, show
              specified attribute's value from all nodes.  If more  parameters
              are specified, set attributes of specified node.  Attributes can
              be removed by setting an attribute without a value.
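              For example, to set an attribute 'rack' on node 'node1' (names
              are illustrative):
              pcs node attribute node1 rack=2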

       maintenance [--all | <node>...] [--wait[=n]]
              Put the specified node(s) into maintenance mode. If no nodes  or
              options are specified, the current node will be put into mainte‐
              nance mode; if --all is specified, all nodes  will  be  put into
              maintenance mode. If --wait is specified, pcs will  wait  up  to
              'n' seconds for the node(s) to be put into maintenance mode  and
              then return 0 on success or 1 if the operation has not succeeded
              yet. If 'n' is not specified it defaults to 60 minutes.

       unmaintenance [--all | <node>...] [--wait[=n]]
              Remove node(s) from maintenance mode. If no nodes or options are
              specified, the current node will be removed  from  maintenance
              mode; if --all is specified, all nodes will be removed from main‐
              tenance mode. If --wait is specified, pcs will wait  up  to  'n'
              seconds for the node(s) to be removed from maintenance mode  and
              then return 0 on success or 1 if the operation has not succeeded
              yet. If 'n' is not specified it defaults to 60 minutes.

       standby [--all | <node>...] [--wait[=n]]
              Put the specified node(s) into standby mode (the specified  node
              will no longer be able to host resources). If no nodes  or  op‐
              tions are specified, the current node will be put  into  standby
              mode; if --all is specified, all nodes will be put into  standby
              mode. If --wait is specified, pcs will wait up to  'n'  seconds
              for the node(s) to be put into standby mode and then return 0 on
              success or 1 if the operation has not succeeded yet. If  'n'  is
              not specified it defaults to 60 minutes.

       unstandby [--all | <node>...] [--wait[=n]]
              Remove node(s) from standby mode (the specified  node  will  now
              be able to host resources). If no nodes or options are specified,
              the current node will be removed from standby mode; if --all  is
              specified, all nodes will be removed from standby mode. If --wait
              is specified, pcs will wait up to 'n' seconds for the node(s) to
              be removed from standby mode and then return 0 on success  or  1
              if the operation has not succeeded yet. If 'n' is not  specified
              it defaults to 60 minutes.

       utilization [[<node>] [--name <name>] | <node> <name>=<value> ...]
              Add specified utilization options to specified node.  If node is
              not  specified,  shows  utilization  of all nodes.  If --name is
              specified, shows specified utilization value from all nodes.  If
              utilization  options  are  not  specified,  shows utilization of
              specified node.  Utilization options should be  in  the  format
              name=value and the value has to be an integer.  Options  may  be
              removed by setting an option without a value.
              Example: pcs node utilization node1 cpu=4 ram=
              For the utilization configuration to be in  effect, the  cluster
              property 'placement-strategy' must be configured accordingly.

   alert
       [config]
              Show all configured alerts.

       create path=<path> [id=<alert-id>] [description=<description>] [options
       [<option>=<value>]...] [meta [<meta-option>=<value>]...]
              Define an alert handler with specified path. Id will be automat‐
              ically generated if it is not specified.
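              For example (the handler path and id are illustrative):
              pcs alert create path=/usr/local/bin/alert.sh id=my-alert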

       update  <alert-id>  [path=<path>]  [description=<description>] [options
       [<option>=<value>]...] [meta [<meta-option>=<value>]...]
              Update an existing alert handler with specified id.  Unspecified
              options will be kept unchanged. If you wish to remove an option,
              set it to an empty value, i.e. 'option_name='.

       delete <alert-id> ...
              Remove alert handlers with specified ids.

       remove <alert-id> ...
              Remove alert handlers with specified ids.

       recipient add  <alert-id>  value=<recipient-value>  [id=<recipient-id>]
       [description=<description>]   [options   [<option>=<value>]...]   [meta
       [<meta-option>=<value>]...]
              Add new recipient to specified alert handler.
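              For example (the alert id and recipient value are illustrative):
              pcs alert recipient add my-alert value=admin@example.com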

       recipient  update  <recipient-id>  [value=<recipient-value>]  [descrip‐
       tion=<description>]  [options  [<option>=<value>]...]  [meta [<meta-op‐
       tion>=<value>]...]
              Update an existing recipient identified by its  id.  Unspecified
              options will be kept unchanged. If you wish to remove an option,
              set it to an empty value, i.e. 'option_name='.

       recipient delete <recipient-id> ...
              Remove specified recipients.

       recipient remove <recipient-id> ...
              Remove specified recipients.

   client
       local-auth [<pcsd-port>] [-u <username>] [-p <password>]
              Authenticate current user to local pcsd. This is required to run
              some pcs commands which may require root  permissions,  such  as
              'pcs cluster start'.

   dr
       config Display disaster-recovery configuration from the local node.

       status [--full] [--hide-inactive]
              Display status of the local and the remote site cluster  (--full
              provides   more  details,  --hide-inactive  hides  inactive  re‐
              sources).

       set-recovery-site <recovery site node>
              Set up disaster-recovery with the local cluster being  the  pri‐
              mary  site. The recovery site is defined by a name of one of its
              nodes.

       destroy
              Permanently  destroy  disaster-recovery  configuration  on   all
              sites.

   tag
       [config|list [<tag id>...]]
              Display configured tags.

       create <tag id> <id> [<id>]...
              Create a tag containing the specified ids.
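              For example (tag and resource ids are illustrative):
              pcs tag create webstack VirtualIP WebServer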

       delete <tag id>...
              Delete specified tags.

       remove <tag id>...
              Delete specified tags.

       update  <tag  id>  [add  <id> [<id>]... [--before <id> | --after <id>]]
       [remove <id> [<id>]...]
              Update a tag using the specified ids. Ids can be added,  removed
              or moved in a tag. Use --before or --after to specify the  posi‐
              tion of the added ids relative to an id already in the tag.  By
              adding ids that are already in the tag and  specifying  --after
              or --before, you can move the ids within the tag.
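              For example, to add 'Database' right after 'VirtualIP' in the
              tag (ids are illustrative):
              pcs tag update webstack add Database --after VirtualIP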

EXAMPLES
       Show all resources
              # pcs resource config

       Show options specific to the 'VirtualIP' resource
              # pcs resource config VirtualIP

       Create a new resource called 'VirtualIP' with options
              #    pcs   resource   create   VirtualIP   ocf:heartbeat:IPaddr2
              ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s

       Create the same resource using the short agent name
              #  pcs  resource  create   VirtualIP   IPaddr2   ip=192.168.0.99
              cidr_netmask=32 nic=eth2 op monitor interval=30s

       Change the ip address of VirtualIP and remove the nic option
              # pcs resource update VirtualIP ip=192.168.0.98 nic=

       Delete the VirtualIP resource
              # pcs resource delete VirtualIP

       Create  the  MyStonith  stonith  fence_virt device which can fence host
       'f1'
              # pcs stonith create MyStonith fence_virt pcmk_host_list=f1

       Set the stonith-enabled property to false on the  cluster  (which  dis‐
       ables stonith)
              # pcs property set stonith-enabled=false

USING --FORCE IN PCS COMMANDS
       Various pcs commands accept the --force option. Its purpose is to over‐
       ride some of the checks that pcs performs and some of the errors  that
       may occur when a pcs command is run. When such an  error  occurs,  pcs
       will print the error with a note that it may be overridden. The  exact
       behavior of the option is different for each pcs  command.  Using  the
       --force option can lead to situations that would normally be prevented
       by the logic of pcs commands and therefore its use is strongly discour‐
       aged unless you know what you are doing.

ENVIRONMENT VARIABLES
       EDITOR
              Path to a plain-text editor. This is used when pcs is  requested
              to present text for the user to edit.

       no_proxy, https_proxy, all_proxy, NO_PROXY, HTTPS_PROXY, ALL_PROXY
              These environment variables (listed according to their  priori‐
              ties) control how pcs handles proxy servers when  connecting  to
              cluster nodes. See the curl(1) man page for details.

CHANGES IN PCS-0.11
       This  section summarizes the most important changes in commands done in
       pcs-0.11.x compared to pcs-0.10.x. For a detailed description  of  cur‐
       rent commands see above.

   Legacy role names
       Roles  'Master'  and 'Slave' are deprecated and should not be used any‐
       more. Instead use 'Promoted' and 'Unpromoted' respectively.  Similarly,
       --master has been deprecated and replaced with --promoted.

   cluster
       uidgid rm
              This  command has been replaced with 'pcs cluster uidgid delete'
              and 'pcs cluster uidgid remove'.

   resource
       move   The 'pcs resource move' command now automatically  removes  the
              location constraint used for moving a resource. It  is  equiva‐
              lent to 'pcs resource move --autodelete' from  pcs-0.10.9.  The
              legacy functionality of the 'resource move'  command  is  still
              available as 'resource move-with-constraint <resource id>'.

       show --full
              This command has been replaced with 'pcs resource config'.

       show --groups
              This command has been replaced with 'pcs resource group list'.

       show   This command has been replaced with 'pcs resource status'.

   stonith
       show --full
              This command has been replaced with 'pcs stonith config'.

       show   This command has been replaced with 'pcs stonith status'.

CHANGES IN PCS-0.10
       This section summarizes the most important changes in commands done  in
       pcs-0.10.x compared to pcs-0.9.x. For a detailed description  of  cur‐
       rent commands see above.

   acl
       show   The 'pcs acl show' command has been deprecated and will  be  re‐
              moved.  Please  use  'pcs  acl  config'  instead.  Applicable in
              pcs-0.10.9 and newer.

   alert
       show   The 'pcs alert show' command has been deprecated and will be re‐
              moved.  Please  use  'pcs  alert  config' instead. Applicable in
              pcs-0.10.9 and newer.

   cluster
       auth   The 'pcs cluster auth' command only authenticates nodes in a lo‐
              cal cluster and does not accept a node list. The new command for
              authentication is 'pcs host auth'. It allows one to specify host
              names, addresses and pcsd ports.

       node add
              Custom node names and Corosync 3.x with knet are now fully  sup‐
              ported; therefore, the syntax has been completely changed.
              The --device and --watchdog options have been replaced with 'de‐
              vice' and 'watchdog' options, respectively.

       pcsd-status
              The  'pcs  cluster  pcsd-status' command has been deprecated and
              will be removed. Please use 'pcs pcsd  status'  or  'pcs  status
              pcsd' instead. Applicable in pcs-0.10.9 and newer.

       quorum This command has been replaced with 'pcs quorum'.

       remote-node add
              This   command   has   been  replaced  with  'pcs  cluster  node
              add-guest'.

       remote-node remove
              This  command  has  been  replaced  with   'pcs   cluster   node
              delete-guest' and its alias 'pcs cluster node remove-guest'.

       setup  Custom node names and Corosync 3.x with knet are now fully  sup‐
              ported; therefore, the syntax has been completely changed.
              The --name option has been removed. The first parameter  of  the
              command is the cluster name now.
              The  --local  option  has  been  replaced  with  --corosync_conf
              <path>.

       standby
              This command has been replaced with 'pcs node standby'.

       uidgid rm
              This command  has  been  deprecated,  use  'pcs  cluster  uidgid
              delete' or 'pcs cluster uidgid remove' instead.

       unstandby
              This command has been replaced with 'pcs node unstandby'.

       verify The -V option has been replaced with --full.
              To specify a filename, use the -f option.

   constraint
       list   The  'pcs constraint list' command, as well as its variants 'pcs
              constraint [location | colocation | order | ticket]  list',  has
              been  deprecated and will be removed. Please use 'pcs constraint
              [location | colocation | order | ticket] config' instead. Appli‐
              cable in pcs-0.10.9 and newer.

       show   The  'pcs constraint show' command, as well as its variants 'pcs
              constraint [location | colocation | order | ticket]  show',  has
              been  deprecated and will be removed. Please use 'pcs constraint
              [location | colocation | order | ticket] config' instead. Appli‐
              cable in pcs-0.10.9 and newer.

   pcsd
       clear-auth
              This  command  has been replaced with 'pcs host deauth' and 'pcs
              pcsd deauth'.

   property
       list   The 'pcs property list' command has been deprecated and will  be
              removed. Please use 'pcs property config' instead. Applicable in
              pcs-0.10.9 and newer.

       set    The --node option is no longer supported. Use the 'pcs node  at‐
              tribute' command to set node attributes.

       show   The  --node option is no longer supported. Use the 'pcs node at‐
              tribute' command to view node attributes.
              The 'pcs property show' command has been deprecated and will  be
              removed. Please use 'pcs property config' instead. Applicable in
              pcs-0.10.9 and newer.

       unset  The --node option is no longer supported. Use the 'pcs node  at‐
              tribute' command to unset node attributes.

   resource
       create The 'master' keyword has been changed to 'promotable'.

       failcount reset
              The command has been removed as 'pcs resource cleanup' does  ex‐
              actly the same job.

       master This command has been replaced with 'pcs resource promotable'.

       show   Previously, this command displayed either status  or  configura‐
              tion  of  resources  depending on the parameters specified. This
              was confusing; therefore, the command was replaced  by  several
              new commands. To display resources status, run 'pcs resource' or
              'pcs resource status'. To display resources  configuration,  run
              'pcs  resource config' or 'pcs resource config <resource name>'.
              To display configured resource groups, run 'pcs  resource  group
              list'.

   status
       groups This command has been replaced with 'pcs resource group list'.

   stonith
       level add | clear | delete | remove
              Delimiting stonith devices with a comma is  deprecated;  use  a
              space instead. Applicable in pcs-0.10.9 and newer.

       level clear
              Syntax of the command has been fixed so that it is not ambiguous
              any  more.  New syntax is 'pcs stonith level clear [target <tar‐
              get> | stonith <stonith id>...]'. Old syntax 'pcs stonith  level
              clear  [<target> | <stonith ids>]' is deprecated but still func‐
              tional in pcs-0.10.x. Applicable in pcs-0.10.9 and newer.

       level delete | remove
              Syntax of the command has been fixed so that it is not ambiguous
              any more. New syntax is 'pcs stonith level delete | remove [tar‐
              get  <target>]  [stonith  <stonith  id>]...'.  Old  syntax  'pcs
              stonith  level  delete | remove [<target>] [<stonith id>]...' is
              deprecated but still functional  in  pcs-0.10.x.  Applicable  in
              pcs-0.10.9 and newer.

       sbd device setup
              The --device option has been replaced with the 'device' option.

       sbd enable
              The --device and --watchdog options have been replaced with 'de‐
              vice' and 'watchdog' options, respectively.

       show   Previously, this command displayed either status  or  configura‐
              tion of stonith resources depending on the parameters specified.
              This was confusing; therefore, the command was replaced by sev‐
              eral new commands. To display stonith resources status, run 'pcs
              stonith' or 'pcs stonith status'. To display  stonith  resources
              configuration,  run  'pcs stonith config' or 'pcs stonith config
              <stonith name>'.

   tag
       list   The 'pcs tag list' command has been deprecated and will  be  re‐
              moved.  Please  use  'pcs  tag  config'  instead.  Applicable in
              pcs-0.10.9 and newer.

SEE ALSO
       http://clusterlabs.org/doc/

       pcsd(8), pcs_snmp_agent(8)

       corosync_overview(8),  votequorum(5),  corosync.conf(5),  corosync-qde‐
       vice(8),          corosync-qdevice-tool(8),          corosync-qnetd(8),
       corosync-qnetd-tool(8)

       pacemaker-controld(7),  pacemaker-fenced(7),   pacemaker-schedulerd(7),
       crm_mon(8), crm_report(8), crm_simulate(8)

       boothd(8), sbd(8)



pcs 0.11.6                        2023-06-20                            PCS(8)