1PCS(8)                  System Administration Utilities                 PCS(8)
2
3
4

NAME

6       pcs - pacemaker/corosync configuration system
7

SYNOPSIS

9       pcs [-f file] [-h] [commands]...
10

DESCRIPTION

12       Control and configure pacemaker and corosync.
13

OPTIONS

15       -h, --help
16              Display usage and exit.
17
18       -f file
19              Perform actions on file instead of active CIB.
20              Commands  supporting  the  option  use  the initial state of the
21              specified file as their input and then overwrite the  file  with
22              the state reflecting the requested operation(s).
23              A  few  commands  only  use the specified file in read-only mode
24              since their effect is not a CIB modification.
25
26       --debug
27              Print all network traffic and external commands run.
28
29       --version
30              Print pcs version information. List pcs capabilities  if  --full
31              is specified.
32
33       --request-timeout=<timeout>
34              Timeout  for  each  outgoing request to another node in seconds.
35              Default is 60s.
36
37   Commands:
38       cluster
39               Configure cluster options and nodes.
40
41       resource
42               Manage cluster resources.
43
44       stonith
45               Manage fence devices.
46
47       constraint
48               Manage resource constraints.
49
50       property
51               Manage pacemaker properties.
52
53       acl
54               Manage pacemaker access control lists.
55
56       qdevice
57               Manage quorum device provider on the local host.
58
59       quorum
60               Manage cluster quorum settings.
61
62       booth
63               Manage booth (cluster ticket manager).
64
65       status
66               View cluster status.
67
68       config
69               View and manage cluster configuration.
70
71       pcsd
72               Manage pcs daemon.
73
74       host
75               Manage hosts known to pcs/pcsd.
76
77       node
78               Manage cluster nodes.
79
80       alert
81               Manage pacemaker alerts.
82
83       client
84               Manage pcsd client configuration.
85
86       dr
87               Manage disaster recovery configuration.
88
89       tag
90               Manage pacemaker tags.
91
92   resource
93       [status [--hide-inactive]]
94              Show  status  of  all   currently   configured   resources.   If
95              --hide-inactive is specified, only show active resources.
96
97       config [<resource id>]...
98              Show  options  of  all  currently  configured  resources  or  if
99              resource ids are specified show the options  for  the  specified
100              resource ids.
101
102       list [filter] [--nodesc]
103              Show  list  of  all available resource agents (if filter is pro‐
104              vided then only resource agents  matching  the  filter  will  be
105              shown). If --nodesc is used then descriptions of resource agents
106              are not printed.
107
108       describe [<standard>:[<provider>:]]<type> [--full]
109              Show options for the specified resource. If --full is specified,
110              all options including advanced and deprecated ones are shown.
111
112       create   <resource   id>   [<standard>:[<provider>:]]<type>   [resource
113       options] [op <operation action> <operation options> [<operation action>
114       <operation  options>]...]  [meta <meta options>...] [clone [<clone id>]
115       [<clone options>] | promotable [<clone id>]  [<promotable  options>]  |
116       --group  <group  id> [--before <resource id> | --after <resource id>] |
117       bundle <bundle id>] [--disabled] [--no-default-ops] [--wait[=n]]
118              Create specified resource. If clone is used a clone resource  is
119              created.  If  promotable  is used a promotable clone resource is
120              created. If --group is specified the resource is  added  to  the
121              position of the added resource relative to some resource already
122              existing in the group. If bundle is specified, the resource will
123              be created inside the specified bundle. If --disabled is
124              specified the resource is not started automatically. If
125              specified   the   resource  is  not  started  automatically.  If
126              --no-default-ops is specified, only monitor operations are  cre‐
127              ated  for the resource and all other operations use default set‐
128              tings. If --wait is specified, pcs will wait up to  'n'  seconds
129              for  the  resource to start and then return 0 if the resource is
130              started, or 1 if the resource has not yet started. If 'n' is not
131              specified it defaults to 60 minutes.
132
133              Example: Create a new resource called 'VirtualIP' with IP address
134              192.168.0.99, a netmask of 32, monitored every 30 seconds, on
135              eth2: pcs resource create VirtualIP ocf:heartbeat:IPaddr2
136              ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor
137              interval=30s
138
139       delete <resource id|group id|bundle id|clone id>
140              Deletes  the resource, group, bundle or clone (and all resources
141              within the group/bundle/clone).
142
143       remove <resource id|group id|bundle id|clone id>
144              Deletes the resource, group, bundle or clone (and all  resources
145              within the group/bundle/clone).
146
147       enable <resource id | tag id>... [--wait[=n]]
148              Allow  the cluster to start the resources. Depending on the rest
149              of the configuration (constraints, options, failures, etc),  the
150              resources  may  remain stopped. If --wait is specified, pcs will
151              wait up to 'n' seconds for  the  resources  to  start  and  then
152              return  0  if  the  resources are started, or 1 if the resources
153              have not yet started. If 'n' is not specified it defaults to  60
154              minutes.
155
156       disable  <resource  id  | tag id>... [--safe [--no-strict]] [--simulate
157       [--brief]] [--wait[=n]]
158              Attempt to stop the resources if they are running and forbid the
159              cluster  from  starting them again. Depending on the rest of the
160              configuration  (constraints,  options,   failures,   etc),   the
161              resources may remain started.
162              If --safe is specified, no changes to the cluster configuration
163              will be made if any resources other than those specified would
164              be affected in any way.
165              If --no-strict is specified, no changes to the cluster
166              configuration will be made if any resources other than those
167              specified would get stopped or demoted. Moving resources
                 between nodes is allowed.
168              If --simulate is specified, no changes to the cluster configura‐
169              tion will be made and the effect of the changes will be  printed
170              instead.  If  --brief is also specified, only a list of affected
171              resources will be printed.
172              If --wait is specified, pcs will wait up to 'n' seconds for  the
173              resources to stop and then return 0 if the resources are stopped
174              or 1 if the resources have not stopped. If 'n' is not  specified
175              it defaults to 60 minutes.
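
              Example (illustrative; 'WebSite' is a placeholder resource id):
              preview which resources would be affected, without changing the
              cluster:
                  pcs resource disable WebSite --simulate --brief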
176
177       safe-disable  <resource  id  |  tag  id>...  [--no-strict]  [--simulate
178       [--brief]] [--wait[=n]] [--force]
179              Attempt to stop the resources if they are running and forbid the
180              cluster  from  starting them again. Depending on the rest of the
181              configuration  (constraints,  options,   failures,   etc),   the
182              resources may remain started. No changes to the cluster
183              configuration will be made if any resources other than those
184              specified would be affected in any way.
185              If --no-strict is specified, no changes to the cluster
186              configuration will be made if any resources other than those
187              specified would get stopped or demoted. Moving resources
                 between nodes is allowed.
188              If --simulate is specified, no changes to the cluster configura‐
189              tion will be made and the effect of the changes will be  printed
190              instead.  If  --brief is also specified, only a list of affected
191              resources will be printed.
192              If --wait is specified, pcs will wait up to 'n' seconds for  the
193              resources to stop and then return 0 if the resources are stopped
194              or 1 if the resources have not stopped. If 'n' is not  specified
195              it defaults to 60 minutes.
196              If  --force  is  specified,  checks  for  safe  disable  will be
197              skipped.
198
199       restart <resource id> [node] [--wait=n]
200              Restart the resource specified. If a node is  specified  and  if
201              the  resource  is a clone or bundle it will be restarted only on
202              the node specified. If --wait is specified, pcs will wait up
203              to 'n' seconds for the resource to be restarted and return 0 if
204              the restart was successful or 1 if it was not.
205
206       debug-start <resource id> [--full]
207              This command will force the specified resource to start on  this
208              node  ignoring  the cluster recommendations and print the output
209              from  starting  the  resource.   Using  --full  will  give  more
210              detailed  output.   This  is mainly used for debugging resources
211              that fail to start.
212
213       debug-stop <resource id> [--full]
214              This command will force the specified resource to stop  on  this
215              node  ignoring  the cluster recommendations and print the output
216              from  stopping  the  resource.   Using  --full  will  give  more
217              detailed  output.   This  is mainly used for debugging resources
218              that fail to stop.
219
220       debug-promote <resource id> [--full]
221              This command will force the specified resource to be promoted on
222              this  node  ignoring  the  cluster recommendations and print the
223              output from promoting the resource.  Using --full will give more
224              detailed  output.   This  is mainly used for debugging resources
225              that fail to promote.
226
227       debug-demote <resource id> [--full]
228              This command will force the specified resource to be demoted  on
229              this  node  ignoring  the  cluster recommendations and print the
230              output from demoting the resource.  Using --full will give  more
231              detailed  output.   This  is mainly used for debugging resources
232              that fail to demote.
233
234       debug-monitor <resource id> [--full]
235              This command will force the specified resource to  be  monitored
236              on  this node ignoring the cluster recommendations and print the
237              output from monitoring the resource.   Using  --full  will  give
238              more  detailed  output.   This  is  mainly  used  for  debugging
239              resources that fail to be monitored.
240
241       move <resource id> [destination node] [--master]  [lifetime=<lifetime>]
242       [--wait[=n]]
243              Move  the  resource  off  the node it is currently running on by
244              creating a -INFINITY location constraint to  ban  the  node.  If
245              destination node is specified the resource will be moved to that
246              node by creating an INFINITY location constraint to  prefer  the
247              destination  node.  If --master is used the scope of the command
248              is limited to the master role and you must  use  the  promotable
249              clone id (instead of the resource id).
250
251              If  lifetime  is specified then the constraint will expire after
252              that time, otherwise it defaults to infinity and the  constraint
253              can  be  cleared manually with 'pcs resource clear' or 'pcs con‐
254              straint delete'. Lifetime is expected to  be  specified  as  ISO
255              8601  duration (see https://en.wikipedia.org/wiki/ISO_8601#Dura‐
256              tions).
257
258              If --wait is specified, pcs will wait up to 'n' seconds for  the
259              resource  to move and then return 0 on success or 1 on error. If
260              'n' is not specified it defaults to 60 minutes.
261
262              If you want the resource to preferably  avoid  running  on  some
263              nodes  but be able to failover to them use 'pcs constraint loca‐
264              tion avoids'.
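
              Example (illustrative; 'WebSite' and 'node2' are placeholders):
              move the resource to node2 with a constraint that expires after
              one hour (ISO 8601 duration PT1H):
                  pcs resource move WebSite node2 lifetime=PT1H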
265
266       ban <resource id> [node] [--master] [lifetime=<lifetime>] [--wait[=n]]
267              Prevent the resource id specified from running on the  node  (or
268              on the current node it is running on if no node is specified) by
269              creating a -INFINITY location constraint. If  --master  is  used
270              the  scope  of the command is limited to the master role and you
271              must use the promotable clone id (instead of the resource id).
272
273              If lifetime is specified then the constraint will  expire  after
274              that  time, otherwise it defaults to infinity and the constraint
275              can be cleared manually with 'pcs resource clear' or  'pcs  con‐
276              straint  delete'.  Lifetime  is  expected to be specified as ISO
277              8601 duration (see  https://en.wikipedia.org/wiki/ISO_8601#Dura‐
278              tions).
279
280              If  --wait is specified, pcs will wait up to 'n' seconds for the
281              resource to move and then return 0 on success or 1 on error.  If
282              'n' is not specified it defaults to 60 minutes.
283
284              If  you  want  the  resource to preferably avoid running on some
285              nodes but be able to failover to them use 'pcs constraint  loca‐
286              tion avoids'.
287
288       clear <resource id> [node] [--master] [--expired] [--wait[=n]]
289              Remove  constraints  created by move and/or ban on the specified
290              resource (and node if specified). If --master is used the  scope
291              of  the  command  is limited to the master role and you must use
292              the promotable clone id (instead of the resource id). If --expired is
293              specified,  only  constraints  with  expired  lifetimes  will be
294              removed. If --wait is specified, pcs will wait up to 'n' seconds
295              for  the  operation  to finish (including starting and/or moving
296              resources if appropriate) and then return 0 on success or  1  on
297              error. If 'n' is not specified it defaults to 60 minutes.
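
              Example (illustrative; 'WebSite' is a placeholder resource id):
              remove only the expired move / ban constraints of the resource:
                  pcs resource clear WebSite --expired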
298
299       standards
300              List  available  resource  agent  standards  supported  by  this
301              installation (OCF, LSB, etc.).
302
303       providers
304              List available OCF resource agent providers.
305
306       agents [standard[:provider]]
307              List  available  agents  optionally  filtered  by  standard  and
308              provider.
309
310       update <resource id> [resource options] [op [<operation action> <opera‐
311       tion options>]...] [meta <meta options>...] [--wait[=n]]
312              Add/Change options to specified resource, clone  or  multi-state
313              resource.   If an operation (op) is specified it will update the
314              first found operation with the  same  action  on  the  specified
315              resource,  if  no  operation  with that action exists then a new
316              operation will be created.  (WARNING: all  existing  options  on
317              the  updated  operation will be reset if not specified.)  If you
318              want to create multiple monitor operations you  should  use  the
319              'op  add'  &  'op remove' commands.  If --wait is specified, pcs
320              will wait up to 'n' seconds for the changes to take  effect  and
321              then return 0 if the changes have been processed or 1 otherwise.
322              If 'n' is not specified it defaults to 60 minutes.
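
              Example (illustrative; reuses the 'VirtualIP' resource from the
              create example above): change a resource option and the monitor
              interval. Remember that options of the updated monitor operation
              which are not specified here will be reset:
                  pcs resource update VirtualIP cidr_netmask=24 op monitor
              interval=60s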
323
324       op add <resource id> <operation action> [operation properties]
325              Add operation for specified resource.
326
327       op delete <resource id> <operation action> [<operation properties>...]
328              Remove specified operation (note: you  must  specify  the  exact
329              operation properties to properly remove an existing operation).
330
331       op delete <operation id>
332              Remove the specified operation id.
333
334       op remove <resource id> <operation action> [<operation properties>...]
335              Remove  specified  operation  (note:  you must specify the exact
336              operation properties to properly remove an existing operation).
337
338       op remove <operation id>
339              Remove the specified operation id.
340
341       op defaults [config] [--all] [--full] [--no-check-expired]
342              List currently configured  default  values  for  operations.  If
343              --all  is specified, also list expired sets of values. If --full
344              is specified, also list ids. If --no-check-expired is specified,
345              do not evaluate whether sets of values are expired.
346
347       op defaults <name>=<value>
348              Set default values for operations.
349              NOTE:  Defaults  do  not  apply to resources which override them
350              with their own defined values.
351
352       op defaults set create [<set options>] [meta [<name>=<value>]...] [rule
353       [<expression>]]
354              Create  a new set of default values for resource operations. You
355              may specify a rule describing resources and / or  operations  to
356              which the set applies.
357
358              Set options are: id, score
359
360              Expression looks like one of the following:
361                op <operation name> [interval=<interval>]
362                resource [<standard>]:[<provider>]:[<type>]
363                defined|not_defined <node attribute>
364                <node   attribute>   lt|gt|lte|gte|eq|ne  [string|integer|num‐
365              ber|version] <value>
366                date gt|lt <date>
367                date in_range [<date>] to <date>
368                date in_range <date> to duration <duration options>
369                date-spec <date-spec options>
370                <expression> and|or <expression>
371                (<expression>)
372
373              You may specify all or any of 'standard', 'provider' and  'type'
374              in  a resource expression. For example: 'resource ocf::' matches
375              all  resources  of  'ocf'  standard,  while  'resource  ::Dummy'
376              matches  all resources of 'Dummy' type regardless of their stan‐
377              dard and provider.
378
379              Dates are expected to conform to ISO 8601 format.
380
381              Duration options are:  hours,  monthdays,  weekdays,  yearsdays,
382              months,  weeks,  years, weekyears, moon. Value for these options
383              is an integer.
384
385              Date-spec options are: hours,  monthdays,  weekdays,  yearsdays,
386              months,  weeks,  years, weekyears, moon. Value for these options
387              is an integer or a range written as integer-integer.
388
389              NOTE: Defaults do not apply to  resources  which  override  them
390              with their own defined values.
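
              Example (a minimal sketch; the timeout value and the 'Dummy'
              resource type are illustrative): apply a default to monitor
              operations of resources of type 'Dummy' only:
                  pcs resource op defaults set create meta timeout=90s rule
              resource ::Dummy and op monitor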
391
392       op defaults set delete [<set id>]...
393              Delete specified options sets.
394
395       op defaults set remove [<set id>]...
396              Delete specified options sets.
397
398       op defaults set update <set id> [meta [<name>=<value>]...]
399              Add,  remove or change values in specified set of default values
400              for resource operations.
401              NOTE: Defaults do not apply to  resources  which  override  them
402              with their own defined values.
403
404       op defaults update <name>=<value>...
405              Set  default values for operations. This is a simplified command
406              useful for cases when you only manage one set of default values.
407              NOTE: Defaults do not apply to  resources  which  override  them
408              with their own defined values.
409
410       meta <resource id | group id | clone id> <meta options> [--wait[=n]]
411              Add specified options to the specified resource, group or clone.
412              Meta options should be in the format name=value; options may
413              be  removed  by  setting an option without a value. If --wait is
414              specified, pcs will wait up to 'n' seconds for  the  changes  to
415              take effect and then return 0 if the changes have been processed
416              or 1 otherwise. If 'n' is not specified it defaults to  60  min‐
417              utes.
418              Example:   pcs  resource  meta  TestResource  failure-timeout=50
419              stickiness=
420
421       group list
422              Show  all  currently  configured  resource  groups   and   their
423              resources.
424
425       group  add  <group  id>  <resource  id> [resource id] ... [resource id]
426       [--before <resource id> | --after <resource id>] [--wait[=n]]
427              Add the specified resource to the group, creating the  group  if
428              it  does  not exist. If the resource is present in another group
429              it is moved to the new group. You can use --before or --after to
430              specify the position of the added resources relative to some
431              resource already existing in the group. By adding resources to a
432              group they are already in and specifying --after or --before you
433              can move the resources in the group. If --wait is specified, pcs
434              will wait up to 'n' seconds for the operation to finish (includ‐
435              ing moving resources if appropriate) and then return 0  on  suc‐
436              cess  or  1  on error. If 'n' is not specified it defaults to 60
437              minutes.
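
              Example (illustrative; 'WebGroup' and 'WebSite' are placeholders,
              'VirtualIP' echoes the create example above): create the group if
              needed and add both resources to it in the given order:
                  pcs resource group add WebGroup VirtualIP WebSite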
438
439       group delete <group id> [resource id]... [--wait[=n]]
440              Remove the group (note: this does not remove any resources  from
441              the cluster) or if resources are specified, remove the specified
442              resources from the group.  If --wait is specified, pcs will wait
443              up  to 'n' seconds for the operation to finish (including moving
444              resources if appropriate) and then return 0 on success or 1 on
445              error.  If 'n' is not specified it defaults to 60 minutes.
446
447       group remove <group id> [resource id]... [--wait[=n]]
448              Remove  the group (note: this does not remove any resources from
449              the cluster) or if resources are specified, remove the specified
450              resources from the group.  If --wait is specified, pcs will wait
451              up to 'n' seconds for the operation to finish (including  moving
452              resources if appropriate) and then return 0 on success or 1 on
453              error.  If 'n' is not specified it defaults to 60 minutes.
454
455       ungroup <group id> [resource id]... [--wait[=n]]
456              Remove the group (note: this does not remove any resources  from
457              the cluster) or if resources are specified, remove the specified
458              resources from the group.  If --wait is specified, pcs will wait
459              up  to 'n' seconds for the operation to finish (including moving
460              resources if appropriate) and then return 0 on success or 1 on
461              error.  If 'n' is not specified it defaults to 60 minutes.
462
463       clone  <resource  id  |  group  id>  [<clone  id>]  [clone  options]...
464       [--wait[=n]]
465              Set up the specified resource or group as a clone. If --wait  is
466              specified,  pcs will wait up to 'n' seconds for the operation to
467              finish (including starting clone instances if  appropriate)  and
468              then  return 0 on success or 1 on error. If 'n' is not specified
469              it defaults to 60 minutes.
470
471       promotable <resource id | group id>  [<clone  id>]  [clone  options]...
472       [--wait[=n]]
473              Set  up  the  specified resource or group as a promotable clone.
474              This is an alias for 'pcs  resource  clone  <resource  id>  pro‐
475              motable=true'.
476
477       unclone <resource id | group id> [--wait[=n]]
478              Remove  the clone which contains the specified group or resource
479              (the resource or group will not be removed).  If --wait is spec‐
480              ified, pcs will wait up to 'n' seconds for the operation to fin‐
481              ish (including stopping clone instances if appropriate) and then
482              return  0  on success or 1 on error.  If 'n' is not specified it
483              defaults to 60 minutes.
484
485       bundle  create  <bundle  id>  container  <container  type>  [<container
486       options>]  [network  <network  options>]  [port-map  <port options>]...
487       [storage-map <storage options>]... [meta <meta  options>]  [--disabled]
488       [--wait[=n]]
489              Create  a  new bundle encapsulating no resources. The bundle can
490              be used either as it is or a resource may be put into it at  any
491              time.  If  --disabled  is  specified,  the bundle is not started
492              automatically. If --wait is specified, pcs will wait up  to  'n'
493              seconds  for the bundle to start and then return 0 on success or
494              1 on error. If 'n' is not specified it defaults to 60 minutes.
495
496       bundle reset <bundle id> [container <container options>] [network <net‐
497       work  options>]  [port-map  <port  options>]...  [storage-map  <storage
498       options>]... [meta <meta options>] [--disabled] [--wait[=n]]
499              Configure specified bundle with  given  options.  Unlike  bundle
500              update, this command resets the bundle according to the given options -
501              no previous options are kept. Resources inside  the  bundle  are
502              kept  as they are. If --disabled is specified, the bundle is not
503              started automatically. If --wait is specified, pcs will wait  up
504              to 'n' seconds for the bundle to start and then return 0 on suc‐
505              cess or 1 on error. If 'n' is not specified it  defaults  to  60
506              minutes.
507
508       bundle  update  <bundle  id>  [container  <container options>] [network
509       <network options>] [port-map (add <port options>) |  (delete  |  remove
510       <id>...)]...  [storage-map  (add  <storage options>) | (delete | remove
511       <id>...)]... [meta <meta options>] [--wait[=n]]
512              Add, remove or change options to specified bundle. If  you  wish
513              to  update  a  resource encapsulated in the bundle, use the 'pcs
514              resource update' command instead and specify  the  resource  id.
515              If  --wait is specified, pcs will wait up to 'n' seconds for the
516              operation to finish (including moving resources if  appropriate)
517              and then return 0 on success or 1 on error.  If 'n' is not spec‐
518              ified it defaults to 60 minutes.
519
520       manage <resource id | tag id>... [--monitor]
521              Set resources listed to managed mode (default). If --monitor  is
522              specified, enable all monitor operations of the resources.
523
524       unmanage <resource id | tag id>... [--monitor]
525              Set  resources  listed  to unmanaged mode. When a resource is in
526              unmanaged mode, the cluster is not allowed to start or stop the
527              resource.  If --monitor is specified, disable all monitor opera‐
528              tions of the resources.
529
530       defaults [config] [--all] [--full] [--no-check-expired]
531              List currently configured default values for resources. If --all
532              is  specified,  also  list  expired sets of values. If --full is
533              specified, also list ids. If --no-check-expired is specified, do
534              not evaluate whether sets of values are expired.
535
536       defaults <name>=<value>
537              Set default values for resources.
538              NOTE:  Defaults  do  not  apply to resources which override them
539              with their own defined values.
540
541       defaults set create [<set options>]  [meta  [<name>=<value>]...]  [rule
542       [<expression>]]
543              Create  a new set of default values for resources. You may spec‐
544              ify a rule describing resources to which the set applies.
545
546              Set options are: id, score
547
548              Expression looks like one of the following:
549                resource [<standard>]:[<provider>]:[<type>]
550                defined|not_defined <node attribute>
551                <node  attribute>   lt|gt|lte|gte|eq|ne   [string|integer|num‐
552              ber|version] <value>
553                date gt|lt <date>
554                date in_range [<date>] to <date>
555                date in_range <date> to duration <duration options>
556                date-spec <date-spec options>
557                <expression> and|or <expression>
558                (<expression>)
559
560              You  may specify all or any of 'standard', 'provider' and 'type'
561              in a resource expression. For example: 'resource ocf::'  matches
562              all  resources  of  'ocf'  standard,  while  'resource  ::Dummy'
563              matches all resources of 'Dummy' type regardless of their  stan‐
564              dard and provider.
565
566              Dates are expected to conform to ISO 8601 format.
567
568              Duration  options  are:  hours,  monthdays, weekdays, yearsdays,
569              months, weeks, years, weekyears, moon. Value for  these  options
570              is an integer.
571
572              Date-spec  options  are:  hours, monthdays, weekdays, yearsdays,
573              months, weeks, years, weekyears, moon. Value for  these  options
574              is an integer or a range written as integer-integer.
575
576              NOTE:  Defaults  do  not  apply to resources which override them
577              with their own defined values.
578
579       defaults set delete [<set id>]...
580              Delete specified options sets.
581
582       defaults set remove [<set id>]...
583              Delete specified options sets.
584
585       defaults set update <set id> [meta [<name>=<value>]...]
586              Add, remove or change values in specified set of default  values
587              for resources.
588              NOTE:  Defaults  do  not  apply to resources which override them
589              with their own defined values.
590
591       defaults update <name>=<value>...
592              Set default values for resources. This is a  simplified  command
593              useful for cases when you only manage one set of default values.
594              NOTE:  Defaults  do  not  apply to resources which override them
595              with their own defined values.
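
              Example (illustrative; the value is arbitrary): set a default
              resource-stickiness for all resources which do not override it:
                  pcs resource defaults update resource-stickiness=100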
596
597       cleanup [<resource id>]  [node=<node>]  [operation=<operation>  [inter‐
598       val=<interval>]] [--strict]
599              Make the cluster forget failed operations from the history of the
600              resource and re-detect its current state. This can be useful  to
601              purge knowledge of past failures that have since been resolved.
602              If  the  named  resource  is  part  of  a group, or one numbered
603              instance of a clone or bundled resource, the clean-up applies to
604              the whole collective resource unless --strict is given.
605              If  a  resource id is not specified then all resources / stonith
606              devices will be cleaned up.
607              If a node is not specified then resources / stonith  devices  on
608              all nodes will be cleaned up.
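
              Example (illustrative; 'VirtualIP' and 'node1' are placeholders):
              forget failed operations of a single resource on a single node:
                  pcs resource cleanup VirtualIP node=node1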
609
610       refresh [<resource id>] [node=<node>] [--strict]
611              Make  the cluster forget the complete operation history (includ‐
612              ing failures) of the resource and re-detect its  current  state.
613              If  you are interested in forgetting failed operations only, use
614              the 'pcs resource cleanup' command.
615              If the named resource is  part  of  a  group,  or  one  numbered
616              instance  of a clone or bundled resource, the refresh applies to
617              the whole collective resource unless --strict is given.
618              If a resource id is not specified then all resources  /  stonith
619              devices will be refreshed.
620              If  a  node is not specified then resources / stonith devices on
621              all nodes will be refreshed.
622
623       failcount show  [<resource  id>]  [node=<node>]  [operation=<operation>
624       [interval=<interval>]] [--full]
625              Show  current  failcount for resources, optionally filtered by a
626              resource, node, operation and its interval. If --full is  speci‐
627              fied  do  not  sum  failcounts  per  resource and node. Use 'pcs
628              resource cleanup' or 'pcs resource refresh' to reset failcounts.
629
630       relocate dry-run [resource1] [resource2] ...
631              The same as 'relocate run' but has no effect on the cluster.
632
633       relocate run [resource1] [resource2] ...
634              Relocate specified resources to their preferred  nodes.   If  no
635              resources  are  specified, relocate all resources.  This command
636              calculates the preferred node for each resource  while  ignoring
637              resource stickiness.  Then it creates location constraints which
638              will cause the resources to move to their preferred nodes.  Once
639              the  resources have been moved the constraints are deleted auto‐
640              matically.  Note that the preferred node is calculated based  on
641              current  cluster  status, constraints, location of resources and
642              other settings and thus it might change over time.
643
644       relocate show
645              Display current status  of  resources  and  their  optimal  node
646              ignoring resource stickiness.
647
648       relocate clear
649              Remove all constraints created by the 'relocate run' command.
650
651       utilization [<resource id> [<name>=<value> ...]]
652              Add  specified  utilization  options  to  specified resource. If
653              resource is not specified, shows utilization of  all  resources.
654              If  utilization  options are not specified, shows utilization of
655              specified resource. Utilization options should be in the format
656              name=value; the value must be an integer. Options may be removed by
657              setting an option without a value. Example:  pcs  resource  uti‐
658              lization TestResource cpu= ram=20
659
660       relations <resource id> [--full]
661              Display  relations  of a resource specified by its id with other
662              resources in a tree structure. Supported types of resource rela‐
663              tions are: ordering constraints, ordering set constraints, rela‐
664              tions defined by resource hierarchy (clones,  groups,  bundles).
665              If --full is used, more verbose output will be printed.
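
              Example (illustrative; 'VirtualIP' is a placeholder resource id):
                  pcs resource relations VirtualIP --full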
666
667   cluster
668       setup  <cluster name> (<node name> [addr=<node address>]...)... [trans‐
669       port knet|udp|udpu [<transport options>] [link <link options>]... [com‐
670       pression  <compression  options>]  [crypto  <crypto  options>]]  [totem
671       <totem  options>]  [quorum  <quorum  options>]   ([--enable]   [--start
672       [--wait[=<n>]]] [--no-keys-sync]) | [--corosync_conf <path>]
673              Create  a  cluster from the listed nodes and synchronize cluster
674              configuration files to them. If --corosync_conf is specified, do
675              not  connect to other nodes and save corosync.conf to the speci‐
676              fied path; see 'Local only mode' below for details.
677
678              Nodes  are  specified  by  their  names  and  optionally   their
679              addresses.  If  no  addresses are specified for a node, pcs will
680              configure corosync  to  communicate  with  that  node  using  an
681              address provided in 'pcs host auth' command. Otherwise, pcs will
682              configure corosync to communicate with the node using the speci‐
683              fied addresses.
684
685              Transport knet:
686              This  is  the  default  transport. It allows configuring traffic
687              encryption and compression as well as using  multiple  addresses
688              (links) for nodes.
689              Transport    options   are:   ip_version,   knet_pmtud_interval,
690              link_mode
691              Link  options   are:   link_priority,   linknumber,   mcastport,
692              ping_interval,  ping_precision, ping_timeout, pong_count, trans‐
693              port (udp or sctp)
694              Each 'link' followed by options sets options for one link in the
695              order  the  links  are  defined by nodes' addresses. You can set
696              link options for a subset of links using a linknumber. See exam‐
697              ples below.
698              Compression options are: level, model, threshold
699              Crypto options are: cipher, hash, model
700              By   default,  encryption  is  enabled  with  cipher=aes256  and
701              hash=sha256.  To  disable  encryption,   set   cipher=none   and
702              hash=none.
703
704              Transports udp and udpu:
705              These  transports  are  limited to one address per node. They do
706              not support traffic encryption nor compression.
707              Transport options are: ip_version, netmtu
708              Link options are: bindnetaddr, broadcast, mcastaddr,  mcastport,
709              ttl
710
711              Totem and quorum can be configured regardless of used transport.
712              Totem options are: consensus, downcheck, fail_recv_const, heart‐
713              beat_failures_allowed,  hold,   join,   max_messages,   max_net‐
714              work_delay,       merge,       miss_count_const,      send_join,
715              seqno_unchanged_const, token, token_coefficient,  token_retrans‐
716              mit, token_retransmits_before_loss_const, window_size
717              Quorum   options   are:   auto_tie_breaker,   last_man_standing,
718              last_man_standing_window, wait_for_all
719
720              Transports and their  options,  link,  compression,  crypto  and
721              totem  options  are all documented in corosync.conf(5) man page;
722              knet  link  options  are  prefixed  'knet_'  there,  compression
723              options  are prefixed 'knet_compression_' and crypto options are
724              prefixed 'crypto_'. Quorum options are  documented  in  votequo‐
725              rum(5) man page.
726
727              --enable will configure the cluster to start when nodes boot.
728              --start will start the cluster right after creating  it.  --wait
729              will   wait  up  to  'n'  seconds  for  the  cluster  to  start.
730              --no-keys-sync will skip creating and distributing pcsd SSL cer‐
731              tificate  and  key and corosync and pacemaker authkey files. Use
732              this if you provide your own certificates and keys.
733
734              Local only mode:
735              NOTE: This feature is still being worked  on  and  thus  may  be
736              changed in future.
737              By  default,  pcs connects to all specified nodes to verify they
738              can be used in the new cluster and to send cluster configuration
739              files   to   them.  If  this  is  not  what  you  want,  specify
740              --corosync_conf option followed by a file path.  Pcs  will  save
741              corosync.conf  to  the  specified  file  and will not connect to
742              cluster nodes. These are the tasks pcs skips in that case:
743              * make sure the nodes are not running or  configured  to  run  a
744              cluster already
745              *  make  sure  cluster  packages  are installed on all nodes and
746              their versions are compatible
747              * make sure there are no cluster configuration files on any node
748              (run  'pcs cluster destroy' and remove pcs_settings.conf file on
749              all nodes)
750              *    synchronize    corosync     and     pacemaker     authkeys,
751              /etc/corosync/authkey  and  /etc/pacemaker/authkey respectively,
752              and the corosync.conf file
753              * authenticate the cluster nodes against each other ('pcs  clus‐
754              ter auth' or 'pcs host auth' command)
755              *  synchronize pcsd certificates (so that pcs web UI can be used
756              in an HA mode)
757
758              Examples:
759              Create a cluster with default settings:
760                  pcs cluster setup newcluster node1 node2
761              Create a cluster using two links:
762                  pcs   cluster   setup   newcluster   node1    addr=10.0.1.11
763              addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12
764              Set  link options for all links. Link options are matched to the
765              links in order. The first link (link 0) has sctp transport,  the
766              second link (link 1) has mcastport 55405:
767                  pcs    cluster   setup   newcluster   node1   addr=10.0.1.11
768              addr=10.0.2.11  node2  addr=10.0.1.12  addr=10.0.2.12  transport
769              knet link transport=sctp link mcastport=55405
770              Set  link  options  for  the  second and fourth links only. Link
771              options are matched to the links based on the linknumber  option
772              (the first link is link 0):
773                  pcs    cluster   setup   newcluster   node1   addr=10.0.1.11
774              addr=10.0.2.11     addr=10.0.3.11      addr=10.0.4.11      node2
775              addr=10.0.1.12   addr=10.0.2.12   addr=10.0.3.12  addr=10.0.4.12
776              transport knet link linknumber=3 mcastport=55405  link  linknum‐
777              ber=1 transport=sctp
778              Create a cluster using udp transport with a non-default port:
779                  pcs  cluster setup newcluster node1 node2 transport udp link
780              mcastport=55405
781
782       config  [show]   [--output-format   <cmd|json|text>]   [--corosync_conf
783       <path>]
784              Show cluster configuration. There are 3 formats of output avail‐
785              able: 'cmd', 'json' and 'text', default is 'text'. Format 'text'
786              is  a human friendly output. Format 'cmd' prints a cluster setup
787              command which recreates a cluster with the  same  configuration.
788              Format 'json' is a machine oriented output with cluster configu‐
789              ration. If  --corosync_conf  is  specified,  configuration  file
790              specified  by <path> is used instead of the current cluster con‐
791              figuration.
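
              Example (a minimal sketch): print a 'pcs cluster setup' command
              which would recreate the currently running cluster:
                  pcs cluster config show --output-format=cmd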
792
793       config update [transport <transport options>] [compression <compression
794       options>]   [crypto   <crypto   options>]   [totem   <totem   options>]
795       [--corosync_conf <path>]
796              Update cluster configuration. If --corosync_conf  is  specified,
797              update  cluster configuration in a file specified by <path>. All
798              options are documented in corosync.conf(5) man page.  There  are
799              different transport options for transport types. Compression and
800              crypto options are only  available  for  knet  transport.  Totem
801              options can be set regardless of the transport type.
802              Transport   options   for   knet   transport   are:  ip_version,
803              knet_pmtud_interval, link_mode
804              Transport options for udp and udpu transports are: ip_version,
805              netmtu
806              Compression options are: level, model, threshold
807              Crypto options are: cipher, hash, model
808              Totem options are: consensus, downcheck, fail_recv_const, heart‐
809              beat_failures_allowed,  hold,   join,   max_messages,   max_net‐
810              work_delay,       merge,       miss_count_const,      send_join,
811              seqno_unchanged_const, token, token_coefficient,  token_retrans‐
812              mit, token_retransmits_before_loss_const, window_size
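
              Example (illustrative; the value is arbitrary, see
              corosync.conf(5) for its meaning): change the totem token
              timeout of an existing cluster:
                  pcs cluster config update totem token=10000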
813
814       authkey corosync [<path>]
815              Generate a new corosync authkey and distribute it to all cluster
816              nodes. If <path> is specified, do not generate a key and use key
817              from the file.
818
819       start [--all | <node>... ] [--wait[=<n>]] [--request-timeout=<seconds>]
820              Start  a cluster on specified node(s). If no nodes are specified
821              then start a cluster on the local node. If  --all  is  specified
822              then start a cluster on all nodes. If the cluster has many nodes
823              then the start request may time out. In  that  case  you  should
824              consider  setting  --request-timeout  to  a  suitable  value. If
825              --wait is specified, pcs waits up to 'n' seconds for the cluster
826              to  get ready to provide services after the cluster has success‐
827              fully started.
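
              Example (a minimal sketch): start the cluster on all nodes and
              wait up to 60 seconds for it to be ready to provide services:
                  pcs cluster start --all --wait=60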
828
829       stop [--all | <node>... ] [--request-timeout=<seconds>]
830              Stop a cluster on specified node(s). If no nodes  are  specified
831              then  stop  a  cluster  on the local node. If --all is specified
832              then stop a cluster on all nodes.  If  the  cluster  is  running
833              resources which take long time to stop then the stop request may
834              time out before the cluster actually stops.  In  that  case  you
835              should consider setting --request-timeout to a suitable value.
836
837       kill   Force  corosync  and pacemaker daemons to stop on the local node
838              (performs kill -9). Note that the init system (e.g. systemd) can
839              detect that the cluster is not running and start it again. If you
840              want to stop the cluster on a node, run pcs cluster stop on that
841              node.
842
843       enable [--all | <node>... ]
844              Configure  cluster  to run on node boot on specified node(s). If
845              node is not specified then cluster is enabled on the local node.
846              If --all is specified then cluster is enabled on all nodes.
847
848       disable [--all | <node>... ]
849              Configure  cluster to not run on node boot on specified node(s).
850              If node is not specified then cluster is disabled on  the  local
851              node.  If  --all  is  specified  then cluster is disabled on all
852              nodes.
853
854       auth [-u <username>] [-p <password>]
855              Authenticate pcs/pcsd to pcsd on nodes configured in  the  local
856              cluster.
857
858       status View current cluster status (an alias of 'pcs status cluster').
859
860       pcsd-status [<node>]...
861              Show  current status of pcsd on nodes specified, or on all nodes
862              configured in the local cluster if no nodes are specified.
863
864       sync   Sync cluster configuration (files which  are  supported  by  all
865              subcommands of this command) to all cluster nodes.
866
867       sync corosync
868              Sync  corosync  configuration  to  all  nodes found from current
869              corosync.conf file.
870
871       cib [filename] [scope=<scope> | --config]
872              Get the raw xml from the CIB (Cluster Information  Base).  If  a
873              filename is provided, the CIB is saved to that file, otherwise
874              the CIB is printed. Specify scope to get a specific  section  of
875              the CIB. Valid values of the scope are: acls, alerts, configura‐
876              tion,   constraints,   crm_config,   fencing-topology,    nodes,
877              op_defaults, resources, rsc_defaults, tags. --config is the same
878              as scope=configuration. Do not specify a scope if  you  want  to
879              edit the saved CIB using pcs (pcs -f <command>).
880
881       cib-push  <filename> [--wait[=<n>]] [diff-against=<filename_original> |
882       scope=<scope> | --config]
883              Push the raw xml from <filename> to the CIB (Cluster Information
884              Base).  You  can obtain the CIB by running the 'pcs cluster cib'
885              command, which is the recommended first step when you want to
886              perform the desired modifications (pcs -f <command>) before a
887              one-off push.
888              If diff-against is specified, pcs  diffs  contents  of  filename
889              against  contents  of filename_original and pushes the result to
890              the CIB.
891              Specify scope to push a specific section of the CIB. Valid  val‐
892              ues  of the scope are: acls, alerts, configuration, constraints,
893              crm_config,  fencing-topology,  nodes,  op_defaults,  resources,
894              rsc_defaults, tags. --config is the same as scope=configuration.
895              Use of --config is recommended. Do not specify a  scope  if  you
896              need to push the whole CIB or want to be warned in case the CIB
897              is outdated.
898              If --wait is specified wait up to 'n' seconds for changes to  be
899              applied.
900              WARNING:  the  selected  scope of the CIB will be overwritten by
901              the current content of the specified file.
902
903              Example:
904                  pcs cluster cib > original.xml
905                  cp original.xml new.xml
906                  pcs -f new.xml constraint location apache prefers node2
907                  pcs cluster cib-push new.xml diff-against=original.xml
908
909       cib-upgrade
910              Upgrade the CIB to conform to the latest version of the document
911              schema.
912
913       edit [scope=<scope> | --config]
914              Edit  the cib in the editor specified by the $EDITOR environment
915              variable and push out any changes upon saving. Specify scope  to
916              edit  a  specific  section of the CIB. Valid values of the scope
917              are: acls, alerts, configuration, constraints, crm_config, fenc‐
918              ing-topology, nodes, op_defaults, resources, rsc_defaults, tags.
919              --config is the same as scope=configuration. Use of --config  is
920              recommended. Do not specify a scope if you need to edit the
921              whole CIB or want to be warned in case the CIB is outdated.
922
923       node  add  <node  name>  [addr=<node  address>]...  [watchdog=<watchdog
924       path>]   [device=<SBD   device   path>]...   [--start   [--wait[=<n>]]]
925       [--enable] [--no-watchdog-validation]
926              Add the node to the cluster and synchronize all relevant config‐
927              uration  files  to the new node. This command can only be run on
928              an existing cluster node.
929
930              The new node  is  specified  by  its  name  and  optionally  its
931              addresses.  If no addresses are specified for the node, pcs will
932              configure corosync to communicate with the node using an address
933              provided in 'pcs host auth' command. Otherwise, pcs will config‐
934              ure corosync to communicate with the node  using  the  specified
935              addresses.
936
937              Use  'watchdog' to specify a path to a watchdog on the new node,
938              when SBD is enabled in the cluster. If SBD  is  configured  with
939              shared storage, use 'device' to specify path to shared device(s)
940              on the new node.
941
942              If --start is specified also start cluster on the new  node,  if
943              --wait  is  specified wait up to 'n' seconds for the new node to
944              start. If --enable is specified configure cluster  to  start  on
945              the  new node on boot. If --no-watchdog-validation is specified,
946              validation of watchdog will be skipped.
947
948              WARNING: By default, it is tested whether the specified watchdog
949              is  supported.  This  may  cause  a restart of the system when a
950              watchdog  with  no-way-out-feature  enabled  is   present.   Use
951              --no-watchdog-validation to skip watchdog validation.
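
              Example (illustrative; the node name and address are
              placeholders): add a third node, start it and enable it to start
              on boot:
                  pcs cluster node add node3 addr=10.0.1.13 --start --enable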
952
953       node delete <node name> [<node name>]...
954              Shutdown specified nodes and remove them from the cluster.
955
956       node remove <node name> [<node name>]...
957              Shutdown specified nodes and remove them from the cluster.
958
959       node  add-remote  <node name> [<node address>] [options] [op <operation
960       action>   <operation   options>    [<operation    action>    <operation
961       options>]...] [meta <meta options>...] [--wait[=<n>]]
962              Add  the node to the cluster as a remote node. Sync all relevant
963              configuration files to the new node. Start the node and  config‐
964              ure it to start the cluster on boot. Options are port and recon‐
965              nect_interval. Operations and meta belong to an underlying  con‐
966              nection  resource (ocf:pacemaker:remote). If node address is not
967              specified for the node, pcs will configure pacemaker to communi‐
968              cate  with the node using an address provided in 'pcs host auth'
969              command. Otherwise, pcs will configure pacemaker to  communicate
970              with the node using the specified addresses. If --wait is speci‐
971              fied, wait up to 'n' seconds for the node to start.
972
973       node delete-remote <node identifier>
974              Shutdown specified remote node and remove it from  the  cluster.
975              The  node-identifier  can be the name of the node or the address
976              of the node.
977
978       node remove-remote <node identifier>
979              Shutdown specified remote node and remove it from  the  cluster.
980              The  node-identifier  can be the name of the node or the address
981              of the node.
982
983       node add-guest <node name> <resource id> [options] [--wait[=<n>]]
984              Make the specified resource a guest node resource. Sync all rel‐
985              evant  configuration  files  to the new node. Start the node and
986              configure  it  to  start  the  cluster  on  boot.  Options   are
987              remote-addr,    remote-port   and   remote-connect-timeout.   If
988              remote-addr is not specified for the node,  pcs  will  configure
989              pacemaker to communicate with the node using an address provided
990              in 'pcs host auth' command. Otherwise, pcs will configure  pace‐
991              maker   to   communicate  with  the  node  using  the  specified
992              addresses. If --wait is specified, wait up to  'n'  seconds  for
993              the node to start.
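
              As an illustration only (the node name 'guest1' and the resource
              id 'vm-guest1' are hypothetical), a guest node might be set up
              with a command of this form:
              pcs cluster node add-guest guest1 vm-guest1 remote-addr=192.0.2.60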
994
995       node delete-guest <node identifier>
996              Shutdown  specified  guest  node and remove it from the cluster.
997              The node-identifier can be the name of the node or  the  address
998              of  the  node  or  id  of the resource that is used as the guest
999              node.
1000
1001       node remove-guest <node identifier>
1002              Shutdown specified guest node and remove it  from  the  cluster.
1003              The  node-identifier  can be the name of the node or the address
1004              of the node or id of the resource that  is  used  as  the  guest
1005              node.
1006
1007       node clear <node name>
1008              Remove specified node from various cluster caches. Use this if a
1009              removed node is still considered by the cluster to be  a  member
1010              of the cluster.
1011
1012       link add <node_name>=<node_address>... [options <link options>]
1013              Add  a  corosync  link.  One  address must be specified for each
1014              cluster node. If no linknumber is specified, pcs  will  use  the
1015              lowest available linknumber.
1016              Link  options  (documented  in  corosync.conf(5)  man page) are:
1017              link_priority, linknumber, mcastport, ping_interval, ping_preci‐
1018              sion, ping_timeout, pong_count, transport (udp or sctp)
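
              As an illustration only (node names and addresses are
              hypothetical), a new link might be added with a command of this
              form:
              pcs cluster link add node1=10.0.1.1 node2=10.0.1.2 options
              linknumber=1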
1019
1020       link delete <linknumber> [<linknumber>]...
1021              Remove specified corosync links.
1022
1023       link remove <linknumber> [<linknumber>]...
1024              Remove specified corosync links.
1025
1026       link update <linknumber> [<node_name>=<node_address>...] [options <link
1027       options>]
1028              Change node addresses / link options of an existing corosync
1029              link. Use this only if you cannot add / remove links, which is
1030              the preferred way.
1031              Link options (documented in corosync.conf(5) man page) are:
1032              for knet  transport:  link_priority,  mcastport,  ping_interval,
1033              ping_precision,  ping_timeout,  pong_count,  transport  (udp  or
1034              sctp)
1035              for udp and udpu transports: bindnetaddr, broadcast,  mcastaddr,
1036              mcastport, ttl
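
              As an illustration only (the link number, node names, addresses
              and the option value are hypothetical), an existing link might
              be updated with a command of this form:
              pcs cluster link update 1 node1=10.0.2.1 node2=10.0.2.2 options
              ping_timeout=1500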
1037
1038       uidgid List  the  current  configured uids and gids of users allowed to
1039              connect to corosync.
1040
1041       uidgid add [uid=<uid>] [gid=<gid>]
1042              Add the specified uid and/or gid to  the  list  of  users/groups
1043              allowed to connect to corosync.
1044
1045       uidgid delete [uid=<uid>] [gid=<gid>]
1046              Remove   the   specified   uid  and/or  gid  from  the  list  of
1047              users/groups allowed to connect to corosync.
1048
1049       uidgid remove [uid=<uid>] [gid=<gid>]
1050              Remove  the  specified  uid  and/or  gid  from   the   list   of
1051              users/groups allowed to connect to corosync.
1052
1053       corosync [node]
1054              Get  the  corosync.conf from the specified node or from the cur‐
1055              rent node if node not specified.
1056
1057       reload corosync
1058              Reload the corosync configuration on the current node.
1059
1060       destroy [--all]
1061              Permanently destroy the cluster on the current node, killing all
1062              cluster  processes and removing all cluster configuration files.
1063              Using --all will attempt to destroy the cluster on all nodes  in
1064              the local cluster.
1065
1066              WARNING: This command permanently removes any cluster configura‐
1067              tion that has been created. It is recommended to run 'pcs  clus‐
1068              ter stop' before destroying the cluster.
1069
1070       verify [--full] [-f <filename>]
1071              Checks  the  pacemaker configuration (CIB) for syntax and common
1072              conceptual errors. If no filename is specified the check is per‐
1073              formed  on the currently running cluster. If --full is used more
1074              verbose output will be printed.
1075
1076       report [--from "YYYY-M-D H:M:S" [--to "YYYY-M-D H:M:S"]] <dest>
1077              Create a tarball containing  everything  needed  when  reporting
1078              cluster  problems.   If --from and --to are not used, the report
1079              will include the past 24 hours.
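
              As an illustration only (the time range and destination path are
              hypothetical), a report covering one specific day might be
              created with:
              pcs cluster report --from "2023-06-01 00:00:00" --to
              "2023-06-02 00:00:00" /tmp/cluster_report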
1080
1081   stonith
1082       [status [--hide-inactive]]
1083              Show status of all  currently  configured  stonith  devices.  If
1084              --hide-inactive is specified, only show active stonith devices.
1085
1086       config [<stonith id>]...
1087              Show  options  of all currently configured stonith devices or if
1088              stonith ids are specified show the  options  for  the  specified
1089              stonith device ids.
1090
1091       list [filter] [--nodesc]
1092              Show list of all available stonith agents (if filter is provided
1093              then only stonith agents matching the filter will be shown).  If
1094              --nodesc  is  used  then  descriptions of stonith agents are not
1095              printed.
1096
1097       describe <stonith agent> [--full]
1098              Show options for specified stonith agent. If  --full  is  speci‐
1099              fied,  all  options  including  advanced and deprecated ones are
1100              shown.
1101
1102       create <stonith id> <stonith device type> [stonith device options]  [op
1103       <operation  action>  <operation options> [<operation action> <operation
1104       options>]...] [meta <meta options>...] [--group  <group  id>  [--before
1105       <stonith id> | --after <stonith id>]] [--disabled] [--wait[=n]]
1106              Create  stonith  device  with  specified  type  and  options. If
1107              --group is specified the stonith device is added  to  the  group
1108              named.  You  can use --before or --after to specify the position
1109              of the added stonith device relatively to  some  stonith  device
1110              already existing in the group. If --disabled is specified the
1111              stonith device is not used. If --wait  is  specified,  pcs  will
1112              wait  up to 'n' seconds for the stonith device to start and then
1113              return 0 if the stonith device is started, or 1 if  the  stonith
1114              device  has not yet started. If 'n' is not specified it defaults
1115              to 60 minutes.
1116
1117              Example: Create a device for nodes node1 and node2
1118              pcs stonith create MyFence fence_virt pcmk_host_list=node1,node2
1119              Example: Use port p1 for node n1 and ports p2 and p3 for node n2
1120              pcs       stonith        create        MyFence        fence_virt
1121              'pcmk_host_map=n1:p1;n2:p2,p3'
1122
1123       update <stonith id> [stonith device options]
1124              Add/Change options to specified stonith id.
1125
1126       delete <stonith id>
1127              Remove stonith id from configuration.
1128
1129       remove <stonith id>
1130              Remove stonith id from configuration.
1131
1132       enable <stonith id>... [--wait[=n]]
1133              Allow the cluster to use the stonith devices. If --wait is spec‐
1134              ified, pcs will wait up to 'n' seconds for the  stonith  devices
1135              to  start  and then return 0 if the stonith devices are started,
1136              or 1 if the stonith devices have not yet started. If 'n' is  not
1137              specified it defaults to 60 minutes.
1138
1139       disable <stonith id>... [--wait[=n]]
1140              Attempt to stop the stonith devices if they are running and dis‐
1141              allow the cluster to use them. If --wait is specified, pcs  will
1142              wait  up to 'n' seconds for the stonith devices to stop and then
1143              return 0 if the stonith devices are stopped or 1 if the  stonith
1144              devices have not stopped. If 'n' is not specified it defaults to
1145              60 minutes.
1146
1147       cleanup [<stonith id>] [--node <node>] [--strict]
1148              Make the cluster forget failed operations from  history  of  the
1149              stonith device and re-detect its current state. This can be use‐
1150              ful to purge knowledge of past failures  that  have  since  been
1151              resolved.
1152              If  the named stonith device is part of a group, or one numbered
1153              instance of a clone or bundled resource, the clean-up applies to
1154              the whole collective resource unless --strict is given.
1155              If  a  stonith  id is not specified then all resources / stonith
1156              devices will be cleaned up.
1157              If a node is not specified then resources / stonith  devices  on
1158              all nodes will be cleaned up.
1159
1160       refresh [<stonith id>] [--node <node>] [--strict]
1161              Make  the cluster forget the complete operation history (includ‐
1162              ing failures) of the stonith device and  re-detect  its  current
1163              state.  If  you  are  interested in forgetting failed operations
1164              only, use the 'pcs stonith cleanup' command.
1165              If the named stonith device is part of a group, or one  numbered
1166              instance  of a clone or bundled resource, the refresh applies to
1167              the whole collective resource unless --strict is given.
1168              If a stonith id is not specified then all  resources  /  stonith
1169              devices will be refreshed.
1170              If  a  node is not specified then resources / stonith devices on
1171              all nodes will be refreshed.
1172
1173       level [config]
1174              Lists all of the fencing levels currently configured.
1175
1176       level add <level> <target> <stonith id> [stonith id]...
1177              Add the fencing level for the specified target with the list  of
1178              stonith  devices to attempt for that target at that level. Fence
1179              levels are attempted in numerical order (starting with 1). If  a
1180              level  succeeds  (meaning all devices are successfully fenced in
1181              that level) then no other levels are tried, and  the  target  is
1182              considered  fenced.  Target  may  be  a node name <node_name> or
1183              %<node_name> or node%<node_name>, a node name regular expression
1184              regexp%<node_pattern>     or     a    node    attribute    value
1185              attrib%<name>=<value>.
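
              As an illustration only (the node name and stonith ids are
              hypothetical), two fencing levels might be defined for one node
              so that 'fence_ipmi_n1' is tried first and 'fence_pdu_n1' only
              if the first level fails:
              pcs stonith level add 1 node1 fence_ipmi_n1
              pcs stonith level add 2 node1 fence_pdu_n1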
1186
1187       level delete <level> [target] [stonith id]...
1188              Removes the fence level for the  level,  target  and/or  devices
1189              specified.  If no target or devices are specified then the fence
1190              level is removed. Target may  be  a  node  name  <node_name>  or
1191              %<node_name> or node%<node_name>, a node name regular expression
1192              regexp%<node_pattern>    or    a    node     attribute     value
1193              attrib%<name>=<value>.
1194
1195       level remove <level> [target] [stonith id]...
1196              Removes  the  fence  level  for the level, target and/or devices
1197              specified. If no target or devices are specified then the  fence
1198              level  is  removed.  Target  may  be  a node name <node_name> or
1199              %<node_name> or node%<node_name>, a node name regular expression
1200              regexp%<node_pattern>     or     a    node    attribute    value
1201              attrib%<name>=<value>.
1202
1203       level clear [target|stonith id(s)]
1204              Clears the fence levels on the target (or stonith id)  specified
1205              or  clears all fence levels if a target/stonith id is not speci‐
1206              fied. If more than one stonith id is specified they must be sep‐
1207              arated  by  a  comma  and  no  spaces. Target may be a node name
1208              <node_name> or %<node_name> or  node%<node_name>,  a  node  name
1209              regular  expression  regexp%<node_pattern>  or  a node attribute
1210              value attrib%<name>=<value>. Example: pcs  stonith  level  clear
1211              dev_a,dev_b
1212
1213       level verify
1214              Verifies  all  fence devices and nodes specified in fence levels
1215              exist.
1216
1217       fence <node> [--off]
1218              Fence the node specified (if --off is specified, use  the  'off'
1219              API  call  to  stonith  which  will turn the node off instead of
1220              rebooting it).
1221
1222       confirm <node> [--force]
1223              Confirm to the cluster that the specified node is  powered  off.
1224              This  allows  the  cluster  to recover from a situation where no
1225              stonith device is able to fence the node.  This  command  should
1226              ONLY  be  used  after manually ensuring that the node is powered
1227              off and has no access to shared resources.
1228
1229              WARNING: If this node is not actually powered  off  or  it  does
1230              have access to shared resources, data corruption/cluster failure
1231              can occur.  To  prevent  accidental  running  of  this  command,
1232              --force  or  interactive  user  response is required in order to
1233              proceed.
1234
1235              NOTE: It is not checked whether the specified node exists in the
1236              cluster, so that the command can be used with nodes not visible
1237              from the local cluster partition.
1238
1239       history [show [<node>]]
1240              Show fencing history for the specified node or all nodes  if  no
1241              node specified.
1242
1243       history cleanup [<node>]
1244              Cleanup  fence  history of the specified node or all nodes if no
1245              node specified.
1246
1247       history update
1248              Update fence history from all nodes.
1249
1250       sbd  enable  [watchdog=<path>[@<node>]]...  [device=<path>[@<node>]]...
1251       [<SBD_OPTION>=<value>]... [--no-watchdog-validation]
1252              Enable  SBD  in  cluster.  Default  path  for watchdog device is
1253              /dev/watchdog.   Allowed   SBD   options:   SBD_WATCHDOG_TIMEOUT
1254              (default:   5),  SBD_DELAY_START  (default:  no),  SBD_STARTMODE
1255              (default: always) and SBD_TIMEOUT_ACTION. SBD options are  docu‐
1256              mented  in  sbd(8)  man  page. It is possible to specify up to 3
1257              devices per node. If --no-watchdog-validation is specified, val‐
1258              idation of watchdogs will be skipped.
1259
1260              WARNING:  Cluster  has  to  be restarted in order to apply these
1261              changes.
1262
1263              WARNING: By default, it is tested whether the specified watchdog
1264              is  supported.  This  may  cause  a restart of the system when a
1265              watchdog  with  no-way-out-feature  enabled  is   present.   Use
1266              --no-watchdog-validation to skip watchdog validation.
1267
1268              Example of enabling SBD in a cluster where the watchdog on node1
1269              is /dev/watchdog2, on node2 /dev/watchdog1, and /dev/watchdog0 on
1270              all other nodes; device /dev/sdb on node1 and /dev/sda on all
1271              other nodes; and the watchdog timeout is set to 10 seconds:
1272
1273              pcs  stonith  sbd  enable  watchdog=/dev/watchdog2@node1  watch‐
1274              dog=/dev/watchdog1@node2                 watchdog=/dev/watchdog0
1275              device=/dev/sdb@node1 device=/dev/sda SBD_WATCHDOG_TIMEOUT=10
1276
1277
1278       sbd disable
1279              Disable SBD in cluster.
1280
1281              WARNING: Cluster has to be restarted in  order  to  apply  these
1282              changes.
1283
1284       sbd   device  setup  device=<path>  [device=<path>]...  [watchdog-time‐
1285       out=<integer>]  [allocate-timeout=<integer>]   [loop-timeout=<integer>]
1286       [msgwait-timeout=<integer>]
1287              Initialize SBD structures on device(s) with specified timeouts.
1288
1289              WARNING: All content on device(s) will be overwritten.
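
              As an illustration only (the device path and timeout values are
              hypothetical), shared-storage SBD structures might be
              initialized with:
              pcs stonith sbd device setup device=/dev/sdc watchdog-timeout=10
              msgwait-timeout=20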
1290
1291       sbd device message <device-path> <node> <message-type>
1292              Manually  set  a message of the specified type on the device for
1293              the node. Possible message types (they are documented in  sbd(8)
1294              man page): test, reset, off, crashdump, exit, clear
1295
1296       sbd status [--full]
1297              Show  status of SBD services in cluster and local device(s) con‐
1298              figured. If --full is specified, also dump  of  SBD  headers  on
1299              device(s) will be shown.
1300
1301       sbd config
1302              Show SBD configuration in cluster.
1303
1304
1305       sbd watchdog list
1306              Show all available watchdog devices on the local node.
1307
1308              WARNING:  Listing available watchdogs may cause a restart of the
1309              system  when  a  watchdog  with  no-way-out-feature  enabled  is
1310              present.
1311
1312
1313       sbd watchdog test [<watchdog-path>]
1314              This operation is expected to force-reboot the local system
1315              without following any shutdown procedures, using a watchdog.
1316              If no watchdog is specified and only one watchdog device is
1317              available on the local system, that watchdog will be used.
1318
1319
1320   acl
1321       [show] List all current access control lists.
1322
1323       enable Enable access control lists.
1324
1325       disable
1326              Disable access control lists.
1327
1328       role create <role id> [description=<description>]  [((read  |  write  |
1329       deny) (xpath <query> | id <id>))...]
1330              Create  a role with the id and (optional) description specified.
1331              Each role can also  have  an  unlimited  number  of  permissions
1332              (read/write/deny)  applied to either an xpath query or the id of
1333              a specific element in the cib.
1334              Permissions are applied to the selected XML element's entire XML
1335              subtree  (all  elements  enclosed  within  it). Write permission
1336              grants the ability to create, modify, or remove the element  and
1337              its  subtree,  and  also the ability to create any "scaffolding"
1338              elements (enclosing elements that do not have  attributes  other
1339              than  an ID). Permissions for more specific matches (more deeply
1340              nested elements) take precedence over more general ones. If mul‐
1341              tiple  permissions  are configured for the same match (for exam‐
1342              ple, in different roles applied to the same user), any deny per‐
1343              mission takes precedence, then write, then lastly read.
1344              An xpath may include an attribute expression to select only ele‐
1345              ments that  match  the  expression,  but  the  permission  still
1346              applies  to  the  entire  element  (and its subtree), not to the
1347              attribute alone. For example, using the  xpath  "//*[@name]"  to
1348              give write permission would allow changes to the entirety of all
1349              elements that have a "name" attribute and everything enclosed by
1350              those  elements.  There  is no way currently to give permissions
1351              for just one attribute of an element. That is to  say,  you  can
1352              not  define  an ACL that allows someone to read just the dc-uuid
1353              attribute of the cib tag - that would select the cib element and
1354              give read access to the entire CIB.
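
              As an illustration only (the role id is hypothetical), a
              read-only role covering the whole CIB might be created with:
              pcs acl role create readonly description=read-only read xpath /cib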
1355
1356       role delete <role id>
1357              Delete the role specified and remove it from any users/groups it
1358              was assigned to.
1359
1360       role remove <role id>
1361              Delete the role specified and remove it from any users/groups it
1362              was assigned to.
1363
1364       role assign <role id> [to] [user|group] <username/group>
1365              Assign a role to a user or group already created with 'pcs acl
1366              user/group create'. If there is a user and a group with the same
1367              id and it is not specified which should be used, the user will
1368              be prioritized. In such cases, specify whether the user or the
1369              group should be used.
1370
1371       role unassign <role id> [from] [user|group] <username/group>
1372              Remove a role from the specified user or group. If there is a
1373              user and a group with the same id and it is not specified which
1374              should be used, the user will be prioritized. In such cases,
1375              specify whether the user or the group should be used.
1376
1377       user create <username> [<role id>]...
1378              Create an ACL for the user specified and  assign  roles  to  the
1379              user.
1380
1381       user delete <username>
1382              Remove the user specified (and roles assigned will be unassigned
1383              for the specified user).
1384
1385       user remove <username>
1386              Remove the user specified (and roles assigned will be unassigned
1387              for the specified user).
1388
1389       group create <group> [<role id>]...
1390              Create  an  ACL  for the group specified and assign roles to the
1391              group.
1392
1393       group delete <group>
1394              Remove the group specified (and roles  assigned  will  be  unas‐
1395              signed for the specified group).
1396
1397       group remove <group>
1398              Remove  the  group  specified  (and roles assigned will be unas‐
1399              signed for the specified group).
1400
1401       permission add <role id> ((read | write | deny)  (xpath  <query>  |  id
1402       <id>))...
1403              Add  the  listed  permissions to the role specified. Permissions
1404              are applied to either an xpath query or the  id  of  a  specific
1405              element in the CIB.
1406              Permissions are applied to the selected XML element's entire XML
1407              subtree (all elements  enclosed  within  it).  Write  permission
1408              grants  the ability to create, modify, or remove the element and
1409              its subtree, and also the ability to  create  any  "scaffolding"
1410              elements  (enclosing  elements that do not have attributes other
1411              than an ID). Permissions for more specific matches (more  deeply
1412              nested elements) take precedence over more general ones. If mul‐
1413              tiple permissions are configured for the same match  (for  exam‐
1414              ple, in different roles applied to the same user), any deny per‐
1415              mission takes precedence, then write, then lastly read.
1416              An xpath may include an attribute expression to select only ele‐
1417              ments  that  match  the  expression,  but  the  permission still
1418              applies to the entire element (and  its  subtree),  not  to  the
1419              attribute  alone.  For  example, using the xpath "//*[@name]" to
1420              give write permission would allow changes to the entirety of all
1421              elements that have a "name" attribute and everything enclosed by
1422              those elements. There is no way currently  to  give  permissions
1423              for  just  one  attribute of an element. That is to say, you can
1424              not define an ACL that allows someone to read just  the  dc-uuid
1425              attribute of the cib tag - that would select the cib element and
1426              give read access to the entire CIB.
1427
1428       permission delete <permission id>
1429              Remove the permission id specified (permission id's  are  listed
1430              in parenthesis after permissions in 'pcs acl' output).
1431
1432       permission remove <permission id>
1433              Remove  the  permission id specified (permission id's are listed
1434              in parenthesis after permissions in 'pcs acl' output).
1435
1436   property
1437       [list|show [<property> | --all | --defaults]] | [--all | --defaults]
1438              List property settings (default: lists configured properties).
1439              If --defaults is specified, all property defaults will be shown.
1440              If --all is specified, currently configured properties will be
1441              shown together with unset properties and their defaults. See
1442              pacemaker-controld(7) and pacemaker-schedulerd(7) man pages for
1443              a description of the properties.
1444
1445       set <property>=[<value>] ... [--force]
1446              Set  specific  pacemaker  properties (if the value is blank then
1447              the property is removed from the configuration).  If a  property
1448              is not recognized by pcs the property will not be created unless
1449              --force is used. See pacemaker-controld(7) and pacemaker-
1450              schedulerd(7) man pages for a description of the properties.
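
              As an illustration only, the maintenance-mode property might be
              set and later removed (by giving it a blank value) as follows:
              pcs property set maintenance-mode=true
              pcs property set maintenance-mode=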
1451
1452       unset <property> ...
1453              Remove  property  from configuration.  See pacemaker-controld(7)
1454              and pacemaker-schedulerd(7) man pages for a description  of  the
1455              properties.
1456
1457   constraint
1458       [list|show] [--full] [--all]
1459              List  all  current constraints that are not expired. If --all is
1460              specified also show expired constraints. If --full is  specified
1461              also list the constraint ids.
1462
1463       location <resource> prefers <node>[=<score>] [<node>[=<score>]]...
1464              Create  a location constraint on a resource to prefer the speci‐
1465              fied node with score (default score: INFINITY). Resource may  be
1466              either   a   resource  id  <resource_id>  or  %<resource_id>  or
1467              resource%<resource_id>, or a resource  name  regular  expression
1468              regexp%<resource_pattern>.
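
              As an illustration only (the resource id, node names and scores
              are hypothetical):
              pcs constraint location WebServer prefers node1=200 node2=50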
1469
1470       location <resource> avoids <node>[=<score>] [<node>[=<score>]]...
1471              Create  a  location constraint on a resource to avoid the speci‐
1472              fied node with score (default score: INFINITY). Resource may  be
1473              either   a   resource  id  <resource_id>  or  %<resource_id>  or
1474              resource%<resource_id>, or a resource  name  regular  expression
1475              regexp%<resource_pattern>.
1476
1477       location  <resource>  rule [id=<rule id>] [resource-discovery=<option>]
1478       [role=master|slave]     [constraint-id=<id>]      [score=<score>      |
1479       score-attribute=<attribute>] <expression>
1480              Creates  a  location  constraint  with  a  rule on the specified
1481              resource where expression looks like one of the following:
1482                defined|not_defined <node attribute>
1483                <node  attribute>   lt|gt|lte|gte|eq|ne   [string|integer|num‐
1484              ber|version] <value>
1485                date gt|lt <date>
1486                date in_range <date> to <date>
1487                date in_range <date> to duration <duration options>...
1488                date-spec <date spec options>...
1489                <expression> and|or <expression>
1490                ( <expression> )
1491              where  duration options and date spec options are: hours, month‐
1492              days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1493              Resource   may   be   either  a  resource  id  <resource_id>  or
1494              %<resource_id> or resource%<resource_id>,  or  a  resource  name
1495              regular  expression regexp%<resource_pattern>. If score is omit‐
1496              ted it defaults to INFINITY. If id is omitted one  is  generated
1497              from  the  resource  id.  If  resource-discovery  is  omitted it
1498              defaults to 'always'.
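
              As an illustration only (the resource id and the node attribute
              'site' are hypothetical), a rule keeping the resource off nodes
              with a given attribute value might look like:
              pcs constraint location WebServer rule score=-INFINITY site eq
              remote-dc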
1499
1500       location  [show  [resources  [<resource>...]]  |  [nodes  [<node>...]]]
1501       [--full] [--all]
1502              List  all the current location constraints that are not expired.
1503              If 'resources' is specified, location constraints are  displayed
1504              per  resource  (default). If 'nodes' is specified, location con‐
1505              straints are displayed per node. If specific nodes or  resources
1506              are specified then we only show information about them. Resource
1507              may be either a resource id <resource_id> or  %<resource_id>  or
1508              resource%<resource_id>,  or  a  resource name regular expression
1509              regexp%<resource_pattern>.  If  --full  is  specified  show  the
1510              internal constraint id's as well. If --all is specified show the
1511              expired constraints.
1512
1513       location  add  <id>   <resource>   <node>   <score>   [resource-discov‐
1514       ery=<option>]
1515              Add a location constraint with the appropriate id for the speci‐
1516              fied resource, node name and score. Resource  may  be  either  a
1517              resource     id     <resource_id>     or    %<resource_id>    or
1518              resource%<resource_id>, or a resource  name  regular  expression
1519              regexp%<resource_pattern>.
1520
1521       location delete <id>
1522              Remove a location constraint with the appropriate id.
1523
1524       location remove <id>
1525              Remove a location constraint with the appropriate id.
1526
1527       order [show] [--full]
1528              List  all  current  ordering constraints (if --full is specified
1529              show the internal constraint id's as well).
1530
1531       order [action] <resource id> then [action] <resource id> [options]
1532              Add an ordering constraint specifying actions (start, stop, pro‐
1533              mote,  demote)  and if no action is specified the default action
1534              will  be  start.   Available  options  are  kind=Optional/Manda‐
1535              tory/Serialize,  symmetrical=true/false,  require-all=true/false
1536              and id=<constraint-id>.
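
              As an illustration only (the resource ids are hypothetical), a
              mandatory ordering might be created with:
              pcs constraint order start Database then start WebServer
              kind=Mandatory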
1537
1538       order set <resource1> [resourceN]...  [options]  [set  <resourceX>  ...
1539       [options]] [setoptions [constraint_options]]
1540              Create  an  ordered  set  of  resources.  Available  options are
1541              sequential=true/false,        require-all=true/false         and
1542              action=start/promote/demote/stop.  Available  constraint_options
1543              are  id=<constraint-id>,  kind=Optional/Mandatory/Serialize  and
1544              symmetrical=true/false.
1545
1546       order delete <resource1> [resourceN]...
1547              Remove resource from any ordering constraint
1548
1549       order remove <resource1> [resourceN]...
1550              Remove resource from any ordering constraint
1551
1552       colocation [show] [--full]
1553              List  all current colocation constraints (if --full is specified
1554              show the internal constraint id's as well).
1555
1556       colocation add [<role>] <source  resource  id>  with  [<role>]  <target
1557       resource id> [score] [options] [id=constraint-id]
1558              Request  <source  resource>  to run on the same node where pace‐
1559              maker has determined <target  resource>  should  run.   Positive
1560              values  of  score  mean  the resources should be run on the same
1561              node, negative values mean the resources should not  be  run  on
1562              the  same  node.  Specifying 'INFINITY' (or '-INFINITY') for the
1563              score forces <source resource> to run (or not run) with  <target
1564              resource>  (score  defaults to "INFINITY"). A role can be: 'Mas‐
1565              ter', 'Slave', 'Started', 'Stopped' (if no role is specified, it
1566              defaults to 'Started').
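
              As an illustration only (the resource ids are hypothetical):
              pcs constraint colocation add WebServer with VirtualIP INFINITY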
1567
1568       colocation  set  <resource1>  [resourceN]... [options] [set <resourceX>
1569       ... [options]] [setoptions [constraint_options]]
1570              Create a colocation constraint with a  resource  set.  Available
1571              options  are sequential=true/false and role=Stopped/Started/Mas‐
1572              ter/Slave. Available constraint_options are id  and  either  of:
1573              score, score-attribute, score-attribute-mangle.
1574
1575       colocation delete <source resource id> <target resource id>
1576              Remove colocation constraints with specified resources.
1577
1578       colocation remove <source resource id> <target resource id>
1579              Remove colocation constraints with specified resources.
1580
1581       ticket [show] [--full]
1582              List all current ticket constraints (if --full is specified show
1583              the internal constraint id's as well).
1584
1585       ticket  add  <ticket>  [<role>]  <resource  id>  [<options>]  [id=<con‐
1586       straint-id>]
1587              Create  a  ticket constraint for <resource id>. Available option
1588              is loss-policy=fence/stop/freeze/demote. A role can  be  master,
1589              slave, started or stopped.
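
              As an illustration only (the ticket name and resource id are
              hypothetical):
              pcs constraint ticket add drbd-ticket master DrbdData
              loss-policy=demote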
1590
1591       ticket  set  <resource1>  [<resourceN>]... [<options>] [set <resourceX>
1592       ... [<options>]] setoptions <constraint_options>
1593              Create a  ticket  constraint  with  a  resource  set.  Available
1594              options  are  role=Stopped/Started/Master/Slave.  Required  con‐
1595              straint option is ticket=<ticket>. Optional  constraint  options
1596              are id=<constraint-id> and loss-policy=fence/stop/freeze/demote.
1597
1598       ticket delete <ticket> <resource id>
1599              Remove all ticket constraints with <ticket> from <resource id>.
1600
1601       ticket remove <ticket> <resource id>
1602              Remove all ticket constraints with <ticket> from <resource id>.
1603
1604       delete <constraint id>...
1605              Remove  constraint(s)  or  constraint  rules  with the specified
1606              id(s).
1607
1608       remove <constraint id>...
1609              Remove constraint(s) or  constraint  rules  with  the  specified
1610              id(s).
1611
1612       ref <resource>...
1613              List constraints referencing specified resource.
1614
1615       rule   add   <constraint   id>   [id=<rule   id>]   [role=master|slave]
1616       [score=<score>|score-attribute=<attribute>] <expression>
1617              Add a rule to a location constraint specified by 'constraint id'
1618              where the expression looks like one of the following:
1619                defined|not_defined <node attribute>
1620                <node   attribute>   lt|gt|lte|gte|eq|ne  [string|integer|num‐
1621              ber|version] <value>
1622                date gt|lt <date>
1623                date in_range <date> to <date>
1624                date in_range <date> to duration <duration options>...
1625                date-spec <date spec options>...
1626                <expression> and|or <expression>
1627                ( <expression> )
1628              where duration options and date spec options are: hours,  month‐
1629              days, weekdays, yeardays, months, weeks, years, weekyears, moon.
1630              If score is omitted it defaults to INFINITY. If  id  is  omitted
1631              one is generated from the constraint id.
1632
1633       rule delete <rule id>
1634              Remove  a rule from its location constraint and if it's the last
1635              rule, the constraint will also be removed.
1636
1637       rule remove <rule id>
1638              Remove a rule from its location constraint and if it's the  last
1639              rule, the constraint will also be removed.
1640
1641   qdevice
1642       status <device model> [--full] [<cluster name>]
1643              Show   runtime  status  of  specified  model  of  quorum  device
1644              provider.  Using --full will  give  more  detailed  output.   If
1645              <cluster  name>  is specified, only information about the speci‐
1646              fied cluster will be displayed.
1647
1648       setup model <device model> [--enable] [--start]
1649              Configure specified model of  quorum  device  provider.   Quorum
1650              device  then  can  be  added  to clusters by running "pcs quorum
1651              device add" command in a cluster.  --start will also  start  the
1652              provider.   --enable  will  configure  the  provider to start on
1653              boot.
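
              For example, the 'net' model provider might be configured,
              enabled and started on the local host with:
              pcs qdevice setup model net --enable --start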
1654
1655       destroy <device model>
1656              Disable and stop specified model of quorum device  provider  and
1657              delete its configuration files.
1658
1659       start <device model>
1660              Start specified model of quorum device provider.
1661
1662       stop <device model>
1663              Stop specified model of quorum device provider.
1664
1665       kill <device model>
1666              Force  specified  model  of quorum device provider to stop (per‐
1667              forms kill -9).  Note that init system (e.g. systemd) can detect
1668              that the qdevice is not running and start it again.  If you want
1669              to stop the qdevice, run "pcs qdevice stop" command.
1670
1671       enable <device model>
1672              Configure specified model of quorum device provider to start  on
1673              boot.
1674
1675       disable <device model>
1676              Configure specified model of quorum device provider to not start
1677              on boot.
1678
1679   quorum
1680       [config]
1681              Show quorum configuration.
1682
1683       status Show quorum runtime status.
1684
1685       device add [<generic options>] model <device model>  [<model  options>]
1686       [heuristics <heuristics options>]
1687              Add a quorum device to the cluster. Quorum device should be con‐
1688              figured first with "pcs qdevice setup". It is  not  possible  to
1689              use more than one quorum device in a cluster simultaneously.
1690              Currently  the  only supported model is 'net'. It requires model
1691              options 'algorithm' and 'host' to be specified. Options are doc‐
1692              umented  in  corosync-qdevice(8)  man  page; generic options are
1693              'sync_timeout' and 'timeout', for model net  options  check  the
1694              quorum.device.net  section,  for heuristics options see the quo‐
1695              rum.device.heuristics section.  Pcs  automatically  creates  and
1696              distributes  TLS certificates and sets the 'tls' model option to
1697              the default value 'on'.
1698              Example:  pcs  quorum  device  add   model   net   algorithm=lms
1699              host=qnetd.internal.example.com
1700
1701       device heuristics delete
1702              Remove all heuristics settings of the configured quorum device.
1703
1704       device heuristics remove
1705              Remove all heuristics settings of the configured quorum device.
1706
1707       device delete
1708              Remove a quorum device from the cluster.
1709
1710       device remove
1711              Remove a quorum device from the cluster.
1712
1713       device status [--full]
1714              Show  quorum device runtime status.  Using --full will give more
1715              detailed output.
1716
1717       device update [<generic options>] [model <model  options>]  [heuristics
1718       <heuristics options>]
1719              Add/Change  quorum  device  options.  Requires the cluster to be
1720              stopped. Model and options are all documented  in  corosync-qde‐
1721              vice(8)   man  page;  for  heuristics  options  check  the  quo‐
1722              rum.device.heuristics subkey section, for  model  options  check
1723              the quorum.device.<device model> subkey sections.
1724
1725              WARNING:  If  you  want to change "host" option of qdevice model
1726              net, use "pcs quorum device remove" and "pcs quorum device  add"
1727              commands to set up the configuration properly, unless the old
1728              and the new host are the same machine.
1729
1730       expected-votes <votes>
1731              Set expected votes in the live cluster to the specified value.
1732              This only affects the live cluster and does not change any
1733              configuration files.
1734
1735       unblock [--force]
1736              Cancel waiting for all nodes when establishing  quorum.   Useful
1737              in  situations  where you know the cluster is inquorate, but you
1738              are confident that the cluster should proceed with resource man‐
1739              agement regardless.  This command should ONLY be used when nodes
1740              which the cluster is waiting for have been confirmed to be  pow‐
1741              ered off and to have no access to shared resources.
1742
1743              WARNING:  If  the  nodes are not actually powered off or they do
1744              have access to shared resources, data corruption/cluster failure
1745              can  occur.  To  prevent  accidental  running  of  this command,
1746              --force or interactive user response is  required  in  order  to
1747              proceed.
1748
1749       update        [auto_tie_breaker=[0|1]]        [last_man_standing=[0|1]]
1750       [last_man_standing_window=[<time in ms>]] [wait_for_all=[0|1]]
1751              Add/Change quorum options.  At least one option must  be  speci‐
1752              fied.   Options  are  documented in corosync's votequorum(5) man
1753              page.  Requires the cluster to be stopped.
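
              As an illustration only, auto_tie_breaker and wait_for_all might
              be enabled (with the cluster stopped) using:
              pcs quorum update auto_tie_breaker=1 wait_for_all=1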
1754
1755   booth
1756       setup sites <address> <address> [<address>...]  [arbitrators  <address>
1757       ...] [--force]
1758              Write  new booth configuration with specified sites and arbitra‐
1759              tors.  Total number of peers (sites  and  arbitrators)  must  be
1760              odd. When the configuration file already exists, the command
1761              fails unless --force is specified.
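
              As an illustration only (all addresses are hypothetical), a
              configuration with two sites and one arbitrator might be
              written with:
              pcs booth setup sites 192.0.2.10 192.0.2.20 arbitrators 192.0.2.30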
1762
1763       destroy
1764              Remove booth configuration files.
1765
1766       ticket add <ticket> [<name>=<value> ...]
1767              Add new ticket to the current configuration. Ticket options  are
1768              specified in booth manpage.
1769
1770       ticket delete <ticket>
1771              Remove the specified ticket from the current configuration.
1772
1773       ticket remove <ticket>
1774              Remove the specified ticket from the current configuration.
1775
1776       config [<node>]
1777              Show  booth  configuration  from  the specified node or from the
1778              current node if node not specified.
1779
1780       create ip <address>
1781              Make the cluster run booth service on the specified  ip  address
1782              as  a  cluster  resource.   Typically  this is used to run booth
1783              site.
1784
1785       delete Remove booth resources created by the "pcs  booth  create"  com‐
1786              mand.
1787
1788       remove Remove  booth  resources  created by the "pcs booth create" com‐
1789              mand.
1790
1791       restart
1792              Restart booth resources created by the "pcs booth  create"  com‐
1793              mand.
1794
1795       ticket grant <ticket> [<site address>]
1796              Grant the ticket to the site specified by the address, and hence
1797              to the booth formation this site is a member of. When the
1798              address is omitted, the site address that has been specified
1799              with the 'pcs booth create' command is used. Specifying the site
1800              address is therefore mandatory when running this command on a
1801              host in an arbitrator role.
1802              Note that the ticket must not already be granted in the given
1803              booth formation; barring direct interventions at the sites, the
1804              ticket needs to be revoked first, and only then can it be
1805              granted at another site again (an ad-hoc change of this
1806              preference is, in the worst case, abrupt, for lack of direct
1807              atomicity).
1808
1809       ticket revoke <ticket> [<site address>]
1810              Revoke  the ticket in the booth formation as identified with one
1811              of its member sites specified by the address. When this specifi‐
1812              cation  is  omitted, site address that has been specified with a
1813              prior 'pcs  booth  create'  command  is  used.  Specifying  site
1814              address  is  therefore  mandatory when running this command at a
1815              host in an arbitrator role.
1816
1817       status Print current status of booth on the local node.
1818
1819       pull <node>
1820              Pull booth configuration from the specified node.
1821
1822       sync [--skip-offline]
1823              Send booth configuration from the local node to all nodes in the
1824              cluster.
1825
1826       enable Enable booth arbitrator service.
1827
1828       disable
1829              Disable booth arbitrator service.
1830
1831       start  Start booth arbitrator service.
1832
1833       stop   Stop booth arbitrator service.
1834
1835   status
1836       [status] [--full] [--hide-inactive]
1837              View  all  information  about  the cluster and resources (--full
1838              provides   more   details,   --hide-inactive   hides    inactive
1839              resources).
1840
1841       resources [--hide-inactive]
1842              Show   status   of   all   currently  configured  resources.  If
1843              --hide-inactive is specified, only show active resources.
1844
1845       cluster
1846              View current cluster status.
1847
1848       corosync
1849              View current membership information as seen by corosync.
1850
1851       quorum View current quorum status.
1852
1853       qdevice <device model> [--full] [<cluster name>]
1854              Show  runtime  status  of  specified  model  of  quorum   device
1855              provider.   Using  --full  will  give  more detailed output.  If
1856              <cluster name> is specified, only information about  the  speci‐
1857              fied cluster will be displayed.
1858
1859       booth  Print current status of booth on the local node.
1860
1861       nodes [corosync | both | config]
1862              View  current  status  of nodes from pacemaker. If 'corosync' is
1863              specified, view current status of nodes from  corosync  instead.
1864              If  'both'  is specified, view current status of nodes from both
1865              corosync & pacemaker. If 'config' is specified, print nodes from
1866              corosync & pacemaker configuration.
1867
1868       pcsd [<node>]...
1869              Show  current status of pcsd on nodes specified, or on all nodes
1870              configured in the local cluster if no nodes are specified.
1871
1872       xml    View xml version of status (output from crm_mon -r -1 -X).
1873
1874   config
1875       [show] View full cluster configuration.
1876
1877       backup [filename]
1878              Creates a tarball containing the cluster configuration files.
1879              If filename is not specified the standard output will be used.
1880
1881       restore [--local] [filename]
1882              Restores  the  cluster configuration files on all nodes from the
1883              backup.  If filename is not specified the standard input will be
1884              used.   If  --local  is  specified only the files on the current
1885              node will be restored.
1886
1887       checkpoint
1888              List all available configuration checkpoints.
1889
1890       checkpoint view <checkpoint_number>
1891              Show specified configuration checkpoint.
1892
1893       checkpoint diff <checkpoint_number> <checkpoint_number>
1894              Show differences between  the  two  specified  checkpoints.  Use
1895              checkpoint  number 'live' to compare a checkpoint to the current
1896              live configuration.
1897
1898       checkpoint restore <checkpoint_number>
1899              Restore cluster configuration to specified checkpoint.
1900
1901   pcsd
1902       certkey <certificate file> <key file>
1903              Load custom certificate and key files for use in pcsd.
1904
1905       sync-certificates
1906              Sync pcsd certificates to all nodes in the local cluster.
1907
1908       deauth [<token>]...
1909              Delete locally stored authentication tokens used by remote  sys‐
1910              tems  to  connect  to  the local pcsd instance. If no tokens are
1911              specified all tokens will be deleted. After this command is  run
1912              other nodes will need to re-authenticate against this node to be
1913              able to connect to it.
1914
1915   host
1916       auth (<host name>  [addr=<address>[:<port>]])...  [-u  <username>]  [-p
1917       <password>]
1918              Authenticate  local pcs/pcsd against pcsd on specified hosts. It
1919              is possible to specify an address and a port via which  pcs/pcsd
1920              will  communicate with each host. If an address is not specified
1921              a host name will be used. If a port is not specified  2224  will
1922              be used.
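
              As an illustration only (host names, addresses and the user name
              are hypothetical):
              pcs host auth node1 addr=192.0.2.11 node2 addr=192.0.2.12:2224
              -u hacluster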
1923
1924       deauth [<host name>]...
1925              Delete authentication tokens which allow pcs/pcsd on the current
1926              system to connect to remote pcsd  instances  on  specified  host
1927              names.  If  the  current  system  is  a member of a cluster, the
1928              tokens will be deleted from all nodes in the cluster. If no host
1929              names  are specified all tokens will be deleted. After this com‐
1930              mand is run this node will need to re-authenticate against other
1931              nodes to be able to connect to them.
1932
1933   node
1934       attribute [[<node>] [--name <name>] | <node> <name>=<value> ...]
1935              Manage  node  attributes.   If no parameters are specified, show
1936              attributes of all nodes.  If one parameter  is  specified,  show
1937              attributes  of  specified  node.   If  --name is specified, show
1938              specified attribute's value from all nodes.  If more  parameters
1939              are specified, set attributes of specified node.  Attributes can
1940              be removed by setting an attribute without a value.
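
              As an illustration only (the node name and attribute name are
              hypothetical), an attribute might be set and later removed as
              follows:
              pcs node attribute node1 rack=rack-1
              pcs node attribute node1 rack=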
1941
1942       maintenance [--all | <node>...] [--wait[=n]]
1943              Put specified node(s) into maintenance  mode,  if  no  nodes  or
1944              options  are specified the current node will be put into mainte‐
1945              nance mode, if --all is specified all nodes  will  be  put  into
1946              maintenance  mode.  If  --wait is specified, pcs will wait up to
1947              'n' seconds for the node(s) to be put into maintenance mode  and
1948              then return 0 on success or 1 if the operation has not succeeded
1949              yet. If 'n' is not specified it defaults to 60 minutes.
1950
1951       unmaintenance [--all | <node>...] [--wait[=n]]
1952              Remove node(s) from maintenance mode, if no nodes or options are
1953              specified  the  current  node  will  be removed from maintenance
1954              mode, if --all is specified all nodes will be removed from main‐
1955              tenance  mode.  If  --wait is specified, pcs will wait up to 'n'
1956              seconds for the node(s) to be removed from maintenance mode  and
1957              then return 0 on success or 1 if the operation has not succeeded
1958              yet. If 'n' is not specified it defaults to 60 minutes.
1959
1960       standby [--all | <node>...] [--wait[=n]]
1961              Put specified node(s) into standby mode (the node specified will
1962              no longer be able to host resources), if no nodes or options are
1963              specified the current node will be put  into  standby  mode,  if
1964              --all  is  specified all nodes will be put into standby mode. If
1965              --wait is specified, pcs will wait up to  'n'  seconds  for  the
1966              node(s) to be put into standby mode and then return 0 on success
1967              or 1 if the operation has not succeeded yet. If 'n' is not
1968              specified it defaults to 60 minutes.
1969
1970       unstandby [--all | <node>...] [--wait[=n]]
1971              Remove node(s) from standby mode (the node specified will now be
1972              able to host resources), if no nodes or  options  are  specified
1973              the  current node will be removed from standby mode, if --all is
1974              specified all nodes will be removed from standby mode. If --wait
1975              is specified, pcs will wait up to 'n' seconds for the node(s) to
1976              be removed from standby mode and then return 0 on success  or  1
1977              if the operation has not succeeded yet. If 'n' is not specified it
1978              defaults to 60 minutes.
1979
1980       utilization [[<node>] [--name <name>] | <node> <name>=<value> ...]
1981              Add specified utilization options to specified node.  If node is
1982              not  specified,  shows  utilization  of all nodes.  If --name is
1983              specified, shows specified utilization value from all nodes.  If
1984              utilization  options  are  not  specified,  shows utilization of
1985              specified node. Utilization options should be in the format
1986              name=value, where the value has to be an integer. Options may be
1987              removed by setting an option without a value. Example: pcs node
1988              utilization node1 cpu=4 ram=
1989
1990   alert
1991       [config|show]
1992              Show all configured alerts.
1993
1994       create path=<path> [id=<alert-id>] [description=<description>] [options
1995       [<option>=<value>]...] [meta [<meta-option>=<value>]...]
1996              Define an alert handler with specified path. Id will be automat‐
1997              ically generated if it is not specified.
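
              As an illustration only (the handler path and id are
              hypothetical):
              pcs alert create path=/usr/local/bin/cluster_alert.sh id=my-alert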
1998
1999       update  <alert-id>  [path=<path>]  [description=<description>] [options
2000       [<option>=<value>]...] [meta [<meta-option>=<value>]...]
2001              Update an existing alert handler with specified id.
2002
2003       delete <alert-id> ...
2004              Remove alert handlers with specified ids.
2005
2006       remove <alert-id> ...
2007              Remove alert handlers with specified ids.
2008
2009       recipient add  <alert-id>  value=<recipient-value>  [id=<recipient-id>]
2010       [description=<description>]   [options   [<option>=<value>]...]   [meta
2011       [<meta-option>=<value>]...]
2012              Add new recipient to specified alert handler.
2013
2014       recipient  update  <recipient-id>  [value=<recipient-value>]  [descrip‐
2015       tion=<description>]      [options      [<option>=<value>]...]     [meta
2016       [<meta-option>=<value>]...]
2017              Update an existing recipient identified by its id.
2018
2019       recipient delete <recipient-id> ...
2020              Remove specified recipients.
2021
2022       recipient remove <recipient-id> ...
2023              Remove specified recipients.
2024
2025   client
2026       local-auth [<pcsd-port>] [-u <username>] [-p <password>]
2027              Authenticate the current user to the local pcsd. This is
2028              required to run some pcs commands which may require root
2029              permissions, such as 'pcs cluster start'.
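                  For example, assuming the 'hacluster' user (an illustrative
                  username; the password can be supplied with -p), the
                  following authenticates that user to the local pcsd:
                  # pcs client local-auth -u hacluster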
2030
2031   dr
2032       config Display disaster-recovery configuration from the local node.
2033
2034       status [--full] [--hide-inactive]
2035              Display status of the local and the remote site cluster  (--full
2036              provides    more   details,   --hide-inactive   hides   inactive
2037              resources).
2038
2039       set-recovery-site <recovery site node>
2040              Set up disaster-recovery with the local cluster being  the  pri‐
2041              mary  site. The recovery site is defined by a name of one of its
2042              nodes.
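                  For example, assuming a hypothetical node 'recovery-node1'
                  in the recovery site cluster, run the following on the
                  primary site:
                  # pcs dr set-recovery-site recovery-node1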
2043
2044       destroy
2045              Permanently  destroy  disaster-recovery  configuration  on   all
2046              sites.
2047
2048   tag
2049       [config|list [<tag id>...]]
2050              Display configured tags.
2051
2052       create <tag id> <id> [<id>]...
2053              Create a tag containing the specified ids.
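                  For example, assuming hypothetical resources 'VirtualIP' and
                  'WebServer', the following groups their ids under one tag:
                  # pcs tag create web-stack VirtualIP WebServer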
2054
2055       delete <tag id>...
2056              Delete specified tags.
2057
2058       remove <tag id>...
2059              Delete specified tags.
2060
2061       update  <tag  id>  [add  <id> [<id>]... [--before <id> | --after <id>]]
2062       [remove <id> [<id>]...]
2063              Update a tag using the specified ids. Ids can be added to,
2064              removed from, or moved within a tag. Use --before or --after
2065              to specify the position of the added ids relative to an id
2066              already existing in the tag. By adding ids that are already in
2067              the tag and specifying --after or --before, you can move those
2068              ids within the tag.
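                  For example, assuming the 'web-stack' tag from above and a
                  hypothetical resource 'Database', the following adds its id
                  right after 'WebServer':
                  # pcs tag update web-stack add Database --after WebServer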
2069
EXAMPLES
2071       Show all resources
2072              # pcs resource config
2073
2074       Show options specific to the 'VirtualIP' resource
2075              # pcs resource config VirtualIP
2076
2077       Create a new resource called 'VirtualIP' with options
2078              #    pcs   resource   create   VirtualIP   ocf:heartbeat:IPaddr2
2079              ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
2080
2081       Create a new resource called 'VirtualIP' with options
2082              #  pcs  resource  create   VirtualIP   IPaddr2   ip=192.168.0.99
2083              cidr_netmask=32 nic=eth2 op monitor interval=30s
2084
2085       Change the ip address of VirtualIP and remove the nic option
2086              # pcs resource update VirtualIP ip=192.168.0.98 nic=
2087
2088       Delete the VirtualIP resource
2089              # pcs resource delete VirtualIP
2090
2091       Create  the  MyStonith  stonith  fence_virt device which can fence host
2092       'f1'
2093              # pcs stonith create MyStonith fence_virt pcmk_host_list=f1
2094
2095       Set the stonith-enabled property to false on the  cluster  (which  dis‐
2096       ables stonith)
2097              # pcs property set stonith-enabled=false
2098
USING --FORCE IN PCS COMMANDS
2100       Various pcs commands accept the --force option. Its purpose is to
2101       override some of the checks that pcs performs or some of the errors
2102       that may occur when a pcs command is run. When such an error occurs,
2103       pcs prints the error with a note that it may be overridden. The exact
2104       behavior of the option differs for each pcs command. Using the
2105       --force option can lead to situations that would normally be
2106       prevented by the logic of pcs commands, and therefore its use is
2107       strongly discouraged unless you know what you are doing.
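           As an illustration only (the resource id and agent options are
           hypothetical, and whether a particular error can be overridden
           depends on the command), a create command rejected by a
           validation check could be re-run with --force appended:
                  # pcs resource create ClusterFS ocf:heartbeat:Filesystem
                  device=/dev/vdb1 directory=/mnt/data fstype=xfs --force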
2108
ENVIRONMENT VARIABLES
2110       EDITOR
2111               Path to a plain-text editor. This is used when pcs is requested
2112              to present a text for the user to edit.
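                  For example, a command that opens the CIB for editing, such
                  as 'pcs cluster edit', would use vim if invoked as follows
                  (the editor path is illustrative):
                  # EDITOR=/usr/bin/vim pcs cluster edit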
2113
2114       no_proxy, https_proxy, all_proxy, NO_PROXY, HTTPS_PROXY, ALL_PROXY
2115               These  environment variables (listed according to their priori‐
2116              ties) control how pcs handles proxy servers when  connecting  to
2117              cluster nodes. See curl(1) man page for details.
2118
CHANGES IN PCS-0.10
2120       This  section summarizes the most important changes in commands done in
2121       pcs-0.10.x compared to pcs-0.9.x. For detailed description  of  current
2122       commands see above.
2123
2124   cluster
2125       auth   The 'pcs cluster auth' command only authenticates nodes in a
2126              local cluster and does not accept a node list. The new command
2127              for authentication is 'pcs host auth'. It allows specifying
2128              host names, addresses and pcsd ports.
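                  For example, assuming hypothetical hosts 'node1' and 'node2'
                  with illustrative addresses, the new command is used as
                  follows:
                  # pcs host auth node1 addr=192.0.2.11 node2 addr=192.0.2.12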
2129
2130       node add
2131              Custom node names and Corosync 3.x with knet are now fully
2132              supported; therefore, the syntax has been completely changed.
2133              The  --device  and  --watchdog  options  have been replaced with
2134              'device' and 'watchdog' options, respectively.
2135
2136       quorum This command has been replaced with 'pcs quorum'.
2137
2138       remote-node add
2139              This  command  has  been  replaced  with   'pcs   cluster   node
2140              add-guest'.
2141
2142       remote-node remove
2143              This   command   has   been  replaced  with  'pcs  cluster  node
2144              delete-guest' and its alias 'pcs cluster node remove-guest'.
2145
2146       setup  Custom node names and Corosync 3.x with knet are now fully
2147              supported; therefore, the syntax has been completely changed.
2148              The  --name  option has been removed. The first parameter of the
2149              command is the cluster name now.
2150              The  --local  option  has  been  replaced  with  --corosync_conf
2151              <path>.
2152
2153       standby
2154              This command has been replaced with 'pcs node standby'.
2155
2156       uidgid rm
2157              This command has been deprecated; use 'pcs cluster uidgid
2158              delete' or 'pcs cluster uidgid remove' instead.
2159
2160       unstandby
2161              This command has been replaced with 'pcs node unstandby'.
2162
2163       verify The -V option has been replaced with --full.
2164              To specify a filename, use the -f option.
2165
2166   pcsd
2167       clear-auth
2168              This command has been replaced with 'pcs host deauth'  and  'pcs
2169              pcsd deauth'.
2170
2171   property
2172       set    The  --node  option  is  no  longer supported. Use the 'pcs node
2173              attribute' command to set node attributes.
2174
2175       show   The --node option is no longer  supported.  Use  the  'pcs  node
2176              attribute' command to view node attributes.
2177
2178       unset  The  --node  option  is  no  longer supported. Use the 'pcs node
2179              attribute' command to unset node attributes.
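                  For example, assuming a hypothetical node 'node1' and an
                  illustrative attribute, a value formerly set via the --node
                  option is now set as follows:
                  # pcs node attribute node1 rack=1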
2180
2181   resource
2182       create The 'master' keyword has been changed to 'promotable'.
2183
2184       failcount reset
2185              The command has been removed as 'pcs resource cleanup' does
2186              exactly the same job.
2187
2188       master This command has been replaced with 'pcs resource promotable'.
2189
2190       show   Previously, this command displayed either the status or the
2191              configuration of resources depending on the parameters
2192              specified. This was confusing; therefore, the command was
2193              replaced by several new commands. To display the status of
2194              resources, run 'pcs resource' or 'pcs resource status'. To
2195              display the configuration of resources, run 'pcs resource
2196              config' or 'pcs resource config <resource name>'. To display
2197              configured resource groups, run 'pcs resource group list'.
2198
2199   status
2200       groups This command has been replaced with 'pcs resource group list'.
2201
2202   stonith
2203       sbd device setup
2204              The --device option has been replaced with the 'device' option.
2205
2206       sbd enable
2207              The --device and --watchdog  options  have  been  replaced  with
2208              'device' and 'watchdog' options, respectively.
2209
2210       show   Previously, this command displayed either the status or the
2211              configuration of stonith resources depending on the parameters
2212              specified. This was confusing; therefore, the command was
2213              replaced by several new commands. To display the status of
2214              stonith resources, run 'pcs stonith' or 'pcs stonith status'.
2215              To display the configuration of stonith resources, run 'pcs
2216              stonith config' or 'pcs stonith config <stonith name>'.
2217
SEE ALSO
2219       http://clusterlabs.org/doc/
2220
2221       pcsd(8), pcs_snmp_agent(8)
2222
2223       corosync_overview(8),  votequorum(5),  corosync.conf(5),  corosync-qde‐
2224       vice(8),          corosync-qdevice-tool(8),          corosync-qnetd(8),
2225       corosync-qnetd-tool(8)
2226
2227       pacemaker-controld(7),   pacemaker-fenced(7),  pacemaker-schedulerd(7),
2228       crm_mon(8), crm_report(8), crm_simulate(8)
2229
2230       boothd(8), sbd(8)
2231
2232       clufter(1)
2233
2234
2235
2236pcs 0.10.8                       February 2021                          PCS(8)