1PCS(8)                  System Administration Utilities                 PCS(8)
2
3
4

NAME

6       pcs - pacemaker/corosync configuration system
7

SYNOPSIS

9       pcs [-f file] [-h] [commands]...
10

DESCRIPTION

12       Control and configure pacemaker and corosync.
13

OPTIONS

15       -h, --help
16              Display usage and exit.
17
18       -f file
19              Perform actions on file instead of active CIB.
20
21       --debug
22              Print all network traffic and external commands run.
23
24       --version
25              Print pcs version information.
26
27   Commands:
28       cluster
29              Configure cluster options and nodes.
30
31       resource
32              Manage cluster resources.
33
34       stonith
35              Configure fence devices.
36
37       constraint
38              Set resource constraints.
39
40       property
41              Set pacemaker properties.
42
43       acl    Set pacemaker access control lists.
44
45       status View cluster status.
46
47       config View and manage cluster configuration.
48
49       pcsd   Manage pcs daemon.
50
51       node   Manage cluster nodes.
52
53       alert  Manage pacemaker alerts.
54
55   resource
56       [show [<resource id>] | --full | --groups | --hide-inactive]
57              Show  all  currently  configured  resources  or if a resource is
58              specified show the options  for  the  configured  resource.   If
59              --full  is  specified,  all  configured resource options will be
60              displayed.  If --groups is  specified,  only  show  groups  (and
61              their  resources).   If  --hide-inactive is specified, only show
62              active resources.
63
64       list [filter] [--nodesc]
65              Show list of all available resource agents (if  filter  is  pro‐
66              vided  then  only  resource  agents  matching the filter will be
67              shown). If --nodesc is used then descriptions of resource agents
68              are not printed.
69
70       describe [<standard>:[<provider>:]]<type>
71              Show options for the specified resource.
72
73       create   <resource   id>   [<standard>:[<provider>:]]<type>   [resource
74       options] [op <operation action> <operation options> [<operation action>
75       <operation  options>]...]  [meta  <meta  options>...]  [--clone  <clone
76       options> | --master <master options> |  --group  <group  id>  [--before
77       <resource id> | --after <resource id>]] [--disabled] [--wait[=n]]
78              Create  specified resource.  If --clone is used a clone resource
79              is created.  If --master is specified a master/slave resource is
80              created.   If  --group is specified the resource is added to the
81              group named.  You can use --before or  --after  to  specify  the
82              position of the added resource relative to some resource
83              already existing in the group.  If --disabled is  specified  the
84              resource  is not started automatically.  If --wait is specified,
85              pcs will wait up to 'n' seconds for the resource  to  start  and
86              then  return  0 if the resource is started, or 1 if the resource
87              has not yet started.  If 'n' is not specified it defaults to  60
88              minutes.
89
90              Example:  Create  a  new  resource  called  'VirtualIP'  with IP
91              address 192.168.0.99, netmask of 32, monitored every 30
92              seconds,  on  eth2:  pcs  resource  create  VirtualIP ocf:heart‐
93              beat:IPaddr2 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor
94              interval=30s
95
96       delete <resource id|group id|master id|clone id>
97              Deletes  the resource, group, master or clone (and all resources
98              within the group/master/clone).
99
100       enable <resource id> [--wait[=n]]
101              Allow the cluster to start the resource. Depending on  the  rest
102              of  the configuration (constraints, options, failures, etc), the
103              resource may remain stopped.  If --wait is specified,  pcs  will
104              wait up to 'n' seconds for the resource to start and then return
105              0 if the resource is started, or 1 if the resource has  not  yet
106              started.  If 'n' is not specified it defaults to 60 minutes.
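
              For example, to allow the cluster to start the 'VirtualIP'
              resource from the example above and wait up to 30 seconds for
              it to start (the resource name is illustrative):

              pcs resource enable VirtualIP --wait=30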
107
108       disable <resource id> [--wait[=n]]
109              Attempt  to  stop  the  resource if it is running and forbid the
110              cluster from starting it again.  Depending on the  rest  of  the
111              configuration   (constraints,   options,   failures,  etc),  the
112              resource may remain started.  If --wait is specified,  pcs  will
113              wait  up to 'n' seconds for the resource to stop and then return
114              0 if the resource is stopped  or  1  if  the  resource  has  not
115              stopped.  If 'n' is not specified it defaults to 60 minutes.
116
117       restart <resource id> [node] [--wait=n]
118              Restart  the  resource  specified. If a node is specified and if
119              the resource is a clone or master/slave  it  will  be  restarted
120              only  on  the  node  specified.  If --wait is specified, then we
121              will wait up to 'n' seconds for the resource to be restarted and
122              return 0 if the restart was successful or 1 if it was not.
123
124       debug-start <resource id> [--full]
125              This  command will force the specified resource to start on this
126              node ignoring the cluster recommendations and print  the  output
127              from  starting  the  resource.   Using  --full  will  give  more
128              detailed output.  This is mainly used  for  debugging  resources
129              that fail to start.
130
131       debug-stop <resource id> [--full]
132              This  command  will force the specified resource to stop on this
133              node ignoring the cluster recommendations and print  the  output
134              from  stopping  the  resource.   Using  --full  will  give  more
135              detailed output.  This is mainly used  for  debugging  resources
136              that fail to stop.
137
138       debug-promote <resource id> [--full]
139              This command will force the specified resource to be promoted on
140              this node ignoring the cluster  recommendations  and  print  the
141              output from promoting the resource.  Using --full will give more
142              detailed output.  This is mainly used  for  debugging  resources
143              that fail to promote.
144
145       debug-demote <resource id> [--full]
146              This  command will force the specified resource to be demoted on
147              this node ignoring the cluster  recommendations  and  print  the
148              output  from demoting the resource.  Using --full will give more
149              detailed output.  This is mainly used  for  debugging  resources
150              that fail to demote.
151
152       debug-monitor <resource id> [--full]
153              This command will force the specified resource to be monitored on
154              this node ignoring the cluster  recommendations  and  print  the
155              output  from  monitoring  the  resource.  Using --full will give
156              more  detailed  output.   This  is  mainly  used  for  debugging
157              resources that fail to be monitored.
158
159       move  <resource id> [destination node] [--master] [lifetime=<lifetime>]
160       [--wait[=n]]
161              Move the resource off the node it is  currently  running  on  by
162              creating  a  -INFINITY  location constraint to ban the node.  If
163              destination node is specified the resource will be moved to that
164              node  by  creating an INFINITY location constraint to prefer the
165              destination node.  If --master is used the scope of the  command
166              is  limited  to  the  master role and you must use the master id
167              (instead of the resource id).  If lifetime is specified then the
168              constraint will expire after that time, otherwise it defaults to
169              infinity and the constraint can be cleared  manually  with  'pcs
170              resource clear' or 'pcs constraint delete'.  If --wait is speci‐
171              fied, pcs will wait up to 'n' seconds for the resource  to  move
172              and then return 0 on success or 1 on error.  If 'n' is not spec‐
173              ified it defaults to 60 minutes.  If you want  the  resource  to
174              preferably  avoid  running on some nodes but be able to failover
175              to them, use 'pcs constraint location avoids'.
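
              For example, to move the illustrative 'VirtualIP' resource to
              node 'node2' and later remove the resulting location
              constraint (resource and node names are examples only):

              pcs resource move VirtualIP node2
              pcs resource clear VirtualIP node2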
176
177       ban <resource id> [node] [--master] [lifetime=<lifetime>] [--wait[=n]]
178              Prevent the resource id specified from running on the  node  (or
179              on the current node it is running on if no node is specified) by
180              creating a -INFINITY location constraint.  If --master  is  used
181              the  scope  of the command is limited to the master role and you
182              must use the master id (instead of the resource id).   If  life‐
183              time  is  specified  then  the constraint will expire after that
184              time, otherwise it defaults to infinity and the  constraint  can
185              be cleared manually with 'pcs resource clear' or 'pcs constraint
186              delete'.  If --wait is specified, pcs will wait up to  'n'  sec‐
187              onds  for the resource to move and then return 0 on success or 1
188              on error.  If 'n' is not specified it defaults  to  60  minutes.
189              If  you  want  the  resource to preferably avoid running on some
190              nodes but be able to failover to them, use 'pcs constraint location avoids'.
191
192       clear <resource id> [node] [--master] [--wait[=n]]
193              Remove constraints created by move and/or ban on  the  specified
194              resource  (and node if specified). If --master is used the scope
195              of the command is limited to the master role and  you  must  use
196              the master id (instead of the resource id).  If --wait is speci‐
197              fied, pcs will wait up to 'n' seconds for the operation to  fin‐
198              ish  (including starting and/or moving resources if appropriate)
199              and then return 0 on success or 1 on error.  If 'n' is not spec‐
200              ified it defaults to 60 minutes.
201
202       standards
203              List  available  resource  agent  standards  supported  by  this
204              installation (OCF, LSB, etc.).
205
206       providers
207              List available OCF resource agent providers.
208
209       agents [standard[:provider]]
210              List  available  agents  optionally  filtered  by  standard  and
211              provider.
212
213       update <resource id> [resource options] [op [<operation action> <opera‐
214       tion options>]...] [meta <meta options>...] [--wait[=n]]
215              Add/Change options to specified resource, clone  or  multi-state
216              resource.   If an operation (op) is specified it will update the
217              first found operation with the  same  action  on  the  specified
218              resource,  if  no  operation  with that action exists then a new
219              operation will be created.  (WARNING: all  existing  options  on
220              the  updated  operation will be reset if not specified.)  If you
221              want to create multiple monitor operations you  should  use  the
222              'op  add'  &  'op remove' commands.  If --wait is specified, pcs
223              will wait up to 'n' seconds for the changes to take  effect  and
224              then return 0 if the changes have been processed or 1 otherwise.
225              If 'n' is not specified it defaults to 60 minutes.
226
227       op add <resource id> <operation action> [operation properties]
228              Add operation for specified resource.
229
230       op remove <resource id> <operation action> [<operation properties>...]
231              Remove specified operation (note: you  must  specify  the  exact
232              operation properties to properly remove an existing operation).
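
              For example, to add a second monitor operation to the
              illustrative 'VirtualIP' resource and later remove it again by
              specifying the exact same properties:

              pcs resource op add VirtualIP monitor interval=60s timeout=20s
              pcs resource op remove VirtualIP monitor interval=60s timeout=20s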
233
234       op remove <operation id>
235              Remove the specified operation id.
236
237       op defaults [options]
238              Set default values for operations.  If no options are passed,
239              lists currently configured defaults.
240
241       meta <resource id | group id | master id |  clone  id>  <meta  options>
242       [--wait[=n]]
243              Add  specified  options  to  the specified resource, group, mas‐
244              ter/slave or clone.  Meta options should be  in  the  format  of
245              name=value,  options may be removed by setting an option without
246              a value.  If --wait is specified, pcs will wait up to  'n'  sec‐
247              onds  for  the  changes  to take effect and then return 0 if the
248              changes have been processed or 1 otherwise.  If 'n' is not spec‐
249              ified  it  defaults  to  60 minutes.  Example: pcs resource meta
250              TestResource failure-timeout=50 stickiness=
251
252       group add <group id> <resource id>  [resource  id]  ...  [resource  id]
253       [--before <resource id> | --after <resource id>] [--wait[=n]]
254              Add  the  specified resource to the group, creating the group if
255              it does not exist.  If the resource is present in another  group
256              it  is  moved to the new group.  You can use --before or --after
257              to specify the position of the added resources relative to
258              some resource already existing in the group.  If --wait is spec‐
259              ified, pcs will wait up to 'n' seconds for the operation to fin‐
260              ish  (including moving resources if appropriate) and then return
261              0 on success or 1 on error.  If 'n' is not specified it defaults
262              to 60 minutes.
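
              For example, to create a group 'webgroup' containing the
              illustrative resources 'VirtualIP' and 'WebSite' (all names
              are examples only):

              pcs resource group add webgroup VirtualIP WebSite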
263
264       group  remove  <group id> <resource id> [resource id] ... [resource id]
265       [--wait[=n]]
266              Remove the specified resource(s) from the  group,  removing  the
267              group if no resources remain.  If --wait is specified, pcs
268              will wait up to 'n' seconds for the operation to finish (includ‐
269              ing  moving  resources if appropriate) and then return 0 on suc‐
270              cess or 1 on error.  If 'n' is not specified it defaults  to  60
271              minutes.
272
273       ungroup <group id> [resource id] ... [resource id] [--wait[=n]]
274              Remove  the group (note: this does not remove any resources from
275              the cluster) or if resources are specified, remove the specified
276              resources from the group.  If --wait is specified, pcs will wait
277              up to 'n' seconds for the operation to finish (including  moving
278              resources if appropriate) and then return 0 on success or 1 on
279              error.  If 'n' is not specified it defaults to 60 minutes.
280
281       clone <resource id | group id> [clone options]... [--wait[=n]]
282              Set up the specified resource or group as a clone.  If --wait
283              is  specified, pcs will wait up to 'n' seconds for the operation
284              to finish (including starting clone  instances  if  appropriate)
285              and then return 0 on success or 1 on error.  If 'n' is not spec‐
286              ified it defaults to 60 minutes.
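
              For example, to turn the illustrative 'WebSite' resource into
              a clone limited to two instances (clone-max is a standard
              pacemaker clone option):

              pcs resource clone WebSite clone-max=2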
287
288       unclone <resource id | group id> [--wait[=n]]
289              Remove the clone which contains the specified group or  resource
290              (the resource or group will not be removed).  If --wait is spec‐
291              ified, pcs will wait up to 'n' seconds for the operation to fin‐
292              ish (including stopping clone instances if appropriate) and then
293              return 0 on success or 1 on error.  If 'n' is not  specified  it
294              defaults to 60 minutes.
295
296       master   [<master/slave   id>]  <resource  id  |  group  id>  [options]
297       [--wait[=n]]
298              Configure a resource or group as  a  multi-state  (master/slave)
299              resource.   If --wait is specified, pcs will wait up to 'n' sec‐
300              onds for the operation to finish (including starting and promot‐
301              ing resource instances if appropriate) and then return 0 on suc‐
302              cess or 1 on error.  If 'n' is not specified it defaults  to  60
303              minutes.    Note:  to  remove  a  master  you  must  remove  the
304              resource/group it contains.
305
306       manage <resource id> ... [resource n]
307              Set resources listed to managed mode (default).
308
309       unmanage <resource id> ... [resource n]
310              Set resources listed to unmanaged mode.
311
312       defaults [options]
313              Set default values for resources.  If no options are passed,
314              lists currently configured defaults.
315
316       cleanup [<resource id>] [--node <node>]
317              Cleans up the resource in the lrmd (useful to reset the resource
318              status and failcount).  This tells the  cluster  to  forget  the
319              operation history of a resource and re-detect its current state.
320              This can be useful to purge knowledge of past failures that have
321              since been resolved.  If a resource id is not specified then all
322              resources/stonith devices will be cleaned up.  If a node is  not
323              specified then resources on all nodes will be cleaned up.
324
325       failcount show <resource id> [node]
326              Show  current failcount for specified resource from all nodes or
327              only on specified node.
328
329       failcount reset <resource id> [node]
330              Reset failcount for specified resource on all nodes or  only  on
331              specified  node. This tells the cluster to forget how many times
332              a resource has failed in the past.  This may allow the  resource
333              to be started or moved to a more preferred location.
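
              For example, to show and then reset the failcount of the
              illustrative 'VirtualIP' resource on node 'node1':

              pcs resource failcount show VirtualIP node1
              pcs resource failcount reset VirtualIP node1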
334
335       relocate dry-run [resource1] [resource2] ...
336              The same as 'relocate run' but has no effect on the cluster.
337
338       relocate run [resource1] [resource2] ...
339              Relocate  specified  resources  to their preferred nodes.  If no
340              resources are specified, relocate all resources.   This  command
341              calculates  the  preferred node for each resource while ignoring
342              resource stickiness.  Then it creates location constraints which
343              will cause the resources to move to their preferred nodes.  Once
344              the resources have been moved the constraints are deleted  auto‐
345              matically.   Note that the preferred node is calculated based on
346              current cluster status, constraints, location of  resources  and
347              other settings and thus it might change over time.
348
349       relocate show
350              Display  current  status  of  resources  and  their optimal node
351              ignoring resource stickiness.
352
353       relocate clear
354              Remove all constraints created by the 'relocate run' command.
355
356       utilization [<resource id> [<name>=<value> ...]]
357              Add specified utilization  options  to  specified  resource.  If
358              resource  is  not specified, shows utilization of all resources.
359              If utilization options are not specified, shows  utilization  of
360              specified resource. Utilization options should be in the format
361              name=value; the value has to be an integer. Options may be removed by
362              setting  an  option  without a value. Example: pcs resource uti‐
363              lization TestResource cpu= ram=20
364
365   cluster
366       auth [node] [...] [-u username] [-p password] [--force] [--local]
367              Authenticate pcs to pcsd on nodes specified,  or  on  all  nodes
368              configured  in  corosync.conf  if no nodes are specified (autho‐
369              rization    tokens    are    stored    in    ~/.pcs/tokens    or
370              /var/lib/pcsd/tokens  for  root).  By default all nodes are also
371              authenticated to each other; using --local only authenticates
372              the  local node (and does not authenticate the remote nodes with
373              each other).  Using --force forces re-authentication to occur.
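
              For example, to authenticate to pcsd on two nodes (node names
              are illustrative; 'hacluster' is the user typically used for
              pcsd authentication, and the password is prompted for if -p is
              not given):

              pcs cluster auth node1 node2 -u hacluster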
374
375       setup [--start [--wait[=<n>]]]  [--local]  [--enable]  --name  <cluster
376       name> <node1[,node1-altaddr]> [<node2[,node2-altaddr]>] [...] [--trans‐
377       port   udpu|udp]   [--rrpmode   active|passive]   [--addr0   <addr/net>
378       [[[--mcast0   <address>]   [--mcastport0   <port>]  [--ttl0  <ttl>]]  |
379       [--broadcast0]] [--addr1 <addr/net>  [[[--mcast1  <address>]  [--mcast‐
380       port1      <port>]      [--ttl1     <ttl>]]     |     [--broadcast1]]]]
381       [--wait_for_all=<0|1>]  [--auto_tie_breaker=<0|1>]   [--last_man_stand‐
382       ing=<0|1>  [--last_man_standing_window=<time in ms>]] [--ipv6] [--token
383       <timeout>] [--token_coefficient <timeout>] [--join  <timeout>]  [--con‐
384       sensus   <timeout>]   [--miss_count_const  <count>]  [--fail_recv_const
385       <failures>]
386              Configure corosync and sync configuration out to  listed  nodes.
387              --local  will  only  perform  changes on the local node, --start
388              will also start the cluster on the specified nodes, --wait  will
389              wait  up  to  'n'  seconds for the nodes to start, --enable will
390              enable corosync  and  pacemaker  on  node  startup,  --transport
391              allows  specification  of corosync transport (default: udpu; udp
392              for RHEL 6 clusters), --rrpmode allows you to set the  RRP  mode
393              of  the  system. Currently only 'passive' is supported or tested
394              (using  'active'  is  not  recommended).   The   --wait_for_all,
395              --auto_tie_breaker,    --last_man_standing,    --last_man_stand‐
396              ing_window options are all  documented  in  corosync's  votequo‐
397              rum(5) man page. These options are not supported on RHEL 6 clus‐
398              ters.
399
400              --ipv6 will configure corosync to use ipv6  (instead  of  ipv4).
401              This option is not supported on RHEL 6 clusters.
402
403              --token  <timeout>  sets time in milliseconds until a token loss
404              is declared after not receiving a token (default 1000 ms)
405
406              --token_coefficient <timeout> sets time in milliseconds used for
407              clusters  with  at least 3 nodes as a coefficient for real token
408              timeout calculation (token + (number_of_nodes - 2) * token_coef‐
409              ficient)  (default 650 ms)  This option is not supported on RHEL
410              6 clusters.
411
412              --join <timeout> sets time in milliseconds to wait for join mes‐
413              sages (default 50 ms)
414
415              --consensus <timeout> sets time in milliseconds to wait for con‐
416              sensus to be achieved before starting a new round of  membership
417              configuration (default 1200 ms)
418
419              --miss_count_const  <count>  sets the maximum number of times on
420              receipt of a token  a  message  is  checked  for  retransmission
421              before a retransmission occurs (default 5 messages)
422
423              --fail_recv_const <failures> specifies how many rotations of the
424              token without receiving any messages  when  messages  should  be
425              received may occur before a new configuration is formed (default
426              2500 failures)
427
428
429              Configuring Redundant Ring Protocol (RRP)
430
431              When using udpu and specifying nodes, specify the ring 0 address
432              first followed by a ',' and then the ring 1 address.
433
434              Example:   pcs   cluster   setup  --name  cname  nodeA-0,nodeA-1
435              nodeB-0,nodeB-1
436
437              When using udp, using --addr0 and --addr1 will allow you to con‐
438              figure rrp mode for corosync.  It's recommended to use a network
439              (instead of IP address) for --addr0  and  --addr1  so  the  same
440              corosync.conf  file  can  be  used around the cluster.  --mcast0
441              defaults to 239.255.1.1 and --mcast1  defaults  to  239.255.2.1,
442              --mcastport0/1  default  to  5405  and  ttl  defaults  to  1. If
443              --broadcast is specified, --mcast0/1, --mcastport0/1 &  --ttl0/1
444              are ignored.
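
              A minimal example without RRP, creating a three node cluster,
              starting it and enabling it on boot (cluster and node names
              are illustrative):

              pcs cluster setup --start --enable --name mycluster node1 node2 node3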
445
446       start [--all] [node] [...] [--wait[=<n>]]
447              Start  corosync  &  pacemaker on specified node(s), if a node is
448              not specified then corosync & pacemaker are started on the local
449              node.  If  --all  is  specified  then  corosync  & pacemaker are
450              started on all nodes. If --wait is specified,  wait  up  to  'n'
451              seconds for nodes to start.
452
453       stop [--all] [node] [...]
454              Stop corosync & pacemaker on specified node(s), if a node is not
455              specified then corosync & pacemaker are  stopped  on  the  local
456              node.  If  --all  is  specified  then  corosync  & pacemaker are
457              stopped on all nodes.
458
459       kill   Force corosync and pacemaker daemons to stop on the  local  node
460              (performs kill -9).  Note that the init system (e.g. systemd) can
461              detect that the cluster is not running and start it again.  If you
462              want to stop the cluster on a node, run 'pcs cluster stop' on that
463              node.
464
465       enable [--all] [node] [...]
466              Configure corosync & pacemaker to run on node boot on  specified
467              node(s),  if node is not specified then corosync & pacemaker are
468              enabled on the local node. If --all is specified then corosync &
469              pacemaker are enabled on all nodes.
470
471       disable [--all] [node] [...]
472              Configure corosync & pacemaker to not run on node boot on speci‐
473              fied node(s), if node is not specified then corosync & pacemaker
474              are  disabled  on  the  local  node.  If --all is specified then
475              corosync & pacemaker are disabled on all nodes.  Note:  this  is
476              the default after installation.
477
478       remote-node add <hostname> <resource id> [options]
479              Enables  the specified resource as a remote-node resource on the
480              specified hostname (hostname should be the same as 'uname -n').
481
482       remote-node remove <hostname>
483              Disables any resources configured to be remote-node resource  on
484              the  specified  hostname  (hostname should be the same as 'uname
485              -n').
486
487       status View current cluster status (an alias of 'pcs status cluster').
488
489       pcsd-status [node] [...]
490              Get current status of pcsd on nodes specified, or on  all  nodes
491              configured in corosync.conf if no nodes are specified.
492
493       sync   Sync  corosync  configuration  to  all  nodes found from current
494              corosync.conf file (cluster.conf  on  systems  running  Corosync
495              1.x).
496
497       cib [filename] [scope=<scope> | --config]
498              Get  the  raw xml from the CIB (Cluster Information Base).  If a
499              filename is provided, we save the CIB to  that  file,  otherwise
500              the  CIB is printed.  Specify scope to get a specific section of
501              the CIB.  Valid values of the scope are:  configuration,  nodes,
502              resources,  constraints,  crm_config, rsc_defaults, op_defaults,
503              status.  --config is the same as  scope=configuration.   Do  not
504              specify a scope if you want to edit the saved CIB using pcs (pcs
505              -f <command>).
506
507       cib-push <filename> [scope=<scope> | --config] [--wait[=<n>]]
508              Push the raw xml from <filename> to the CIB (Cluster Information
509              Base).   You can obtain the CIB by running the 'pcs cluster cib'
510              command, which is the recommended first step when you want to per‐
511              form the desired modifications (pcs -f <command>) before a one-off
512              push.  Specify scope to push a  specific  section  of  the  CIB.
513              Valid  values of the scope are: configuration, nodes, resources,
514              constraints, crm_config, rsc_defaults, op_defaults.  --config is
515              the  same  as  scope=configuration.   Use  of --config is recom‐
516              mended.  Do not specify a scope if you need to  push  the  whole
517              CIB or want to be warned if the CIB is outdated.  If --wait is
518              specified wait up to 'n' seconds  for  changes  to  be  applied.
519              WARNING:  the  selected  scope of the CIB will be overwritten by
520              the current content of the specified file.
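
              For example, a typical offline editing workflow: save the CIB
              to a file, modify it with 'pcs -f', and push the configuration
              section back (the file name and resource are illustrative; the
              Dummy agent is used only as a placeholder):

              pcs cluster cib my_cfg.xml
              pcs -f my_cfg.xml resource create TestRsc ocf:heartbeat:Dummy
              pcs cluster cib-push my_cfg.xml --config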
521
522       cib-upgrade
523              Upgrade the CIB to conform to the latest version of the document
524              schema.
525
526       edit [scope=<scope> | --config]
527              Edit  the cib in the editor specified by the $EDITOR environment
528              variable and push out any changes upon saving.  Specify scope to
529              edit  a  specific section of the CIB.  Valid values of the scope
530              are: configuration, nodes, resources,  constraints,  crm_config,
531              rsc_defaults,  op_defaults.   --config is the same as scope=con‐
532              figuration.  Use of --config is recommended.  Do not  specify  a
533              scope if you need to edit the whole CIB or want to be warned if
534              the CIB is outdated.
535
536       node  add  <node[,node-altaddr]>  [--start  [--wait[=<n>]]]  [--enable]
537       [--watchdog=<watchdog-path>]
538              Add  the  node to corosync.conf and corosync on all nodes in the
539              cluster and sync the new corosync.conf  to  the  new  node.   If
540              --start  is  specified  also start corosync/pacemaker on the new
541              node; if --wait is specified wait up to 'n' seconds for the new
542              node  to  start.  If --enable is specified enable corosync/pace‐
543              maker on new node.  When using  Redundant  Ring  Protocol  (RRP)
544              with  udpu  transport, specify the ring 0 address first followed
545              by a ',' and then the ring 1 address. Use --watchdog to  specify
546              path  to  watchdog  on  newly added node, when SBD is enabled in
547              cluster.
548
549       node remove <node>
550              Shut down the specified node and remove it from pacemaker and
551              corosync on all other nodes in the cluster.
552
553       uidgid List the currently configured uids and gids of users allowed to
554              connect to corosync.
555
556       uidgid add [uid=<uid>] [gid=<gid>]
557              Add the specified uid and/or gid to  the  list  of  users/groups
558              allowed to connect to corosync.
559
560       uidgid rm [uid=<uid>] [gid=<gid>]
561              Remove   the   specified   uid  and/or  gid  from  the  list  of
562              users/groups allowed to connect to corosync.
563
564       corosync [node]
565              Get the corosync.conf from the specified node or from  the  cur‐
566              rent node if node is not specified.
567
568       reload corosync
569              Reload the corosync configuration on the current node.
570
571       destroy [--all]
572              Permanently destroy the cluster on the current node, killing all
573              corosync/pacemaker processes, removing all cib files and the
574              corosync.conf file.  Using --all will attempt to destroy the
575              cluster on all nodes configured in the corosync.conf file.  WARN‐
576              ING: This command permanently removes any cluster configuration
577              that has been created. It is recommended  to  run  'pcs  cluster
578              stop' before destroying the cluster.
579
580       verify [-V] [filename]
581              Checks  the  pacemaker configuration (cib) for syntax and common
582              conceptual errors.  If no filename is  specified  the  check  is
583              performed  on the currently running cluster.  If -V is used more
584              verbose output will be printed.
585
586       report [--from "YYYY-M-D H:M:S" [--to "YYYY-M-D H:M:S"]] dest
587              Create a tarball containing  everything  needed  when  reporting
588              cluster  problems.   If --from and --to are not used, the report
589              will include the past 24 hours.
590
591   stonith
592       [show [stonith id]] [--full]
593              Show all currently configured stonith devices or if a stonith id
594              is specified show the options for the configured stonith device.
595              If --full is specified all configured stonith  options  will  be
596              displayed.
597
598       list [filter] [--nodesc]
599              Show list of all available stonith agents (if filter is provided
600              then only stonith agents matching the filter will be shown).  If
601              --nodesc  is  used  then  descriptions of stonith agents are not
602              printed.
603
604       describe <stonith agent>
605              Show options for specified stonith agent.
606
607       create <stonith id> <stonith device type> [stonith device options]  [op
608       <operation  action>  <operation options> [<operation action> <operation
609       options>]...] [meta <meta options>...]
610              Create stonith device with specified type and options.
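
              For example, to create a fence device using the fence_xvm
              agent for two illustrative nodes (the agent and its options
              are examples only; see 'pcs stonith list' and 'pcs stonith
              describe' for what is available on your system):

              pcs stonith create xvmfence fence_xvm pcmk_host_list="node1 node2"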
611
612       update <stonith id> [stonith device options]
613              Add/Change options to specified stonith id.
614
615       delete <stonith id>
616              Remove stonith id from configuration.
617
618       cleanup [<stonith id>] [--node <node>]
619              Cleans up the stonith device in the lrmd (useful  to  reset  the
620              status  and  failcount).   This  tells the cluster to forget the
621              operation history of a stonith device and re-detect its  current
622              state.   This  can be useful to purge knowledge of past failures
623              that have since been resolved.  If a stonith id is not specified
624              then  all  resources/stonith  devices  will be cleaned up.  If a
625              node is not specified  then  resources  on  all  nodes  will  be
626              cleaned up.
627
628       level  Lists all of the fencing levels currently configured.
629
630       level add <level> <node> <devices>
631              Add  the fencing level for the specified node with a comma sepa‐
632              rated list of devices (stonith ids) to attempt for that node  at
633              that  level.  Fence  levels  are  attempted  in  numerical order
634              (starting with 1); if a level succeeds (meaning all devices are
635              successfully  fenced  in  that  level)  then no other levels are
636              tried, and the node is considered fenced.
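
              For example, to try the illustrative 'xvmfence' device first
              and the 'apc1' and 'apc2' devices together as a second level
              when fencing node1 (all names are examples only):

              pcs stonith level add 1 node1 xvmfence
              pcs stonith level add 2 node1 apc1,apc2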
637
638       level remove <level> [node id] [stonith id] ... [stonith id]
639              Removes the fence level for the level, node and/or devices spec‐
640              ified.   If  no  nodes  or  devices are specified then the fence
641              level is removed.
642
643       level clear [node|stonith id(s)]
644              Clears the fence levels on the node (or stonith id) specified or
645              clears  all  fence levels if a node/stonith id is not specified.
646              If more than one stonith id is specified they must be  separated
647              by  a  comma  and  no  spaces.  Example: pcs stonith level clear
648              dev_a,dev_b
649
650       level verify
651              Verifies all fence devices and nodes specified in  fence  levels
652              exist.
653
654       fence <node> [--off]
655              Fence  the  node specified (if --off is specified, use the 'off'
656              API call to stonith which will turn  the  node  off  instead  of
657              rebooting it).
658
659       confirm <node> [--force]
660              Confirm that the host specified is currently down.  This command
661              should ONLY be used when the node  specified  has  already  been
662              confirmed  to  be  powered  off  and to have no access to shared
663              resources.
664
665              WARNING: If this node is not actually powered  off  or  it  does
666              have access to shared resources, data corruption/cluster failure
667              can occur.  To  prevent  accidental  running  of  this  command,
668              --force  or  interactive  user  response is required in order to
669              proceed.
670
671       sbd enable [--watchdog=<path>[@<node>]] ... [<SBD_OPTION>=<value>] ...
672              Enable SBD in cluster.  Default  path  for  watchdog  device  is
673              /dev/watchdog.   Allowed   SBD   options:   SBD_WATCHDOG_TIMEOUT
674              (default: 5), SBD_DELAY_START (default:  no)  and  SBD_STARTMODE
675              (default: clean).
676
677              WARNING:  Cluster  has  to  be restarted in order to apply these
678              changes.
679
680              Example of enabling SBD in a cluster where the watchdog on node1
681              is /dev/watchdog2, on node2 /dev/watchdog1, and /dev/watchdog0 on
682              all other nodes, with the watchdog timeout set to 10 seconds:
683
684              pcs stonith sbd enable --watchdog=/dev/watchdog2@node1  --watch‐
685              dog=/dev/watchdog1@node2   --watchdog=/dev/watchdog0  SBD_WATCH‐
686              DOG_TIMEOUT=10
687
688
689       sbd disable
690              Disable SBD in cluster.
691
692              WARNING: Cluster has to be restarted in  order  to  apply  these
693              changes.
694
695       sbd status
696              Show status of SBD services in cluster.
697
698       sbd config
699              Show SBD configuration in cluster.
700
701   acl
702       [show] List all current access control lists.
703
704       enable Enable access control lists.
705
706       disable
707              Disable access control lists.
708
709       role  create  <role  id>  [description=<description>] [((read | write |
710       deny) (xpath <query> | id <id>))...]
711              Create a role with the id and (optional) description  specified.
712              Each  role  can  also  have  an  unlimited number of permissions
713              (read/write/deny) applied to either an xpath query or the id  of
714              a specific element in the cib.
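
              For example, to create a role that grants read access to the
              whole CIB (the role id and xpath are illustrative):

              pcs acl role create readonly description="Read-only access" read xpath /cib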
715
716       role delete <role id>
717              Delete the role specified and remove it from any users/groups it
718              was assigned to.
719
720       role assign <role id> [to] [user|group] <username/group>
721              Assign a role to a user or group already created with  'pcs  acl
722              user/group create'. If there is a user and a group with the same
723              id and it is not specified which should be used, the user will be
724              prioritized. In cases like this, specify whether 'user' or 'group'
725              should be used.
726
727       role unassign <role id> [from] [user|group] <username/group>
728              Remove a role from the specified user or group.  If there is a
729              user and a group with the same id and it is not specified which
730              should be used, the user will be prioritized. In cases like this,
731              specify whether 'user' or 'group' should be used.
732
733       user create <username> [<role id>]...
734              Create  an  ACL  for  the user specified and assign roles to the
735              user.
736
737       user delete <username>
738              Remove the user specified (and roles assigned will be unassigned
739              for the specified user).
740
741       group create <group> [<role id>]...
742              Create  an  ACL  for the group specified and assign roles to the
743              group.
744
745       group delete <group>
746              Remove the group specified (and roles  assigned  will  be  unas‐
747              signed for the specified group).
748
749       permission  add  <role  id>  ((read | write | deny) (xpath <query> | id
750       <id>))...
751              Add the listed permissions to the role specified.
752
753       permission delete <permission id>
754              Remove the permission id specified (permission id's  are  listed
755              in parenthesis after permissions in 'pcs acl' output).
756
757   property
758       [list|show [<property> | --all | --defaults]] | [--all | --defaults]
759              List  property  settings (default: lists configured properties).
760              If --defaults is specified, all property defaults will be shown; if
761              --all is specified, currently configured properties will be shown
762              along with unset properties and their defaults.  Run 'man pengine' and
763              'man crmd' to get a description of the properties.
764
765       set   [--force   |   --node  <nodename>]  <property>=[<value>]  [<prop‐
766       erty>=[<value>] ...]
767              Set specific pacemaker properties (if the value  is  blank  then
768              the  property is removed from the configuration).  If a property
769              is not recognized by pcs the property will not be created unless
770              the  --force  is used. If --node is used a node attribute is set
771              on the specified node.  Run 'man pengine' and 'man crmd' to  get
772              a description of the properties.
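
              For example, to set and then remove an illustrative property
              ('no-quorum-policy' is a standard pacemaker property, see
              'man pengine'):

              pcs property set no-quorum-policy=ignore
              pcs property set no-quorum-policy=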
773
774       unset [--node <nodename>] <property>
775              Remove  property  from  configuration  (or remove attribute from
776              specified node if --node is used).  Run 'man pengine'  and  'man
777              crmd' to get a description of the properties.
778
779   constraint
780       [list|show] [--full]
781              List  all current location, order and colocation constraints, if
782              --full is specified also list the constraint ids.
783
784       location <resource id> prefers <node[=score]>...
785              Create a location constraint on a resource to prefer the  speci‐
786              fied node and score (default score: INFINITY).
787
788       location <resource id> avoids <node[=score]>...
789              Create  a  location constraint on a resource to avoid the speci‐
790              fied node and score (default score: INFINITY).
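
              For example, to make the illustrative 'WebSite' resource
              prefer node1 with a score of 50 and avoid node2 entirely
              (resource and node names are examples only):

              pcs constraint location WebSite prefers node1=50
              pcs constraint location WebSite avoids node2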
791
792       location  <resource   id>   rule   [id=<rule   id>]   [resource-discov‐
793       ery=<option>]          [role=master|slave]         [constraint-id=<id>]
794       [score=<score>|score-attribute=<attribute>] <expression>
795              Creates a location rule on  the  specified  resource  where  the
796              expression looks like one of the following:
797                defined|not_defined <attribute>
798                <attribute>    lt|gt|lte|gte|eq|ne    [string|integer|version]
799              <value>
800                date gt|lt <date>
801                date in_range <date> to <date>
802                date in_range <date> to duration <duration options>...
803                date-spec <date spec options>...
804                <expression> and|or <expression>
805                ( <expression> )
806              where duration options and date spec options are: hours,  month‐
807              days, weekdays, yeardays, months, weeks, years, weekyears, moon.
808              If score is omitted it defaults to INFINITY. If  id  is  omitted
809              one  is generated from the resource id. If resource-discovery is
810              omitted it defaults to 'always'.
811
812       location [show [resources|nodes [node id|resource id]...] [--full]]
813              List all the current location  constraints,  if  'resources'  is
814              specified   location  constraints  are  displayed  per  resource
815              (default), if 'nodes' is specified location constraints are dis‐
816              played  per  node.  If specific nodes or resources are specified
817              then we only show information about them.  If --full  is  speci‐
818              fied show the internal constraint id's as well.
819
820       location  add  <id>  <resource  id>  <node>  <score>  [resource-discov‐
821       ery=<option>]
822              Add a location constraint with the appropriate id, resource  id,
823              node name and score. (For more advanced pacemaker usage.)
824
825       location remove <id> [<resource id> <node> <score>]
826              Remove  a  location constraint with the appropriate id, resource
827              id, node name and score. (For more advanced pacemaker usage.)
828
829       order [show] [--full]
830              List all current ordering constraints (if  --full  is  specified
831              show the internal constraint id's as well).
832
833       order [action] <resource id> then [action] <resource id> [options]
834              Add an ordering constraint specifying actions (start, stop, pro‐
835              mote, demote) and if no action is specified the  default  action
836              will  be  start.   Available  options  are  kind=Optional/Manda‐
837              tory/Serialize,  symmetrical=true/false,  require-all=true/false
838              and id=<constraint-id>.
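
              For example, to require the illustrative 'VirtualIP' resource
              to be started before 'WebSite':

              pcs constraint order start VirtualIP then start WebSite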
839
840       order  set  <resource1>  [resourceN]...  [options] [set <resourceX> ...
841       [options]] [setoptions [constraint_options]]
842              Create an  ordered  set  of  resources.  Available  options  are
843              sequential=true/false, require-all=true/false, action=start/pro‐
844              mote/demote/stop and role=Stopped/Started/Master/Slave.   Avail‐
845              able       constraint_options       are      id=<constraint-id>,
846              kind=Optional/Mandatory/Serialize and symmetrical=true/false.
847
848       order remove <resource1> [resourceN]...
849              Remove resource from any ordering constraint.
850
851       colocation [show] [--full]
852              List all current colocation constraints (if --full is  specified
853              show the internal constraint id's as well).
854
855       colocation  add [master|slave] <source resource id> with [master|slave]
856       <target resource id> [score] [options] [id=constraint-id]
857              Request <source resource> to run on the same  node  where  pace‐
858              maker  has  determined  <target  resource> should run.  Positive
859              values of score mean the resources should be  run  on  the  same
860              node,  negative  values  mean the resources should not be run on
861              the same node.  Specifying 'INFINITY' (or '-INFINITY')  for  the
862              score  forces <source resource> to run (or not run) with <target
863              resource> (score defaults to "INFINITY").  A role can be  master
864              or slave (if no role is specified, it defaults to 'started').
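
              For example, to keep the illustrative 'WebSite' resource on
              the same node as 'VirtualIP':

              pcs constraint colocation add WebSite with VirtualIP INFINITY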
865
866       colocation  set  <resource1>  [resourceN]... [options] [set <resourceX>
867       ... [options]] [setoptions [constraint_options]]
868              Create a colocation constraint with a  resource  set.  Available
869              options   are   sequential=true/false,   require-all=true/false,
870              action=start/promote/demote/stop  and  role=Stopped/Started/Mas‐
871              ter/Slave.  Available  constraint_options  are id, score, score-
872              attribute and score-attribute-mangle.
873
874       colocation remove <source resource id> <target resource id>
875              Remove colocation constraints with specified resources.
876
877       ticket [show] [--full]
878              List all current ticket constraints (if --full is specified show
879              the internal constraint id's as well).
880
881       ticket  add  <ticket>  [<role>]  <resource  id>  [<options>]  [id=<con‐
882       straint-id>]
883              Create a ticket constraint for <resource id>.  Available  option
884              is  loss-policy=fence/stop/freeze/demote.  A role can be master,
885              slave, started or stopped.
886
887       ticket set <resource1> [<resourceN>]...  [<options>]  [set  <resourceX>
888       ... [<options>]] setoptions <constraint_options>
889              Create  a  ticket  constraint  with  a  resource  set. Available
890              options   are   sequential=true/false,   require-all=true/false,
891              action=start/promote/demote/stop  and  role=Stopped/Started/Mas‐
892              ter/Slave.  Required  constraint  option   is   ticket=<ticket>.
893              Optional constraint options are id=<constraint-id> and loss-pol‐
894              icy=fence/stop/freeze/demote.
895
896       ticket remove <ticket> <resource id>
897              Remove all ticket constraints with <ticket> from <resource id>.
898
899       remove [constraint id]...
900              Remove constraint(s) or  constraint  rules  with  the  specified
901              id(s).
902
903       ref <resource>...
904              List constraints referencing specified resource.
905
906       rule   add   <constraint   id>   [id=<rule   id>]   [role=master|slave]
907       [score=<score>|score-attribute=<attribute>] <expression>
908              Add a rule to a constraint where the expression looks  like  one
909              of the following:
910                defined|not_defined <attribute>
911                <attribute>    lt|gt|lte|gte|eq|ne    [string|integer|version]
912              <value>
913                date gt|lt <date>
914                date in_range <date> to <date>
915                date in_range <date> to duration <duration options>...
916                date-spec <date spec options>...
917                <expression> and|or <expression>
918                ( <expression> )
919              where duration options and date spec options are: hours,  month‐
920              days, weekdays, yeardays, months, weeks, years, weekyears, moon.
921              If score is omitted it defaults to INFINITY. If id is omitted
922              one is generated from the constraint id.
923
924       rule remove <rule id>
925              Remove a rule if a rule id is specified, if rule is last rule in
926              its constraint, the constraint will be removed.
927
928   status
929       [status] [--full | --hide-inactive]
930              View all information about the  cluster  and  resources  (--full
931              provides    more   details,   --hide-inactive   hides   inactive
932              resources).
933
934       resources [<resource id> | --full | --groups | --hide-inactive]
935              Show all currently configured resources  or  if  a  resource  is
936              specified  show  the  options  for  the configured resource.  If
937              --full is specified, all configured  resource  options  will  be
938              displayed.   If  --groups  is  specified,  only show groups (and
939              their resources).  If --hide-inactive is  specified,  only  show
940              active resources.
941
942       groups View currently configured groups and their resources.
943
944       cluster
945              View current cluster status.
946
947       corosync
948              View current membership information as seen by corosync.
949
950       nodes [corosync|both|config]
951              View  current  status  of nodes from pacemaker. If 'corosync' is
952              specified, print nodes  currently  configured  in  corosync,  if
953              'both' is specified, print nodes from both corosync & pacemaker.
954              If 'config' is specified, print nodes from corosync &  pacemaker
955              configuration.
956
957       pcsd [<node>] ...
958              Show  the current status of pcsd on the specified nodes. When no
959              nodes are specified, status of all nodes is displayed.
960
961       xml    View xml version of status (output from crm_mon -r -1 -X).
962
963   config
964       [show] View full cluster configuration.
965
966       backup [filename]
967              Creates the tarball containing the cluster configuration  files.
968              If filename is not specified the standard output will be used.
969
970       restore [--local] [filename]
971              Restores  the  cluster configuration files on all nodes from the
972              backup.  If filename is not specified the standard input will be
973              used.   If  --local  is  specified only the files on the current
974              node will be restored.
975
976       checkpoint
977              List all available configuration checkpoints.
978
979       checkpoint view <checkpoint_number>
980              Show specified configuration checkpoint.
981
982       checkpoint restore <checkpoint_number>
983              Restore cluster configuration to specified checkpoint.
984
985       import-cman output=<filename> [input=<filename>] [--interactive]  [out‐
986       put-format=corosync.conf|cluster.conf] [dist=<dist>]
987              Converts  RHEL 6 (CMAN) cluster configuration to Pacemaker clus‐
988              ter configuration.  Converted configuration  will  be  saved  to
989              'output'  file.   To send the configuration to the cluster nodes
990              the 'pcs config restore' command can be used.  If  --interactive
991              is  specified  you  will  be prompted to solve incompatibilities
992              manually.  If no input  is  specified  /etc/cluster/cluster.conf
993              will be used.  You can force creation of output containing either
994              cluster.conf or corosync.conf using  the  output-format  option.
995              Optionally  you  can  specify  output  version by setting 'dist'
996              option, e.g. rhel,6.8 or redhat,7.3 or debian,7 or
997              ubuntu,trusty.  You can get the list of supported dist values by
998              running the "clufter --list-dists" command.  If  'dist'  is  not
999              specified,  it  defaults  to this node's version if that matches
1000              output-format, otherwise redhat,6.7 is used for cluster.conf and
1001              redhat,7.1 is used for corosync.conf.
1002
1003       import-cman  output=<filename>  [input=<filename>] [--interactive] out‐
1004       put-format=pcs-commands|pcs-commands-verbose [dist=<dist>]
1005              Converts RHEL 6 (CMAN) cluster configuration to a  list  of  pcs
1006              commands  which  recreates the same cluster as Pacemaker cluster
1007              when executed.  Commands will be saved to  'output'  file.   For
1008              other options see above.
1009
1010       export       pcs-commands|pcs-commands-verbose      [output=<filename>]
1011       [dist=<dist>]
1012              Creates a list of pcs commands which  upon  execution  recreates
1013              the  current  cluster  running  on  this node.  Commands will be
1014              saved to 'output' file or written to stdout if 'output'  is  not
1015              specified.   Use  pcs-commands to get a simple list of commands,
1016              whereas pcs-commands-verbose creates a list  including  comments
1017              and  debug  messages.  Optionally specify output version by set‐
1018              ting 'dist' option, e.g. rhel,6.8 or redhat,7.3 or debian,7 or
1019              ubuntu,trusty.  You can get the list of supported dist values by
1020              running the "clufter --list-dists" command.  If  'dist'  is  not
1021              specified, it defaults to this node's version.
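                  For example, to save the commands recreating the current
                  cluster to a file (the file name 'cluster_setup.sh' is
                  illustrative):
                  # pcs config export pcs-commands output=cluster_setup.sh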
1022
1023   pcsd
1024       certkey <certificate file> <key file>
1025              Load custom certificate and key files for use in pcsd.
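                  For example, assuming a certificate and key at illustrative
                  paths:
                  # pcs pcsd certkey /path/to/pcsd.crt /path/to/pcsd.key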
1026
1027       sync-certificates
1028              Sync pcsd certificates to all nodes found in the current
1029              corosync.conf file (cluster.conf on systems running Corosync
1030              1.x).  WARNING: This will restart pcsd daemon on the nodes.
1031
1032       clear-auth [--local] [--remote]
1033              Removes  all  system  tokens which allow pcs/pcsd on the current
1034              system  to  authenticate  with  remote  pcs/pcsd  instances  and
1035              vice-versa.  After this command is run this node will need to be
1036              re-authenticated with other nodes (using  'pcs  cluster  auth').
1037              Using --local only removes tokens used by local pcs (and pcsd if
1038              root) to connect to other pcsd instances; using --remote clears
1039              authentication  tokens  used by remote systems to connect to the
1040              local pcsd instance.
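                  For example, to clear the local tokens and then
                  re-authenticate (node names 'node1' and 'node2' are
                  illustrative):
                  # pcs pcsd clear-auth --local
                  # pcs cluster auth node1 node2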
1041
1042   node
1043       attribute [[<node>] [--name <name>] | <node> <name>=<value> ...]
1044              Manage node attributes.  If no parameters  are  specified,  show
1045              attributes  of  all  nodes.  If one parameter is specified, show
1046              attributes of specified node.   If  --name  is  specified,  show
1047              specified  attribute's value from all nodes.  If more parameters
1048              are specified, set attributes of specified node.  Attributes can
1049              be removed by setting an attribute without a value.
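                  For example, to set, show and remove an attribute (the node
                  name 'node1' and the attribute 'rack' are illustrative):
                  # pcs node attribute node1 rack=1
                  # pcs node attribute --name rack
                  # pcs node attribute node1 rack=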
1050
1051       maintenance [--all] | [<node>]...
1052              Put specified node(s) into maintenance mode.  If no node or
1053              options are specified, the current node will be put into main‐
1054              tenance mode.  If --all is specified, all nodes will be put
1055              into maintenance mode.
1056
1057       unmaintenance [--all] | [<node>]...
1058              Remove node(s) from maintenance mode.  If no node or options
1059              are specified, the current node will be removed from mainte‐
1060              nance mode.  If --all is specified, all nodes will be removed
1061              from maintenance mode.
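                  For example, to put one node into maintenance mode and take
                  it out again (the node name 'node2' is illustrative):
                  # pcs node maintenance node2
                  # pcs node unmaintenance node2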
1062
1063       standby [--all | <node>] [--wait[=n]]
1064              Put specified node into standby mode (the node specified will no
1065              longer be able to host resources).  If no node or options are
1066              specified, the current node will be put into standby mode; if
1067              --all is specified, all nodes will be put into standby mode.  If
1068              --wait is specified, pcs will wait up to 'n' seconds for the
1069              node(s) to be put into standby mode and then return 0 on success
1070              or 1 if the operation has not yet succeeded.  If 'n' is not
1071              specified it defaults to 60 minutes.
1072
1073       unstandby [--all | <node>] [--wait[=n]]
1074              Remove node from standby mode (the node specified will now be
1075              able to host resources).  If no node or options are specified,
1076              the current node will be removed from standby mode; if --all is
1077              specified, all nodes will be removed from standby mode.  If
1078              --wait is specified, pcs will wait up to 'n' seconds for the
1079              node(s) to be removed from standby mode and then return 0 on
1080              success or 1 if the operation has not yet succeeded.  If 'n' is
1081              not specified it defaults to 60 minutes.
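                  For example, to put one node into standby mode, waiting up
                  to 30 seconds for the change to take effect, and later
                  remove it from standby mode (the node name 'node3' is
                  illustrative):
                  # pcs node standby node3 --wait=30
                  # pcs node unstandby node3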
1082
1083       utilization [[<node>] [--name <name>] | <node> <name>=<value> ...]
1084              Add specified utilization options to specified node.  If node is
1085              not specified, shows utilization of all  nodes.   If  --name  is
1086              specified,  shows specified utilization value from all nodes. If
1087              utilization options are  not  specified,  shows  utilization  of
1088              specified node.  Utilization options must be in the format
1089              name=value, where the value is an integer.  Options may be
1090              removed by setting an option without a value.  Example: pcs
1091              node utilization node1 cpu=4 ram=
1092
1093   alert
1094       [config|show]
1095              Show all configured alerts.
1096
1097       create path=<path> [id=<alert-id>] [description=<description>] [options
1098       [<option>=<value>]...] [meta [<meta-option>=<value>]...]
1099              Define an alert handler with the specified path.  An id will be
1100              automatically generated if it is not specified.
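                  For example, assuming a handler script at an illustrative
                  path and an illustrative id:
                  # pcs alert create path=/path/to/alert_handler.sh id=my_alert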
1101
1102       update <alert-id>  [path=<path>]  [description=<description>]  [options
1103       [<option>=<value>]...] [meta [<meta-option>=<value>]...]
1104              Update existing alert handler with specified id.
1105
1106       remove <alert-id> ...
1107              Remove alert handlers with specified ids.
1108
1109       recipient  add  <alert-id>  value=<recipient-value> [id=<recipient-id>]
1110       [description=<description>]   [options   [<option>=<value>]...]   [meta
1111       [<meta-option>=<value>]...]
1112              Add new recipient to specified alert handler.
1113
1114       recipient  update  <recipient-id>  [value=<recipient-value>]  [descrip‐
1115       tion=<description>]  [options  [<option>=<value>]...]   [meta   [<meta-
1116       option>=<value>]...]
1117              Update an existing recipient identified by its id.
1118
1119       recipient remove <recipient-id> ...
1120              Remove specified recipients.
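                  For example, to add, update and remove a recipient for an
                  alert handler (the ids and the e-mail values are
                  illustrative):
                  # pcs alert recipient add my_alert value=admin@example.com \
                        id=my_recipient
                  # pcs alert recipient update my_recipient value=ops@example.com
                  # pcs alert recipient remove my_recipient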
1121

EXAMPLES

1123       Show all resources
1124              # pcs resource show
1125
1126       Show options specific to the 'VirtualIP' resource
1127              # pcs resource show VirtualIP
1128
1129       Create a new resource called 'VirtualIP' with options
1130              #    pcs   resource   create   VirtualIP   ocf:heartbeat:IPaddr2
1131              ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
1132
1133       Create the same 'VirtualIP' resource using the shortened agent name
1134              #  pcs  resource  create   VirtualIP   IPaddr2   ip=192.168.0.99
1135              cidr_netmask=32 nic=eth2 op monitor interval=30s
1136
1137       Change the ip address of VirtualIP and remove the nic option
1138              # pcs resource update VirtualIP ip=192.168.0.98 nic=
1139
1140       Delete the VirtualIP resource
1141              # pcs resource delete VirtualIP
1142
1143       Create  the  MyStonith  stonith  fence_virt device which can fence host
1144       'f1'
1145              # pcs stonith create MyStonith fence_virt pcmk_host_list=f1
1146
1147       Set the stonith-enabled property to false on the  cluster  (which  dis‐
1148       ables stonith)
1149              # pcs property set stonith-enabled=false
1150
1151
1152
1153pcs 0.9.155                      November 2016                          PCS(8)