Gluster(8)                       Gluster Inc.                       Gluster(8)

NAME

       gluster - Gluster Console Manager (command line utility)

SYNOPSIS

       gluster

       To run the program and display the gluster prompt:

       gluster [--remote-host=<gluster_node>] [--mode=script] [--xml]

       (or)

       To specify a command directly:

       gluster [commands] [options] [--remote-host=<gluster_node>]
       [--mode=script] [--xml]

DESCRIPTION

       The Gluster Console Manager is a command line utility for elastic
       volume management. You can run the gluster command on any export
       server. The command enables administrators to perform cloud
       operations, such as creating, expanding, shrinking, rebalancing,
       and migrating volumes without needing to schedule server downtime.

COMMANDS

   Volume Commands
        volume info [all|<VOLNAME>]
              Display information about all volumes, or the specified
              volume.

        volume list
              List all volumes in the cluster.

        volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]]
       [detail|clients|mem|inode|fd|callpool|tasks|client-list]
              Display the status of all volumes, or of the specified
              volume or brick.

        volume create <NEW-VOLNAME> [stripe <COUNT>] [[replica <COUNT>
       [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]]
       [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ...
       <TA-BRICK>
              Create a new volume of the specified type using the specified
              bricks and transport type (the default transport type is
              tcp). To create a volume with both transports (tcp and rdma),
              give 'transport tcp,rdma' as an option.

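              For example, a three-way replicated volume could be created
              as follows (the hostnames and brick paths are illustrative):

                     # gluster volume create test-vol replica 3 \
                           server1:/bricks/brick1 \
                           server2:/bricks/brick1 \
                           server3:/bricks/brick1
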
        volume delete <VOLNAME>
              Delete the specified volume.

        volume start <VOLNAME>
              Start the specified volume.

        volume stop <VOLNAME> [force]
              Stop the specified volume.

        volume set <VOLNAME> <OPTION> <PARAMETER> [<OPTION> <PARAMETER>] ...
              Set the volume options.

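              For example, to set one tunable on a volume (the volume name
              is illustrative; performance.cache-size is one of the
              documented volume options):

                     # gluster volume set test-vol performance.cache-size 256MB
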
        volume get <VOLNAME/all> <OPTION/all>
              Get the value of the given option, or of all options, for the
              specified volume or for all volumes. 'gluster volume get all
              all' retrieves all global options.

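              For example (the volume name is illustrative; nfs.disable is
              one of the documented volume options):

                     # gluster volume get test-vol nfs.disable
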
        volume reset <VOLNAME> [option] [force]
              Reset all reconfigured options, or the given option, to the
              default value.

        volume barrier <VOLNAME> {enable|disable}
              Barrier/unbarrier file operations on a volume.

        volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}
       {inode [range]|entry [basename]|posix [range]}
              Clear the locks held on the specified path.

        volume help
              Display help for the volume command.

   Brick Commands
        volume add-brick <VOLNAME> <NEW-BRICK> ...
              Add the specified brick(s) to the specified volume.

        volume remove-brick <VOLNAME> <BRICK> ...
              Remove the specified brick(s) from the specified volume.

              Note: If you remove a brick, the data stored in that brick
              will no longer be available. You can migrate data from one
              brick to another using the replace-brick command.

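              For example (the volume name and brick path are
              illustrative):

                     # gluster volume add-brick test-vol server4:/bricks/brick1
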
        volume reset-brick <VOLNAME> <SOURCE-BRICK> {{start} | {<NEW-BRICK>
       commit}}
              Bring down the specified source brick (start), or replace it
              with the new brick (commit).

        volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> commit force
              Replace the specified source brick with a new brick.

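              For example (hostnames and brick paths are illustrative):

                     # gluster volume replace-brick test-vol \
                           server1:/bricks/brick1 \
                           server1:/bricks/brick1-new commit force
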
        volume rebalance <VOLNAME> start
              Start rebalancing the specified volume.

        volume rebalance <VOLNAME> stop
              Stop rebalancing the specified volume.

        volume rebalance <VOLNAME> status
              Display the rebalance status of the specified volume.

   Log Commands
        volume log filename <VOLNAME> [BRICK] <DIRECTORY>
              Set the log directory for the corresponding volume/brick.

        volume log locate <VOLNAME> [BRICK]
              Locate the log file for the corresponding volume/brick.

        volume log rotate <VOLNAME> [BRICK]
              Rotate the log file for the corresponding volume/brick.

        volume profile <VOLNAME> {start|info [peek|incremental
       [peek]|cumulative|clear]|stop} [nfs]
              Profile operations on the volume. Once started, 'volume
              profile <VOLNAME> info' provides cumulative statistics of the
              FOPs performed.

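              For example (the volume name is illustrative):

                     # gluster volume profile test-vol start
                     # gluster volume profile test-vol info
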
        volume statedump <VOLNAME> [[nfs|quotad]
       [all|mem|iobuf|callpool|priv|fd|inode|history]... | [client
       <hostname:process-id>]]
              Dump the in-memory state of the specified process or of the
              bricks of the volume.

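              For example, to dump the state of all bricks of a volume (the
              volume name is illustrative):

                     # gluster volume statedump test-vol all
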
        volume sync <HOSTNAME> [all|<VOLNAME>]
              Sync the volume information from a peer.

   Peer Commands
        peer probe <HOSTNAME>
              Probe the specified peer. If the given <HOSTNAME> belongs to
              an already probed peer, the peer probe command will add the
              hostname to that peer if required.

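              For example (the hostname is illustrative):

                     # gluster peer probe server2
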
        peer detach <HOSTNAME>
              Detach the specified peer.

        peer status
              Display the status of peers.

        pool list
              List all the nodes in the pool (including localhost).

        peer help
              Display help for the peer command.

   Quota Commands
        volume quota <VOLNAME> enable
              Enable quota on the specified volume. This causes all the
              directories in the filesystem hierarchy to be accounted, and
              updated thereafter on each operation in the filesystem. To
              kick-start this accounting, a crawl is done over the
              hierarchy with an auxiliary client.

        volume quota <VOLNAME> disable
              Disable quota on the volume. This will disable enforcement
              and accounting in the filesystem. Any configured limits will
              be lost.

        volume quota <VOLNAME> limit-usage <PATH> <SIZE> [<PERCENT>]
              Set a usage limit on the given path. Any previously set limit
              is overridden by the new value. The soft limit can optionally
              be specified (as a percentage of the hard limit). If a soft
              limit percentage is not provided, the default soft limit
              value for the volume is used to decide the soft limit.

        volume quota <VOLNAME> limit-objects <PATH> <SIZE> [<PERCENT>]
              Set an inode limit on the given path. Any previously set
              limit is overridden by the new value. The soft limit can
              optionally be specified (as a percentage of the hard limit).
              If a soft limit percentage is not provided, the default soft
              limit value for the volume is used to decide the soft limit.

       NOTE: valid units of SIZE are: B, KB, MB, GB, TB, PB. If no unit is
       specified, the unit defaults to bytes.

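              For example, to set a 10 GB limit on a directory (the volume
              name and path are illustrative; an optional soft-limit
              percentage may follow the size):

                     # gluster volume quota test-vol limit-usage /data 10GB
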
        volume quota <VOLNAME> remove <PATH>
              Remove any usage limit configured on the specified directory.
              Note that if any limit is configured on the ancestors of this
              directory (previous directories along the path), they will
              still be honored and enforced.

        volume quota <VOLNAME> remove-objects <PATH>
              Remove any inode limit configured on the specified directory.
              Note that if any limit is configured on the ancestors of this
              directory (previous directories along the path), they will
              still be honored and enforced.

        volume quota <VOLNAME> list <PATH>
              List the usage and limits configured on the directory(s). If
              a path is given, only the limit configured on that directory
              (if any) is displayed, along with the directory's usage. If
              no path is given, usage and limits are displayed for all
              directories that have limits configured.

        volume quota <VOLNAME> list-objects <PATH>
              List the inode usage and inode limits configured on the
              directory(s). If a path is given, only the limit configured
              on that directory (if any) is displayed, along with the
              directory's inode usage. If no path is given, usage and
              limits are displayed for all directories that have limits
              configured.

        volume quota <VOLNAME> default-soft-limit <PERCENT>
              Set the percentage value for the default soft limit for the
              volume.

        volume quota <VOLNAME> soft-timeout <TIME>
              Set the soft timeout for the volume: the interval in which
              limits are retested before the soft limit is breached.

        volume quota <VOLNAME> hard-timeout <TIME>
              Set the hard timeout for the volume: the interval in which
              limits are retested after the soft limit is breached.

        volume quota <VOLNAME> alert-time <TIME>
              Set the frequency at which warning messages are logged (in
              the brick logs) once the soft limit is breached.

        volume inode-quota <VOLNAME> enable/disable
              Enable/disable inode-quota for <VOLNAME>.

        volume quota help
              Display help for the volume quota commands.

       NOTE: valid units of time and their symbols are: hours (h/hr),
       minutes (m/min), seconds (s/sec), weeks (w/wk), days (d/days).

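              For example, to retest limits every minute before the soft
              limit is breached (the volume name is illustrative):

                     # gluster volume quota test-vol soft-timeout 1m
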
   Geo-replication Commands
        Note: password-less ssh from the master node (where these commands
       are executed) to the slave node <SLAVE_HOST> is a prerequisite for
       the geo-replication commands.

        system:: execute gsec_create
              Generate the pem keys that are required for push-pem.

        volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> create
       [[ssh-port n] [[no-verify]|[push-pem]]] [force]
              Create a new geo-replication session from <MASTER_VOL> to the
              <SLAVE_HOST> host machine having <SLAVE_VOL>. Use ssh-port n
              if a custom SSH port is configured on the slave nodes. Use
              no-verify if the rsa-keys of the nodes in the master volume
              are distributed to the slave nodes through an external agent.
              Use push-pem to push the keys automatically.

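              For example (the volume and host names are illustrative):

                     # gluster volume geo-replication master-vol \
                           slave-host::slave-vol create push-pem
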
        volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
       {start|stop} [force]
              Start/stop the geo-replication session from <MASTER_VOL> to
              the <SLAVE_HOST> host machine having <SLAVE_VOL>.

        volume geo-replication [<MASTER_VOL> [<SLAVE_HOST>::<SLAVE_VOL>]]
       status [detail]
              Query the status of the geo-replication session from
              <MASTER_VOL> to the <SLAVE_HOST> host machine having
              <SLAVE_VOL>.

        volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
       {pause|resume} [force]
              Pause/resume the geo-replication session from <MASTER_VOL> to
              the <SLAVE_HOST> host machine having <SLAVE_VOL>.

        volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> delete
       [reset-sync-time]
              Delete the geo-replication session from <MASTER_VOL> to the
              <SLAVE_HOST> host machine having <SLAVE_VOL>. Optionally, you
              can also reset the sync time in case you need to resync the
              entire volume when the session is recreated.

        volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> config
       [[!]<option> [<value>]]
              View (when no option is provided) or set the configuration
              for this geo-replication session. Use "!<OPTION>" to reset
              option <OPTION> to its default value.

   Bitrot Commands
        volume bitrot <VOLNAME> {enable|disable}
              Enable/disable bitrot for volume <VOLNAME>.

        volume bitrot <VOLNAME> scrub-throttle {lazy|normal|aggressive}
              The scrub-throttle value is a measure of how fast or slow the
              scrubber scrubs the filesystem for volume <VOLNAME>.

        volume bitrot <VOLNAME> scrub-frequency
       {hourly|daily|weekly|biweekly|monthly}
              Set the scrub frequency for volume <VOLNAME>.

        volume bitrot <VOLNAME> scrub {pause|resume|status|ondemand}
              Pause/resume the scrubber. Upon resume, the scrubber
              continues where it left off. The status option shows the
              statistics of the scrubber. The ondemand option starts
              scrubbing immediately if the scrubber is neither paused nor
              already running.

        volume bitrot help
              Display help for the volume bitrot commands.

   Snapshot Commands
        snapshot create <snapname> <volname> [no-timestamp] [description
       <description>] [force]
              Create a snapshot of a GlusterFS volume. The user can provide
              a snap-name and a description to identify the snap. The snap
              will be created by appending a timestamp in GMT; the user can
              override this behaviour using the "no-timestamp" option. The
              description cannot be more than 1024 characters. To take a
              snapshot, the volume should be present and in the started
              state.

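              For example (the snapshot and volume names are illustrative):

                     # gluster snapshot create snap1 test-vol no-timestamp \
                           description "before upgrade"
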
        snapshot restore <snapname>
              Restore an already taken snapshot of a GlusterFS volume.
              Snapshot restore is an offline activity, therefore if the
              volume is online (in the started state) the restore operation
              will fail. Once the snapshot is restored it will not be
              available in the list of snapshots.

        snapshot clone <clonename> <snapname>
              Create a clone of a snapshot volume; the resulting volume
              will be a GlusterFS volume. The user can provide a
              clone-name. To take a clone, the snapshot should be present
              and in the activated state.

        snapshot delete ( all | <snapname> | volume <volname> )
              If snapname is specified, the mentioned snapshot is deleted.
              If volname is specified, all snapshots belonging to that
              particular volume are deleted. If the keyword *all* is used,
              all snapshots belonging to the system are deleted.

        snapshot list [volname]
              List all snapshots taken. If volname is provided, only the
              snapshots belonging to that particular volume are listed.

        snapshot info [snapname | (volume <volname>)]
              This command gives information such as the snapshot name, the
              snapshot UUID, the time at which the snapshot was created,
              the snap-volume-name, the number of snapshots already taken,
              the number of snapshots still available for that particular
              volume, and the state of the snapshot. If snapname is
              specified, info of the mentioned snapshot is displayed. If
              volname is specified, info of all snapshots belonging to that
              volume is displayed. If neither snapname nor volname is
              specified, info of all the snapshots present in the system is
              displayed.

        snapshot status [snapname | (volume <volname>)]
              This command gives the status of the snapshot. The details
              included are the snapshot brick path, the volume group (LVM
              details), the status of the snapshot bricks, the PID of the
              bricks, the data percentage filled for the particular volume
              group to which the snapshots belong, and the total size of
              the logical volume.

              If snapname is specified, the status of the mentioned
              snapshot is displayed. If volname is specified, the status of
              all snapshots belonging to that volume is displayed. If
              neither snapname nor volname is specified, the status of all
              the snapshots present in the system is displayed.

        snapshot config [volname] ([snap-max-hard-limit <count>]
       [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])
       | ([activate-on-create <enable|disable>])
              Display and set the snapshot config values.

              snapshot config without any keywords displays the snapshot
              config values of all volumes in the system. If volname is
              provided, the snapshot config values of that volume are
              displayed.

              The snapshot config command along with keywords can be used
              to change the existing config values. If volname is provided,
              the config value of that volume is changed; otherwise the
              system limit is set/changed.

              snap-max-soft-limit and auto-delete are global options that
              will be inherited by all volumes in the system and cannot be
              set on individual volumes.

              snap-max-hard-limit can be set globally, as well as per
              volume. The lower of the global system limit and the
              volume-specific limit becomes the "Effective
              snap-max-hard-limit" for a volume.

              snap-max-soft-limit is a percentage value, which is applied
              to the "Effective snap-max-hard-limit" to get the "Effective
              snap-max-soft-limit".

              When the auto-delete feature is enabled, then upon reaching
              the "Effective snap-max-soft-limit", with every successful
              snapshot creation the oldest snapshot will be deleted.

              When the auto-delete feature is disabled, then upon reaching
              the "Effective snap-max-soft-limit", the user gets a warning
              with every successful snapshot creation.

              When the auto-delete feature is disabled, then upon reaching
              the "Effective snap-max-hard-limit", further snapshot
              creations will not be allowed.

              activate-on-create is disabled by default. If you enable
              activate-on-create, further snapshots will be activated at
              the time of snapshot creation.

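              For example, to cap a single volume at 100 snapshots (the
              volume name is illustrative):

                     # gluster snapshot config test-vol snap-max-hard-limit 100
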
        snapshot activate <snapname>
              Activate the mentioned snapshot.

              Note: Whether a snapshot is activated at creation time is
              controlled by the activate-on-create config option (disabled
              by default).

        snapshot deactivate <snapname>
              Deactivate the mentioned snapshot.

        snapshot help
              Display help for the snapshot commands.

   Self-heal Commands
        volume heal <VOLNAME>
              Trigger index self-heal for the files that need healing.

        volume heal <VOLNAME> [enable | disable]
              Enable/disable the self-heal-daemon for volume <VOLNAME>.

        volume heal <VOLNAME> full
              Trigger self-heal on all the files.

        volume heal <VOLNAME> info
              List the files that need healing.

        volume heal <VOLNAME> info split-brain
              List the files which are in split-brain state.

        volume heal <VOLNAME> statistics
              List the crawl statistics.

        volume heal <VOLNAME> statistics heal-count
              Display the count of files to be healed.

        volume heal <VOLNAME> statistics heal-count replica
       <HOSTNAME:BRICKNAME>
              Display the number of files to be healed from the particular
              replica subvolume to which the brick <HOSTNAME:BRICKNAME>
              belongs.

        volume heal <VOLNAME> split-brain bigger-file <FILE>
              Perform healing of <FILE>, which is in split-brain, by
              choosing the bigger file in the replica as the source.

        volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>
              Select <HOSTNAME:BRICKNAME> as the source for all the files
              that are in split-brain in that replica and heal them.

        volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>
       <FILE>
              Select the split-brained <FILE> present in
              <HOSTNAME:BRICKNAME> as the source and complete the heal.

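              For example, to resolve all split-brained files in a replica
              using one brick as the source (the hostname and brick path
              are illustrative):

                     # gluster volume heal test-vol split-brain \
                           source-brick server1:/bricks/brick1
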
   Other Commands
        get-state [<daemon>] [[odir </path/to/output/dir/>] [file
       <filename>]] [detail|volumeoptions]
              Get the local state representation of the mentioned daemon
              and store the data at the provided output location.

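              For example (the output directory and filename are
              illustrative):

                     # gluster get-state odir /tmp file gluster-state
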
        help  Display the command options.

        quit  Exit the gluster command line interface.

FILES

       /var/lib/glusterd/*

SEE ALSO

       fusermount(1), mount.glusterfs(8), glusterfs(8), glusterd(8)

COPYRIGHT
       Copyright(c) 2006-2011 Gluster, Inc. <http://www.gluster.com>

07 March 2011            Gluster command line utility               Gluster(8)