Gluster(8)                       Gluster Inc.                       Gluster(8)

NAME

       gluster - Gluster Console Manager (command line utility)

SYNOPSIS

       gluster

       To run the program and display the gluster prompt:

       gluster [--remote-host=<gluster_node>] [--mode=script] [--xml]

       (or)

       To specify a command directly:

       gluster [commands] [options] [--remote-host=<gluster_node>]
       [--mode=script] [--xml]

DESCRIPTION

       The Gluster Console Manager is a command line utility for elastic
       volume management. You can run the gluster command on any export
       server. The command enables administrators to perform cloud
       operations, such as creating, expanding, shrinking, rebalancing, and
       migrating volumes without needing to schedule server downtime.

COMMANDS

   Volume Commands
        volume info [all|<VOLNAME>]
              Display information about all volumes, or the specified volume.

        volume list
              List all volumes in the cluster.

        volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad|tierd]]
       [detail|clients|mem|inode|fd|callpool|tasks|client-list]
              Display the status of all volumes, or of the specified
              volume(s)/brick.

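              For example, to show detailed status for one volume (the
              volume name is illustrative):

                     # gluster volume status test-volume detail
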
        volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
       [disperse [<COUNT>]] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>]
       <NEW-BRICK> ...
              Create a new volume of the specified type using the specified
              bricks and transport type (the default transport type is tcp).
              To create a volume with both transports (tcp and rdma), give
              'transport tcp,rdma' as an option.

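              For example, to create a two-way replicated volume (hostnames
              and brick paths are illustrative):

                     # gluster volume create test-volume replica 2 \
                       server1:/exports/brick1 server2:/exports/brick1
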
        volume delete <VOLNAME>
              Delete the specified volume.

        volume start <VOLNAME>
              Start the specified volume.

        volume stop <VOLNAME> [force]
              Stop the specified volume.

        volume set <VOLNAME> <OPTION> <PARAMETER> [<OPTION> <PARAMETER>] ...
              Set the volume options.

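              For example, to tune one option (the option name and value are
              illustrative; consult your release's documentation for the
              available tunables):

                     # gluster volume set test-volume performance.cache-size 256MB
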
        volume get <VOLNAME/all> <OPTION/all>
              Get the value of a given option, or of all options, for the
              volume <VOLNAME>. Use 'gluster volume get all all' to list all
              global options.

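              For example, to list every option value in effect on a volume
              (the volume name is illustrative):

                     # gluster volume get test-volume all
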
        volume reset <VOLNAME> [option] [force]
              Reset all the reconfigured options.

        volume barrier <VOLNAME> {enable|disable}
              Barrier/unbarrier file operations on a volume.

        volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}
       {inode [range]|entry [basename]|posix [range]}
              Clear locks held on the given path.

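              For example, to clear all granted inode locks held on the
              volume root (the path and lock kind are illustrative):

                     # gluster volume clear-locks test-volume / kind granted inode
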
        volume help
              Display help for the volume command.

   Brick Commands
        volume add-brick <VOLNAME> <NEW-BRICK> ...
              Add the specified brick to the specified volume.

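              For example, to grow a volume by one brick (hostname and path
              are illustrative):

                     # gluster volume add-brick test-volume server3:/exports/brick1
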
        volume remove-brick <VOLNAME> <BRICK> ...
              Remove the specified brick from the specified volume.

              Note: If you remove a brick, the data stored on that brick
              will no longer be available. You can migrate data from one
              brick to another using the replace-brick option.

        volume reset-brick <VOLNAME> <SOURCE-BRICK> {{start} | {<NEW-BRICK>
       commit}}
              Bring down or replace the specified source brick with the new
              brick.

        volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> commit force
              Replace the specified source brick with a new brick.

        volume rebalance <VOLNAME> start
              Start rebalancing the specified volume.

        volume rebalance <VOLNAME> stop
              Stop rebalancing the specified volume.

        volume rebalance <VOLNAME> status
              Display the rebalance status of the specified volume.

   Log Commands
        volume log filename <VOLNAME> [BRICK] <DIRECTORY>
              Set the log directory for the corresponding volume/brick.

        volume log locate <VOLNAME> [BRICK]
              Locate the log file for the corresponding volume/brick.

        volume log rotate <VOLNAME> [BRICK]
              Rotate the log file for the corresponding volume/brick.

        volume profile <VOLNAME> {start|info [peek|incremental
       [peek]|cumulative|clear]|stop} [nfs]
              Profile operations on the volume. Once started, 'volume
              profile <VOLNAME> info' provides cumulative statistics of the
              FOPs performed.

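              For example, to start profiling and then read the collected
              statistics (the volume name is illustrative):

                     # gluster volume profile test-volume start
                     # gluster volume profile test-volume info
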
        volume statedump <VOLNAME> [[nfs|quotad]
       [all|mem|iobuf|callpool|priv|fd|inode|history]... | [client
       <hostname:process-id>]]
              Dump the in-memory state of the specified process or of the
              bricks of the volume.

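              For example, to dump all available state for the volume's
              brick processes (the volume name is illustrative):

                     # gluster volume statedump test-volume all
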
        volume sync <HOSTNAME> [all|<VOLNAME>]
              Sync the volume information from a peer.

   Peer Commands
        peer probe <HOSTNAME>
              Probe the specified peer. If the given <HOSTNAME> belongs to
              an already-probed peer, the peer probe command adds the
              hostname to that peer if required.

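              For example, to add a new node to the trusted storage pool
              (the hostname is illustrative):

                     # gluster peer probe server2
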
        peer detach <HOSTNAME>
              Detach the specified peer.

        peer status
              Display the status of peers.

        pool list
              List all the nodes in the pool (including localhost).

        peer help
              Display help for the peer command.

   Tier Commands
        volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>...
              Attach a tier of the specified type to an existing volume,
              using the specified bricks.

        volume tier <VOLNAME> start [force]
              Start the tier service for <VOLNAME>.

        volume tier <VOLNAME> status
              Display statistics on data migration between the hot and cold
              tiers.

        volume tier <VOLNAME> stop [force]
              Stop the tier service for <VOLNAME>.

        volume tier <VOLNAME> detach start
              Begin detaching the hot tier from the volume. Data will be
              moved from the hot tier to the cold tier.

        volume tier <VOLNAME> detach commit [force]
              Commit detaching the hot tier from the volume. The volume will
              revert to its original state before the hot tier was attached.

        volume tier <VOLNAME> detach status
              Check the status of data movement from the hot to the cold
              tier.

        volume tier <VOLNAME> detach stop
              Stop detaching the hot tier from the volume.

   Quota Commands
        volume quota <VOLNAME> enable
              Enable quota on the specified volume. This causes all the
              directories in the filesystem hierarchy to be accounted, and
              updated thereafter on each operation in the filesystem. To
              kick-start this accounting, a crawl is done over the hierarchy
              with an auxiliary client.

        volume quota <VOLNAME> disable
              Disable quota on the volume. This disables enforcement and
              accounting in the filesystem. Any configured limits will be
              lost.

        volume quota <VOLNAME> limit-usage <PATH> <SIZE> [<PERCENT>]
              Set a usage limit on the given path. Any previously set limit
              is overridden by the new value. The soft limit can optionally
              be specified (as a percentage of the hard limit). If no soft
              limit percentage is provided, the volume's default soft limit
              value is used to decide the soft limit.

        volume quota <VOLNAME> limit-objects <PATH> <SIZE> [<PERCENT>]
              Set an inode limit on the given path. Any previously set limit
              is overridden by the new value. The soft limit can optionally
              be specified (as a percentage of the hard limit). If no soft
              limit percentage is provided, the volume's default soft limit
              value is used to decide the soft limit.

       NOTE: valid units of SIZE are: B, KB, MB, GB, TB, PB. If no unit is
       specified, the unit defaults to bytes.

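       For example, to cap a directory at 10 GB (the volume name and path
       are illustrative):

              # gluster volume quota test-volume limit-usage /data 10GB
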
        volume quota <VOLNAME> remove <PATH>
              Remove any usage limit configured on the specified directory.
              Note that if any limit is configured on the ancestors of this
              directory (previous directories along the path), they will
              still be honored and enforced.

        volume quota <VOLNAME> remove-objects <PATH>
              Remove any inode limit configured on the specified directory.
              Note that if any limit is configured on the ancestors of this
              directory (previous directories along the path), they will
              still be honored and enforced.

        volume quota <VOLNAME> list <PATH>
              List the usage and limits configured on directories. If a path
              is given, only the limit configured on that directory (if any)
              is displayed, along with the directory's usage. If no path is
              given, usage and limits are displayed for all directories that
              have limits configured.

        volume quota <VOLNAME> list-objects <PATH>
              List the inode usage and inode limits configured on
              directories. If a path is given, only the limit configured on
              that directory (if any) is displayed, along with the
              directory's inode usage. If no path is given, usage and limits
              are displayed for all directories that have limits configured.

        volume quota <VOLNAME> default-soft-limit <PERCENT>
              Set the percentage value of the default soft limit for the
              volume.

        volume quota <VOLNAME> soft-timeout <TIME>
              Set the soft timeout for the volume: the interval at which
              limits are retested before the soft limit is breached.

        volume quota <VOLNAME> hard-timeout <TIME>
              Set the hard timeout for the volume: the interval at which
              limits are retested after the soft limit is breached.

        volume quota <VOLNAME> alert-time <TIME>
              Set the frequency at which warning messages are logged (in the
              brick logs) once the soft limit is breached.

        volume inode-quota <VOLNAME> enable/disable
              Enable/disable inode-quota for <VOLNAME>.

        volume quota help
              Display help for the volume quota commands.

       NOTE: valid units of time and their symbols are: hours (h/hr),
       minutes (m/min), seconds (s/sec), weeks (w/wk), days (d/days).

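       For example, to retest limits every 30 seconds until the soft limit
       is reached (the volume name is illustrative):

              # gluster volume quota test-volume soft-timeout 30sec
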
   Geo-replication Commands
        Note: password-less SSH from the master node (where these commands
       are executed) to the slave node <SLAVE_HOST> is a prerequisite for
       the geo-replication commands.

        system:: execute gsec_create
              Generate the pem keys that are required for push-pem.

        volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> create
       [[ssh-port n] [[no-verify]|[push-pem]]] [force]
              Create a new geo-replication session from <MASTER_VOL> to the
              <SLAVE_HOST> host machine having <SLAVE_VOL>. Use ssh-port n
              if a custom SSH port is configured on the slave nodes. Use
              no-verify if the RSA keys of the nodes in the master volume
              are distributed to the slave nodes through an external agent.
              Use push-pem to push the keys automatically.

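              For example, to create a session, pushing the pem keys
              automatically (volume and host names are illustrative):

                     # gluster volume geo-replication master-vol \
                       slave-host::slave-vol create push-pem
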
        volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
       {start|stop} [force]
              Start/stop the geo-replication session from <MASTER_VOL> to
              the <SLAVE_HOST> host machine having <SLAVE_VOL>.

        volume geo-replication [<MASTER_VOL> [<SLAVE_HOST>::<SLAVE_VOL>]]
       status [detail]
              Query the status of the geo-replication session from
              <MASTER_VOL> to the <SLAVE_HOST> host machine having
              <SLAVE_VOL>.

        volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
       {pause|resume} [force]
              Pause/resume the geo-replication session from <MASTER_VOL> to
              the <SLAVE_HOST> host machine having <SLAVE_VOL>.

        volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> delete
       [reset-sync-time]
              Delete the geo-replication session from <MASTER_VOL> to the
              <SLAVE_HOST> host machine having <SLAVE_VOL>. Optionally, you
              can also reset the sync time if you need to resync the entire
              volume when the session is recreated.

        volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> config
       [[!]<option> [<value>]]
              View (when no option is provided) or set the configuration for
              this geo-replication session. Use "!<OPTION>" to reset option
              <OPTION> to its default value.

   Bitrot Commands
        volume bitrot <VOLNAME> {enable|disable}
              Enable/disable bitrot for volume <VOLNAME>.

        volume bitrot <VOLNAME> scrub-throttle {lazy|normal|aggressive}
              The scrub-throttle value is a measure of how fast or slow the
              scrubber scrubs the filesystem for volume <VOLNAME>.

        volume bitrot <VOLNAME> scrub-frequency
       {hourly|daily|weekly|biweekly|monthly}
              Set the scrub frequency for volume <VOLNAME>.

        volume bitrot <VOLNAME> scrub {pause|resume|status|ondemand}
              Pause/resume the scrubber. Upon resume, the scrubber continues
              where it left off. The status option shows the scrubber's
              statistics. The ondemand option starts scrubbing immediately
              if the scrubber is neither paused nor already running.

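              For example, to trigger an immediate scrub (the volume name is
              illustrative):

                     # gluster volume bitrot test-volume scrub ondemand
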
        volume bitrot help
              Display help for the volume bitrot commands.

   Snapshot Commands
        snapshot create <snapname> <volname> [no-timestamp] [description
       <description>] [force]
              Create a snapshot of a GlusterFS volume. The user can provide
              a snap-name and a description to identify the snapshot. By
              default the snapshot is created by appending a timestamp in
              GMT; the user can override this behaviour using the
              "no-timestamp" option. The description cannot be more than
              1024 characters. To take a snapshot, the volume should be
              present and in the started state.

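              For example, to take a snapshot without the timestamp suffix
              (snapshot and volume names are illustrative):

                     # gluster snapshot create snap1 test-volume \
                       no-timestamp description "before upgrade"
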
        snapshot restore <snapname>
              Restore an already taken snapshot of a GlusterFS volume.
              Snapshot restore is an offline activity; if the volume is
              online (in the started state), the restore operation will
              fail. Once the snapshot is restored, it will no longer be
              available in the list of snapshots.

        snapshot clone <clonename> <snapname>
              Create a clone of a snapshot volume; the resulting volume will
              be a GlusterFS volume. The user can provide a clone-name. To
              take a clone, the snapshot should be present and in the
              activated state.

        snapshot delete ( all | <snapname> | volume <volname> )
              If snapname is specified, the mentioned snapshot is deleted.
              If volname is specified, all snapshots belonging to that
              particular volume are deleted. If the keyword *all* is used,
              all snapshots belonging to the system are deleted.

        snapshot list [volname]
              List all snapshots taken. If volname is provided, only the
              snapshots belonging to that particular volume are listed.

        snapshot info [snapname | (volume <volname>)]
              This command gives information such as the snapshot name,
              snapshot UUID, and the time at which the snapshot was created,
              and it lists the snap-volume-name, the number of snapshots
              already taken, the number of snapshots still available for
              that particular volume, and the state of the snapshot. If
              snapname is specified, info for the mentioned snapshot is
              displayed. If volname is specified, info for all snapshots
              belonging to that volume is displayed. If neither snapname nor
              volname is specified, info for all the snapshots present in
              the system is displayed.

        snapshot status [snapname | (volume <volname>)]
              This command gives the status of the snapshot. The details
              included are the snapshot brick path, the volume group (LVM
              details), the status of the snapshot bricks, the PID of the
              bricks, the data percentage filled for the volume group to
              which the snapshots belong, and the total size of the logical
              volume.

              If snapname is specified, the status of the mentioned snapshot
              is displayed. If volname is specified, the status of all
              snapshots belonging to that volume is displayed. If neither
              snapname nor volname is specified, the status of all the
              snapshots present in the system is displayed.

        snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-
       soft-limit <percent>]) | ([auto-delete <enable|disable>]) |
       ([activate-on-create <enable|disable>])
              Display and set the snapshot config values.

              snapshot config without any keywords displays the snapshot
              config values of all volumes in the system. If volname is
              provided, the snapshot config values of that volume are
              displayed.

              The snapshot config command along with keywords can be used to
              change the existing config values. If volname is provided, the
              config value of that volume is changed; otherwise it sets or
              changes the system limit.

              snap-max-soft-limit and auto-delete are global options that
              are inherited by all volumes in the system and cannot be set
              on individual volumes.

              snap-max-hard-limit can be set globally as well as per volume.
              The lower of the global system limit and the volume-specific
              limit becomes the "Effective snap-max-hard-limit" for a
              volume.

              snap-max-soft-limit is a percentage value, which is applied to
              the "Effective snap-max-hard-limit" to get the "Effective
              snap-max-soft-limit".

              When the auto-delete feature is enabled, then upon reaching
              the "Effective snap-max-soft-limit", with every successful
              snapshot creation the oldest snapshot is deleted.

              When the auto-delete feature is disabled, then upon reaching
              the "Effective snap-max-soft-limit", the user gets a warning
              with every successful snapshot creation.

              When the auto-delete feature is disabled, then upon reaching
              the "Effective snap-max-hard-limit", further snapshot
              creations are not allowed.

              activate-on-create is disabled by default. If you enable
              activate-on-create, further snapshots will be activated at the
              time of snapshot creation.

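              For example, to allow at most 100 snapshots of one volume (the
              volume name is illustrative):

                     # gluster snapshot config test-volume snap-max-hard-limit 100
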
        snapshot activate <snapname>
              Activate the mentioned snapshot.

              Note: By default the snapshot is activated during snapshot
              creation.

        snapshot deactivate <snapname>
              Deactivate the mentioned snapshot.

        snapshot help
              Display help for the snapshot commands.

   Self-heal Commands
        volume heal <VOLNAME>
              Trigger index self-heal for the files that need healing.

        volume heal <VOLNAME> [enable | disable]
              Enable/disable the self-heal daemon for volume <VOLNAME>.

        volume heal <VOLNAME> full
              Trigger self-heal on all the files.

        volume heal <VOLNAME> info
              List the files that need healing.

        volume heal <VOLNAME> info split-brain
              List the files which are in split-brain state.

        volume heal <VOLNAME> statistics
              List the crawl statistics.

        volume heal <VOLNAME> statistics heal-count
              Display the count of files to be healed.

        volume heal <VOLNAME> statistics heal-count replica
       <HOSTNAME:BRICKNAME>
              Display the number of files to be healed from the particular
              replica subvolume to which the brick <HOSTNAME:BRICKNAME>
              belongs.

        volume heal <VOLNAME> split-brain bigger-file <FILE>
              Perform healing of <FILE>, which is in split-brain, by
              choosing the bigger file in the replica as the source.

        volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>
              Select <HOSTNAME:BRICKNAME> as the source for all the files
              that are in split-brain in that replica and heal them.

        volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>
       <FILE>
              Select the split-brained <FILE> present in
              <HOSTNAME:BRICKNAME> as the source and complete the heal.

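              For example, to resolve a split-brained file by picking one
              brick's copy as the source (the brick and the file path, given
              relative to the volume root, are illustrative):

                     # gluster volume heal test-volume split-brain \
                       source-brick server1:/exports/brick1 /dir/file
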
   Other Commands
        get-state [<daemon>] [[odir </path/to/output/dir/>] [file
       <filename>]] [detail|volumeoptions]
              Get the local state representation of the mentioned daemon and
              store the data at the provided path.

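              For example, to dump glusterd's local state to a named file
              (the output directory and filename are illustrative):

                     # gluster get-state glusterd odir /var/run/gluster \
                       file gluster-state
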
        help  Display the command options.

        quit  Exit the gluster command line interface.

FILES

       /var/lib/glusterd/*

SEE ALSO

       fusermount(1), mount.glusterfs(8), glusterfs(8), glusterd(8)

       Copyright(c) 2006-2011 Gluster, Inc. <http://www.gluster.com>

07 March 2011            Gluster command line utility               Gluster(8)