Gluster(8)                       Gluster Inc.                       Gluster(8)


NAME
       gluster - Gluster Console Manager (command line utility)

SYNOPSIS
       gluster

       To run the program and display the gluster prompt:

       gluster [--remote-host=<gluster_node>] [--mode=script] [--xml]

       (or)

       To specify a command directly:

       gluster [commands] [options] [--remote-host=<gluster_node>]
               [--mode=script] [--xml]

DESCRIPTION
       The Gluster Console Manager is a command line utility for elastic
       volume management. You can run the gluster command on any export
       server. The command enables administrators to perform cloud
       operations, such as creating, expanding, shrinking, rebalancing, and
       migrating volumes without needing to schedule server downtime.

COMMANDS
   Volume Commands
       volume info [all|<VOLNAME>]
              Display information about all volumes, or about the specified
              volume.

       volume list
              List all volumes in the cluster.

       volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad|tierd]]
       [detail|clients|mem|inode|fd|callpool|tasks|client-list]
              Display the status of all volumes, or of the specified
              volume(s)/brick.

       volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
       [disperse [<COUNT>]] [redundancy <COUNT>] [transport
       <tcp|rdma|tcp,rdma>] <NEW-BRICK> ...
              Create a new volume of the specified type using the specified
              bricks and transport type (the default transport type is
              tcp). To create a volume with both transports (tcp and rdma),
              give 'transport tcp,rdma' as an option.

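              For example, a minimal sketch assuming two hypothetical
              servers (server1, server2) that each export a brick
              directory:

                     # gluster volume create test-volume replica 2 \
                       transport tcp server1:/exports/brick1 \
                       server2:/exports/brick1
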
       volume delete <VOLNAME>
              Delete the specified volume.

       volume start <VOLNAME>
              Start the specified volume.

       volume stop <VOLNAME> [force]
              Stop the specified volume.

       volume set <VOLNAME> <OPTION> <PARAMETER> [<OPTION> <PARAMETER>] ...
              Set the volume options.

       volume get <VOLNAME|all> <OPTION|all>
              Get the value of all options, or of the given option, for the
              volume <VOLNAME>, or for all volumes. 'gluster volume get all
              all' retrieves all global options.

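              For example, assuming a hypothetical volume test-volume, set
              an option and read it back (performance.cache-size is one of
              the standard volume options):

                     # gluster volume set test-volume performance.cache-size 256MB
                     # gluster volume get test-volume performance.cache-size
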
       volume reset <VOLNAME> [option] [force]
              Reset all the reconfigured options.

       volume barrier <VOLNAME> {enable|disable}
              Barrier/unbarrier file operations on a volume.

       volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}
       {inode [range]|entry [basename]|posix [range]}
              Clear locks held on the given path.

       volume help
              Display help for the volume command.

   Brick Commands
       volume add-brick <VOLNAME> <NEW-BRICK> ...
              Add the specified brick(s) to the specified volume.

       volume remove-brick <VOLNAME> <BRICK> ...
              Remove the specified brick(s) from the specified volume.

              Note: If you remove a brick, the data stored on that brick
              will no longer be available. You can migrate data from one
              brick to another using the replace-brick option.

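              For example, a sketch that expands a hypothetical distributed
              volume dist-volume with a brick on a hypothetical server3:

                     # gluster volume add-brick dist-volume server3:/exports/brick1
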
       volume reset-brick <VOLNAME> <SOURCE-BRICK> {{start} | {<NEW-BRICK>
       commit}}
              Brings down the specified source brick, or replaces it with
              the new brick.

       volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> commit
       force
              Replace the specified source brick with a new brick.

       volume rebalance <VOLNAME> start
              Start rebalancing the specified volume.

       volume rebalance <VOLNAME> stop
              Stop rebalancing the specified volume.

       volume rebalance <VOLNAME> status
              Display the rebalance status of the specified volume.

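              A typical sequence for the hypothetical volume dist-volume,
              run after adding bricks:

                     # gluster volume rebalance dist-volume start
                     # gluster volume rebalance dist-volume status
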
   Log Commands
       volume log filename <VOLNAME> [BRICK] <DIRECTORY>
              Set the log directory for the corresponding volume/brick.

       volume log locate <VOLNAME> [BRICK]
              Locate the log file for the corresponding volume/brick.

       volume log rotate <VOLNAME> [BRICK]
              Rotate the log file for the corresponding volume/brick.

       volume profile <VOLNAME> {start|info [peek|incremental
       [peek]|cumulative|clear]|stop} [nfs]
              Profile operations on the volume. Once started, 'volume
              profile <VOLNAME> info' provides cumulative statistics of the
              FOPs performed.

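              For example, to collect and inspect per-FOP statistics for
              the hypothetical volume test-volume:

                     # gluster volume profile test-volume start
                     # gluster volume profile test-volume info
                     # gluster volume profile test-volume stop
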
       volume statedump <VOLNAME> [[nfs|quotad]
       [all|mem|iobuf|callpool|priv|fd|inode|history]... | [client
       <hostname:process-id>]]
              Dump the in-memory state of the specified process, or of the
              bricks of the volume.

       volume sync <HOSTNAME> [all|<VOLNAME>]
              Sync the volume information from a peer.

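              For example, to dump the full state of the bricks of the
              hypothetical volume test-volume:

                     # gluster volume statedump test-volume all
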
   Peer Commands
       peer probe <HOSTNAME>
              Probe the specified peer. If the given <HOSTNAME> belongs to
              an already probed peer, the peer probe command adds the
              hostname to that peer if required.

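              For example, to build a trusted pool out of two more
              hypothetical nodes and verify the result:

                     # gluster peer probe server2
                     # gluster peer probe server3
                     # gluster peer status
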
       peer detach <HOSTNAME>
              Detach the specified peer.

       peer status
              Display the status of peers.

       pool list
              List all the nodes in the pool (including localhost).

       peer help
              Display help for the peer command.

   Tier Commands
       volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>...
              Attach a tier of the specified type to an existing volume,
              using the specified bricks.

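              For example, a sketch that attaches a replicated hot tier
              backed by hypothetical SSD bricks on server1 and server2 to
              the volume test-volume:

                     # gluster volume tier test-volume attach replica 2 \
                       server1:/ssd/brick1 server2:/ssd/brick1
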
       volume tier <VOLNAME> start [force]
              Start the tier service for <VOLNAME>.

       volume tier <VOLNAME> status
              Display statistics on data migration between the hot and cold
              tiers.

       volume tier <VOLNAME> stop [force]
              Stop the tier service for <VOLNAME>.

       volume tier <VOLNAME> detach start
              Begin detaching the hot tier from the volume. Data will be
              moved from the hot tier to the cold tier.

       volume tier <VOLNAME> detach commit [force]
              Commit detaching the hot tier from the volume. The volume
              will revert to its original state from before the hot tier
              was attached.

       volume tier <VOLNAME> detach status
              Check the status of data movement from the hot to the cold
              tier.

       volume tier <VOLNAME> detach stop
              Stop detaching the hot tier from the volume.

   Quota Commands
       volume quota <VOLNAME> enable
              Enable quota on the specified volume. This causes all the
              directories in the filesystem hierarchy to be accounted, and
              updated thereafter on each operation in the filesystem. To
              kick-start this accounting, a crawl is done over the
              hierarchy with an auxiliary client.

       volume quota <VOLNAME> disable
              Disable quota on the volume. This disables enforcement and
              accounting in the filesystem. Any configured limits will be
              lost.

       volume quota <VOLNAME> limit-usage <PATH> <SIZE> [<PERCENT>]
              Set a usage limit on the given path. Any previously set limit
              is overridden by the new value. The soft limit can optionally
              be specified (as a percentage of the hard limit). If no soft
              limit percentage is provided, the volume's default soft limit
              value is used to decide the soft limit.

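              For example, to enable quota on the hypothetical volume
              test-volume and cap its /projects directory at 10GB with an
              80% soft limit:

                     # gluster volume quota test-volume enable
                     # gluster volume quota test-volume limit-usage /projects 10GB 80%
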
       volume quota <VOLNAME> limit-objects <PATH> <SIZE> [<PERCENT>]
              Set an inode limit on the given path. Any previously set
              limit is overridden by the new value. The soft limit can
              optionally be specified (as a percentage of the hard limit).
              If no soft limit percentage is provided, the volume's default
              soft limit value is used to decide the soft limit.

       NOTE: valid units of SIZE are: B, KB, MB, GB, TB, PB. If no unit is
       specified, the unit defaults to bytes.

       volume quota <VOLNAME> remove <PATH>
              Remove any usage limit configured on the specified directory.
              Note that if any limit is configured on the ancestors of this
              directory (previous directories along the path), it will
              still be honored and enforced.

       volume quota <VOLNAME> remove-objects <PATH>
              Remove any inode limit configured on the specified directory.
              Note that if any limit is configured on the ancestors of this
              directory (previous directories along the path), it will
              still be honored and enforced.

       volume quota <VOLNAME> list <PATH>
              Lists the usage and limits configured on the directory(s). If
              a path is given, only the limit configured on that directory
              (if any) is displayed, along with the directory's usage. If
              no path is given, usage and limits are displayed for all
              directories that have limits configured.

       volume quota <VOLNAME> list-objects <PATH>
              Lists the inode usage and inode limits configured on the
              directory(s). If a path is given, only the limit configured
              on that directory (if any) is displayed, along with the
              directory's inode usage. If no path is given, usage and
              limits are displayed for all directories that have limits
              configured.

       volume quota <VOLNAME> default-soft-limit <PERCENT>
              Set the percentage value for the default soft limit for the
              volume.

       volume quota <VOLNAME> soft-timeout <TIME>
              Set the soft timeout for the volume: the interval at which
              limits are retested before the soft limit is breached.

       volume quota <VOLNAME> hard-timeout <TIME>
              Set the hard timeout for the volume: the interval at which
              limits are retested after the soft limit is breached.

       volume quota <VOLNAME> alert-time <TIME>
              Set the frequency at which warning messages are logged (in
              the brick logs) once the soft limit is breached.

       volume inode-quota <VOLNAME> enable/disable
              Enable/disable inode-quota for <VOLNAME>.

       volume quota help
              Display help for the volume quota commands.

       NOTE: valid units of time and their symbols are: seconds (s/sec),
       minutes (m/min), hours (h/hr), days (d/days), weeks (w/wk).

   Geo-replication Commands
       Note: password-less ssh from the master node (where these commands
       are executed) to the slave node <SLAVE_HOST> is a prerequisite for
       the geo-replication commands.

       system:: execute gsec_create
              Generates pem keys which are required for push-pem.

       volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> create
       [push-pem] [force]
              Create a new geo-replication session from <MASTER_VOL> to the
              <SLAVE_HOST> host machine having <SLAVE_VOL>. Use push-pem to
              push the keys automatically.

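              For example, a sketch that creates and starts a session from
              a hypothetical master volume mastervol to a hypothetical
              slave host slavehost serving slavevol:

                     # gluster system:: execute gsec_create
                     # gluster volume geo-replication mastervol \
                       slavehost::slavevol create push-pem
                     # gluster volume geo-replication mastervol \
                       slavehost::slavevol start
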
       volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
       {start|stop} [force]
              Start/stop the geo-replication session from <MASTER_VOL> to
              the <SLAVE_HOST> host machine having <SLAVE_VOL>.

       volume geo-replication [<MASTER_VOL> [<SLAVE_HOST>::<SLAVE_VOL>]]
       status [detail]
              Query the status of the geo-replication session from
              <MASTER_VOL> to the <SLAVE_HOST> host machine having
              <SLAVE_VOL>.

       volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
       {pause|resume} [force]
              Pause/resume the geo-replication session from <MASTER_VOL> to
              the <SLAVE_HOST> host machine having <SLAVE_VOL>.

       volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> delete
       [reset-sync-time]
              Delete the geo-replication session from <MASTER_VOL> to the
              <SLAVE_HOST> host machine having <SLAVE_VOL>. Optionally, you
              can also reset the sync time if you need to resync the entire
              volume when the session is recreated.

       volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> config
       [[!]<option> [<value>]]
              View (when no option is provided) or set the configuration
              for this geo-replication session. Use "!<OPTION>" to reset
              option <OPTION> to its default value.

   Bitrot Commands
       volume bitrot <VOLNAME> {enable|disable}
              Enable/disable bitrot for volume <VOLNAME>.

       volume bitrot <VOLNAME> scrub-throttle {lazy|normal|aggressive}
              The scrub-throttle value is a measure of how fast or slow the
              scrubber scrubs the filesystem for volume <VOLNAME>.

       volume bitrot <VOLNAME> scrub-frequency
       {daily|weekly|biweekly|monthly}
              Set the scrub frequency for volume <VOLNAME>.

       volume bitrot <VOLNAME> scrub {pause|resume|status|ondemand}
              Pause/resume the scrubber. Upon resume, the scrubber
              continues where it left off. The status option shows the
              statistics of the scrubber. The ondemand option starts
              scrubbing immediately if the scrubber is neither paused nor
              already running.

       volume bitrot help
              Display help for the volume bitrot commands.

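              For example, to enable bitrot detection on the hypothetical
              volume test-volume, throttle the scrubber, and check its
              progress:

                     # gluster volume bitrot test-volume enable
                     # gluster volume bitrot test-volume scrub-throttle lazy
                     # gluster volume bitrot test-volume scrub status
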
   Snapshot Commands
       snapshot create <snapname> <volname> [no-timestamp] [description
       <description>] [force]
              Creates a snapshot of a GlusterFS volume. The user can
              provide a snap-name and a description to identify the snap.
              The snap will be created by appending a timestamp in GMT; the
              user can override this behaviour using the "no-timestamp"
              option. The description cannot be more than 1024 characters.
              To be able to take a snapshot, the volume should be present
              and in the started state.

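              For example, to snapshot the hypothetical started volume
              test-volume with a fixed name and a description:

                     # gluster snapshot create snap1 test-volume no-timestamp \
                       description "before upgrade"
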
       snapshot restore <snapname>
              Restores an already taken snapshot of a GlusterFS volume.
              Snapshot restore is an offline activity; therefore, if the
              volume is online (in the started state), the restore
              operation will fail. Once the snapshot is restored, it will
              no longer be available in the list of snapshots.

       snapshot clone <clonename> <snapname>
              Creates a clone of a snapshot volume; the resulting volume
              will be a GlusterFS volume. The user can provide a
              clone-name. To be able to take a clone, the snapshot should
              be present and in the activated state.

       snapshot delete ( all | <snapname> | volume <volname> )
              If snapname is specified, the mentioned snapshot is deleted.
              If volname is specified, all snapshots belonging to that
              particular volume are deleted. If the keyword *all* is used,
              all snapshots belonging to the system are deleted.

       snapshot list [volname]
              Lists all snapshots taken. If volname is provided, only the
              snapshots belonging to that particular volume are listed.

       snapshot info [snapname | (volume <volname>)]
              This command gives information such as the snapshot name,
              snapshot UUID, and time at which the snapshot was created,
              and it lists the snap-volume-name, the number of snapshots
              already taken and the number of snapshots still available for
              that particular volume, and the state of the snapshot. If
              snapname is specified, info of the mentioned snapshot is
              displayed. If volname is specified, info of all snapshots
              belonging to that volume is displayed. If neither snapname
              nor volname is specified, info of all the snapshots present
              in the system is displayed.

       snapshot status [snapname | (volume <volname>)]
              This command gives the status of the snapshot. The details
              included are the snapshot brick path, volume group (LVM
              details), status of the snapshot bricks, PID of the bricks,
              data percentage filled for the particular volume group to
              which the snapshots belong, and total size of the logical
              volume.

              If snapname is specified, the status of the mentioned
              snapshot is displayed. If volname is specified, the status of
              all snapshots belonging to that volume is displayed. If
              neither snapname nor volname is specified, the status of all
              the snapshots present in the system is displayed.

       snapshot config [volname] ([snap-max-hard-limit <count>]
       [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])
       | ([activate-on-create <enable|disable>])
              Displays and sets the snapshot config values.

              snapshot config without any keywords displays the snapshot
              config values of all volumes in the system. If volname is
              provided, the snapshot config values of that volume are
              displayed.

              The snapshot config command along with keywords can be used
              to change the existing config values. If volname is provided,
              the config value of that volume is changed; otherwise the
              system limit is set/changed.

              snap-max-soft-limit and auto-delete are global options that
              are inherited by all volumes in the system and cannot be set
              on individual volumes.

              snap-max-hard-limit can be set globally, as well as per
              volume. The lower of the global system limit and the
              volume-specific limit becomes the "Effective
              snap-max-hard-limit" for a volume.

              snap-max-soft-limit is a percentage value, which is applied
              to the "Effective snap-max-hard-limit" to get the "Effective
              snap-max-soft-limit".

              When the auto-delete feature is enabled, then upon reaching
              the "Effective snap-max-soft-limit", the oldest snapshot is
              deleted with every further successful snapshot creation.

              When the auto-delete feature is disabled, then upon reaching
              the "Effective snap-max-soft-limit", the user gets a warning
              with every successful snapshot creation.

              When the auto-delete feature is disabled, then upon reaching
              the "Effective snap-max-hard-limit", further snapshot
              creation is not allowed.

              activate-on-create is disabled by default. If you enable
              activate-on-create, further snapshots will be activated at
              the time of snapshot creation.

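              For example, a sketch that allows at most 30 snapshots of the
              hypothetical volume test-volume and turns on automatic
              pruning of the oldest snapshots system-wide:

                     # gluster snapshot config test-volume snap-max-hard-limit 30
                     # gluster snapshot config auto-delete enable
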
       snapshot activate <snapname>
              Activates the mentioned snapshot.

              Note: By default the snapshot is activated during snapshot
              creation.

       snapshot deactivate <snapname>
              Deactivates the mentioned snapshot.

       snapshot help
              Display help for the snapshot commands.

   Self-heal Commands
       volume heal <VOLNAME>
              Triggers index self-heal for the files that need healing.

       volume heal <VOLNAME> [enable | disable]
              Enable/disable the self-heal-daemon for volume <VOLNAME>.

       volume heal <VOLNAME> full
              Triggers self-heal on all the files.

       volume heal <VOLNAME> info
              Lists the files that need healing.

       volume heal <VOLNAME> info split-brain
              Lists the files which are in split-brain state.

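              For example, to check the heal backlog of the hypothetical
              replicated volume test-volume:

                     # gluster volume heal test-volume info
                     # gluster volume heal test-volume info split-brain
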
       volume heal <VOLNAME> statistics
              Lists the crawl statistics.

       volume heal <VOLNAME> statistics heal-count
              Displays the count of files to be healed.

       volume heal <VOLNAME> statistics heal-count replica
       <HOSTNAME:BRICKNAME>
              Displays the number of files to be healed from the particular
              replica subvolume to which the brick <HOSTNAME:BRICKNAME>
              belongs.

       volume heal <VOLNAME> split-brain bigger-file <FILE>
              Performs healing of <FILE>, which is in split-brain, by
              choosing the bigger file in the replica as the source.

       volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>
              Selects <HOSTNAME:BRICKNAME> as the source for all the files
              that are in split-brain in that replica and heals them.

       volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>
       <FILE>
              Selects the split-brained <FILE> present in
              <HOSTNAME:BRICKNAME> as the source and completes the heal.

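              For example, a sketch that resolves a split-brain on the
              hypothetical volume test-volume by declaring the copy on
              server1's brick authoritative for a single file:

                     # gluster volume heal test-volume split-brain \
                       source-brick server1:/exports/brick1 /data/file.txt
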
   Other Commands
       get-state [<daemon>] [[odir </path/to/output/dir/>] [file
       <filename>]] [detail|volumeoptions]
              Get the local state representation of the mentioned daemon
              and store the data at the provided path.

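              For example, to write the glusterd state to a hypothetical
              output file:

                     # gluster get-state glusterd odir /var/tmp file glusterd-state
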
       help   Display the command options.

       quit   Exit the gluster command line interface.

FILES
       /var/lib/glusterd/*

SEE ALSO
       fusermount(1), mount.glusterfs(8), glusterfs(8), glusterd(8)

COPYRIGHT
       Copyright(c) 2006-2011 Gluster, Inc. <http://www.gluster.com>



07 March 2011            Gluster command line utility               Gluster(8)