Gluster(8)                      Gluster Inc.                      Gluster(8)

NAME
       gluster - Gluster Console Manager (command line utility)

SYNOPSIS
       gluster

       To run the program and display the gluster prompt:

       gluster [--remote-host=<gluster_node>] [--mode=script] [--xml]

       (or)

       To specify a command directly:

       gluster [commands] [options] [--remote-host=<gluster_node>]
       [--mode=script] [--xml]
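
       For example, a single command can be run non-interactively,
       optionally against a remote node and with XML output (the host
       name is illustrative):

              # gluster --remote-host=server1 --mode=script volume list
              # gluster volume info all --xml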

DESCRIPTION
       The Gluster Console Manager is a command line utility for elastic
       volume management. You can run the gluster command on any export
       server. The command enables administrators to perform cloud
       operations, such as creating, expanding, shrinking, rebalancing,
       and migrating volumes without needing to schedule server downtime.

COMMANDS
   Volume Commands
       volume info [all|<VOLNAME>]
              Display information about all volumes, or about the
              specified volume.

       volume list
              List all volumes in the cluster.

       volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]]
       [detail|clients|mem|inode|fd|callpool|tasks|client-list]
              Display the status of all volumes, or of the specified
              volume(s)/brick.

       volume create <NEW-VOLNAME> [stripe <COUNT>] [[replica <COUNT>
       [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]]
       [disperse-data <COUNT>] [redundancy <COUNT>] [transport
       <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... <TA-BRICK>
              Create a new volume of the specified type using the
              specified bricks and transport type (the default transport
              type is tcp). To create a volume with both transports (tcp
              and rdma), give 'transport tcp,rdma' as an option.
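
       For example, to create a three-way replicated volume over tcp and
       start it (host and path names are illustrative):

              # gluster volume create myvol replica 3 \
                     server1:/bricks/b1 server2:/bricks/b2 \
                     server3:/bricks/b3
              # gluster volume start myvol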

       volume delete <VOLNAME>
              Delete the specified volume.

       volume start <VOLNAME>
              Start the specified volume.

       volume stop <VOLNAME> [force]
              Stop the specified volume.

       volume set <VOLNAME> <OPTION> <PARAMETER> [<OPTION> <PARAMETER>] ...
              Set the volume options.

       volume get <VOLNAME/all> <OPTION/all>
              Get the value of the given option, or of all options, for
              the volume <VOLNAME> or for all volumes. 'gluster volume
              get all all' fetches all global options.
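
       For example, to tune a volume option and read it back (the volume
       name and the option shown are illustrative):

              # gluster volume set myvol performance.cache-size 256MB
              # gluster volume get myvol performance.cache-size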

       volume reset <VOLNAME> [option] [force]
              Reset all the reconfigured options.

       volume barrier <VOLNAME> {enable|disable}
              Barrier/unbarrier file operations on a volume.

       volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}
       {inode [range]|entry [basename]|posix [range]}
              Clear locks held on the path.
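
       For example, to clear all granted inode locks on a file, following
       the synopsis above (volume and path names are illustrative):

              # gluster volume clear-locks myvol /data/file1 \
                     kind granted inode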

       volume help
              Display help for the volume command.

   Brick Commands
       volume add-brick <VOLNAME> <NEW-BRICK> ...
              Add the specified brick to the specified volume.

       volume remove-brick <VOLNAME> <BRICK> ...
              Remove the specified brick from the specified volume.

              Note: If you remove the brick, the data stored in that
              brick will no longer be available. You can migrate data
              from one brick to another using the replace-brick command.
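
       For example, to expand a volume with an additional brick (host
       and path names are illustrative):

              # gluster volume add-brick myvol server4:/bricks/b4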

       volume reset-brick <VOLNAME> <SOURCE-BRICK> {{start} | {<NEW-BRICK>
       commit}}
              Bring down the specified source brick (start), or replace
              it with the new brick (commit).

       volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> commit
       force
              Replace the specified source brick with a new brick.
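
       For example, to replace a failed brick with a new one on the same
       host (names are illustrative):

              # gluster volume replace-brick myvol server2:/bricks/b2 \
                     server2:/bricks/b2-new commit force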

       volume rebalance <VOLNAME> start
              Start rebalancing the specified volume.

       volume rebalance <VOLNAME> stop
              Stop rebalancing the specified volume.

       volume rebalance <VOLNAME> status
              Display the rebalance status of the specified volume.
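
       For example, after adding a brick, existing data can be spread
       onto it by starting a rebalance and polling its status (volume
       name is illustrative):

              # gluster volume rebalance myvol start
              # gluster volume rebalance myvol status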

   Log Commands
       volume log filename <VOLNAME> [BRICK] <DIRECTORY>
              Set the log directory for the corresponding volume/brick.

       volume log locate <VOLNAME> [BRICK]
              Locate the log file for the corresponding volume/brick.

       volume log rotate <VOLNAME> [BRICK]
              Rotate the log file for the corresponding volume/brick.

       volume profile <VOLNAME> {start|info [peek|incremental
       [peek]|cumulative|clear]|stop} [nfs]
              Profile operations on the volume. Once started, 'volume
              profile <VOLNAME> info' provides cumulative statistics of
              the FOPs performed.

       volume top <VOLNAME> {open|read|write|opendir|readdir|clear}
       [nfs|brick <brick>] [list-cnt <value>] | {read-perf|write-perf}
       [bs <size> count <count>] [brick <brick>] [list-cnt <value>]
              Generate a profile of a volume representing the performance
              and bottlenecks/hotspots of each brick.
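
       For example, to list the ten most-read files and measure read
       throughput on one brick (names are illustrative):

              # gluster volume top myvol read list-cnt 10
              # gluster volume top myvol read-perf bs 4096 count 1024 \
                     brick server1:/bricks/b1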

       volume statedump <VOLNAME> [[nfs|quotad]
       [all|mem|iobuf|callpool|priv|fd|inode|history]... | [client
       <hostname:process-id>]]
              Dump the in-memory state of the specified process or of the
              bricks of the volume.

       volume sync <HOSTNAME> [all|<VOLNAME>]
              Sync the volume information from a peer.

   Peer Commands
       peer probe <HOSTNAME>
              Probe the specified peer. In case the <HOSTNAME> given
              belongs to an already probed peer, the peer probe command
              will add the hostname to the peer if required.

       peer detach <HOSTNAME>
              Detach the specified peer.

       peer status
              Display the status of peers.

       pool list
              List all the nodes in the pool (including localhost).

       peer help
              Display help for the peer command.

   Quota Commands
       volume quota <VOLNAME> enable
              Enable quota on the specified volume. This will cause all
              the directories in the filesystem hierarchy to be accounted
              and updated thereafter on each operation in the filesystem.
              To kick-start this accounting, a crawl is done over the
              hierarchy with an auxiliary client.

       volume quota <VOLNAME> disable
              Disable quota on the volume. This will disable enforcement
              and accounting in the filesystem. Any configured limits
              will be lost.

       volume quota <VOLNAME> limit-usage <PATH> <SIZE> [<PERCENT>]
              Set a usage limit on the given path. Any previously set
              limit is overridden by the new value. The soft limit can
              optionally be specified (as a percentage of the hard
              limit). If no soft limit percentage is provided, the
              volume's default soft limit value is used.

       volume quota <VOLNAME> limit-objects <PATH> <SIZE> [<PERCENT>]
              Set an inode limit on the given path. Any previously set
              limit is overridden by the new value. The soft limit can
              optionally be specified (as a percentage of the hard
              limit). If no soft limit percentage is provided, the
              volume's default soft limit value is used.

       NOTE: valid units of SIZE are: B, KB, MB, GB, TB, PB. If no unit
       is specified, the unit defaults to bytes.
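
       For example, to enable quota and cap a directory at 10 GB with an
       80% soft limit, following the synopsis above (volume and path
       names are illustrative):

              # gluster volume quota myvol enable
              # gluster volume quota myvol limit-usage /projects 10GB 80%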

       volume quota <VOLNAME> remove <PATH>
              Remove any usage limit configured on the specified
              directory. Note that if any limit is configured on the
              ancestors of this directory (previous directories along the
              path), it will still be honored and enforced.

       volume quota <VOLNAME> remove-objects <PATH>
              Remove any inode limit configured on the specified
              directory. Note that if any limit is configured on the
              ancestors of this directory (previous directories along the
              path), it will still be honored and enforced.

       volume quota <VOLNAME> list <PATH>
              List the usage and limits configured on the directory(s).
              If a path is given, only the limit that has been configured
              on that directory (if any) is displayed along with the
              directory's usage. If no path is given, usage and limits
              are displayed for all directories that have limits
              configured.

       volume quota <VOLNAME> list-objects <PATH>
              List the inode usage and inode limits configured on the
              directory(s). If a path is given, only the limit that has
              been configured on that directory (if any) is displayed
              along with the directory's inode usage. If no path is
              given, usage and limits are displayed for all directories
              that have limits configured.

       volume quota <VOLNAME> default-soft-limit <PERCENT>
              Set the percentage value for the default soft limit for the
              volume.

       volume quota <VOLNAME> soft-timeout <TIME>
              Set the soft timeout for the volume: the interval at which
              limits are retested before the soft limit is breached.

       volume quota <VOLNAME> hard-timeout <TIME>
              Set the hard timeout for the volume: the interval at which
              limits are retested after the soft limit is breached.

       volume quota <VOLNAME> alert-time <TIME>
              Set the frequency at which warning messages are logged (in
              the brick logs) once the soft limit is breached.

       volume inode-quota <VOLNAME> enable/disable
              Enable/disable inode-quota for <VOLNAME>.

       volume quota help
              Display help for volume quota commands.

       NOTE: valid units of time and their symbols are: hours(h/hr),
       minutes(m/min), seconds(s/sec), weeks(w/wk), days(d/days).

   Geo-replication Commands
       Note: password-less ssh, from the master node (where these
       commands are executed) to the slave node <SLAVE_HOST>, is a
       prerequisite for the geo-replication commands.

       system:: execute gsec_create
              Generate the pem keys that are required for push-pem.

       volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
       create [[ssh-port n] [[no-verify]|[push-pem]]] [force]
              Create a new geo-replication session from <MASTER_VOL> to
              the <SLAVE_HOST> host machine having <SLAVE_VOL>. Use
              ssh-port n if a custom SSH port is configured on the slave
              nodes. Use no-verify if the rsa-keys of nodes in the master
              volume are distributed to the slave nodes through an
              external agent. Use push-pem to push the keys
              automatically.
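
       For example, to create and start a session from master volume
       myvol to slave volume backupvol (host and volume names are
       illustrative):

              # gluster system:: execute gsec_create
              # gluster volume geo-replication myvol slave1::backupvol \
                     create push-pem
              # gluster volume geo-replication myvol slave1::backupvol \
                     start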

       volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
       {start|stop} [force]
              Start/stop the geo-replication session from <MASTER_VOL> to
              the <SLAVE_HOST> host machine having <SLAVE_VOL>.

       volume geo-replication [<MASTER_VOL> [<SLAVE_HOST>::<SLAVE_VOL>]]
       status [detail]
              Query the status of the geo-replication session from
              <MASTER_VOL> to the <SLAVE_HOST> host machine having
              <SLAVE_VOL>.

       volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
       {pause|resume} [force]
              Pause/resume the geo-replication session from <MASTER_VOL>
              to the <SLAVE_HOST> host machine having <SLAVE_VOL>.

       volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
       delete [reset-sync-time]
              Delete the geo-replication session from <MASTER_VOL> to the
              <SLAVE_HOST> host machine having <SLAVE_VOL>. Optionally,
              you can also reset the sync time in case you need to resync
              the entire volume when the session is recreated.

       volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
       config [[!]<option> [<value>]]
              View (when no option is provided) or set the configuration
              for this geo-replication session. Use "!<OPTION>" to reset
              option <OPTION> to its default value.

   Bitrot Commands
       volume bitrot <VOLNAME> {enable|disable}
              Enable/disable bitrot for volume <VOLNAME>.

       volume bitrot <VOLNAME> signing-time <time-in-secs>
              The waiting time, after the last fd on an object is closed,
              before the signing process starts on that object.

       volume bitrot <VOLNAME> signer-threads <count>
              Number of signing-process threads. Usually set to the
              number of available cores.

       volume bitrot <VOLNAME> scrub-throttle {lazy|normal|aggressive}
              The scrub-throttle value is a measure of how fast or slow
              the scrubber scrubs the filesystem for volume <VOLNAME>.

       volume bitrot <VOLNAME> scrub-frequency
       {hourly|daily|weekly|biweekly|monthly}
              Scrub frequency for volume <VOLNAME>.

       volume bitrot <VOLNAME> scrub {pause|resume|status|ondemand}
              Pause/resume the scrubber. Upon resume, the scrubber
              continues where it left off. The status option shows the
              statistics of the scrubber. The ondemand option starts
              scrubbing immediately, provided the scrubber is not paused
              or already running.
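
       For example, to enable bitrot detection and trigger an immediate
       scrub (volume name is illustrative):

              # gluster volume bitrot myvol enable
              # gluster volume bitrot myvol scrub ondemand
              # gluster volume bitrot myvol scrub status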

       volume bitrot help
              Display help for volume bitrot commands.

   Snapshot Commands
       snapshot create <snapname> <volname> [no-timestamp] [description
       <description>] [force]
              Create a snapshot of a GlusterFS volume. The user can
              provide a snap-name and a description to identify the snap.
              The snap will be created by appending a timestamp (GMT) to
              the snap-name; the user can override this behaviour using
              the "no-timestamp" option. The description cannot be more
              than 1024 characters. To be able to take a snapshot, the
              volume should be present and should be in the started
              state.
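
       For example, to take a snapshot without the appended timestamp
       (snapshot and volume names are illustrative):

              # gluster snapshot create nightly-snap myvol no-timestamp \
                     description "pre-upgrade state"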

       snapshot restore <snapname>
              Restore an already taken snapshot of a GlusterFS volume.
              Snapshot restore is an offline activity; therefore, if the
              volume is online (in the started state) the restore
              operation will fail. Once the snapshot is restored it will
              no longer be available in the list of snapshots.

       snapshot clone <clonename> <snapname>
              Create a clone of a snapshot volume; the resulting volume
              will be a GlusterFS volume. The user can provide a
              clone-name. To be able to take a clone, the snapshot should
              be present and should be in the activated state.

       snapshot delete ( all | <snapname> | volume <volname> )
              If snapname is specified, the mentioned snapshot is
              deleted. If volname is specified, all snapshots belonging
              to that particular volume are deleted. If the keyword *all*
              is used, all snapshots belonging to the system are deleted.

       snapshot list [volname]
              List all snapshots taken. If volname is provided, only the
              snapshots belonging to that particular volume are listed.

       snapshot info [snapname | (volume <volname>)]
              This command gives information such as the snapshot name,
              the snapshot UUID, the time at which the snapshot was
              created, and it lists the snap-volume-name, the number of
              snapshots already taken and the number of snapshots still
              available for that particular volume, and the state of the
              snapshot. If snapname is specified, info of the mentioned
              snapshot is displayed. If volname is specified, info of all
              snapshots belonging to that volume is displayed. If neither
              snapname nor volname is specified, info of all the
              snapshots present in the system is displayed.

       snapshot status [snapname | (volume <volname>)]
              This command gives the status of the snapshot. The details
              included are the snapshot brick path, the volume group (LVM
              details), the status of the snapshot bricks, the PID of the
              bricks, the data percentage filled for the volume group to
              which the snapshots belong, and the total size of the
              logical volume.

              If snapname is specified, the status of the mentioned
              snapshot is displayed. If volname is specified, the status
              of all snapshots belonging to that volume is displayed. If
              neither snapname nor volname is specified, the status of
              all the snapshots present in the system is displayed.

       snapshot config [volname] ([snap-max-hard-limit <count>]
       [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])
       | ([activate-on-create <enable|disable>])
              Display and set the snapshot config values.

              snapshot config without any keywords displays the snapshot
              config values of all volumes in the system. If volname is
              provided, the snapshot config values of that volume are
              displayed.

              The snapshot config command along with keywords can be used
              to change the existing config values. If volname is
              provided, the config value of that volume is changed;
              otherwise it will set/change the system limit.

              snap-max-soft-limit and auto-delete are global options that
              are inherited by all volumes in the system and cannot be
              set on individual volumes.

              snap-max-hard-limit can be set globally, as well as per
              volume. The lower of the global system limit and the
              volume-specific limit becomes the "Effective
              snap-max-hard-limit" for a volume.

              snap-max-soft-limit is a percentage value, which is applied
              to the "Effective snap-max-hard-limit" to get the
              "Effective snap-max-soft-limit".

              When the auto-delete feature is enabled, then upon reaching
              the "Effective snap-max-soft-limit", with every successful
              snapshot creation the oldest snapshot will be deleted.

              When the auto-delete feature is disabled, then upon
              reaching the "Effective snap-max-soft-limit", the user gets
              a warning with every successful snapshot creation.

              When the auto-delete feature is disabled, then upon
              reaching the "Effective snap-max-hard-limit", further
              snapshot creations will not be allowed.

              activate-on-create is disabled by default. If you enable
              activate-on-create, then further snapshots will be
              activated at the time of snapshot creation.
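
       For example, to keep at most 20 snapshots of a volume and prune
       the oldest automatically (volume name is illustrative):

              # gluster snapshot config myvol snap-max-hard-limit 20
              # gluster snapshot config auto-delete enable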

       snapshot activate <snapname>
              Activate the mentioned snapshot.

              Note: By default the snapshot is activated during snapshot
              creation.

       snapshot deactivate <snapname>
              Deactivate the mentioned snapshot.

       snapshot help
              Display help for the snapshot commands.

   Self-heal Commands
       volume heal <VOLNAME>
              Trigger index self-heal for the files that need healing.

       volume heal <VOLNAME> [enable | disable]
              Enable/disable the self-heal-daemon for volume <VOLNAME>.

       volume heal <VOLNAME> full
              Trigger self-heal on all the files.

       volume heal <VOLNAME> info
              List the files that need healing.

       volume heal <VOLNAME> info split-brain
              List the files which are in split-brain state.

       volume heal <VOLNAME> statistics
              List the crawl statistics.

       volume heal <VOLNAME> statistics heal-count
              Display the count of files to be healed.

       volume heal <VOLNAME> statistics heal-count replica
       <HOSTNAME:BRICKNAME>
              Display the number of files to be healed from a particular
              replica subvolume to which the brick <HOSTNAME:BRICKNAME>
              belongs.

       volume heal <VOLNAME> split-brain bigger-file <FILE>
              Perform healing of <FILE>, which is in split-brain, by
              choosing the bigger file in the replica as the source.

       volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>
              Select <HOSTNAME:BRICKNAME> as the source for all the files
              that are in split-brain in that replica and heal them.

       volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>
       <FILE>
              Select the split-brained <FILE> present in
              <HOSTNAME:BRICKNAME> as the source and complete the heal.
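
       For example, to inspect split-brained files and resolve one by
       size (volume and file names are illustrative):

              # gluster volume heal myvol info split-brain
              # gluster volume heal myvol split-brain bigger-file \
                     /data/file1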

   Other Commands
       get-state [<daemon>] [[odir </path/to/output/dir/>] [file
       <filename>]] [detail|volumeoptions]
              Get the local state representation of the mentioned daemon
              and store the data at the provided location.
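
       For example, to write the local glusterd state to a chosen
       directory and file name (path and file name are illustrative):

              # gluster get-state glusterd odir /tmp file gluster-state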

       help   Display the command options.

       quit   Exit the gluster command line interface.

FILES
       /var/lib/glusterd/*

SEE ALSO
       fusermount(1), mount.glusterfs(8), glusterfs(8), glusterd(8)

COPYRIGHT
       Copyright(c) 2006-2011 Gluster, Inc.  <http://www.gluster.com>

07 March 2011            Gluster command line utility            Gluster(8)