Gluster(8)                       Gluster Inc.                       Gluster(8)


NAME
       gluster - Gluster Console Manager (command line utility)

SYNOPSIS
       gluster

       To run the program and display the gluster prompt:

       gluster [--remote-host=<gluster_node>] [--mode=script] [--xml]

       (or)

       To specify a command directly:

       gluster [commands] [options] [--remote-host=<gluster_node>]
               [--mode=script] [--xml]

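       For example, a command can be run non-interactively from a script,
       and --xml makes the output machine-parseable (the volume name myvol
       below is only a placeholder):

              # gluster --mode=script volume info myvol
              # gluster --xml volume info myvol
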
DESCRIPTION
       The Gluster Console Manager is a command line utility for elastic
       volume management. You can run the gluster command on any export
       server. The command enables administrators to perform cloud
       operations, such as creating, expanding, shrinking, rebalancing, and
       migrating volumes without needing to schedule server downtime.

COMMANDS
   Volume Commands
       volume info [all|<VOLNAME>]
              Display information about all volumes, or about the specified
              volume.

       volume list
              List all volumes in the cluster.

       volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]]
       [detail|clients|mem|inode|fd|callpool|tasks|client-list]
              Display the status of all volumes, or of the specified
              volume or brick.

       volume create <NEW-VOLNAME> [stripe <COUNT>] [[replica <COUNT>
       [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]]
       [disperse-data <COUNT>] [redundancy <COUNT>] [transport
       <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... <TA-BRICK>
              Create a new volume of the specified type using the specified
              bricks and transport type (the default transport type is
              tcp). To create a volume with both transports (tcp and rdma),
              give 'transport tcp,rdma' as an option.

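              For example, a replica 3 volume could be created as follows
              (the hostnames server1..server3 and the brick path
              /data/brick1 are placeholders):

                     # gluster volume create myvol replica 3 \
                           server1:/data/brick1 server2:/data/brick1 \
                           server3:/data/brick1
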
       volume delete <VOLNAME>
              Delete the specified volume.

       volume start <VOLNAME>
              Start the specified volume.

       volume stop <VOLNAME> [force]
              Stop the specified volume.

       volume set <VOLNAME> <OPTION> <PARAMETER> [<OPTION> <PARAMETER>] ...
              Set the volume options.

       volume get <VOLNAME/all> <OPTION/all>
              Get the value of the given option (or of all options) for the
              volume <VOLNAME>, or for all volumes. 'gluster volume get all
              all' fetches all global options.

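              For example, to disable the NFS server for a volume and then
              read the value back (myvol is a placeholder; nfs.disable is a
              standard volume option):

                     # gluster volume set myvol nfs.disable on
                     # gluster volume get myvol nfs.disable
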
       volume reset <VOLNAME> [option] [force]
              Reset all the reconfigured options.

       volume barrier <VOLNAME> {enable|disable}
              Barrier/unbarrier file operations on a volume.

       volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}
       {inode [range]|entry [basename]|posix [range]}
              Clear the locks held on the given path.

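              For example, to clear all granted inode locks on a file (the
              volume name and path are placeholders; a lock range can
              additionally be given, per the synopsis above):

                     # gluster volume clear-locks myvol /dir/file \
                           kind granted inode
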
       volume help
              Display help for the volume command.

   Brick Commands
       volume add-brick <VOLNAME> <NEW-BRICK> ...
              Add the specified brick to the specified volume.

       volume remove-brick <VOLNAME> <BRICK> ...
              Remove the specified brick from the specified volume.

              Note: If you remove a brick, the data stored on that brick
              will no longer be available. You can migrate data from one
              brick to another using the replace-brick option.

       volume reset-brick <VOLNAME> <SOURCE-BRICK> {{start} | {<NEW-BRICK>
       commit}}
              Bring down the specified source brick (start), or replace it
              with the new brick (commit).

       volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> commit
       force
              Replace the specified source brick with a new brick.

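              For example, to replace a failed brick in place (the hostname
              and brick paths are placeholders):

                     # gluster volume replace-brick myvol \
                           server1:/data/brick1 server1:/data/brick1-new \
                           commit force
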
       volume rebalance <VOLNAME> start
              Start rebalancing the specified volume.

       volume rebalance <VOLNAME> stop
              Stop rebalancing the specified volume.

       volume rebalance <VOLNAME> status
              Display the rebalance status of the specified volume.

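              For example, after expanding a volume with add-brick, a
              typical sequence is (myvol is a placeholder):

                     # gluster volume rebalance myvol start
                     # gluster volume rebalance myvol status
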
   Log Commands
       volume log <VOLNAME> rotate [BRICK]
              Rotate the log file for the corresponding volume/brick.

       volume profile <VOLNAME> {start|info [peek|incremental
       [peek]|cumulative|clear]|stop} [nfs]
              Profile operations on the volume. Once started, 'volume
              profile <VOLNAME> info' provides cumulative statistics of the
              FOPs performed.

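              For example (myvol is a placeholder):

                     # gluster volume profile myvol start
                     # gluster volume profile myvol info
                     # gluster volume profile myvol stop
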
       volume top <VOLNAME> {open|read|write|opendir|readdir|clear}
       [nfs|brick <brick>] [list-cnt <value>] | {read-perf|write-perf} [bs
       <size> count <count>] [brick <brick>] [list-cnt <value>]
              Generates a profile of a volume representing the performance
              and bottlenecks/hotspots of each brick.

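              For example, to list the ten most-read files, or to measure
              raw write throughput with 256-byte blocks (myvol is a
              placeholder):

                     # gluster volume top myvol read list-cnt 10
                     # gluster volume top myvol write-perf bs 256 count 100
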
       volume statedump <VOLNAME> [[nfs|quotad]
       [all|mem|iobuf|callpool|priv|fd|inode|history]... | [client
       <hostname:process-id>]]
              Dumps the in-memory state of the specified process or of the
              bricks of the volume.

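              For example, to dump the full state of a volume's bricks, or
              only their memory-allocation data (myvol is a placeholder):

                     # gluster volume statedump myvol
                     # gluster volume statedump myvol mem
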
       volume sync <HOSTNAME> [all|<VOLNAME>]
              Sync the volume information from a peer.

   Peer Commands
       peer probe <HOSTNAME>
              Probe the specified peer. If the given <HOSTNAME> belongs to
              an already probed peer, the peer probe command will add the
              hostname to that peer if required.

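              For example, to add a new node to the trusted storage pool
              and verify the result (server2 is a placeholder hostname):

                     # gluster peer probe server2
                     # gluster peer status
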
       peer detach <HOSTNAME>
              Detach the specified peer.

       peer status
              Display the status of peers.

       pool list
              List all the nodes in the pool (including localhost).

       peer help
              Display help for the peer command.

   Quota Commands
       volume quota <VOLNAME> enable
              Enable quota on the specified volume. This will cause all the
              directories in the filesystem hierarchy to be accounted and
              updated thereafter on each operation in the filesystem. To
              kick-start this accounting, a crawl is done over the
              hierarchy with an auxiliary client.

       volume quota <VOLNAME> disable
              Disable quota on the volume. This will disable enforcement
              and accounting in the filesystem. Any configured limits will
              be lost.

       volume quota <VOLNAME> limit-usage <PATH> <SIZE> [<PERCENT>]
              Set a usage limit on the given path. Any previously set limit
              is overridden by the new value. The soft limit can optionally
              be specified (as a percentage of the hard limit). If no soft
              limit percentage is provided, the volume's default soft limit
              value is used.

       volume quota <VOLNAME> limit-objects <PATH> <SIZE> [<PERCENT>]
              Set an inode limit on the given path. Any previously set
              limit is overridden by the new value. The soft limit can
              optionally be specified (as a percentage of the hard limit).
              If no soft limit percentage is provided, the volume's default
              soft limit value is used.

       NOTE: valid units of SIZE are: B, KB, MB, GB, TB, PB. If no unit is
       specified, the unit defaults to bytes.

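       For example, to enable quota, cap a directory at 10GB, and inspect
       the configured limits (the volume name and path are placeholders):

              # gluster volume quota myvol enable
              # gluster volume quota myvol limit-usage /projects 10GB
              # gluster volume quota myvol list
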
       volume quota <VOLNAME> remove <PATH>
              Remove any usage limit configured on the specified directory.
              Note that if any limit is configured on the ancestors of this
              directory (previous directories along the path), it will
              still be honored and enforced.

       volume quota <VOLNAME> remove-objects <PATH>
              Remove any inode limit configured on the specified directory.
              Note that if any limit is configured on the ancestors of this
              directory (previous directories along the path), it will
              still be honored and enforced.

       volume quota <VOLNAME> list <PATH>
              Lists the usage and limits configured on the given
              directory(s). If a path is given, only the limit configured
              on that directory (if any) is displayed along with the
              directory's usage. If no path is given, usage and limits are
              displayed for all directories that have limits configured.

       volume quota <VOLNAME> list-objects <PATH>
              Lists the inode usage and inode limits configured on the
              given directory(s). If a path is given, only the limit
              configured on that directory (if any) is displayed along with
              the directory's inode usage. If no path is given, usage and
              limits are displayed for all directories that have limits
              configured.

       volume quota <VOLNAME> default-soft-limit <PERCENT>
              Set the percentage value for the default soft limit for the
              volume.

       volume quota <VOLNAME> soft-timeout <TIME>
              Set the soft timeout for the volume: the interval at which
              limits are retested before the soft limit is breached.

       volume quota <VOLNAME> hard-timeout <TIME>
              Set the hard timeout for the volume: the interval at which
              limits are retested after the soft limit is breached.

       volume quota <VOLNAME> alert-time <TIME>
              Set the frequency at which warning messages are logged (in
              the brick logs) once the soft limit is breached.

       volume inode-quota <VOLNAME> enable/disable
              Enable/disable inode-quota for <VOLNAME>.

       volume quota help
              Display help for the volume quota commands.

       NOTE: valid units of time and their symbols are: hours (h/hr),
       minutes (m/min), seconds (s/sec), weeks (w/wk), days (d/days).

   Geo-replication Commands
       Note: password-less SSH from the primary node (where these commands
       are executed) to the secondary node <SECONDARY_HOST> is a
       prerequisite for the geo-replication commands.

       system:: execute gsec_create
              Generates the pem keys which are required for push-pem.

       volume geo-replication <PRIMARY_VOL>
       <SECONDARY_HOST>::<SECONDARY_VOL> create [[ssh-port
       n][[no-verify]|[push-pem]]] [force]
              Create a new geo-replication session from <PRIMARY_VOL> to
              the <SECONDARY_HOST> host machine having <SECONDARY_VOL>. Use
              ssh-port n if a custom SSH port is configured on the
              secondary nodes. Use no-verify if the RSA keys of the nodes
              in the primary volume are distributed to the secondary nodes
              through an external agent. Use push-pem to push the keys
              automatically.

       volume geo-replication <PRIMARY_VOL>
       <SECONDARY_HOST>::<SECONDARY_VOL> {start|stop} [force]
              Start/stop the geo-replication session from <PRIMARY_VOL> to
              the <SECONDARY_HOST> host machine having <SECONDARY_VOL>.

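              For example, to set up and start a session (the volume and
              host names are placeholders):

                     # gluster system:: execute gsec_create
                     # gluster volume geo-replication myvol \
                           sec-host::secvol create push-pem
                     # gluster volume geo-replication myvol \
                           sec-host::secvol start
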
       volume geo-replication [<PRIMARY_VOL>
       [<SECONDARY_HOST>::<SECONDARY_VOL>]] status [detail]
              Query the status of the geo-replication session from
              <PRIMARY_VOL> to the <SECONDARY_HOST> host machine having
              <SECONDARY_VOL>.

       volume geo-replication <PRIMARY_VOL>
       <SECONDARY_HOST>::<SECONDARY_VOL> {pause|resume} [force]
              Pause/resume the geo-replication session from <PRIMARY_VOL>
              to the <SECONDARY_HOST> host machine having <SECONDARY_VOL>.

       volume geo-replication <PRIMARY_VOL>
       <SECONDARY_HOST>::<SECONDARY_VOL> delete [reset-sync-time]
              Delete the geo-replication session from <PRIMARY_VOL> to the
              <SECONDARY_HOST> host machine having <SECONDARY_VOL>.
              Optionally, you can also reset the sync time in case you need
              to resync the entire volume when the session is recreated.

       volume geo-replication <PRIMARY_VOL>
       <SECONDARY_HOST>::<SECONDARY_VOL> config [[!]<options> [<value>]]
              View (when no option is provided) or set the configuration
              for this geo-replication session. Use "!<OPTION>" to reset
              option <OPTION> to its default value.

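              For example, to view the session configuration, change an
              option, and reset it to its default (the names are
              placeholders; sync-jobs is used here only as an example of a
              session option):

                     # gluster volume geo-replication myvol \
                           sec-host::secvol config
                     # gluster volume geo-replication myvol \
                           sec-host::secvol config sync-jobs 3
                     # gluster volume geo-replication myvol \
                           sec-host::secvol config '!sync-jobs'
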
   Bitrot Commands
       volume bitrot <VOLNAME> {enable|disable}
              Enable/disable bitrot detection for volume <VOLNAME>.

       volume bitrot <VOLNAME> signing-time <time-in-secs>
              Waiting time for an object, after its last fd is closed,
              before the signing process starts.

       volume bitrot <VOLNAME> signer-threads <count>
              Number of signing process threads. Usually set to the number
              of available cores.

       volume bitrot <VOLNAME> scrub-throttle {lazy|normal|aggressive}
              The scrub-throttle value is a measure of how fast or slow the
              scrubber scrubs the filesystem for volume <VOLNAME>.

       volume bitrot <VOLNAME> scrub-frequency
       {hourly|daily|weekly|biweekly|monthly}
              Scrub frequency for volume <VOLNAME>.

       volume bitrot <VOLNAME> scrub {pause|resume|status|ondemand}
              Pause/resume the scrubber. Upon resume, the scrubber
              continues from where it left off. The status option shows
              scrubber statistics. The ondemand option starts scrubbing
              immediately if the scrubber is neither paused nor already
              running.

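              For example, to enable bitrot detection, reduce the scrub
              load, and trigger an immediate scrub (myvol is a
              placeholder):

                     # gluster volume bitrot myvol enable
                     # gluster volume bitrot myvol scrub-throttle lazy
                     # gluster volume bitrot myvol scrub ondemand
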
       volume bitrot help
              Display help for the volume bitrot commands.

   Snapshot Commands
       snapshot create <snapname> <volname> [no-timestamp] [description
       <description>] [force]
              Creates a snapshot of a GlusterFS volume. The user can
              provide a snap-name and a description to identify the snap.
              The snap will be created by appending a timestamp (in GMT) to
              the snap-name; this behaviour can be overridden with the
              no-timestamp option. The description cannot be more than 1024
              characters. To take a snapshot, the volume must exist and be
              in the started state.

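              For example, to take a snapshot without the timestamp suffix
              and with a description (the snapshot and volume names are
              placeholders):

                     # gluster snapshot create snap1 myvol no-timestamp \
                           description "before upgrade"
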
       snapshot restore <snapname>
              Restores an already taken snapshot of a GlusterFS volume.
              Snapshot restore is an offline activity, therefore if the
              volume is online (in the started state) the restore operation
              will fail. Once the snapshot is restored, it will no longer
              be available in the list of snapshots.

       snapshot clone <clonename> <snapname>
              Create a clone of a snapshot volume; the resulting volume
              will be a GlusterFS volume. The user can provide a
              clone-name. To take a clone, the snapshot must exist and be
              in the activated state.

       snapshot delete ( all | <snapname> | volume <volname> )
              If snapname is specified, the mentioned snapshot is deleted.
              If volname is specified, all snapshots belonging to that
              particular volume are deleted. If the keyword *all* is used,
              all snapshots on the system are deleted.

       snapshot list [volname]
              Lists all snapshots taken. If volname is provided, only the
              snapshots belonging to that particular volume are listed.

       snapshot info [snapname | (volume <volname>)]
              This command gives information such as the snapshot name,
              snapshot UUID and time at which the snapshot was created, and
              it lists the snap-volume-name, the number of snapshots
              already taken and the number of snapshots still available for
              that particular volume, and the state of the snapshot. If
              snapname is specified, info of the mentioned snapshot is
              displayed. If volname is specified, info of all snapshots
              belonging to that volume is displayed. If neither snapname
              nor volname is specified, info of all the snapshots present
              in the system is displayed.

       snapshot status [snapname | (volume <volname>)]
              This command gives the status of the snapshot. The details
              included are the snapshot brick path, the volume group (LVM
              details), the status of the snapshot bricks, the PID of the
              bricks, the data percentage filled for the particular volume
              group to which the snapshots belong, and the total size of
              the logical volume.

              If snapname is specified, the status of the mentioned
              snapshot is displayed. If volname is specified, the status of
              all snapshots belonging to that volume is displayed. If
              neither snapname nor volname is specified, the status of all
              the snapshots present in the system is displayed.

       snapshot config [volname] ([snap-max-hard-limit <count>]
       [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])
       | ([activate-on-create <enable|disable>])
              Displays and sets the snapshot config values.

              snapshot config without any keywords displays the snapshot
              config values of all volumes in the system. If volname is
              provided, the snapshot config values of that volume are
              displayed.

              The snapshot config command along with keywords can be used
              to change the existing config values. If volname is provided,
              the config value of that volume is changed; otherwise it will
              set/change the system limit.

              snap-max-soft-limit and auto-delete are global options that
              are inherited by all volumes in the system and cannot be set
              on individual volumes.

              snap-max-hard-limit can be set globally, as well as per
              volume. The lower of the global system limit and the
              volume-specific limit becomes the "Effective
              snap-max-hard-limit" for a volume.

              snap-max-soft-limit is a percentage value, which is applied
              to the "Effective snap-max-hard-limit" to get the "Effective
              snap-max-soft-limit".

              When the auto-delete feature is enabled, then upon reaching
              the "Effective snap-max-soft-limit", with every successful
              snapshot creation the oldest snapshot will be deleted.

              When the auto-delete feature is disabled, then upon reaching
              the "Effective snap-max-soft-limit", the user gets a warning
              with every successful snapshot creation.

              When the auto-delete feature is disabled, then upon reaching
              the "Effective snap-max-hard-limit", further snapshot
              creations will not be allowed.

              activate-on-create is disabled by default. If you enable
              activate-on-create, snapshots will be activated at creation
              time.

       snapshot activate <snapname>
              Activates the mentioned snapshot.

              Note: By default a snapshot is not activated during snapshot
              creation; see the activate-on-create config option above.

       snapshot deactivate <snapname>
              Deactivates the mentioned snapshot.

       snapshot help
              Display help for the snapshot commands.

   Self-heal Commands
       volume heal <VOLNAME>
              Triggers index self-heal for the files that need healing.

       volume heal <VOLNAME> [enable | disable]
              Enable/disable the self-heal-daemon for volume <VOLNAME>.

       volume heal <VOLNAME> full
              Triggers self-heal on all the files.

       volume heal <VOLNAME> info
              Lists the files that need healing.

       volume heal <VOLNAME> info split-brain
              Lists the files which are in split-brain state.

       volume heal <VOLNAME> statistics
              Lists the crawl statistics.

       volume heal <VOLNAME> statistics heal-count
              Displays the count of files to be healed.

       volume heal <VOLNAME> statistics heal-count replica
       <HOSTNAME:BRICKNAME>
              Displays the number of files to be healed from the particular
              replica subvolume to which the brick <HOSTNAME:BRICKNAME>
              belongs.

       volume heal <VOLNAME> split-brain bigger-file <FILE>
              Performs healing of <FILE>, which is in split-brain, by
              choosing the bigger file in the replica as the source.

       volume heal <VOLNAME> split-brain source-brick
       <HOSTNAME:BRICKNAME>
              Selects <HOSTNAME:BRICKNAME> as the source for all the files
              that are in split-brain in that replica and heals them.

       volume heal <VOLNAME> split-brain source-brick
       <HOSTNAME:BRICKNAME> <FILE>
              Selects the split-brained <FILE> present on
              <HOSTNAME:BRICKNAME> as the source and completes the heal.

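              For example, to resolve all split-brained files in a replica
              by declaring one brick authoritative (the volume name,
              hostname and brick path are placeholders):

                     # gluster volume heal myvol split-brain \
                           source-brick server1:/data/brick1
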
   Other Commands
       get-state [<daemon>] [[odir </path/to/output/dir/>] [file
       <filename>]] [detail|volumeoptions]
              Get the local state representation of the mentioned daemon
              and store the data at the provided path.

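              For example, to write the local glusterd state to a chosen
              file (the output directory and file name are placeholders):

                     # gluster get-state glusterd odir /tmp file \
                           gluster-state
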
       help   Display the command options.

       quit   Exit the gluster command line interface.

FILES
       /var/lib/glusterd/*

SEE ALSO
       fusermount(1), mount.glusterfs(8), glusterfs(8), glusterd(8)

COPYRIGHT
       Copyright(c) 2006-2011 Gluster, Inc. <http://www.gluster.com>

07 March 2011            Gluster command line utility               Gluster(8)