gdeploy.conf(5)                    File Formats                    gdeploy.conf(5)

NAME
gdeploy.conf - Configuration file format for the gdeploy(1) tool

DESCRIPTION
The configuration file for gdeploy(1) can have any name, as long as it
is passed to gdeploy(1) with the -c option. Refer to the gdeploy(1)
man page for the list of gdeploy options.

The configuration file is conceptually split into four parts.

Inventory
- The [hosts] section.

Backend
- The [disktype], [diskcount], [vgs], [pools], [backend-reset], ...
and other sections. These sections are used in the configuration file
based on the use case.

Volume
- The [volume], [peer], [clients], ... sections. These define the
volume options and the clients on which the volume is to be mounted.

Features
- The [snapshot], [quota], [geo-replication], ... sections. This is a
growing list.

INVENTORY
The inventory contains the IP addresses/hostnames of all the machines
that form the trusted storage pool.

[hosts]
- This is a mandatory section; hostnames/IP addresses are listed one
per line.

Example:

[hosts]
10.70.47.121
10.70.47.122

BACKEND

[backend-setup]
[backend-setup:<hostname>/<ip>]
The [backend-setup] section is used to configure the disks on all the
hosts mentioned in the [hosts] section. If the disk names vary from
host to host, then [backend-setup:<hostname>/<ip>] can be used to set
up the backend on that particular host.

backend-setup supports the following variables:

devices
The disks that are to be configured, e.g. /dev/sdb,/dev/sdc. This is a
mandatory variable.

vgs
VG names to be used. This variable is optional; if not given,
GLUSTER_vg{n} will be used, where n is a number starting from 1.

pools
Thinpool names to be used. This variable is optional; if not given,
default pool names will be used.

lvs
LV names to be used. This variable is optional; if not given, default
LV names will be used.

mountpoints
Mountpoints where the LVs have to be mounted.

brick_dirs
Brick directories to be used while creating a gluster volume. The
directories will be created if not already present.

Examples:

1. Backend setup for all the hosts listed in [hosts] section.

[backend-setup]
devices=/dev/sdb
vgs=CUSTOM_vg1
pools=CUSTOM_pool1
lvs=CUSTOM_lv1
mountpoints=/gluster/brick1
brick_dirs=glusterbrick1

2. Backend setup for a particular host 10.70.47.122

[backend-setup:10.70.47.122]
devices=/dev/sdc
vgs=CUSTOM_vg1
pools=CUSTOM_pool1
lvs=CUSTOM_lv1
mountpoints=/gluster/brick1
brick_dirs=glusterbrick1

[disktype]
Specifies which disk configuration is used while setting up the
backend. Supports RAID 10, RAID 6, and JBOD configurations. This
section is optional; if omitted, JBOD is assumed by default. The
configuration in this section applies to all the hosts in the [hosts]
section.

Examples:

1. RAID6

[disktype]
RAID6

2. RAID10

[disktype]
RAID10

[diskcount]
Specifies the number of data disks in the setup. This is a mandatory
field if the disk configuration specified is either RAID 10 or RAID 6,
and it is ignored for JBOD.

[stripesize]
Specifies the stripe_unit size in KB. This is a mandatory field if the
disk configuration is RAID 6. If it is not specified for RAID 10
configurations, the default value of 256 is used. This field is not
necessary for JBOD configurations. Do not add any suffix like K, KB,
M, etc.
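
For instance, a RAID 6 backend with 10 data disks and a 128KB stripe
unit could be described as below (the disk count and stripe size are
illustrative values):

[disktype]
RAID6

[diskcount]
10

[stripesize]
128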

[backend-reset] / [backend-reset:<hostname>/<ip>]
This section allows resetting the backend on remote machines. A
backend reset includes unmounting of LVs and deletion of LVs, VGs,
and PVs.

NOTE: Use this feature cautiously.

backend-reset supports the following variables:

pvs
Physical volumes that have to be wiped out (including LVs and VGs).

lvs
Logical volumes that have to be wiped out (VGs and PVs are not wiped
out).

vgs
Volume groups that have to be wiped out (PVs are not wiped out).

unmount
(yes/no) Unmount the specified mountpoints.

mountpoints
List of mountpoints that have to be unmounted.

Examples:

1. Unmount bricks without deleting VG/PV/LV

[backend-reset]
mountpoints=/dev/GLUSTER_vg1/GLUSTER_lv1,/dev/GLUSTER_vg2/GLUSTER_lv2
unmount=yes

On a particular host:

[backend-reset:10.70.47.122]
mountpoints=/dev/GLUSTER_vg1/GLUSTER_lv1,/dev/GLUSTER_vg2/GLUSTER_lv2
unmount=yes

2. Remove the logical volumes on all hosts

[backend-reset]
lvs=GLUSTER_lv{1,2}
unmount=yes

3. Remove VGs and associated LVs on all the hosts

[backend-reset]
vgs=GLUSTER_vg1,GLUSTER_vg2
unmount=yes

4. Remove the PVs, VGs, and LVs on all hosts

[backend-reset]
pvs=/dev/sdb,/dev/vdb
unmount=yes

5. Remove the PVs, VGs, and LVs on a particular host

[backend-reset:10.70.47.122]
pvs=/dev/sdb,/dev/vdb
unmount=yes

VOLUME

[peer]
The peer section specifies the configuration for Trusted Storage Pool
management. This section has one variable:

manage
The allowed values for this variable are probe, detach, and ignore.

probe - probes the peer
detach - detaches the peer
ignore - skips the peer probing step

Examples:

1. Probe all the machines listed in the [hosts] section

[peer]
manage=probe

2. Detach the peers

[peer]
manage=detach

[volume]
The volume section specifies the configuration options for the volume.
This section supports the following variables:

volname
Name of the volume; this is an optional variable. If no value is
given, `glustervol' is used. If the volume has to be started or
stopped, volname should be of the format <hostname|ip>:name, e.g.
10.70.47.122:glustervol

action
The action to be performed on the volume. Possible values are create,
delete, add-brick, remove-brick, rebalance, and set. add-brick adds a
brick to the volume; if this is set as the action, the extra variable
`bricks' should be set with a list of bricks to be added. remove-brick
removes a brick from the volume; again, if remove-brick is set, the
extra option `bricks' with a comma-separated list of brick names (in
the format <hostname>:<brick path>) should be provided.

In the case of remove-brick and rebalance, the `state' option should
also be provided. Choices for `state' are:

For remove-brick: [start, stop, commit, force]
For rebalance: [start, stop, fix-layout]

bricks
This option is set when add-brick or remove-brick is the action. Its
value is a comma-separated list of brick names in the format
<hostname>:<brick path>

state
This option is set when remove-brick or rebalance is the action.
Choices for `state' are:

For remove-brick: [start, stop, commit, force]
For rebalance: [start, stop, fix-layout]

transport
This option specifies the transport type. Default is tcp. Options are
`tcp', `rdma', or `tcp,rdma'.

replica
Specifies whether the volume is a replicated volume. Possible values
are [yes, no].

replica_count
The replica count; possible values are [2, 3].

disperse
Specifies whether the volume should be a dispersed volume. Possible
values are [yes, no].

disperse_count
Optional argument. If not given, the number of bricks specified in the
command line is taken as the disperse_count value.

redundancy_count
If `redundancy_count' is not specified and `disperse' is yes, its
default value is computed so that it generates an optimal
configuration.

force
Force volume creation without any questions on brick directories.
Default is `no'.
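
As an illustration, the sketch below removes a brick from a volume and
commits the operation; the hostname, volume name, and brick path are
hypothetical:

[volume]
action=remove-brick
volname=10.70.47.121:glustervol
bricks=10.70.47.121:/gluster/brick/brick1
state=commit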

[clients]
This section is used to mount the gluster volumes. The `clients'
section supports the following variables:

action
Specifies whether to mount or unmount a gluster volume. Supported
values are [mount, umount]

volname
Name of the volume to be mounted. This is optional if the volume name
is mentioned in the [volume] section.

hosts
This is a mandatory field; hostnames or IP addresses are listed as
comma-separated values. A range of IP addresses can be given, e.g.
10.70.46.1{3,9} covers the IP addresses 10.70.46.13 through
10.70.46.19

fstype
The option `fstype' specifies the mount protocol. Choices are:
[glusterfs, nfs] (Default is glusterfs)

client_mount_points
Specifies the client mount points. Each client can have a separate
mountpoint; in that case, the mountpoints have to be listed as
comma-separated values.
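
A minimal mount sketch, mirroring the example at the end of this page
(the host and mountpoint are illustrative):

[clients]
action=mount
volname=glustervol
hosts=10.70.46.15
fstype=glusterfs
client_mount_points=/mnt/mountpointname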

FEATURES

[snapshot]
This section allows configuring snapshot operations. It supports the
following variables:

action
Supports the following snapshot operations: [create, delete, clone,
config, restore].

volname
Volume on which the snapshot operation has to be performed. This is an
optional variable if the volume name is set in the [volume] section.

snapname
The name of the snapshot on which the `action' has to be performed.

snap_max_soft_limit
To set this variable, action has to be set to config.
snap_max_soft_limit is a percentage value, which is applied on the
"Effective snap-max-hard-limit" to get the "Effective
snap-max-soft-limit". When the auto-delete feature is enabled, then
upon reaching the "Effective snap-max-soft-limit", with every
successful snapshot creation, the oldest snapshot is deleted.

snap_max_hard_limit
To set this variable, action has to be set to config.
snap_max_hard_limit is a number.

auto_delete
When enabled, upon reaching the "Effective snap-max-soft-limit", with
every successful snapshot creation, the oldest snapshot is deleted.
Possible values are [enable, disable].

activate_on_create
Possible values are [enable, disable]. Enables the snapshot on
creation.
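
For instance, a sketch that creates a snapshot of a volume; the volume
and snapshot names are hypothetical:

[snapshot]
action=create
volname=glustervol
snapname=glustervol_snap1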

[quota]
This section is used to set quota limits on directories on the
mountpoint. The quota section supports the following variables:

action
Supports the following values: [enable, disable, remove,
remove-objects, default-soft-limit, limit-usage, limit-objects,
alert-time, soft-timeout, hard-timeout].

volname
Name of the volume. This is an optional variable and can be omitted if
the `volume' section contains the volume name. If provided, the value
should look like <ip>:<volname>.
Example: 10.70.46.15:glustervol

path
Path on which the quota has to be set. The value should be a
comma-separated list of paths if more than one directory is mentioned.

percent
Percentage of the size of the volume.

size
Size in MB, GB ...

number
The object count that should be allowed.

time
Time in seconds to set `alert time', `soft timeout', or `hard timeout'
based on the action.
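
As an example, a sketch that sets a usage limit on a directory; the
path and size are illustrative, and quota is assumed to be already
enabled on the volume (action=enable):

[quota]
action=limit-usage
volname=10.70.46.15:glustervol
path=/main_dir
size=10GB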

[geo-replication]
Geo-replication supports the following variables:

action
Sets up or manages geo-replication. The choices available are:
[create, start, stop, delete, pause, resume]

mastervol
Name of the master volume. The format of the value should be
<ip>:<volname>. Eg: 10.70.46.13:mastervolname

slavevol
Name of the slave volume. The format of the value should be
<ip>:<volname>. Eg: 10.70.46.26:slavevolname

force
`yes' will force the volume creation.

The following configuration options are provided to configure a
geo-replication session:

gluster-log-file
The path to the geo-replication glusterfs log file.

gluster-log-level
The log level for glusterfs processes.

log-file
The path to the geo-replication log file.

log-level
The log level for geo-replication.

ssh-command
The SSH command to connect to the remote machine (the default is ssh).

rsync-command
The rsync command to use for synchronizing the files (the default is
rsync).

use-tarssh
The use-tarssh option allows tar over the Secure Shell protocol. Use
this option to handle workloads of files that have not undergone
edits. The value of this option can be [true, false]

volume-id
The option to delete the existing master UID for the
intermediate/slave node. The value of this option should be a UID

timeout
The timeout period in seconds.

sync-jobs
The number of simultaneous files/directories that can be synchronized.

ignore-deletes
If this option is set to 1, a file deleted on the master will not
trigger a delete operation on the slave.

checkpoint
Sets a checkpoint with the given value. If the option is set as `now',
then the current time is used as the label.

If the value of any of the above options (other than volume-id) is set
to `reset', the setting of that config option is deleted.
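
For instance, a sketch that creates a geo-replication session between
two volumes; the addresses and volume names are hypothetical:

[geo-replication]
action=create
mastervol=10.70.46.13:mastervolname
slavevol=10.70.46.26:slavevolname
force=yes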

[RH-subscription]
This section is used to configure Red Hat Subscription Management:
registering, attaching to a pool, enabling and disabling repos, and
unregistering from RHSM. It allows the following variables:

action
Allowed values for action: [register, attach-pool, enable-repos,
disable-repos, unregister]

username
Username for RHSM

password
Password for RHSM

auto-attach
true, if product certificates are available at /etc/pki/product/

pool
Pool id for RHSM

repos
Comma-separated list of repos
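
For instance, a sketch that registers the hosts with RHSM; the
credentials are placeholders:

[RH-subscription]
action=register
username=<rhsm-username>
password=<rhsm-password>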

[yum]
Install or remove packages using yum. yum supports the following
variables:

action
Currently supports the values [install, remove].

repos
Comma-separated list of repos to install from.

packages
List of packages to be installed.
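
A minimal sketch that installs packages on all hosts; the package
names are illustrative:

[yum]
action=install
packages=glusterfs,glusterfs-fuse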

[firewalld]
This section allows the addition or deletion of ports in either the
running or the permanent firewalld rules. The following variables are
supported:

action
Supports the values [add-ports, delete-ports]

ports
The value format for ports is <port>/<protocol>. Eg: 8081/tcp,
161-162/udp

permanent
Supports [true, false]. true makes the change permanent.

zone
Possible values for zone are [drop, block, public, external, dmz,
work, home, internal, trusted]
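
For instance, a sketch that permanently opens the glusterd management
port (24007/tcp) in the public zone:

[firewalld]
action=add-ports
ports=24007/tcp
permanent=true
zone=public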

[ctdb]
The variables supported by ctdb include:

action
List of supported actions: [`setup', `start', `stop', `enable',
`disable']

public_address
The public IP address and interface.
eg: 192.168.1.{1,4}/24 eth1;eth2,192.168.1.1/24 eth2

CTDB_PUBLIC_ADDRESSES
Default: /etc/ctdb/public_addresses

CTDB_NODES
Default: /etc/ctdb/nodes

CTDB_MANAGES_SAMBA
Default: no

CTDB_SET_DeterministicIPs
Default: 1

CTDB_SET_RecoveryBanPeriod
Default: 120

CTDB_SET_KeepaliveInterval
Default: 5

CTDB_SET_KeepaliveLimit
Default: 5

CTDB_SET_MonitorInterval
Default: 15

CTDB_RECOVERY_LOCK
Default: /mnt/lock/reclock
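
For instance, a minimal sketch that sets up CTDB using only the
variables documented above; the address and interface are illustrative
and a real deployment may need additional settings:

[ctdb]
action=setup
public_address=192.168.1.1/24 eth1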

For more documentation on ctdb, refer to the documentation under
/usr/share/doc/gdeploy/examples/gluster.conf.sample

EXAMPLES

Create a 2x2 gluster volume:

[hosts]
10.70.46.13
10.70.46.17

# Common backend setup for 2 of the hosts.
[backend-setup]
brick_dirs=/gluster/brick/brick{1,2}

[volume]
action=create
volname=sample_volname
replica=yes
replica_count=2
force=yes

[clients]
action=mount
hosts=10.70.46.15
fstype=glusterfs
client_mount_points=/mnt/mountpointname

FILES

/usr/share/doc/gdeploy/examples/
/usr/share/doc/gdeploy/README.md
/usr/share/doc/gdeploy/examples/gluster.conf.sample

SEE ALSO

gdeploy(1)


23 December 2015                                                  gdeploy.conf(5)