gfs2(5)                       File Formats Manual                      gfs2(5)

NAME

       gfs2 - GFS2 reference guide

SYNOPSIS

       Overview of the GFS2 filesystem

DESCRIPTION

       GFS2 is a clustered filesystem, designed for sharing data between
       multiple nodes connected to a common shared storage device. It can
       also be used as a local filesystem on a single node, but since the
       design is aimed at clusters, this will usually result in lower
       performance than a filesystem designed specifically for single-node
       use.

       GFS2 is a journaling filesystem, and one journal is required for
       each node that will mount the filesystem. The one exception is
       spectator mounts, which are equivalent to mounting a read-only block
       device and as such can neither recover a journal nor write to the
       filesystem, so they do not require a journal to be assigned to them.

       The GFS2 documentation has been split into a number of sections:

       mkfs.gfs2(8)  Create a GFS2 filesystem
       fsck.gfs2(8)  The GFS2 filesystem checker
       gfs2_grow(8)  Grow a GFS2 filesystem
       gfs2_jadd(8)  Add a journal to a GFS2 filesystem
       tunegfs2(8)   Manipulate GFS2 superblocks
       gfs2_edit(8)  A GFS2 debugging tool (use with caution)

MOUNT OPTIONS

       lockproto=LockProtoName
              This specifies which inter-node lock protocol is used by the
              GFS2 filesystem for this mount, overriding the default lock
              protocol name stored in the filesystem's on-disk superblock.

              The LockProtoName must be one of the supported locking
              protocols; currently these are lock_nolock and lock_dlm.

              The default lock protocol name is initially written to disk
              when creating the filesystem with the mkfs.gfs2(8) -p option.
              It can be changed on-disk with the tunegfs2(8) command.

              The lockproto mount option should be used only under special
              circumstances in which you want to temporarily use a
              different lock protocol without changing the on-disk default.
              Using the incorrect lock protocol on a cluster filesystem
              mounted from more than one node will almost certainly result
              in filesystem corruption.

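              For example, to temporarily mount the filesystem without
              cluster locking on a single node (the device and mount point
              here are placeholder names):

              mount -o lockproto=lock_nolock /dev/vg0/gfs2vol /mnt/gfs2
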
       locktable=LockTableName
              This specifies the identity of the cluster and of the
              filesystem for this mount, overriding the default
              cluster/filesystem identity stored in the filesystem's
              on-disk superblock. The cluster/filesystem name is recognized
              globally throughout the cluster, and establishes a unique
              namespace for the inter-node locking system, enabling the
              mounting of multiple GFS2 filesystems.

              The format of LockTableName is lock-module-specific. For
              lock_dlm, the format is clustername:fsname. For lock_nolock,
              the field is ignored.

              The default cluster/filesystem name is initially written to
              disk when creating the filesystem with the mkfs.gfs2(8) -t
              option. It can be changed on-disk with the tunegfs2(8)
              command.

              The locktable mount option should be used only under special
              circumstances in which you want to mount the filesystem in a
              different cluster, or mount it as a different filesystem
              name, without changing the on-disk default.

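              For example, to mount the filesystem in a different cluster
              without rewriting the superblock (the cluster name,
              filesystem name, device and mount point here are
              placeholders):

              mount -o locktable=othercluster:myfs /dev/vg0/gfs2vol /mnt/gfs2
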
       localflocks
              This flag tells GFS2 that it is running as a local (not
              clustered) filesystem, so it can allow the kernel VFS layer
              to do all flock and fcntl file locking. When running in
              cluster mode, these file locks require inter-node locks, and
              require the support of GFS2. When running locally, better
              performance is achieved by letting the VFS handle the whole
              job.

              This is turned on automatically by the lock_nolock module.

       errors=[panic|withdraw]
              Setting errors=panic causes GFS2 to oops when encountering an
              error that would otherwise cause the mount to withdraw or
              print an assertion warning. The default setting is
              errors=withdraw. This option should not be used in a
              production system. It replaces the earlier debug option on
              kernel versions 2.6.31 and above.

       acl    Enables POSIX Access Control List acl(5) support within GFS2.

       spectator
              Mount this filesystem using a special form of read-only
              mount. The mount does not use one of the filesystem's
              journals, and the node is unable to recover journals for
              other nodes.

       norecovery
              A synonym for spectator.

       suiddir
              Sets the owner of any newly created file or directory to that
              of the parent directory, if the parent directory has the
              S_ISUID permission bit set. Sets S_ISUID in any new directory
              if its parent directory's S_ISUID is set. Strips all
              execution bits on a new file if the parent directory's owner
              differs from the owner of the process creating the file. Set
              this option only if you know why you are setting it.

       quota=[off|account|on]
              Turns quotas on or off for a filesystem. Setting quotas to
              the "account" state causes the per-UID/GID usage statistics
              to be correctly maintained by the filesystem, while limit and
              warn values are ignored. The default value is "off".

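              For example, to maintain usage statistics without enforcing
              limits (the device and mount point here are placeholders):

              mount -o quota=account /dev/vg0/gfs2vol /mnt/gfs2
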
       discard
              Causes GFS2 to generate "discard" I/O requests for blocks
              which have been freed. These can be used by suitable hardware
              to implement thin-provisioning and similar schemes. This
              feature is supported in kernel version 2.6.30 and above.

       barrier
              This option, which defaults to on, causes GFS2 to send I/O
              barriers when flushing the journal. The option is
              automatically turned off if the underlying device does not
              support I/O barriers. We highly recommend the use of I/O
              barriers with GFS2 at all times, unless the block device is
              designed so that it cannot lose its write cache content
              (e.g. it is on a UPS, or it does not have a write cache).

       commit=secs
              This is similar to the ext3 commit= option in that it sets
              the maximum number of seconds between journal commits if
              there is dirty data in the journal. The default is 60
              seconds. This option is only provided in kernel versions
              2.6.31 and above.

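              For example, to reduce the maximum time between journal
              commits to 10 seconds (the device and mount point here are
              placeholders):

              mount -o commit=10 /dev/vg0/gfs2vol /mnt/gfs2
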
       data=[ordered|writeback]
              When data=ordered is set, the user data modified by a
              transaction is flushed to disk before the transaction is
              committed to disk. This should prevent the user from seeing
              uninitialized blocks in a file after a crash. data=writeback
              mode writes the user data to disk at any time after it is
              dirtied. This does not provide the same consistency guarantee
              as ordered mode, but it should be slightly faster for some
              workloads. The default is ordered mode.

       meta   This option selects the meta filesystem root rather than the
              normal filesystem root. It is normally only used by the GFS2
              utility functions. Altering any file on the GFS2 meta
              filesystem may render the filesystem unusable, so only
              experts in the GFS2 on-disk layout should use this option.

       quota_quantum=secs
              This sets the number of seconds for which a change in the
              quota information may sit on one node before being written to
              the quota file. This is the preferred way to set this
              parameter. The value is an integer number of seconds greater
              than zero. The default is 60 seconds. Shorter settings result
              in faster updates of the lazy quota information and less
              likelihood of someone exceeding their quota. Longer settings
              make filesystem operations involving quotas faster and more
              efficient.

       statfs_quantum=secs
              Setting statfs_quantum to 0 is the preferred way to enable
              the slow version of statfs. The default value is 30 seconds,
              which sets the maximum time period before statfs changes are
              synced to the master statfs file. This can be adjusted to
              allow for faster, less accurate statfs values, or slower,
              more accurate values. When set to 0, statfs will always
              report the true values.

       statfs_percent=value
              This setting provides a bound on the maximum percentage
              change in the statfs information on a local basis before it
              is synced back to the master statfs file, even if the time
              period has not expired. If statfs_quantum is set to 0, this
              setting is ignored.

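              For example, to favor speed over statfs accuracy on a busy
              filesystem (the device and mount point here are
              placeholders):

              mount -o statfs_quantum=60,statfs_percent=10 /dev/vg0/gfs2vol /mnt/gfs2
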
       rgrplvb
              This flag tells GFS2 to look for information about a resource
              group's free space and unlinked inodes in its glock lock
              value block. This keeps GFS2 from having to read in the
              resource group data from disk, speeding up allocations in
              some cases. This option was added in the 3.6 Linux kernel.
              Prior to this kernel, no information was saved to the
              resource group LVB. Note: To safely turn on this option, all
              nodes mounting the filesystem must be running at least a 3.6
              Linux kernel. If any nodes had previously mounted the
              filesystem using older kernels, the filesystem must be
              unmounted on all nodes before it can be mounted with this
              option enabled. This option does not need to be enabled on
              all nodes using a filesystem.

       loccookie
              This flag tells GFS2 to use location-based readdir cookies
              instead of its usual filename-hash readdir cookies. The
              filename-hash cookies are not guaranteed to be unique, and as
              the number of files in a directory increases, so does the
              likelihood of a collision. NFS requires readdir cookies to be
              unique, which can cause problems with very large directories
              (over 100,000 files). With this flag set, GFS2 will try to
              give out location-based cookies. Since the cookie is 31 bits,
              GFS2 will eventually run out of unique cookies and will fall
              back to using hash cookies. The maximum number of files that
              could have unique location cookies, assuming perfectly even
              hashing and names of 8 or fewer characters, is 1,073,741,824.
              An average directory should be able to give out well over
              half a billion location-based cookies. This option was added
              in the 4.5 Linux kernel. Prior to this kernel, GFS2 did not
              add directory entries in a way that allowed it to use
              location-based readdir cookies. Note: To safely turn on this
              option, all nodes mounting the filesystem must be running at
              least a 4.5 Linux kernel. If this option is only enabled on
              some of the nodes mounting a filesystem, the cookies returned
              by nodes using this option will not be valid on nodes that
              are not using it, and vice versa. Finally, when first
              enabling this option on a filesystem that had been previously
              mounted without it, you must make sure that there are no
              outstanding cookies being cached by other software, such as
              NFS.

SETUP

       GFS2 clustering is driven by the dlm, which depends on dlm_controld
       to provide clustering from userspace. dlm_controld clustering is
       built on corosync cluster/group membership and messaging. GFS2 also
       requires clustered LVM, which is provided by lvmlockd or,
       previously, clvmd. Refer to the documentation for each of these
       components and ensure that they are configured before setting up a
       GFS2 filesystem. Also refer to your distribution's documentation for
       any specific support requirements.

       Ensure that gfs2-utils is installed on all nodes which mount the
       filesystem, as it provides scripts required for correct withdraw
       event response.

       1. Create the gfs2 filesystem

       mkfs.gfs2 -p lock_dlm -t cluster_name:fs_name -j num /path/to/storage

       The cluster_name must match the name configured in corosync (and
       thus dlm). The fs_name must be a unique name for the filesystem in
       the cluster. The -j option is the number of journals to create;
       there must be one for each node that will mount the filesystem.

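       For example, for a three-node cluster named "mycluster" (the
       cluster, filesystem and device names here are placeholders):

       mkfs.gfs2 -p lock_dlm -t mycluster:myfs -j 3 /dev/vg0/gfs2vol
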
       2. Mount the gfs2 filesystem

       If you are using a clustered resource manager, see its documentation
       for enabling a gfs2 filesystem resource. Otherwise, run:

       mount /path/to/storage /mountpoint

       Run "dlm_tool ls" to verify the nodes that have each fs mounted.

       3. Shut down

       If you are using a clustered resource manager, see its documentation
       for disabling a gfs2 filesystem resource. Otherwise, run:

       umount -a -t gfs2


SEE ALSO

       mount(8) and umount(8) for general mount information, chmod(1) and
       chmod(2) for access permission flags, acl(5) for access control
       lists, lvm(8) for volume management, dlm_controld(8), dlm_tool(8),
       dlm.conf(5), corosync(8), corosync.conf(5).

                                                                       gfs2(5)