gfs_controld(8)             System Manager's Manual            gfs_controld(8)

NAME

       gfs_controld - daemon that manages mounting, unmounting, recovery and
       posix locks

SYNOPSIS

       gfs_controld [OPTION]...

DESCRIPTION

       GFS lives in the kernel, and the cluster infrastructure (cluster
       membership and group management) lives in user space.  GFS in the
       kernel needs to adjust/recover for certain cluster events.  It is the
       job of gfs_controld to receive these events and reconfigure gfs as
       needed.  gfs_controld controls and configures gfs through sysfs files
       that are considered gfs-internal interfaces, not a general API/ABI.

       Mounting, unmounting and node failure are the main cluster events
       that gfs_controld handles.  It also manages the assignment of
       journals to different nodes.  The mount.gfs and umount.gfs programs
       communicate with gfs_controld to join/leave the mount group and
       receive the necessary options for the kernel mount.

       GFS also sends all posix lock operations to gfs_controld for
       processing.  gfs_controld manages cluster-wide posix locks for gfs
       and passes results back to gfs in the kernel.

CONFIGURATION FILE

       Optional cluster.conf settings are placed in the <gfs_controld>
       section.

   Posix locks
       Heavy use of plocks can result in high network load.  The rate at
       which plocks are processed is limited by the plock_rate_limit
       setting, which caps plock performance and bounds the potential
       network load.  This value is the maximum number of plock operations
       a single node will process every second.  To achieve maximum posix
       locking performance, disable the rate limiting by setting it to 0.
       The default value is 100.

         <gfs_controld plock_rate_limit="100"/>

       To optimize performance for repeated locking of the same locks by
       processes on a single node, plock_ownership can be set to 1.  The
       default is 0.  If this is enabled, gfs_controld cannot interoperate
       with older versions that did not support this option.

         <gfs_controld plock_ownership="1"/>

       Three options can be used to tune the behavior of the
       plock_ownership optimization.  All three relate to the caching of
       lock ownership state, and specifically define how aggressively
       cached ownership state is dropped.  More caching of ownership state
       can result in better performance, at the expense of more memory
       usage.

       drop_resources_time is the frequency of drop attempts in
       milliseconds.  Default 10000 (10 sec).

       drop_resources_count is the maximum number of items to drop from the
       cache each time.  Default 10.

       drop_resources_age is the time in milliseconds a cached item should
       be unused before being considered for dropping.  Default 10000 (10
       sec).

         <gfs_controld drop_resources_time="10000" drop_resources_count="10"
          drop_resources_age="10000"/>
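
       The settings above are attributes of a single <gfs_controld>
       element, as the last example shows.  Purely as an illustration, the
       element sits next to the rest of the cluster configuration in
       cluster.conf; the surrounding <cluster> element and its attribute
       values below are assumptions, not taken from this page:

         <cluster name="example" config_version="1">
           <gfs_controld plock_rate_limit="0" plock_ownership="1"
            drop_resources_time="10000" drop_resources_count="10"
            drop_resources_age="10000"/>
           <!-- clusternodes, fence devices, etc. -->
         </cluster>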

OPTIONS

       -D     Run the daemon in the foreground and print debug statements
              to stdout.

       -P     Enable posix lock debugging messages.

       -w     Disable the "withdraw" feature.

       -p     Disable posix lock handling.

       -l <num>
              Limit the rate at which posix lock messages are sent to <num>
              messages per second.  0 disables the limit and results in
              maximum posix lock performance.  Default 100.

       -o <num>
              Enable (1) or disable (0) the plock ownership optimization.
              Default 0.  All nodes must run with the same value.

       -t <ms>
              Ownership cache tuning, drop resources time (milliseconds).
              Default 10000.

       -c <num>
              Ownership cache tuning, drop resources count.  Default 10.

       -a <ms>
              Ownership cache tuning, drop resources age (milliseconds).
              Default 10000.

       -h     Print a help message describing the available options, then
              exit.

       -V     Print version information and exit.
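
       For example, to run the daemon in the foreground with debug output,
       posix lock debugging, the ownership optimization enabled and rate
       limiting disabled (a hypothetical combination, shown only to
       illustrate the flags documented above):

         gfs_controld -D -P -o 1 -l 0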

DEBUGGING

       The gfs_controld daemon keeps a circular buffer of debug messages
       that can be dumped with the 'group_tool dump gfs' command.

       The state of all gfs posix locks can also be dumped from
       gfs_controld with the 'group_tool dump plocks <fsname>' command.
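
       For example (substitute the actual file system name for <fsname>):

         group_tool dump gfs
         group_tool dump plocks <fsname>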

SEE ALSO

       groupd(8), group_tool(8)
                                                               gfs_controld(8)