HUGEADM(8)                  System Manager's Manual                 HUGEADM(8)

NAME

       hugeadm - Configure the system huge page pools

SYNOPSIS

       hugeadm [options]

DESCRIPTION

       hugeadm displays and configures the system's huge page pools. The size
       of each pool is set as a minimum and maximum threshold. The minimum
       value is allocated up front by the kernel and guaranteed to remain as
       hugepages until the pool is shrunk. If a maximum is set, the system
       will dynamically allocate pages if applications request more hugepages
       than the minimum size of the pool. There is no guarantee that more
       pages than this minimum pool size can be allocated.

       The following options create hugetlbfs mount points.

       --create-mounts

              This creates mount points for each supported huge page size
              under /var/lib/hugetlbfs.  After creation they are mounted and
              are owned by root:root with permissions set to 770.  Each mount
              point is named pagesize-<size in bytes>.
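              As a sketch, on a system that supports 2MB and 1GB pages the
              command and the resulting mount point names might look like
              this (the names depend on which page sizes the system
              actually supports; all commands require root):

```shell
# Create root-owned mount points for every supported huge page size.
hugeadm --create-mounts

# Inspect the results; each mount point is named pagesize-<bytes>,
# e.g. pagesize-2097152 for 2MB pages, pagesize-1073741824 for 1GB.
ls /var/lib/hugetlbfs
```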


       --create-user-mounts=<user>

              This creates mount points for each supported huge page size
              under /var/lib/hugetlbfs/user/<user>.  Mount point naming is
              the same as --create-mounts.  After creation they are mounted
              and are owned by <user>:root with permissions set to 700.

       --create-group-mounts=<group>

              This creates mount points for each supported huge page size
              under /var/lib/hugetlbfs/group/<group>.  Mount point naming is
              the same as --create-mounts.  After creation they are mounted
              and are owned by root:<group> with permissions set to 070.

       --create-global-mounts

              This creates mount points for each supported huge page size
              under /var/lib/hugetlbfs/global.  Mount point naming is the
              same as --create-mounts.  After creation they are mounted and
              are owned by root:root with permissions set to 1777.
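              To illustrate the three variants together (a sketch; "hadmin"
              and "dbgroup" are hypothetical user and group names standing
              in for real accounts on the system):

```shell
# Per-user mounts, owned hadmin:root, mode 700:
hugeadm --create-user-mounts=hadmin    # /var/lib/hugetlbfs/user/hadmin/...

# Per-group mounts, owned root:dbgroup, mode 070:
hugeadm --create-group-mounts=dbgroup  # /var/lib/hugetlbfs/group/dbgroup/...

# World-usable sticky mounts, owned root:root, mode 1777:
hugeadm --create-global-mounts         # /var/lib/hugetlbfs/global/...
```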

       The following options affect how mount points are created.

       --max-size

              This option is used in conjunction with --create-*-mounts. It
              limits the maximum amount of memory used by files within the
              mount point, rounded up to the nearest huge page size. This can
              be used for example to grant different huge page quotas to
              individual users or groups.

       --max-inodes

              This option is used in conjunction with --create-*-mounts. It
              limits the number of inodes (e.g. files) that can be created
              on the new mount points, which in turn limits the number of
              mappings that can be created on a mount point. It could be
              used for example to limit the number of application instances
              that use a mount point, provided it is known how many inodes
              each application instance requires.
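              For example (a sketch; the user name and the limits are
              purely illustrative), a quota-limited per-user mount could be
              created like this:

```shell
# Give the hypothetical user "hadmin" mount points capped at 1GB of
# huge-page-backed file data and at most 8 files (inodes) each.
hugeadm --create-user-mounts=hadmin --max-size=1G --max-inodes=8
```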
       The following options display information about the pools.

       --pool-list

              This displays the Minimum, Current and Maximum number of huge
              pages in the pool for each pagesize supported by the system.
              The "Minimum" value is the size of the static pool and there
              will always be at least this number of hugepages in use by the
              system, either by applications or kept by the kernel in a
              reserved pool. The "Current" value is the number of hugepages
              currently in use, either by applications or stored on the
              kernel's free list. The "Maximum" value is the largest number
              of hugepages that can be in use at any given time.
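              The output has one row per supported page size; the numbers
              below are illustrative only and will differ on any real
              system:

```shell
hugeadm --pool-list
# Illustrative output (sizes in bytes; counts vary by configuration):
#       Size  Minimum  Current  Maximum  Default
#    2097152       64       64      128        *
# 1073741824        0        0        0
```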

       --set-recommended-min_free_kbytes

              Fragmentation avoidance in the kernel depends on avoiding
              pages of different mobility types being mixed within a
              pageblock arena - typically the size of the default huge page
              size. The more mixing that occurs, the less likely the huge
              page pool will be able to dynamically resize. The easiest
              means of avoiding mixing is to increase
              /proc/sys/vm/min_free_kbytes.  This parameter sets
              min_free_kbytes to a recommended value to aid fragmentation
              avoidance.

       --set-recommended-shmmax

              The maximum shared memory segment size should be set to at
              least the size of the largest shared memory segment size you
              want available for applications using huge pages, via
              /proc/sys/kernel/shmmax. Optionally, it can be set
              automatically to match the maximum possible size of all huge
              page allocations and thus the maximum possible shared memory
              segment size, using this switch.

       --set-shm-group=<gid|groupname>

              Users in the group specified in /proc/sys/vm/hugetlb_shm_group
              are granted full access to huge pages. The sysctl takes a
              numeric gid, but this hugeadm option can set it for you, using
              either a gid or group name.
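              Taken together, a typical shared-memory setup might be
              sketched as follows (the group name "dbgroup" is
              hypothetical; all commands require root):

```shell
# Tune min_free_kbytes for fragmentation avoidance, raise shmmax to
# cover the huge page pools, and grant the hypothetical group
# "dbgroup" access to huge-page-backed shared memory.
hugeadm --set-recommended-min_free_kbytes
hugeadm --set-recommended-shmmax
hugeadm --set-shm-group=dbgroup
```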

       --page-sizes

              This displays every page size that is supported by the system
              and has a pool configured.

       --page-sizes-all

              This displays all page sizes supported by the system, even if
              no pool is available.

       --list-all-mounts

              This displays all active mount points for hugetlbfs.


       The following options configure the pool.

       --pool-pages-min=<size|DEFAULT>:[+|-]<pagecount|memsize<G|M|K>>

              This option sets or adjusts the Minimum number of hugepages in
              the pool for pagesize size. size may be specified in bytes or
              in kilobytes, megabytes, or gigabytes by appending K, M, or G
              respectively, or as DEFAULT, which uses the system's default
              huge page size for size. The pool size adjustment can be
              specified by pagecount pages or by memsize, if postfixed with
              G, M, or K, for gigabytes, megabytes, or kilobytes,
              respectively. If the adjustment is specified via memsize, then
              the pagecount will be calculated for you, based on page size
              size.  The pool is set to pagecount pages if + or - are not
              specified. If + or - are specified, then the size of the pool
              will adjust by that amount. Note that there is no guarantee
              that the system can allocate the hugepages requested for the
              Minimum pool. The size of the pools should be checked after
              executing this command to ensure the adjustments were
              successful.
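              For instance (counts are illustrative; the 2M examples assume
              the system supports 2MB pages):

```shell
hugeadm --pool-pages-min DEFAULT:64   # static pool of 64 default-size pages
hugeadm --pool-pages-min 2M:+1G       # grow the 2MB pool by 1GB (512 pages)
hugeadm --pool-pages-min 2M:-128      # shrink the 2MB pool by 128 pages
hugeadm --pool-list                   # verify the adjustments succeeded
```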

       --obey-numa-mempol

              This option requests that allocation of huge pages to the
              static pool with --pool-pages-min obey the NUMA memory policy
              of the current process. This policy can be explicitly
              specified using numactl or inherited from a parent process.

       --pool-pages-max=<size|DEFAULT>:[+|-]<pagecount|memsize<G|M|K>>

              This option sets or adjusts the Maximum number of hugepages.
              Note that while the Minimum number of pages is guaranteed to
              be available to applications, there is no guarantee that the
              system can allocate the pages on demand when the number of
              huge pages requested by applications is between the Minimum
              and Maximum pool sizes. See --pool-pages-min for usage syntax.

       --enable-zone-movable

              This option enables the use of the MOVABLE zone for the
              allocation of hugepages. This zone is created when kernelcore=
              or movablecore= are specified on the kernel command line, but
              the zone is not used for the allocation of huge pages by
              default, as the intended use for the zone may be to guarantee
              that memory can be off-lined and hot-removed. The kernel
              guarantees that the pages within this zone can be reclaimed,
              unlike some kernel buffers for example. Unless pages are
              locked with mlock(), the hugepage pool can grow to at least
              the size of the movable zone once this option is set. Use
              sysctl to permanently enable the use of the MOVABLE zone for
              the allocation of huge pages.

       --disable-zone-movable

              This option disables the use of the MOVABLE zone for the
              future allocation of huge pages. Note that existing huge pages
              are not reclaimed from the zone.  Use sysctl to permanently
              disable the use of the MOVABLE zone for the allocation of
              huge pages.

       --hard

              This option is specified with --pool-pages-min to retry
              allocations multiple times on failure to allocate the desired
              count of pages. It initially tries to resize the pool up to 5
              times and continues to try if progress is being made towards
              the resize.

       --add-temp-swap<=count>

              This option is specified with --pool-pages-min to initialize a
              temporary swap file for the duration of the pool resize. When
              increasing the size of the pool, it can be necessary to
              reclaim pages so that contiguous memory is freed, and this
              often requires swap to be successful. Swap is only created for
              a positive resize, and is then removed once the resize
              operation is completed. The default swap size is 5 huge pages;
              the optional argument <count> sets the swap size to <count>
              huge pages.

       --add-ramdisk-swap

              This option is specified with --pool-pages-min to initialize
              swap in memory on ram disks.  When increasing the size of the
              pool, it can be necessary to reclaim pages so that contiguous
              memory is freed, and this often requires swap to be
              successful.  If there isn't enough free disk space, swap can
              be initialized in RAM using this option.  If the size of one
              ramdisk is not greater than the huge page size, then swap is
              initialized on multiple ramdisks.  Swap is only created for a
              positive resize, and by default is removed once the resize
              operation is completed.

       --persist

              This option is specified with --add-temp-swap or
              --add-ramdisk-swap to make the swap space persist after the
              resize operation is completed.  The swap spaces can later be
              removed manually using the swapoff command.
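       Putting the resize options together, a best-effort grow of the
       default pool might be sketched as (counts illustrative):

```shell
# Try hard to grow the default pool by 512 pages, creating a temporary
# 16-huge-page swap file to help reclaim contiguous memory; the swap
# file is removed automatically because --persist is not given.
hugeadm --hard --add-temp-swap=16 --pool-pages-min DEFAULT:+512
```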

       The following options affect the verbosity of libhugetlbfs.

       --verbose <level>, -v

              The default value for the verbosity level is 1, and the value
              can be set with --verbose in the range 0 to 99. The higher the
              value, the more verbose the library will be. 0 is quiet and 3
              will output much debugging information. The verbosity level is
              increased by one each time -v is specified.

SEE ALSO

       oprofile(1), pagesize(1), libhugetlbfs(7), hugectl(8)

AUTHORS

       libhugetlbfs was written by various people on the libhugetlbfs-devel
       mailing list.

                                October 1, 2009                     HUGEADM(8)