CGROUPS(7)                 Linux Programmer's Manual                CGROUPS(7)

NAME
       cgroups - Linux control groups

DESCRIPTION
       Control groups, usually referred to as cgroups, are a Linux kernel
       feature which allow processes to be organized into hierarchical
       groups whose usage of various types of resources can then be limited
       and monitored.  The kernel's cgroup interface is provided through a
       pseudo-filesystem called cgroupfs.  Grouping is implemented in the
       core cgroup kernel code, while resource tracking and limits are
       implemented in a set of per-resource-type subsystems (memory, CPU,
       and so on).

   Terminology
       A cgroup is a collection of processes that are bound to a set of
       limits or parameters defined via the cgroup filesystem.

       A subsystem is a kernel component that modifies the behavior of the
       processes in a cgroup.  Various subsystems have been implemented,
       making it possible to do things such as limiting the amount of CPU
       time and memory available to a cgroup, accounting for the CPU time
       used by a cgroup, and freezing and resuming execution of the
       processes in a cgroup.  Subsystems are sometimes also known as
       resource controllers (or simply, controllers).

       The cgroups for a controller are arranged in a hierarchy.  This
       hierarchy is defined by creating, removing, and renaming
       subdirectories within the cgroup filesystem.  At each level of the
       hierarchy, attributes (e.g., limits) can be defined.  The limits,
       control, and accounting provided by cgroups generally have effect
       throughout the subhierarchy underneath the cgroup where the
       attributes are defined.  Thus, for example, the limits placed on a
       cgroup at a higher level in the hierarchy cannot be exceeded by
       descendant cgroups.

   Cgroups version 1 and version 2
       The initial release of the cgroups implementation was in Linux
       2.6.24.  Over time, various cgroup controllers have been added to
       allow the management of various types of resources.  However, the
       development of these controllers was largely uncoordinated, with the
       result that many inconsistencies arose between controllers and
       management of the cgroup hierarchies became rather complex.  (A
       longer description of these problems can be found in the kernel
       source file Documentation/cgroup-v2.txt.)

       Because of the problems with the initial cgroups implementation
       (cgroups version 1), starting in Linux 3.10, work began on a new,
       orthogonal implementation to remedy these problems.  Initially
       marked experimental, and hidden behind the -o __DEVEL__sane_behavior
       mount option, the new version (cgroups version 2) was eventually
       made official with the release of Linux 4.5.  Differences between
       the two versions are described in the text below.

       Although cgroups v2 is intended as a replacement for cgroups v1, the
       older system continues to exist (and for compatibility reasons is
       unlikely to be removed).  Currently, cgroups v2 implements only a
       subset of the controllers available in cgroups v1.  The two systems
       are implemented so that both v1 controllers and v2 controllers can
       be mounted on the same system.  Thus, for example, it is possible to
       use those controllers that are supported under version 2, while also
       using version 1 controllers where version 2 does not yet support
       those controllers.  The only restriction here is that a controller
       can't be simultaneously employed in both a cgroups v1 hierarchy and
       in the cgroups v2 hierarchy.

CGROUPS VERSION 1
       Under cgroups v1, each controller may be mounted against a separate
       cgroup filesystem that provides its own hierarchical organization of
       the processes on the system.  It is also possible to comount
       multiple (or even all) cgroups v1 controllers against the same
       cgroup filesystem, meaning that the comounted controllers manage the
       same hierarchical organization of processes.

       For each mounted hierarchy, the directory tree mirrors the control
       group hierarchy.  Each control group is represented by a directory,
       with each of its child control cgroups represented as a child
       directory.  For instance, /user/joe/1.session represents control
       group 1.session, which is a child of cgroup joe, which is a child of
       /user.  Under each cgroup directory is a set of files which can be
       read or written to, reflecting resource limits and a few general
       cgroup properties.

   Tasks (threads) versus processes
       In cgroups v1, a distinction is drawn between processes and tasks.
       In this view, a process can consist of multiple tasks (more commonly
       called threads, from a user-space perspective, and called such in
       the remainder of this man page).  In cgroups v1, it is possible to
       independently manipulate the cgroup memberships of the threads in a
       process.

       The cgroups v1 ability to split threads across different cgroups
       caused problems in some cases.  For example, it made no sense for
       the memory controller, since all of the threads of a process share a
       single address space.  Because of these problems, the ability to
       independently manipulate the cgroup memberships of the threads in a
       process was removed in the initial cgroups v2 implementation, and
       subsequently restored in a more limited form (see the discussion of
       "thread mode" below).

   Mounting v1 controllers
       The use of cgroups requires a kernel built with the CONFIG_CGROUPS
       option.  In addition, each of the v1 controllers has an associated
       configuration option that must be set in order to employ that
       controller.

       In order to use a v1 controller, it must be mounted against a cgroup
       filesystem.  The usual place for such mounts is under a tmpfs(5)
       filesystem mounted at /sys/fs/cgroup.  Thus, one might mount the cpu
       controller as follows:

           mount -t cgroup -o cpu none /sys/fs/cgroup/cpu

       It is possible to comount multiple controllers against the same
       hierarchy.  For example, here the cpu and cpuacct controllers are
       comounted against a single hierarchy:

           mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct

       Comounting controllers has the effect that a process is in the same
       cgroup for all of the comounted controllers.  Separately mounting
       controllers allows a process to be in cgroup /foo1 for one
       controller while being in /foo2/foo3 for another.

       It is possible to comount all v1 controllers against the same
       hierarchy:

           mount -t cgroup -o all cgroup /sys/fs/cgroup

       (One can achieve the same result by omitting -o all, since it is the
       default if no controllers are explicitly specified.)

       It is not possible to mount the same controller against multiple
       cgroup hierarchies.  For example, it is not possible to mount both
       the cpu and cpuacct controllers against one hierarchy, and to mount
       the cpu controller alone against another hierarchy.  It is possible
       to create multiple mount points with exactly the same set of
       comounted controllers.  However, in this case all that results is
       multiple mount points providing a view of the same hierarchy.

       Note that on many systems, the v1 controllers are automatically
       mounted under /sys/fs/cgroup; in particular, systemd(1)
       automatically creates such mount points.

   Unmounting v1 controllers
       A mounted cgroup filesystem can be unmounted using the umount(8)
       command, as in the following example:

           umount /sys/fs/cgroup/pids

       But note well: a cgroup filesystem is unmounted only if it is not
       busy, that is, it has no child cgroups.  If this is not the case,
       then the only effect of the umount(8) is to make the mount
       invisible.  Thus, to ensure that the mount point is really removed,
       one must first remove all child cgroups, which in turn can be done
       only after all member processes have been moved from those cgroups
       to the root cgroup.
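
       For example, the following sketch performs that clean-up for the
       pids hierarchy unmounted above, assuming (hypothetically) that the
       hierarchy contains just one child cgroup, cg1:

           # Move all member processes to the root cgroup (one PID at
           # a time), then remove the empty child cgroup and unmount
           # the filesystem.
           for pid in $(cat /sys/fs/cgroup/pids/cg1/cgroup.procs); do
               echo $pid > /sys/fs/cgroup/pids/cgroup.procs
           done
           rmdir /sys/fs/cgroup/pids/cg1
           umount /sys/fs/cgroup/pids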

   Cgroups version 1 controllers
       Each of the cgroups version 1 controllers is governed by a kernel
       configuration option (listed below).  Additionally, the availability
       of the cgroups feature is governed by the CONFIG_CGROUPS kernel
       configuration option.

       cpu (since Linux 2.6.24; CONFIG_CGROUP_SCHED)
              Cgroups can be guaranteed a minimum number of "CPU shares"
              when a system is busy.  This does not limit a cgroup's CPU
              usage if the CPUs are not busy.  For further information, see
              Documentation/scheduler/sched-design-CFS.txt.

              In Linux 3.2, this controller was extended to provide CPU
              "bandwidth" control.  If the kernel is configured with
              CONFIG_CFS_BANDWIDTH, then within each scheduling period
              (defined via a file in the cgroup directory), it is possible
              to define an upper limit on the CPU time allocated to the
              processes in a cgroup.  This upper limit applies even if
              there is no other competition for the CPU.  Further
              information can be found in the kernel source file
              Documentation/scheduler/sched-bwc.txt.

       cpuacct (since Linux 2.6.24; CONFIG_CGROUP_CPUACCT)
              This provides accounting for CPU usage by groups of
              processes.

              Further information can be found in the kernel source file
              Documentation/cgroup-v1/cpuacct.txt.

       cpuset (since Linux 2.6.24; CONFIG_CPUSETS)
              This controller can be used to bind the processes in a cgroup
              to a specified set of CPUs and NUMA nodes.

              Further information can be found in the kernel source file
              Documentation/cgroup-v1/cpusets.txt.

       memory (since Linux 2.6.25; CONFIG_MEMCG)
              The memory controller supports reporting and limiting of
              process memory, kernel memory, and swap used by cgroups.

              Further information can be found in the kernel source file
              Documentation/cgroup-v1/memory.txt.

       devices (since Linux 2.6.26; CONFIG_CGROUP_DEVICE)
              This supports controlling which processes may create (mknod)
              devices as well as open them for reading or writing.  The
              policies may be specified as whitelists and blacklists.
              Hierarchy is enforced, so new rules must not violate existing
              rules for the target or ancestor cgroups.

              Further information can be found in the kernel source file
              Documentation/cgroup-v1/devices.txt.

       freezer (since Linux 2.6.28; CONFIG_CGROUP_FREEZER)
              The freezer cgroup can suspend and restore (resume) all
              processes in a cgroup.  Freezing a cgroup /A also causes its
              children, for example, processes in /A/B, to be frozen.

              Further information can be found in the kernel source file
              Documentation/cgroup-v1/freezer-subsystem.txt.

       net_cls (since Linux 2.6.29; CONFIG_CGROUP_NET_CLASSID)
              This places a classid, specified for the cgroup, on network
              packets created by a cgroup.  These classids can then be used
              in firewall rules, as well as used to shape traffic using
              tc(8).  This applies only to packets leaving the cgroup, not
              to traffic arriving at the cgroup.

              Further information can be found in the kernel source file
              Documentation/cgroup-v1/net_cls.txt.

       blkio (since Linux 2.6.33; CONFIG_BLK_CGROUP)
              The blkio cgroup controls and limits access to specified
              block devices by applying IO control in the form of
              throttling and upper limits against leaf nodes and
              intermediate nodes in the storage hierarchy.

              Two policies are available.  The first is a
              proportional-weight time-based division of disk implemented
              with CFQ.  This is in effect for leaf nodes using CFQ.  The
              second is a throttling policy which specifies upper I/O rate
              limits on a device.

              Further information can be found in the kernel source file
              Documentation/cgroup-v1/blkio-controller.txt.

       perf_event (since Linux 2.6.39; CONFIG_CGROUP_PERF)
              This controller allows perf monitoring of the set of
              processes grouped in a cgroup.

              Further information can be found in the kernel source file
              tools/perf/Documentation/perf-record.txt.

       net_prio (since Linux 3.3; CONFIG_CGROUP_NET_PRIO)
              This allows priorities to be specified, per network
              interface, for cgroups.

              Further information can be found in the kernel source file
              Documentation/cgroup-v1/net_prio.txt.

       hugetlb (since Linux 3.5; CONFIG_CGROUP_HUGETLB)
              This supports limiting the use of huge pages by cgroups.

              Further information can be found in the kernel source file
              Documentation/cgroup-v1/hugetlb.txt.

       pids (since Linux 4.3; CONFIG_CGROUP_PIDS)
              This controller permits limiting the number of processes that
              may be created in a cgroup (and its descendants).

              Further information can be found in the kernel source file
              Documentation/cgroup-v1/pids.txt.

       rdma (since Linux 4.11; CONFIG_CGROUP_RDMA)
              The RDMA controller permits limiting the use of
              RDMA/IB-specific resources per cgroup.

              Further information can be found in the kernel source file
              Documentation/cgroup-v1/rdma.txt.

   Creating cgroups and moving processes
       A cgroup filesystem initially contains a single root cgroup, '/',
       which all processes belong to.  A new cgroup is created by creating
       a directory in the cgroup filesystem:

           mkdir /sys/fs/cgroup/cpu/cg1

       This creates a new empty cgroup.

       A process may be moved to this cgroup by writing its PID into the
       cgroup's cgroup.procs file:

           echo $$ > /sys/fs/cgroup/cpu/cg1/cgroup.procs

       Only one PID at a time should be written to this file.

       Writing the value 0 to a cgroup.procs file causes the writing
       process to be moved to the corresponding cgroup.

       When writing a PID into the cgroup.procs file, all threads in the
       process are moved into the new cgroup at once.

       Within a hierarchy, a process can be a member of exactly one cgroup.
       Writing a process's PID to a cgroup.procs file automatically removes
       it from the cgroup of which it was previously a member.

       The cgroup.procs file can be read to obtain a list of the processes
       that are members of a cgroup.  The returned list of PIDs is not
       guaranteed to be in order.  Nor is it guaranteed to be free of
       duplicates.  (For example, a PID may be recycled while reading from
       the list.)

       In cgroups v1, an individual thread can be moved to another cgroup
       by writing its thread ID (i.e., the kernel thread ID returned by
       clone(2) and gettid(2)) to the tasks file in a cgroup directory.
       This file can be read to discover the set of threads that are
       members of the cgroup.
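
       For example, the following (using a hypothetical thread ID) moves a
       single thread into the cg1 cgroup created above, leaving the other
       threads of its process in their existing cgroup, and then lists the
       threads in cg1:

           echo 4567 > /sys/fs/cgroup/cpu/cg1/tasks
           cat /sys/fs/cgroup/cpu/cg1/tasks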

   Removing cgroups
       To remove a cgroup, it must first have no child cgroups and contain
       no (nonzombie) processes.  So long as that is the case, one can
       simply remove the corresponding directory pathname.  Note that files
       in a cgroup directory cannot and need not be removed.
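
       Thus, the cg1 cgroup created earlier could be removed as follows
       (the rmdir(1) command fails if the cgroup still has member
       processes or child cgroups):

           rmdir /sys/fs/cgroup/cpu/cg1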

   Cgroups v1 release notification
       Two files can be used to determine whether the kernel provides
       notifications when a cgroup becomes empty.  A cgroup is considered
       to be empty when it contains no child cgroups and no member
       processes.

       A special file in the root directory of each cgroup hierarchy,
       release_agent, can be used to register the pathname of a program
       that may be invoked when a cgroup in the hierarchy becomes empty.
       The pathname of the newly empty cgroup (relative to the cgroup mount
       point) is provided as the sole command-line argument when the
       release_agent program is invoked.  The release_agent program might
       remove the cgroup directory, or perhaps repopulate it with a
       process.

       The default value of the release_agent file is empty, meaning that
       no release agent is invoked.

       The content of the release_agent file can also be specified via a
       mount option when the cgroup filesystem is mounted:

           mount -o release_agent=pathname ...

       Whether or not the release_agent program is invoked when a
       particular cgroup becomes empty is determined by the value in the
       notify_on_release file in the corresponding cgroup directory.  If
       this file contains the value 0, then the release_agent program is
       not invoked.  If it contains the value 1, the release_agent program
       is invoked.  The default value for this file in the root cgroup is
       0.  At the time when a new cgroup is created, the value in this file
       is inherited from the corresponding file in the parent cgroup.
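
       For example, suppose that we install the following script as
       /usr/local/sbin/cgroup-release-agent (a sketch; the agent pathname
       and the choice of the memory hierarchy are hypothetical):

           #!/bin/sh
           # $1 is the pathname of the newly empty cgroup, relative to
           # the mount point of the hierarchy.
           rmdir "/sys/fs/cgroup/memory/$1"

       The agent can then be registered, and notification enabled for the
       cg1 cgroup, as follows:

           echo /usr/local/sbin/cgroup-release-agent > \
                   /sys/fs/cgroup/memory/release_agent
           echo 1 > /sys/fs/cgroup/memory/cg1/notify_on_release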

   Cgroup v1 named hierarchies
       In cgroups v1, it is possible to mount a cgroup hierarchy that has
       no attached controllers:

           mount -t cgroup -o none,name=somename none /some/mount/point

       Multiple instances of such hierarchies can be mounted; each
       hierarchy must have a unique name.  The only purpose of such
       hierarchies is to track processes.  (See the discussion of release
       notification above.)  An example of this is the name=systemd cgroup
       hierarchy that is used by systemd(1) to track services and user
       sessions.

       Since Linux 5.0, the cgroup_no_v1 kernel boot option (described
       below) can be used to disable cgroup v1 named hierarchies, by
       specifying cgroup_no_v1=named.

CGROUPS VERSION 2
       In cgroups v2, all mounted controllers reside in a single unified
       hierarchy.  While (different) controllers may be simultaneously
       mounted under the v1 and v2 hierarchies, it is not possible to mount
       the same controller simultaneously under both the v1 and the v2
       hierarchies.

       The new behaviors in cgroups v2 are summarized here, and in some
       cases elaborated in the following subsections.

       1. Cgroups v2 provides a unified hierarchy against which all
          controllers are mounted.

       2. "Internal" processes are not permitted.  With the exception of
          the root cgroup, processes may reside only in leaf nodes (cgroups
          that do not themselves contain child cgroups).  The details are
          somewhat more subtle than this, and are described below.

       3. Active controllers must be specified via the files
          cgroup.controllers and cgroup.subtree_control.

       4. The tasks file has been removed.  In addition, the
          cgroup.clone_children file that is employed by the cpuset
          controller has been removed.

       5. An improved mechanism for notification of empty cgroups is
          provided by the cgroup.events file.

       For more changes, see the Documentation/cgroup-v2.txt file in the
       kernel source.

       Some of the new behaviors listed above saw subsequent modification
       with the addition in Linux 4.14 of "thread mode" (described below).

   Cgroups v2 unified hierarchy
       In cgroups v1, the ability to mount different controllers against
       different hierarchies was intended to allow great flexibility for
       application design.  In practice, though, the flexibility turned out
       to be less useful than expected, and in many cases added complexity.
       Therefore, in cgroups v2, all available controllers are mounted
       against a single hierarchy.  The available controllers are
       automatically mounted, meaning that it is not necessary (or
       possible) to specify the controllers when mounting the cgroup v2
       filesystem using a command such as the following:

           mount -t cgroup2 none /mnt/cgroup2

       A cgroup v2 controller is available only if it is not currently in
       use via a mount against a cgroup v1 hierarchy.  Or, to put things
       another way, it is not possible to employ the same controller
       against both a v1 hierarchy and the unified v2 hierarchy.  This
       means that it may be necessary first to unmount a v1 controller (as
       described above) before that controller is available in v2.  Since
       systemd(1) makes heavy use of some v1 controllers by default, it can
       in some cases be simpler to boot the system with selected v1
       controllers disabled.  To do this, specify the cgroup_no_v1=list
       option on the kernel boot command line; list is a comma-separated
       list of the names of the controllers to disable, or the word all to
       disable all v1 controllers.  (This situation is correctly handled by
       systemd(1), which falls back to operating without the specified
       controllers.)

       Note that on many modern systems, systemd(1) automatically mounts
       the cgroup2 filesystem at /sys/fs/cgroup/unified during the boot
       process.

   Cgroups v2 controllers
       The following controllers, documented in the kernel source file
       Documentation/cgroup-v2.txt, are supported in cgroups version 2:

       io (since Linux 4.5)
              This is the successor of the version 1 blkio controller.

       memory (since Linux 4.5)
              This is the successor of the version 1 memory controller.

       pids (since Linux 4.5)
              This is the same as the version 1 pids controller.

       perf_event (since Linux 4.11)
              This is the same as the version 1 perf_event controller.

       rdma (since Linux 4.11)
              This is the same as the version 1 rdma controller.

       cpu (since Linux 4.15)
              This is the successor to the version 1 cpu and cpuacct
              controllers.

   Cgroups v2 subtree control
       Each cgroup in the v2 hierarchy contains the following two files:

       cgroup.controllers
              This read-only file exposes a list of the controllers that
              are available in this cgroup.  The contents of this file
              match the contents of the cgroup.subtree_control file in the
              parent cgroup.

       cgroup.subtree_control
              This is a list of controllers that are active (enabled) in
              the cgroup.  The set of controllers in this file is a subset
              of the set in the cgroup.controllers of this cgroup.  The set
              of active controllers is modified by writing strings to this
              file containing space-delimited controller names, each
              preceded by '+' (to enable a controller) or '-' (to disable a
              controller), as in the following example:

                  echo '+pids -memory' > x/y/cgroup.subtree_control

              An attempt to enable a controller that is not present in
              cgroup.controllers leads to an ENOENT error when writing to
              the cgroup.subtree_control file.

       Because the list of controllers in cgroup.subtree_control is a
       subset of those in cgroup.controllers, a controller that has been
       disabled in one cgroup in the hierarchy can never be re-enabled in
       the subtree below that cgroup.

       A cgroup's cgroup.subtree_control file determines the set of
       controllers that are exercised in the child cgroups.  When a
       controller (e.g., pids) is present in the cgroup.subtree_control
       file of a parent cgroup, then the corresponding controller-interface
       files (e.g., pids.max) are automatically created in the children of
       that cgroup and can be used to exert resource control in the child
       cgroups.
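
       The following sketch shows the mechanism end to end, assuming that
       the cgroup2 filesystem is mounted at /mnt/cgroup2 as above, that
       the pids controller is not bound to a v1 hierarchy, and using
       hypothetical cgroup names:

           mkdir /mnt/cgroup2/grp1
           echo '+pids' > /mnt/cgroup2/cgroup.subtree_control
           cat /mnt/cgroup2/grp1/cgroup.controllers   # now shows: pids
           mkdir /mnt/cgroup2/grp1/grp2
           echo '+pids' > /mnt/cgroup2/grp1/cgroup.subtree_control
           ls /mnt/cgroup2/grp1/grp2/pids.*           # interface files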

   Cgroups v2 "no internal processes" rule
       Cgroups v2 enforces a so-called "no internal processes" rule.
       Roughly speaking, this rule means that, with the exception of the
       root cgroup, processes may reside only in leaf nodes (cgroups that
       do not themselves contain child cgroups).  This avoids the need to
       decide how to partition resources between processes which are
       members of cgroup A and processes in child cgroups of A.

       For instance, if cgroup /cg1/cg2 exists, then a process may reside
       in /cg1/cg2, but not in /cg1.  This is to avoid an ambiguity in
       cgroups v1 with respect to the delegation of resources between
       processes in /cg1 and its child cgroups.  The recommended approach
       in cgroups v2 is to create a subdirectory called leaf for any
       nonleaf cgroup which should contain processes, but no child cgroups.
       Thus, processes which previously would have gone into /cg1 would now
       go into /cg1/leaf.  This has the advantage of making explicit the
       relationship between processes in /cg1/leaf and /cg1's other
       children.

       The "no internal processes" rule is in fact more subtle than stated
       above.  More precisely, the rule is that a (nonroot) cgroup can't
       both (1) have member processes, and (2) distribute resources into
       child cgroups—that is, have a nonempty cgroup.subtree_control file.
       Thus, it is possible for a cgroup to have both member processes and
       child cgroups, but before controllers can be enabled for that
       cgroup, the member processes must be moved out of the cgroup (e.g.,
       perhaps into the child cgroups).
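
       For example, continuing the example above (where the pids
       controller was enabled in the root cgroup's cgroup.subtree_control
       file), the third of the following steps fails because cg1 still has
       a member process; moving the process down into the leaf cgroup
       first allows the controller to be enabled (a sketch, using
       hypothetical cgroup names):

           mkdir -p /mnt/cgroup2/cg1/leaf
           echo $$ > /mnt/cgroup2/cg1/cgroup.procs
           echo '+pids' > /mnt/cgroup2/cg1/cgroup.subtree_control
                                           # fails: cg1 has members
           echo $$ > /mnt/cgroup2/cg1/leaf/cgroup.procs
           echo '+pids' > /mnt/cgroup2/cg1/cgroup.subtree_control
                                           # succeeds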

       With the Linux 4.14 addition of "thread mode" (described below), the
       "no internal processes" rule has been relaxed in some cases.

   Cgroups v2 cgroup.events file
       With cgroups v2, a new mechanism is provided to obtain notification
       about when a cgroup becomes empty.  The cgroups v1 release_agent and
       notify_on_release files are removed, and replaced by a new, more
       general-purpose file, cgroup.events.  This read-only file contains
       key-value pairs (delimited by newline characters, with the key and
       value separated by spaces) that identify events or state for a
       cgroup.  Currently, only one key appears in this file, populated,
       which has either the value 0, meaning that the cgroup (and its
       descendants) contain no (nonzombie) processes, or 1, meaning that
       the cgroup contains member processes.

       The cgroup.events file can be monitored, in order to receive
       notification when a cgroup transitions between the populated and
       unpopulated states (or vice versa).  When monitoring this file using
       inotify(7), transitions generate IN_MODIFY events, and when
       monitoring the file using poll(2), transitions cause the bits
       POLLPRI and POLLERR to be returned in the revents field.
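
       For example, assuming that the inotifywait(1) tool from the
       inotify-tools package is installed, the populated state of a
       (hypothetical) cgroup grp1 could be watched from the shell as
       follows:

           while inotifywait -e modify /mnt/cgroup2/grp1/cgroup.events; do
               grep populated /mnt/cgroup2/grp1/cgroup.events
           done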

       The cgroups v2 release-notification mechanism provided by the
       populated field of the cgroup.events file offers at least two
       advantages over the cgroups v1 release_agent mechanism.  First, it
       allows for cheaper notification, since a single process can monitor
       multiple cgroup.events files.  By contrast, the cgroups v1 mechanism
       requires the creation of a process for each notification.  Second,
       notification can be delegated to a process that lives inside a
       container associated with the newly empty cgroup.

   Cgroups v2 cgroup.stat file
       Each cgroup in the v2 hierarchy contains a read-only cgroup.stat
       file (first introduced in Linux 4.14) that consists of lines
       containing key-value pairs.  The following keys currently appear in
       this file:

       nr_descendants
              This is the total number of visible (i.e., living) descendant
              cgroups underneath this cgroup.

       nr_dying_descendants
              This is the total number of dying descendant cgroups
              underneath this cgroup.  A cgroup enters the dying state
              after being deleted.  It remains in that state for an
              undefined period (which will depend on system load) while
              resources are freed before the cgroup is destroyed.  Note
              that the presence of some cgroups in the dying state is
              normal, and is not indicative of any problem.

              A process can't be made a member of a dying cgroup, and a
              dying cgroup can't be brought back to life.

   Limiting the number of descendant cgroups
       Each cgroup in the v2 hierarchy contains the following files, which
       can be used to view and set limits on the number of descendant
       cgroups under that cgroup:

       cgroup.max.depth (since Linux 4.14)
              This file defines a limit on the depth of nesting of
              descendant cgroups.  A value of 0 in this file means that no
              descendant cgroups can be created.  An attempt to create a
              descendant whose nesting level exceeds the limit fails
              (mkdir(2) fails with the error EAGAIN).

              Writing the string "max" to this file means that no limit is
              imposed.  The default value in this file is "max".

       cgroup.max.descendants (since Linux 4.14)
              This file defines a limit on the number of live descendant
              cgroups that this cgroup may have.  An attempt to create more
              descendants than allowed by the limit fails (mkdir(2) fails
              with the error EAGAIN).

              Writing the string "max" to this file means that no limit is
              imposed.  The default value in this file is "max".
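
       For example, the following (using the hypothetical cgroup grp1 from
       the earlier examples) limits the subtree under grp1 to at most 100
       descendant cgroups, nested at most 3 levels deep, and then removes
       the depth limit again:

           echo 100 > /mnt/cgroup2/grp1/cgroup.max.descendants
           echo 3   > /mnt/cgroup2/grp1/cgroup.max.depth
           echo max > /mnt/cgroup2/grp1/cgroup.max.depth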

CGROUPS DELEGATION: DELEGATING A HIERARCHY TO A LESS PRIVILEGED USER
       In the context of cgroups, delegation means passing management of
       some subtree of the cgroup hierarchy to a nonprivileged user.
       Cgroups v1 provides support for delegation based on file permissions
       in the cgroup hierarchy but with less strict containment rules than
       v2 (as noted below).  Cgroups v2 supports delegation with
       containment by explicit design.  The focus of the discussion in this
       section is on delegation in cgroups v2, with some differences for
       cgroups v1 noted along the way.

       Some terminology is required in order to describe delegation.  A
       delegater is a privileged user (i.e., root) who owns a parent
       cgroup.  A delegatee is a nonprivileged user who will be granted the
       permissions needed to manage some subhierarchy under that parent
       cgroup, known as the delegated subtree.

       To perform delegation, the delegater makes certain directories and
       files writable by the delegatee, typically by changing the ownership
       of the objects to be the user ID of the delegatee.  Assuming that we
       want to delegate the hierarchy rooted at (say) /dlgt_grp and that
       there are not yet any child cgroups under that cgroup, the ownership
       of the following is changed to the user ID of the delegatee:

       /dlgt_grp
              Changing the ownership of the root of the subtree means that
              any new cgroups created under the subtree (and the files they
              contain) will also be owned by the delegatee.

       /dlgt_grp/cgroup.procs
              Changing the ownership of this file means that the delegatee
              can move processes into the root of the delegated subtree.

       /dlgt_grp/cgroup.subtree_control (cgroups v2 only)
              Changing the ownership of this file means that the delegatee
              can enable controllers (that are present in
              /dlgt_grp/cgroup.controllers) in order to further
              redistribute resources at lower levels in the subtree.  (As
              an alternative to changing the ownership of this file, the
              delegater might instead add selected controllers to this
              file.)

       /dlgt_grp/cgroup.threads (cgroups v2 only)
              Changing the ownership of this file is necessary if a
              threaded subtree is being delegated (see the description of
              "thread mode", below).  This permits the delegatee to write
              thread IDs to the file.  (The ownership of this file can also
              be changed when delegating a domain subtree, but currently
              this serves no purpose, since, as described below, it is not
              possible to move a thread between domain cgroups by writing
              its thread ID to the cgroup.threads file.)

              In cgroups v1, the corresponding file that should instead be
              delegated is the tasks file.

       The delegater should not change the ownership of any of the
       controller interface files (e.g., pids.max, memory.high) in
       dlgt_grp.  Those files are used from the next level above the
       delegated subtree in order to distribute resources into the subtree,
       and the delegatee should not have permission to change the resources
       that are distributed into the delegated subtree.

       See also the discussion of the /sys/kernel/cgroup/delegate file in
       NOTES for information about further delegatable files in cgroups v2.

       After the aforementioned steps have been performed, the delegatee
       can create child cgroups within the delegated subtree (the cgroup
       subdirectories and the files they contain will be owned by the
       delegatee) and move processes between cgroups in the subtree.  If
       some controllers are present in dlgt_grp/cgroup.subtree_control, or
       the ownership of that file was passed to the delegatee, the
       delegatee can also control the further redistribution of the
       corresponding resources into the delegated subtree.
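
       For example, the following commands (run as root, and assuming that
       the v2 hierarchy is mounted at /sys/fs/cgroup/unified and that the
       cgroup dlgt_grp already exists) delegate dlgt_grp to the user
       cecilia:

           cd /sys/fs/cgroup/unified/dlgt_grp
           chown cecilia . cgroup.procs cgroup.subtree_control \
                   cgroup.threads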

   Cgroups v2 delegation: nsdelegate and cgroup namespaces
       Starting with Linux 4.13, there is a second way to perform cgroup
       delegation in the cgroups v2 hierarchy.  This is done by mounting or
       remounting the cgroup v2 filesystem with the nsdelegate mount
       option.  For example, if the cgroup v2 filesystem has already been
       mounted, we can remount it with the nsdelegate option as follows:

           mount -t cgroup2 -o remount,nsdelegate \
                            none /sys/fs/cgroup/unified

       The effect of this mount option is to cause cgroup namespaces to
       automatically become delegation boundaries.  More specifically, the
       following restrictions apply for processes inside the cgroup
       namespace:

       *  Writes to controller interface files in the root directory of
          the namespace will fail with the error EPERM.  Processes inside
          the cgroup namespace can still write to delegatable files in the
          root directory of the cgroup namespace such as cgroup.procs and
          cgroup.subtree_control, and can create a subhierarchy underneath
          the root directory.

       *  Attempts to migrate processes across the namespace boundary are
          denied (with the error ENOENT).  Processes inside the cgroup
          namespace can still (subject to the containment rules described
          below) move processes between cgroups within the subhierarchy
          under the namespace root.

       The ability to define cgroup namespaces as delegation boundaries
       makes cgroup namespaces more useful.  To understand why, suppose
       that we already have one cgroup hierarchy that has been delegated to
       a nonprivileged user, cecilia, using the older delegation technique
       described above.  Suppose further that cecilia wanted to further
       delegate a subhierarchy under the existing delegated hierarchy.
       (For example, the delegated hierarchy might be associated with an
       unprivileged container run by cecilia.)  Even if a cgroup namespace
       was employed, because both hierarchies are owned by the
       unprivileged user cecilia, the following illegitimate actions could
       be performed:

       *  A process in the inferior hierarchy could change the resource
          controller settings in the root directory of that hierarchy.
          (These resource controller settings are intended to allow
          control to be exercised from the parent cgroup; a process inside
          the child cgroup should not be allowed to modify them.)

       *  A process inside the inferior hierarchy could move processes
          into and out of the inferior hierarchy if the cgroups in the
          superior hierarchy were somehow visible.

       Employing the nsdelegate mount option prevents both of these
       possibilities.

       The nsdelegate mount option only has an effect when performed in
       the initial mount namespace; in other mount namespaces, the option
       is silently ignored.

       Note: On some systems, systemd(1) automatically mounts the cgroup
       v2 filesystem.  In order to experiment with the nsdelegate
       operation, it may be useful to boot the kernel with the following
       command-line options:

           cgroup_no_v1=all systemd.legacy_systemd_cgroup_controller

       These options cause the kernel to boot with the cgroups v1
       controllers disabled (meaning that the controllers are available in
       the v2 hierarchy), and tells systemd(1) not to mount and use the
       cgroup v2 hierarchy, so that the v2 hierarchy can be manually
       mounted with the desired options after boot-up.

   Cgroup delegation containment rules
       Some delegation containment rules ensure that the delegatee can
       move processes between cgroups within the delegated subtree, but
       can't move processes from outside the delegated subtree into the
       subtree or vice versa.  A nonprivileged process (i.e., the
       delegatee) can write the PID of a "target" process into a
       cgroup.procs file only if all of the following are true:

       *  The writer has write permission on the cgroup.procs file in the
          destination cgroup.

       *  The writer has write permission on the cgroup.procs file in the
          nearest common ancestor of the source and destination cgroups.
          Note that in some cases, the nearest common ancestor may be the
          source or destination cgroup itself.  This requirement is not
          enforced for cgroups v1 hierarchies, with the consequence that
          containment in v1 is less strict than in v2.  (For example, in
          cgroups v1 the user that owns two distinct delegated
          subhierarchies can move a process between the hierarchies.)

       *  If the cgroup v2 filesystem was mounted with the nsdelegate
          option, the writer must be able to see the source and
          destination cgroups from its cgroup namespace.

       *  In cgroups v1: the effective UID of the writer (i.e., the
          delegatee) matches the real user ID or the saved set-user-ID of
          the target process.  Before Linux 4.11, this requirement also
          applied in cgroups v2.  (This was a historical requirement
          inherited from cgroups v1 that was later deemed unnecessary,
          since the other rules suffice for containment in cgroups v2.)

       Note: one consequence of these delegation containment rules is that
       the unprivileged delegatee can't place the first process into the
       delegated subtree; instead, the delegater must place the first
       process (a process owned by the delegatee) into the delegated
       subtree.
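
       For example, continuing the delegation example above, the delegater
       might move a process owned by cecilia into the delegated subtree as
       follows (the PID is hypothetical):

           # Run as root; PID 9876 is a process owned by cecilia
           echo 9876 > /sys/fs/cgroup/unified/dlgt_grp/cgroup.procs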

CGROUPS VERSION 2 THREAD MODE
       Among the restrictions imposed by cgroups v2 that were not present
       in cgroups v1 are the following:

       *  No thread-granularity control: all of the threads of a process
          must be in the same cgroup.

       *  No internal processes: a cgroup can't both have member processes
          and exercise controllers on child cgroups.

       Both of these restrictions were added because the lack of these
       restrictions had caused problems in cgroups v1.  In particular, the
       cgroups v1 ability to allow thread-level granularity for cgroup
       membership made no sense for some controllers.  (A notable example
       was the memory controller: since threads share an address space, it
       made no sense to split threads across different memory cgroups.)

       Notwithstanding the initial design decision in cgroups v2, there
       were use cases for certain controllers, notably the cpu controller,
       for which thread-level granularity of control was meaningful and
       useful.  To accommodate such use cases, Linux 4.14 added thread
       mode for cgroups v2.

       Thread mode allows the following:

       *  The creation of threaded subtrees in which the threads of a
          process may be spread across cgroups inside the tree.  (A
          threaded subtree may contain multiple multithreaded processes.)

       *  The concept of threaded controllers, which can distribute
          resources across the cgroups in a threaded subtree.

       *  A relaxation of the "no internal processes rule", so that,
          within a threaded subtree, a cgroup can both contain member
          threads and exercise resource control over child cgroups.

       With the addition of thread mode, each nonroot cgroup now contains
       a new file, cgroup.type, that exposes, and in some circumstances
       can be used to change, the "type" of a cgroup.  This file contains
       one of the following type values:

       domain This is a normal v2 cgroup that provides process-granularity
              control.  If a process is a member of this cgroup, then all
              threads of the process are (by definition) in the same
              cgroup.  This is the default cgroup type, and provides the
              same behavior that was provided for cgroups in the initial
              cgroups v2 implementation.

       threaded
              This cgroup is a member of a threaded subtree.  Threads can
              be added to this cgroup, and controllers can be enabled for
              the cgroup.

       domain threaded
              This is a domain cgroup that serves as the root of a
              threaded subtree.  This cgroup type is also known as
              "threaded root".

       domain invalid
              This is a cgroup inside a threaded subtree that is in an
              "invalid" state.  Processes can't be added to the cgroup,
              and controllers can't be enabled for the cgroup.  The only
              thing that can be done with this cgroup (other than deleting
              it) is to convert it to a threaded cgroup by writing the
              string "threaded" to the cgroup.type file.

              The rationale for the existence of this "interim" type
              during the creation of a threaded subtree (rather than the
              kernel simply immediately converting all cgroups under the
              threaded root to the type threaded) is to allow for possible
              future extensions to the thread mode model.

   Threaded versus domain controllers
       With the addition of thread mode, cgroups v2 now distinguishes two
       types of resource controllers:

       *  Threaded controllers: these controllers support
          thread-granularity for resource control and can be enabled
          inside threaded subtrees, with the result that the corresponding
          controller-interface files appear inside the cgroups in the
          threaded subtree.  As at Linux 4.19, the following controllers
          are threaded: cpu, perf_event, and pids.

       *  Domain controllers: these controllers support only process
          granularity for resource control.  From the perspective of a
          domain controller, all threads of a process are always in the
          same cgroup.  Domain controllers can't be enabled inside a
          threaded subtree.

   Creating a threaded subtree
       There are two pathways that lead to the creation of a threaded
       subtree.  The first pathway proceeds as follows (an example follows
       the steps):

       1. We write the string "threaded" to the cgroup.type file of a
          cgroup y/z that currently has the type domain.  This has the
          following effects:

          *  The type of the cgroup y/z becomes threaded.

          *  The type of the parent cgroup, y, becomes domain threaded.
             The parent cgroup is the root of a threaded subtree (also
             known as the "threaded root").

          *  All other cgroups under y that were not already of type
             threaded (because they were inside already existing threaded
             subtrees under the new threaded root) are converted to type
             domain invalid.  Any subsequently created cgroups under y
             will also have the type domain invalid.

       2. We write the string "threaded" to each of the domain invalid
          cgroups under y, in order to convert them to the type threaded.
          As a consequence of this step, all cgroups under the threaded
          root now have the type threaded and the threaded subtree is now
          fully usable.  The requirement to write "threaded" to each of
          these cgroups is somewhat cumbersome, but allows for possible
          future extensions to the thread-mode model.
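
       For example, the following commands create a threaded subtree
       rooted at a new cgroup y, with two threaded children (a sketch;
       the cgroup names are hypothetical):

           mkdir -p /mnt/cgroup2/y/z1 /mnt/cgroup2/y/z2
           echo threaded > /mnt/cgroup2/y/z1/cgroup.type
           cat /mnt/cgroup2/y/cgroup.type        # domain threaded
           cat /mnt/cgroup2/y/z2/cgroup.type     # domain invalid
           echo threaded > /mnt/cgroup2/y/z2/cgroup.type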

       The second way of creating a threaded subtree is as follows:

       1. In an existing cgroup, z, that currently has the type domain, we
          (1) enable one or more threaded controllers and (2) make a
          process a member of z.  (These two steps can be done in either
          order.)  This has the following consequences:

          *  The type of z becomes domain threaded.

          *  All of the descendant cgroups of z that were not already of
             type threaded are converted to type domain invalid.

       2. As before, we make the threaded subtree usable by writing the
          string "threaded" to each of the domain invalid cgroups under z,
          in order to convert them to the type threaded.

       One of the consequences of the above pathways to creating a
       threaded subtree is that the threaded root cgroup can be a parent
       only to threaded (and domain invalid) cgroups.  The threaded root
       cgroup can't be a parent of a domain cgroup, and a threaded cgroup
       can't have a sibling that is a domain cgroup.

   Using a threaded subtree
       Within a threaded subtree, threaded controllers can be enabled in
       each subgroup whose type has been changed to threaded; upon doing
       so, the corresponding controller interface files appear in the
       children of that cgroup.

       A process can be moved into a threaded subtree by writing its PID
       to the cgroup.procs file in one of the cgroups inside the tree.
       This has the effect of making all of the threads in the process
       members of the corresponding cgroup and makes the process a member
       of the threaded subtree.  The threads of the process can then be
       spread across the threaded subtree by writing their thread IDs (see
       gettid(2)) to the cgroup.threads files in different cgroups inside
       the subtree.  The threads of a process must all reside in the same
       threaded subtree.
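
       For example, the following moves a multithreaded process into the
       threaded subtree created above and then distributes one of its
       threads to a sibling cgroup (the PID and thread ID are
       hypothetical):

           echo 5000 > /mnt/cgroup2/y/z1/cgroup.procs    # all threads
           echo 5004 > /mnt/cgroup2/y/z2/cgroup.threads  # one thread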

       As with writing to cgroup.procs, some containment rules apply when
       writing to the cgroup.threads file:

       *  The writer must have write permission on the cgroup.threads
          file in the destination cgroup.

       *  The writer must have write permission on the cgroup.procs file
          in the common ancestor of the source and destination cgroups.
          (In some cases, the common ancestor may be the source or
          destination cgroup itself.)

       *  The source and destination cgroups must be in the same threaded
          subtree.  (Outside a threaded subtree, an attempt to move a
          thread by writing its thread ID to the cgroup.threads file in a
          different domain cgroup fails with the error EOPNOTSUPP.)

       The cgroup.threads file is present in each cgroup (including domain
       cgroups) and can be read in order to discover the set of threads
       that is present in the cgroup.  The set of thread IDs obtained when
       reading this file is not guaranteed to be ordered or free of
       duplicates.

       The cgroup.procs file in the threaded root shows the PIDs of all
       processes that are members of the threaded subtree.  The
       cgroup.procs files in the other cgroups in the subtree are not
       readable.

       Domain controllers can't be enabled in a threaded subtree; no
       controller-interface files appear inside the cgroups underneath the
       threaded root.  From the point of view of a domain controller,
       threaded subtrees are invisible: a multithreaded process inside a
       threaded subtree appears to a domain controller as a process that
       resides in the threaded root cgroup.

       Within a threaded subtree, the "no internal processes" rule does
       not apply: a cgroup can both contain member processes (or threads)
       and exercise controllers on child cgroups.

   Rules for writing to cgroup.type and creating threaded subtrees
       A number of rules apply when writing to the cgroup.type file:

       *  Only the string "threaded" may be written.  In other words, the
          only explicit transition that is possible is to convert a domain
          cgroup to type threaded.

       *  The effect of writing "threaded" depends on the current value in
          cgroup.type, as follows:

          ·  domain or domain threaded: start the creation of a threaded
             subtree (whose root is the parent of this cgroup) via the
             first of the pathways described above;

          ·  domain invalid: convert this cgroup (which is inside a
             threaded subtree) to a usable (i.e., threaded) state;

          ·  threaded: no effect (a "no-op").

       *  We can't write to a cgroup.type file if the parent's type is
          domain invalid.  In other words, the cgroups of a threaded
          subtree must be converted to the threaded state in a top-down
          manner.

       There are also some constraints that must be satisfied in order to
       create a threaded subtree rooted at the cgroup x:

       *  There can be no member processes in the descendant cgroups of x.
          (The cgroup x can itself have member processes.)

       *  No domain controllers may be enabled in x's
          cgroup.subtree_control file.

       If any of the above constraints is violated, then an attempt to
       write "threaded" to a cgroup.type file fails with the error
       ENOTSUP.

   The "domain threaded" cgroup type
       According to the pathways described above, the type of a cgroup can
       change to domain threaded in either of the following cases:

       *  The string "threaded" is written to a child cgroup.

       *  A threaded controller is enabled inside the cgroup and a process
          is made a member of the cgroup.

       A domain threaded cgroup, x, can revert to the type domain if the
       above conditions no longer hold true—that is, if all threaded child
       cgroups of x are removed and either x no longer has threaded
       controllers enabled or no longer has member processes.

       When a domain threaded cgroup x reverts to the type domain:

       *  All domain invalid descendants of x that are not in lower-level
          threaded subtrees revert to the type domain.

       *  The root cgroups in any lower-level threaded subtrees revert to
          the type domain threaded.

   Exceptions for the root cgroup
       The root cgroup of the v2 hierarchy is treated exceptionally: it can be
       the parent  of  both  domain  and  threaded  cgroups.   If  the  string
       "threaded" is written to the cgroup.type file of one of the children of
       the root cgroup, then

       *  The type of that cgroup becomes threaded.

       *  The type of any descendants of that cgroup  that  are  not  part  of
          lower-level threaded subtrees changes to domain invalid.

       Note  that  in  this case, there is no cgroup whose type becomes domain
       threaded.  (Notionally, the  root  cgroup  can  be  considered  as  the
       threaded root for the cgroup whose type was changed to threaded.)

       The aim of this exceptional treatment for the root cgroup is to allow a
       threaded cgroup that employs the cpu controller to be placed as high as
       possible  in  the  hierarchy,  so  as  to  minimize the (small) cost of
       traversing the cgroup hierarchy.

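       For example, writing "threaded" to the cgroup.type file of a  child  of
       the  root  cgroup  (the name rtgrp and the mount point are hypothetical
       here) simply converts that cgroup to the type threaded:

           # mkdir /mnt/cgrp2/rtgrp
           # echo threaded > /mnt/cgrp2/rtgrp/cgroup.type
           # cat /mnt/cgrp2/rtgrp/cgroup.type
           threaded
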
   The cgroups v2 "cpu" controller and realtime threads
       As at Linux 4.19, the cgroups v2 cpu controller does not  support  con‐
       trol  of realtime threads (specifically, threads scheduled under any of
       the  policies  SCHED_FIFO,  SCHED_RR,  and   SCHED_DEADLINE;   see
       sched(7)).   Therefore,  the  cpu controller can be enabled in the root
       cgroup only if all realtime threads are in the root cgroup.  (If  there
       are  realtime threads in nonroot cgroups, then a write(2) of the string
       "+cpu" to the cgroup.subtree_control file fails with the error EINVAL.)

       On some systems, systemd(1) places certain realtime threads in  nonroot
       cgroups in the v2 hierarchy.  On such systems, these threads must first
       be moved to the root cgroup before the cpu controller can be enabled.

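       For example, realtime threads might be located with ps(1), and the pro‐
       cesses containing them then moved by writing their PIDs  to  the  root
       cgroup's cgroup.procs file.  In the following sketch, the v2 mount point
       /mnt/cgrp2 is hypothetical, and "FF", "RR", and "DLN" are the schedul‐
       ing classes that ps reports for the three realtime policies:

           # ps -eL -o tid,cls,comm |
                 awk '$2 == "FF" || $2 == "RR" || $2 == "DLN"'
           # echo PID > /mnt/cgrp2/cgroup.procs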

ERRORS
       The following errors can occur for mount(2):

       EBUSY  An attempt to mount a cgroup version 1 filesystem specified nei‐
              ther  the  name=  option (to mount a named hierarchy) nor a con‐
              troller name (or all).

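       For example, the first of the following mount(8) commands  (the  mount
       point  is  hypothetical) fails with EBUSY, while the second, which sup‐
       plies the name= option to mount a named hierarchy, succeeds:

           # mount -t cgroup cgroup /mnt/cgtest        (fails: EBUSY)
           # mount -t cgroup -o none,name=mygrp cgroup /mnt/cgtest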

NOTES
       A child process created via fork(2) inherits its parent's  cgroup  mem‐
       berships.    A   process's  cgroup  memberships  are  preserved  across
       execve(2).

   /proc files
       /proc/cgroups (since Linux 2.6.24)
              This file contains information about the  controllers  that  are
              compiled  into  the  kernel.  An example of the contents of this
              file (reformatted for readability) is the following:

                  #subsys_name    hierarchy      num_cgroups    enabled
                  cpuset          4              1              1
                  cpu             8              1              1
                  cpuacct         8              1              1
                  blkio           6              1              1
                  memory          3              1              1
                  devices         10             84             1
                  freezer         7              1              1
                  net_cls         9              1              1
                  perf_event      5              1              1
                  net_prio        9              1              1
                  hugetlb         0              1              0
                  pids            2              1              1

              The fields in this file are, from left to right:

              1. The name of the controller.

              2. The unique ID of the cgroup  hierarchy  on  which  this  con‐
                 troller  is  mounted.  If multiple cgroups v1 controllers are
                 bound to the same hierarchy, then each  will  show  the  same
                 hierarchy  ID in this field.  The value in this field will be
                 0 if:

                   a) the controller is not mounted on a cgroups v1 hierarchy;

                   b) the controller is bound to the cgroups v2 single unified
                      hierarchy; or

                   c) the controller is disabled (see below).

              3. The  number  of  control  groups in this hierarchy using this
                 controller.

              4. This field  contains  the  value  1  if  this  controller  is
                 enabled, or 0 if it has been disabled (via the cgroup_disable
                 kernel command-line boot parameter).

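              For example, booting the kernel with the  command-line  parame‐
              ter  shown below would disable the memory controller, and the
              row for memory in this file would then  show  0  in  its  final
              field:

                  cgroup_disable=memory
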
       /proc/[pid]/cgroup (since Linux 2.6.24)
              This file describes control groups to which the process with the
              corresponding  PID  belongs.   The displayed information differs
              for cgroups version 1 and version 2 hierarchies.

              For each cgroup hierarchy of which  the  process  is  a  member,
              there is one entry containing three colon-separated fields:

                  hierarchy-ID:controller-list:cgroup-path

              For example:

                  5:cpuacct,cpu,cpuset:/daemons

              The colon-separated fields are, from left to right:

              1. For  cgroups  version  1  hierarchies,  this field contains a
                 unique hierarchy ID number that can be matched to a hierarchy
                 ID  in  /proc/cgroups.   For the cgroups version 2 hierarchy,
                 this field contains the value 0.

              2. For cgroups version 1  hierarchies,  this  field  contains  a
                 comma-separated  list of the controllers bound to the hierar‐
                 chy.  For the cgroups version  2  hierarchy,  this  field  is
                 empty.

              3. This  field contains the pathname of the control group in the
                 hierarchy to which the process  belongs.   This  pathname  is
                 relative to the mount point of the hierarchy.

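              Thus, for a process that is a member of no hierarchy other  than
              the  cgroups  version 2 hierarchy, this file contains a single
              entry such as the following (the pathname shown is just an exam‐
              ple):

                  $ cat /proc/self/cgroup
                  0::/user.slice/user-1000.slice/session-1.scope
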
   /sys/kernel/cgroup files
       /sys/kernel/cgroup/delegate (since Linux 4.15)
              This  file exports a list of the cgroups v2 files (one per line)
              that are delegatable (i.e., whose ownership should be changed to
              the  user ID of the delegatee).  In the future, the set of dele‐
              gatable files may change or grow, and this file provides  a  way
              for  the kernel to inform user-space applications of which files
              must be delegated.  As at Linux 4.15,  one  sees  the  following
              when inspecting this file:

                  $ cat /sys/kernel/cgroup/delegate
                  cgroup.procs
                  cgroup.subtree_control
                  cgroup.threads

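              Thus, a program delegating a cgroup to the user with the ID 1000
              might change the ownership of the delegatable files with a  loop
              such as the following (a sketch; the cgroup dlgt_grp and the  v2
              mount point /mnt/cgrp2 are hypothetical):

                  # cd /mnt/cgrp2/dlgt_grp
                  # for f in $(cat /sys/kernel/cgroup/delegate); do
                  >     chown 1000 $f
                  > done
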
       /sys/kernel/cgroup/features (since Linux 4.15)
              Over  time,  the set of cgroups v2 features that are provided by
              the kernel may change or grow,  or  some  features  may  not  be
              enabled  by  default.   This  file provides a way for user-space
              applications to discover what features the running  kernel  sup‐
              ports and has enabled.  Features are listed one per line:

                  $ cat /sys/kernel/cgroup/features
                  nsdelegate

              The entries that can appear in this file are:

              nsdelegate (since Linux 4.15)
                     The kernel supports the nsdelegate mount option.

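                     For example, the option might be supplied  when  the  v2
                     hierarchy is mounted (the mount point  shown  here  is  a
                     hypothetical example):

                         # mount -t cgroup2 -o nsdelegate \
                               cgroup2 /mnt/cgrp2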

SEE ALSO
       prlimit(1),  systemd(1),  systemd-cgls(1),  systemd-cgtop(1), clone(2),
       ioprio_set(2), perf_event_open(2), setrlimit(2),  cgroup_namespaces(7),
       cpuset(7), namespaces(7), sched(7), user_namespaces(7)

COLOPHON
       This  page  is  part of release 5.02 of the Linux man-pages project.  A
       description of the project, information about reporting bugs,  and  the
       latest     version     of     this    page,    can    be    found    at
       https://www.kernel.org/doc/man-pages/.

Linux                             2019-03-06                        CGROUPS(7)