lxc.container.conf(5)                                    lxc.container.conf(5)

NAME
       lxc.container.conf - LXC container configuration file

DESCRIPTION
9 LXC is the well-known and heavily tested low-level Linux container run‐
time. It has been in active development since 2008 and has proven itself in
11 critical production environments world-wide. Some of its core contribu‐
12 tors are the same people that helped to implement various well-known
13 containerization features inside the Linux kernel.
14
15 LXC's main focus is system containers. That is, containers which offer
an environment as close as possible to the one you'd get from a VM but
17 without the overhead that comes with running a separate kernel and sim‐
18 ulating all the hardware.
19
20 This is achieved through a combination of kernel security features such
21 as namespaces, mandatory access control and control groups.
22
23 LXC has support for unprivileged containers. Unprivileged containers
24 are containers that are run without any privilege. This requires sup‐
25 port for user namespaces in the kernel that the container is run on.
26 LXC was the first runtime to support unprivileged containers after user
27 namespaces were merged into the mainline kernel.
28
29 In essence, user namespaces isolate given sets of UIDs and GIDs. This
30 is achieved by establishing a mapping between a range of UIDs and GIDs
31 on the host to a different (unprivileged) range of UIDs and GIDs in the
32 container. The kernel will translate this mapping in such a way that
33 inside the container all UIDs and GIDs appear as you would expect from
34 the host whereas on the host these UIDs and GIDs are in fact unprivi‐
35 leged. For example, a process running as UID and GID 0 inside the con‐
36 tainer might appear as UID and GID 100000 on the host. The implementa‐
37 tion and working details can be gathered from the corresponding user
38 namespace man page. UID and GID mappings can be defined with the
39 lxc.idmap key.
40
41 Linux containers are defined with a simple configuration file. Each
42 option in the configuration file has the form key = value fitting in
43 one line. The "#" character means the line is a comment. List options,
44 like capabilities and cgroups options, can be used with no value to
45 clear any previously defined values of that option.
46
47 LXC namespaces configuration keys use single dots. This means complex
48 configuration keys such as lxc.net.0 expose various subkeys such as
49 lxc.net.0.type, lxc.net.0.link, lxc.net.0.ipv6.address, and others for
50 even more fine-grained configuration.
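
       For instance, a small configuration fragment using these conventions
       might look like the following (all values are illustrative):

              # this line is a comment
              lxc.arch = x86_64
              lxc.uts.name = c1
              lxc.net.0.type = veth
              lxc.net.0.link = br0
              lxc.cap.drop =     # clears any previously defined lxc.cap.drop values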
51
52 CONFIGURATION
53 In order to ease administration of multiple related containers, it is
54 possible to have a container configuration file cause another file to
55 be loaded. For instance, network configuration can be defined in one
56 common file which is included by multiple containers. Then, if the con‐
57 tainers are moved to another host, only one file may need to be
58 updated.
59
60 lxc.include
61 Specify the file to be included. The included file must be in
62 the same valid lxc configuration file format.
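
              For example (the included path is illustrative):

                     lxc.include = /usr/share/lxc/config/common.conf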
63
64 ARCHITECTURE
Allows one to set the architecture for the container. For example, set
a 32-bit architecture for a container running 32-bit binaries on a
64-bit host. This helps container scripts which rely on the
architecture to do some work, such as downloading packages.
69
70 lxc.arch
71 Specify the architecture for the container.
72
73 Some valid options are x86, i686, x86_64, amd64
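
              For example:

                     lxc.arch = x86_64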
74
75 HOSTNAME
76 The utsname section defines the hostname to be set for the container.
77 That means the container can set its own hostname without changing the
78 one from the system. That makes the hostname private for the container.
79
80 lxc.uts.name
81 specify the hostname for the container
82
83 HALT SIGNAL
84 Allows one to specify signal name or number sent to the container's
85 init process to cleanly shutdown the container. Different init systems
86 could use different signals to perform clean shutdown sequence. This
87 option allows the signal to be specified in kill(1) fashion, e.g. SIG‐
88 PWR, SIGRTMIN+14, SIGRTMAX-10 or plain number. The default signal is
89 SIGPWR.
90
91 lxc.signal.halt
92 specify the signal used to halt the container
93
94 REBOOT SIGNAL
95 Allows one to specify signal name or number to reboot the container.
96 This option allows signal to be specified in kill(1) fashion, e.g.
97 SIGTERM, SIGRTMIN+14, SIGRTMAX-10 or plain number. The default signal
98 is SIGINT.
99
100 lxc.signal.reboot
101 specify the signal used to reboot the container
102
103 STOP SIGNAL
104 Allows one to specify signal name or number to forcibly shutdown the
105 container. This option allows signal to be specified in kill(1) fash‐
106 ion, e.g. SIGKILL, SIGRTMIN+14, SIGRTMAX-10 or plain number. The
107 default signal is SIGKILL.
108
109 lxc.signal.stop
110 specify the signal used to stop the container
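
       Taken together, the three signal options might be configured as in
       the following sketch for a container whose init is systemd (which
       accepts SIGRTMIN+3 for a clean halt); the reboot and stop values
       shown are simply the defaults:

              lxc.signal.halt = SIGRTMIN+3
              lxc.signal.reboot = SIGINT
              lxc.signal.stop = SIGKILL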
111
112 INIT COMMAND
Sets the command to use as the init system for the container.
114
115 lxc.execute.cmd
116 Absolute path from container rootfs to the binary to run by
117 default. This mostly makes sense for lxc-execute.
118
119 lxc.init.cmd
120 Absolute path from container rootfs to the binary to use as
121 init. This mostly makes sense for lxc-start. Default is
122 /sbin/init.
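
              For example (the lxc.execute.cmd path is illustrative):

                     lxc.init.cmd = /sbin/init
                     lxc.execute.cmd = /bin/sh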
123
124 INIT WORKING DIRECTORY
Sets the absolute path inside the container as the working directory
for the container. LXC will switch to this directory before executing
init.
128
129 lxc.init.cwd
130 Absolute path inside the container to use as the working direc‐
131 tory.
132
133 INIT ID
134 Sets the UID/GID to use for the init system, and subsequent commands.
135 Note that using a non-root UID when booting a system container will
136 likely not work due to missing privileges. Setting the UID/GID is
137 mostly useful when running application containers. Defaults to:
138 UID(0), GID(0)
139
140 lxc.init.uid
141 UID to use for init.
142
143 lxc.init.gid
144 GID to use for init.
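
              For example, to run an application container's init as an
              unprivileged user (the IDs are illustrative):

                     lxc.init.uid = 1000
                     lxc.init.gid = 1000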
145
146 PROC
147 Configure proc filesystem for the container.
148
149 lxc.proc.[proc file name]
150 Specify the proc file name to be set. The file names available
151 are those listed under /proc/PID/. Example:
152
153 lxc.proc.oom_score_adj = 10
154
155
156 EPHEMERAL
157 Allows one to specify whether a container will be destroyed on shut‐
158 down.
159
160 lxc.ephemeral
161 The only allowed values are 0 and 1. Set this to 1 to destroy a
162 container on shutdown.
163
164 NETWORK
165 The network section defines how the network is virtualized in the con‐
166 tainer. The network virtualization acts at layer two. In order to use
167 the network virtualization, parameters must be specified to define the
168 network interfaces of the container. Several virtual interfaces can be
169 assigned and used in a container even if the system has only one physi‐
170 cal network interface.
171
172 lxc.net
173 may be used without a value to clear all previous network
174 options.
175
176 lxc.net.[i].type
177 specify what kind of network virtualization to be used for the
178 container. Multiple networks can be specified by using an addi‐
179 tional index i after all lxc.net.* keys. For example,
180 lxc.net.0.type = veth and lxc.net.1.type = veth specify two dif‐
181 ferent networks of the same type. All keys sharing the same
182 index i will be treated as belonging to the same network. For
183 example, lxc.net.0.link = br0 will belong to lxc.net.0.type.
184 Currently, the different virtualization types can be:
185
186 none: will cause the container to share the host's network
187 namespace. This means the host network devices are usable in the
188 container. It also means that if both the container and host
189 have upstart as init, 'halt' in a container (for instance) will
190 shut down the host. Note that unprivileged containers do not
191 work with this setting due to an inability to mount sysfs. An
192 unsafe workaround would be to bind mount the host's sysfs.
193
194 empty: will create only the loopback interface.
195
196 veth: a virtual ethernet pair device is created with one side
197 assigned to the container and the other side on the host.
198 lxc.net.[i].veth.mode specifies the mode the veth parent will
199 use on the host. The accepted modes are bridge and router. The
200 mode defaults to bridge if not specified. In bridge mode the
201 host side is attached to a bridge specified by the
202 lxc.net.[i].link option. If the bridge link is not specified,
203 then the veth pair device will be created but not attached to
204 any bridge. Otherwise, the bridge has to be created on the sys‐
205 tem before starting the container. lxc won't handle any config‐
206 uration outside of the container. In router mode static routes
207 are created on the host for the container's IP addresses point‐
208 ing to the host side veth interface. Additionally Proxy ARP and
209 Proxy NDP entries are added on the host side veth interface for
210 the gateway IPs defined in the container to allow the container
211 to reach the host. By default, lxc chooses a name for the net‐
212 work device belonging to the outside of the container, but if
213 you wish to handle this name yourselves, you can tell lxc to set
214 a specific name with the lxc.net.[i].veth.pair option (except
215 for unprivileged containers where this option is ignored for
216 security reasons). Static routes can be added on the host
217 pointing to the container using the lxc.net.[i].veth.ipv4.route
218 and lxc.net.[i].veth.ipv6.route options. Several lines specify
219 several routes. The route is in format x.y.z.t/m, eg.
220 192.168.1.0/24.
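
              A typical veth configuration in bridge mode might look like the
              following sketch (bridge name, addresses and MAC template are
              illustrative):

                     lxc.net.0.type = veth
                     lxc.net.0.link = br0
                     lxc.net.0.flags = up
                     lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
                     lxc.net.0.ipv4.address = 192.168.1.123/24
                     lxc.net.0.ipv4.gateway = auto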
221
222 vlan: a vlan interface is linked with the interface specified by
223 the lxc.net.[i].link and assigned to the container. The vlan
224 identifier is specified with the option lxc.net.[i].vlan.id.
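
              For example (link name and VLAN id are illustrative):

                     lxc.net.0.type = vlan
                     lxc.net.0.link = eth0
                     lxc.net.0.vlan.id = 100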
225
226 macvlan: a macvlan interface is linked with the interface speci‐
227 fied by the lxc.net.[i].link and assigned to the container.
228 lxc.net.[i].macvlan.mode specifies the mode the macvlan will use
229 to communicate between different macvlan on the same upper
230 device. The accepted modes are private, vepa, bridge and
231 passthru. In private mode, the device never communicates with
232 any other device on the same upper_dev (default). In vepa mode,
233 the new Virtual Ethernet Port Aggregator (VEPA) mode, it assumes
234 that the adjacent bridge returns all frames where both source
235 and destination are local to the macvlan port, i.e. the bridge
236 is set up as a reflective relay. Broadcast frames coming in from
237 the upper_dev get flooded to all macvlan interfaces in VEPA
238 mode, local frames are not delivered locally. In bridge mode, it
239 provides the behavior of a simple bridge between different
240 macvlan interfaces on the same port. Frames from one interface
241 to another one get delivered directly and are not sent out
242 externally. Broadcast frames get flooded to all other bridge
243 ports and to the external interface, but when they come back
244 from a reflective relay, we don't deliver them again. Since we
245 know all the MAC addresses, the macvlan bridge mode does not
246 require learning or STP like the bridge module does. In passthru
247 mode, all frames received by the physical interface are for‐
248 warded to the macvlan interface. Only one macvlan interface in
249 passthru mode is possible for one physical interface.
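
              For example (the link name is illustrative):

                     lxc.net.0.type = macvlan
                     lxc.net.0.link = eth0
                     lxc.net.0.macvlan.mode = bridge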
250
251 ipvlan: an ipvlan interface is linked with the interface speci‐
252 fied by the lxc.net.[i].link and assigned to the container.
253 lxc.net.[i].ipvlan.mode specifies the mode the ipvlan will use
254 to communicate between different ipvlan on the same upper
255 device. The accepted modes are l3, l3s and l2. It defaults to l3
256 mode. In l3 mode TX processing up to L3 happens on the stack
257 instance attached to the slave device and packets are switched
258 to the stack instance of the master device for the L2 processing
259 and routing from that instance will be used before packets are
260 queued on the outbound device. In this mode the slaves will not
261 receive nor can send multicast / broadcast traffic. In l3s mode
262 TX processing is very similar to the L3 mode except that ipta‐
263 bles (conn-tracking) works in this mode and hence it is L3-sym‐
264 metric (L3s). This will have slightly less performance but that
265 shouldn't matter since you are choosing this mode over plain-L3
266 mode to make conn-tracking work. In l2 mode TX processing hap‐
267 pens on the stack instance attached to the slave device and
268 packets are switched and queued to the master device to send
269 out. In this mode the slaves will RX/TX multicast and broadcast
270 (if applicable) as well. lxc.net.[i].ipvlan.isolation specifies
271 the isolation mode. The accepted isolation values are bridge,
272 private and vepa. It defaults to bridge. In bridge isolation
273 mode slaves can cross-talk among themselves apart from talking
274 through the master device. In private isolation mode the port
275 is set in private mode. i.e. port won't allow cross communica‐
276 tion between slaves. In vepa isolation mode the port is set in
277 VEPA mode. i.e. port will offload switching functionality to
278 the external entity as described in 802.1Qbg.
279
280 phys: an already existing interface specified by the
281 lxc.net.[i].link is assigned to the container.
282
283 lxc.net.[i].flags
284 Specify an action to do for the network.
285
286 up: activates the interface.
287
288 lxc.net.[i].link
289 Specify the interface to be used for real network traffic.
290
291 lxc.net.[i].l2proxy
       Controls whether layer 2 IP neighbour proxy entries will be
       added to the lxc.net.[i].link interface for the IP addresses of
       the container. Can be set to 0 or 1. Defaults to 0.
       When used with IPv4 addresses, the following sysctl value needs
       to be set:
              net.ipv4.conf.[link].forwarding=1
       When used with IPv6 addresses, the following sysctl values need
       to be set:
              net.ipv6.conf.[link].proxy_ndp=1
              net.ipv6.conf.[link].forwarding=1
300
301 lxc.net.[i].mtu
302 Specify the maximum transfer unit for this interface.
303
304 lxc.net.[i].name
305 The interface name is dynamically allocated, but if another name
306 is needed because the configuration files being used by the con‐
307 tainer use a generic name, eg. eth0, this option will rename the
308 interface in the container.
309
310 lxc.net.[i].hwaddr
       The interface mac address is dynamically allocated by default to
       the virtual interface, but in some cases it is necessary to set
       it explicitly, e.g. to resolve a mac address conflict or to
       always have the same link-local ipv6 address. Any "x" in the
       address will be replaced by a random value; this allows setting
       hwaddr templates.
316
317 lxc.net.[i].ipv4.address
318 Specify the ipv4 address to assign to the virtualized interface.
319 Several lines specify several ipv4 addresses. The address is in
320 format x.y.z.t/m, eg. 192.168.1.123/24.
321
322 lxc.net.[i].ipv4.gateway
323 Specify the ipv4 address to use as the gateway inside the con‐
324 tainer. The address is in format x.y.z.t, eg. 192.168.1.123.
325 Can also have the special value auto, which means to take the
326 primary address from the bridge interface (as specified by the
327 lxc.net.[i].link option) and use that as the gateway. auto is
328 only available when using the veth, macvlan and ipvlan network
329 types. Can also have the special value of dev, which means to
330 set the default gateway as a device route. This is primarily
331 for use with layer 3 network modes, such as IPVLAN.
332
333 lxc.net.[i].ipv6.address
334 Specify the ipv6 address to assign to the virtualized interface.
335 Several lines specify several ipv6 addresses. The address is in
336 format x::y/m, eg. 2003:db8:1:0:214:1234:fe0b:3596/64
337
338 lxc.net.[i].ipv6.gateway
339 Specify the ipv6 address to use as the gateway inside the con‐
340 tainer. The address is in format x::y, eg. 2003:db8:1:0::1 Can
341 also have the special value auto, which means to take the pri‐
342 mary address from the bridge interface (as specified by the
343 lxc.net.[i].link option) and use that as the gateway. auto is
344 only available when using the veth, macvlan and ipvlan network
345 types. Can also have the special value of dev, which means to
346 set the default gateway as a device route. This is primarily
347 for use with layer 3 network modes, such as IPVLAN.
348
349 lxc.net.[i].script.up
350 Add a configuration option to specify a script to be executed
351 after creating and configuring the network used from the host
352 side.
353
In addition to the information available to all hooks, the
following information is provided to the script:
356
357 · LXC_HOOK_TYPE: the hook type. This is either 'up' or 'down'.
358
359 · LXC_HOOK_SECTION: the section type 'net'.
360
361 · LXC_NET_TYPE: the network type. This is one of the valid net‐
362 work types listed here (e.g. 'vlan', 'macvlan', 'ipvlan',
363 'veth').
364
365 · LXC_NET_PARENT: the parent device on the host. This is only
set for network types 'macvlan', 'veth', 'phys'.
367
368 · LXC_NET_PEER: the name of the peer device on the host. This is
369 only set for 'veth' network types. Note that this information
370 is only available when lxc.hook.version is set to 1.
371
372 Whether this information is provided in the form of environment vari‐
373 ables or as arguments to the script depends on the value of
374 lxc.hook.version. If set to 1 then information is provided in the form
375 of environment variables. If set to 0 information is provided as argu‐
376 ments to the script.
377
378 Standard output from the script is logged at debug level. Standard
379 error is not logged, but can be captured by the hook redirecting its
380 standard error to standard output.
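
       For example (the script path is hypothetical):

              lxc.hook.version = 1
              lxc.net.0.script.up = /usr/local/bin/net-up.sh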
381
382 lxc.net.[i].script.down
383 Add a configuration option to specify a script to be executed
384 before destroying the network used from the host side.
385
In addition to the information available to all hooks, the
following information is provided to the script:
388
389 · LXC_HOOK_TYPE: the hook type. This is either 'up' or 'down'.
390
391 · LXC_HOOK_SECTION: the section type 'net'.
392
393 · LXC_NET_TYPE: the network type. This is one of the valid net‐
394 work types listed here (e.g. 'vlan', 'macvlan', 'ipvlan',
395 'veth').
396
397 · LXC_NET_PARENT: the parent device on the host. This is only
set for network types 'macvlan', 'veth', 'phys'.
399
400 · LXC_NET_PEER: the name of the peer device on the host. This is
401 only set for 'veth' network types. Note that this information
402 is only available when lxc.hook.version is set to 1.
403
404 Whether this information is provided in the form of environment vari‐
405 ables or as arguments to the script depends on the value of
406 lxc.hook.version. If set to 1 then information is provided in the form
407 of environment variables. If set to 0 information is provided as argu‐
408 ments to the script.
409
410 Standard output from the script is logged at debug level. Standard
411 error is not logged, but can be captured by the hook redirecting its
412 standard error to standard output.
413
414 NEW PSEUDO TTY INSTANCE (DEVPTS)
415 For stricter isolation the container can have its own private instance
416 of the pseudo tty.
417
418 lxc.pty.max
419 If set, the container will have a new pseudo tty instance, mak‐
420 ing this private to it. The value specifies the maximum number
421 of pseudo ttys allowed for a pts instance (this limitation is
422 not implemented yet).
423
424 CONTAINER SYSTEM CONSOLE
425 If the container is configured with a root filesystem and the inittab
426 file is setup to use the console, you may want to specify where the
427 output of this console goes.
428
429 lxc.console.buffer.size
430 Setting this option instructs liblxc to allocate an in-memory
431 ringbuffer. The container's console output will be written to
432 the ringbuffer. Note that ringbuffer must be at least as big as
433 a standard page size. When passed a value smaller than a single
434 page size liblxc will allocate a ringbuffer of a single page
435 size. A page size is usually 4KB. The keyword 'auto' will cause
436 liblxc to allocate a ringbuffer of 128KB. When manually speci‐
437 fying a size for the ringbuffer the value should be a power of 2
438 when converted to bytes. Valid size prefixes are 'KB', 'MB',
439 'GB'. (Note that all conversions are based on multiples of 1024.
440 That means 'KB' == 'KiB', 'MB' == 'MiB', 'GB' == 'GiB'. Addi‐
441 tionally, the case of the suffix is ignored, i.e. 'kB', 'KB' and
442 'Kb' are treated equally.)
443
444 lxc.console.size
445 Setting this option instructs liblxc to place a limit on the
446 size of the console log file specified in lxc.console.logfile.
447 Note that size of the log file must be at least as big as a
448 standard page size. When passed a value smaller than a single
449 page size liblxc will set the size of log file to a single page
450 size. A page size is usually 4KB. The keyword 'auto' will cause
451 liblxc to place a limit of 128KB on the log file. When manually
452 specifying a size for the log file the value should be a power
453 of 2 when converted to bytes. Valid size prefixes are 'KB',
454 'MB', 'GB'. (Note that all conversions are based on multiples of
455 1024. That means 'KB' == 'KiB', 'MB' == 'MiB', 'GB' == 'GiB'.
456 Additionally, the case of the suffix is ignored, i.e. 'kB', 'KB'
457 and 'Kb' are treated equally.) If users want to mirror the con‐
458 sole ringbuffer on disk they should set lxc.console.size equal
459 to lxc.console.buffer.size.
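
              For example, to keep a 128KB in-memory ringbuffer and mirror it
              to an on-disk log of the same size (the log path is
              illustrative):

                     lxc.console.buffer.size = 128KB
                     lxc.console.size = 128KB
                     lxc.console.logfile = /var/log/lxc/c1-console.log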
460
461 lxc.console.logfile
462 Specify a path to a file where the console output will be writ‐
463 ten. Note that in contrast to the on-disk ringbuffer logfile
464 this file will keep growing potentially filling up the users
465 disks if not rotated and deleted. This problem can also be
466 avoided by using the in-memory ringbuffer options lxc.con‐
467 sole.buffer.size and lxc.console.buffer.logfile.
468
469 lxc.console.rotate
470 Whether to rotate the console logfile specified in lxc.con‐
471 sole.logfile. Users can send an API request to rotate the log‐
472 file. Note that the old logfile will have the same name as the
473 original with the suffix ".1" appended. Users wishing to pre‐
474 vent the console log file from filling the disk should rotate
475 the logfile and delete it if unneeded. This problem can also be
476 avoided by using the in-memory ringbuffer options lxc.con‐
477 sole.buffer.size and lxc.console.buffer.logfile.
478
479 lxc.console.path
480 Specify a path to a device to which the console will be
481 attached. The keyword 'none' will simply disable the console.
482 Note, when specifying 'none' and creating a device node for the
483 console in the container at /dev/console or bind-mounting the
484 hosts's /dev/console into the container at /dev/console the con‐
485 tainer will have direct access to the hosts's /dev/console.
486 This is dangerous when the container has write access to the
487 device and should thus be used with caution.
488
489 CONSOLE THROUGH THE TTYS
490 This option is useful if the container is configured with a root
491 filesystem and the inittab file is setup to launch a getty on the ttys.
492 The option specifies the number of ttys to be available for the con‐
493 tainer. The number of gettys in the inittab file of the container
494 should not be greater than the number of ttys specified in this option,
495 otherwise the excess getty sessions will die and respawn indefinitely
496 giving annoying messages on the console or in /var/log/messages.
497
498 lxc.tty.max
Specify the number of ttys to make available to the container.
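
              For example:

                     lxc.tty.max = 4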
500
501 CONSOLE DEVICES LOCATION
502 LXC consoles are provided through Unix98 PTYs created on the host and
503 bind-mounted over the expected devices in the container. By default,
504 they are bind-mounted over /dev/console and /dev/ttyN. This can prevent
505 package upgrades in the guest. Therefore you can specify a directory
location (under /dev) under which LXC will create the files and bind-
507 mount over them. These will then be symbolically linked to /dev/console
508 and /dev/ttyN. A package upgrade can then succeed as it is able to
509 remove and replace the symbolic links.
510
511 lxc.tty.dir
512 Specify a directory under /dev under which to create the con‐
513 tainer console devices. Note that LXC will move any bind-mounts
514 or device nodes for /dev/console into this directory.
515
516 /DEV DIRECTORY
517 By default, lxc creates a few symbolic links (fd,stdin,stdout,stderr)
518 in the container's /dev directory but does not automatically create
519 device node entries. This allows the container's /dev to be set up as
520 needed in the container rootfs. If lxc.autodev is set to 1, then after
521 mounting the container's rootfs LXC will mount a fresh tmpfs under /dev
522 (limited to 500k) and fill in a minimal set of initial devices. This
523 is generally required when starting a container containing a "systemd"
524 based "init" but may be optional at other times. Additional devices in
the container's /dev directory may be created through the use of the
526 lxc.hook.autodev hook.
527
528 lxc.autodev
529 Set this to 0 to stop LXC from mounting and populating a minimal
530 /dev when starting the container.
531
532 MOUNT POINTS
533 The mount points section specifies the different places to be mounted.
534 These mount points will be private to the container and won't be visi‐
535 ble by the processes running outside of the container. This is useful
to mount /etc, /var or /home, for example.
537
538 NOTE - LXC will generally ensure that mount targets and relative bind-
539 mount sources are properly confined under the container root, to avoid
540 attacks involving over-mounting host directories and files. (Symbolic
links in absolute mount sources are ignored.) However, if the container
configuration first mounts a directory which is under the control of
the container user, such as /home/joe, into the container at some path,
and then mounts under that path, a TOCTTOU attack would be possible
545 where the container user modifies a symbolic link under his home direc‐
546 tory at just the right time.
547
548 lxc.mount.fstab
549 specify a file location in the fstab format, containing the
550 mount information. The mount target location can and in most
551 cases should be a relative path, which will become relative to
552 the mounted container root. For instance,
553
554 proc proc proc nodev,noexec,nosuid 0 0
555
556
557 Will mount a proc filesystem under the container's /proc,
558 regardless of where the root filesystem comes from. This is
559 resilient to block device backed filesystems as well as con‐
560 tainer cloning.
561
562 Note that when mounting a filesystem from an image file or block
563 device the third field (fs_vfstype) cannot be auto as with
564 mount(8) but must be explicitly specified.
565
566 lxc.mount.entry
       Specify a mount point corresponding to a line in the fstab
       format. Moreover, lxc supports mount propagation such as rslave
       or rprivate, and adds three additional mount options: optional
       (don't fail if the mount does not work), create=dir or
       create=file (create the directory or file when the mount point
       is set up), and relative (the source path is taken to be
       relative to the mounted container root). For instance,
574
575 dev/null proc/kcore none bind,relative 0 0
577
578 Will expand dev/null to ${LXC_ROOTFS_MOUNT}/dev/null,
579 and mount it to proc/kcore inside the container.
580
581 lxc.mount.auto
       specify which standard kernel file systems should be automatically
       mounted. This may dramatically simplify the configuration. The file
       systems are:

       · proc:mixed (or proc): mount /proc as read-write, but remount
         /proc/sys and /proc/sysrq-trigger read-only for security /
         container isolation purposes.

       · proc:rw: mount /proc as read-write.

       · sys:mixed (or sys): mount /sys as read-only but with
         /sys/devices/virtual/net writable.

       · sys:ro: mount /sys as read-only for security / container
         isolation purposes.

       · sys:rw: mount /sys as read-write.

       · cgroup:mixed: mount a tmpfs to /sys/fs/cgroup, create directories
         for all hierarchies to which the container is added, create
         subdirectories in those hierarchies with the name of the cgroup,
         and bind-mount the container's own cgroup into that directory.
         The container will be able to write to its own cgroup directory,
         but not the parents, since they will be remounted read-only.

       · cgroup:mixed:force: the force option will cause LXC to perform
         the cgroup mounts for the container under all circumstances.
         Otherwise it is similar to cgroup:mixed. This is mainly useful
         when cgroup namespaces are enabled, where LXC will normally leave
         mounting cgroups to the init binary of the container since it is
         perfectly safe to do so.

       · cgroup:ro: similar to cgroup:mixed, but everything will be
         mounted read-only.

       · cgroup:ro:force: the force option will cause LXC to perform the
         cgroup mounts for the container under all circumstances.
         Otherwise it is similar to cgroup:ro. This is mainly useful when
         cgroup namespaces are enabled, where LXC will normally leave
         mounting cgroups to the init binary of the container since it is
         perfectly safe to do so.

       · cgroup:rw: similar to cgroup:mixed, but everything will be
         mounted read-write. Note that the paths leading up to the
         container's own cgroup will be writable, but will not be a cgroup
         filesystem, just part of the tmpfs of /sys/fs/cgroup.

       · cgroup:rw:force: the force option will cause LXC to perform the
         cgroup mounts for the container under all circumstances.
         Otherwise it is similar to cgroup:rw. This is mainly useful when
         cgroup namespaces are enabled, where LXC will normally leave
         mounting cgroups to the init binary of the container since it is
         perfectly safe to do so.

       · cgroup (without specifier): defaults to cgroup:rw if the
         container retains the CAP_SYS_ADMIN capability, cgroup:mixed
         otherwise.

       · cgroup-full:mixed: mount a tmpfs to /sys/fs/cgroup, create
         directories for all hierarchies to which the container is added,
         bind-mount the hierarchies from the host to the container and
         make everything read-only except the container's own cgroup. Note
         that compared to cgroup, where all paths leading up to the
         container's own cgroup are just simple directories in the
         underlying tmpfs, here /sys/fs/cgroup/$hierarchy will contain the
         host's full cgroup hierarchy, albeit read-only outside the
         container's own cgroup. This may leak quite a bit of information
         into the container.

       · cgroup-full:mixed:force: the force option will cause LXC to
         perform the cgroup mounts for the container under all
         circumstances. Otherwise it is similar to cgroup-full:mixed. This
         is mainly useful when cgroup namespaces are enabled, where LXC
         will normally leave mounting cgroups to the init binary of the
         container since it is perfectly safe to do so.

       · cgroup-full:ro: similar to cgroup-full:mixed, but everything will
         be mounted read-only.

       · cgroup-full:ro:force: the force option will cause LXC to perform
         the cgroup mounts for the container under all circumstances.
         Otherwise it is similar to cgroup-full:ro. This is mainly useful
         when cgroup namespaces are enabled, where LXC will normally leave
         mounting cgroups to the init binary of the container since it is
         perfectly safe to do so.

       · cgroup-full:rw: similar to cgroup-full:mixed, but everything will
         be mounted read-write. Note that in this case, the container may
         escape its own cgroup. (Note also that if the container has
         CAP_SYS_ADMIN support and can mount the cgroup filesystem itself,
         it may do so anyway.)

       · cgroup-full:rw:force: the force option will cause LXC to perform
         the cgroup mounts for the container under all circumstances.
         Otherwise it is similar to cgroup-full:rw. This is mainly useful
         when cgroup namespaces are enabled, where LXC will normally leave
         mounting cgroups to the init binary of the container since it is
         perfectly safe to do so.

       · cgroup-full (without specifier): defaults to cgroup-full:rw if
         the container retains the CAP_SYS_ADMIN capability,
         cgroup-full:mixed otherwise.

       If cgroup namespaces are enabled, then any cgroup auto-mounting
       request will be ignored, since the container can mount the
       filesystems itself, and automounting can confuse the container
       init.

       Note that if automatic mounting of the cgroup filesystem is
       enabled, the tmpfs under /sys/fs/cgroup will always be mounted
       read-write (but for the :mixed and :ro cases, the individual
       hierarchies, /sys/fs/cgroup/$hierarchy, will be read-only). This is
       in order to work around a quirk in Ubuntu's mountall(8) command
       that will cause containers to wait for user input at boot if
       /sys/fs/cgroup is mounted read-only and the container can't remount
       it read-write due to a lack of CAP_SYS_ADMIN.
732
733 Examples:
734
735 lxc.mount.auto = proc sys cgroup
736 lxc.mount.auto = proc:rw sys:rw cgroup-full:rw
737
738
739 ROOT FILE SYSTEM
740 The root file system of the container can be different than that of the
741 host system.
742
743 lxc.rootfs.path
744 specify the root file system for the container. It can be an
745 image file, a directory or a block device. If not specified, the
746 container shares its root file system with the host.
747
748 For directory or simple block-device backed containers, a path‐
749 name can be used. If the rootfs is backed by a nbd device, then
750 nbd:file:1 specifies that file should be attached to a nbd
751 device, and partition 1 should be mounted as the rootfs.
752 nbd:file specifies that the nbd device itself should be mounted.
753 overlayfs:/lower:/upper specifies that the rootfs should be an
754 overlay with /upper being mounted read-write over a read-only
755 mount of /lower. For overlay multiple /lower directories can be
756 specified. loop:/file tells lxc to attach /file to a loop device
757 and mount the loop device.
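
              For example (the paths are illustrative):

                     lxc.rootfs.path = /var/lib/lxc/c1/rootfs
                     # or, for an overlay-backed container:
                     # lxc.rootfs.path = overlayfs:/var/lib/lxc/c1/lower:/var/lib/lxc/c1/delta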
758
759 lxc.rootfs.mount
760 where to recursively bind lxc.rootfs.path before pivoting. This
761 is to ensure success of the pivot_root(8) syscall. Any directory
762 suffices, the default should generally work.
763
764 lxc.rootfs.options
765 extra mount options to use when mounting the rootfs.
766
767 lxc.rootfs.managed
Set this to 0 to indicate that LXC is not managing the container
storage; LXC will then not modify the container storage. The
default is 1.
771
772 CONTROL GROUP
773 The control group section contains the configuration for the different
subsystems. lxc does not check the correctness of the subsystem name.
775 This has the disadvantage of not detecting configuration errors until
776 the container is started, but has the advantage of permitting any
777 future subsystem.
778
779 lxc.cgroup.[controller name]
780 Specify the control group value to be set on a legacy cgroup
781 hierarchy. The controller name is the literal name of the con‐
782 trol group. The permitted names and the syntax of their values
783 is not dictated by LXC, instead it depends on the features of
784 the Linux kernel running at the time the container is started,
785 eg. lxc.cgroup.cpuset.cpus
786
787 lxc.cgroup2.[controller name]
788 Specify the control group value to be set on the unified cgroup
789 hierarchy. The controller name is the literal name of the con‐
790 trol group. The permitted names and the syntax of their values
791 is not dictated by LXC, instead it depends on the features of
792 the Linux kernel running at the time the container is started,
793 eg. lxc.cgroup2.memory.high
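
              For example (the values are illustrative):

                     lxc.cgroup.cpuset.cpus = 0-1
                     lxc.cgroup2.memory.high = 512M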
794
795 lxc.cgroup.dir
796 specify a directory or path in which the container's cgroup will
797 be created. For example, setting lxc.cgroup.dir =
798 my-cgroup/first for a container named "c1" will create the con‐
799 tainer's cgroup as a sub-cgroup of "my-cgroup". For example, if
800 the user's current cgroup "my-user" is located in the root
801 cgroup of the cpuset controller in a cgroup v1 hierarchy this
802 would create the cgroup "/sys/fs/cgroup/cpuset/my-user/my-
803 cgroup/first/c1" for the container. Any missing cgroups will be
804 created by LXC. This presupposes that the user has write access
805 to its current cgroup.
806
807 lxc.cgroup.relative
808 Set this to 1 to instruct LXC to never escape to the root
809 cgroup. This makes it easy for users to adhere to restrictions
810 enforced by cgroup2 and systemd. Specifically, this makes it
811 possible to run LXC containers as systemd services.
812
813 CAPABILITIES
Capabilities can be dropped in the container if it is run as
root.
816
817 lxc.cap.drop
818 Specify the capability to be dropped in the container. A single
819 line defining several capabilities with a space separation is
820 allowed. The format is the lower case of the capability defini‐
821 tion without the "CAP_" prefix, eg. CAP_SYS_MODULE should be
822 specified as sys_module. See capabilities(7). If used with no
823 value, lxc will clear any drop capabilities specified up to this
824 point.
825
826 lxc.cap.keep
827 Specify the capability to be kept in the container. All other
828 capabilities will be dropped. When a special value of "none" is
829 encountered, lxc will clear any keep capabilities specified up
830 to this point. A value of "none" alone can be used to drop all
831 capabilities.
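
              For example, to drop a few capabilities commonly withheld from
              containers (the exact set shown is illustrative):

                     lxc.cap.drop = sys_module sys_time mac_admin mac_override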
832
833 NAMESPACES
834 A namespace can be cloned (lxc.namespace.clone), kept (lxc.names‐
835 pace.keep) or shared (lxc.namespace.share.[namespace identifier]).
836
837 lxc.namespace.clone
838 Specify namespaces which the container is supposed to be created
839 with. The namespaces to create are specified as a space sepa‐
840 rated list. Each namespace must correspond to one of the stan‐
841 dard namespace identifiers as seen in the /proc/PID/ns direc‐
842 tory. When lxc.namespace.clone is not explicitly set all names‐
843 paces supported by the kernel and the current configuration will
844 be used.
845
846 To create a new mount, net and ipc namespace set lxc.names‐
847 pace.clone=mount net ipc.
848
849 lxc.namespace.keep
850 Specify namespaces which the container is supposed to inherit
851 from the process that created it. The namespaces to keep are
852 specified as a space separated list. Each namespace must corre‐
853 spond to one of the standard namespace identifiers as seen in
854 the /proc/PID/ns directory. The lxc.namespace.keep is a black‐
855 list option, i.e. it is useful when enforcing that containers
856 must keep a specific set of namespaces.
857
858 To keep the network, user and ipc namespace set lxc.names‐
859 pace.keep=user net ipc.
860
861 Note that sharing pid namespaces will likely not work with most
862 init systems.
863
864 Note that if the container requests a new user namespace and the
865 container wants to inherit the network namespace it needs to
866 inherit the user namespace as well.
867
868 lxc.namespace.share.[namespace identifier]
869 Specify a namespace to inherit from another container or
870 process. The [namespace identifier] suffix needs to be replaced
871 with one of the namespaces that appear in the /proc/PID/ns
872 directory.
873
874 To inherit the namespace from another process set the lxc.names‐
875 pace.share.[namespace identifier] to the PID of the process,
876 e.g. lxc.namespace.share.net=42.
877
878 To inherit the namespace from another container set the
879 lxc.namespace.share.[namespace identifier] to the name of the
880 container, e.g. lxc.namespace.share.pid=c3.
881
882 To inherit the namespace from another container located in a
883 different path than the standard liblxc path set the lxc.names‐
884 pace.share.[namespace identifier] to the full path to the con‐
885 tainer, e.g. lxc.namespace.share.user=/opt/c3.
886
887 In order to inherit namespaces the caller needs to have suffi‐
888 cient privilege over the process or container.
889
890 Note that sharing pid namespaces between system containers will
891 likely not work with most init systems.
892
893 Note that if two processes are in different user namespaces and
894 one process wants to inherit the other's network namespace it
895 usually needs to inherit the user namespace as well.
896
897 Note that without careful additional configuration of an LSM,
898 sharing user+pid namespaces with a task may allow that task to
899 escalate privileges to that of the task calling liblxc.
900
901 RESOURCE LIMITS
902 The soft and hard resource limits for the container can be changed.
903 Unprivileged containers can only lower them. Resources which are not
904 explicitly specified will be inherited.
905
906 lxc.prlimit.[limit name]
907 Specify the resource limit to be set. A limit is specified as
908 two colon separated values which are either numeric or the word
909 'unlimited'. A single value can be used as a shortcut to set
both soft and hard limit to the same value. The permitted names are
the "RLIMIT_" resource names in lowercase without the "RLIMIT_"
912 prefix, eg. RLIMIT_NOFILE should be specified as "nofile". See
913 setrlimit(2). If used with no value, lxc will clear the
914 resource limit specified up to this point. A resource with no
915 explicitly configured limitation will be inherited from the
916 process starting up the container.
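
              For example (the limits shown are illustrative):

                     lxc.prlimit.nofile = 1024:4096
                     lxc.prlimit.memlock = unlimited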
917
918 SYSCTL
919 Configure kernel parameters for the container.
920
921 lxc.sysctl.[kernel parameters name]
922 Specify the kernel parameters to be set. The parameters avail‐
923 able are those listed under /proc/sys/. Note that not all
sysctls are namespaced. Changing non-namespaced sysctls will
cause the system-wide setting to be modified. See sysctl(8). If
926 used with no value, lxc will clear the parameters specified up
927 to this point.
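
              For example (the parameters and values are illustrative):

                     lxc.sysctl.net.ipv4.ip_forward = 1
                     lxc.sysctl.kernel.msgmax = 65536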
928
929 APPARMOR PROFILE
930 If lxc was compiled and installed with apparmor support, and the host
931 system has apparmor enabled, then the apparmor profile under which the
932 container should be run can be specified in the container configura‐
933 tion. The default is lxc-container-default-cgns if the host kernel is
934 cgroup namespace aware, or lxc-container-default otherwise.
935
936 lxc.apparmor.profile
937 Specify the apparmor profile under which the container should be
938 run. To specify that the container should be unconfined, use
939
940 lxc.apparmor.profile = unconfined
941
942 If the apparmor profile should remain unchanged (i.e. if you are
943 nesting containers and are already confined), then use
944
945 lxc.apparmor.profile = unchanged
946
947 If you instruct LXC to generate the apparmor profile, then use
948
949 lxc.apparmor.profile = generated
950
951 lxc.apparmor.allow_incomplete
952 Apparmor profiles are pathname based. Therefore many file
953 restrictions require mount restrictions to be effective against
954 a determined attacker. However, these mount restrictions are not
955 yet implemented in the upstream kernel. Without the mount
956 restrictions, the apparmor profiles still protect against acci‐
dental damage.
958
959 If this flag is 0 (default), then the container will not be
960 started if the kernel lacks the apparmor mount features, so that
961 a regression after a kernel upgrade will be detected. To start
962 the container under partial apparmor protection, set this flag
963 to 1.
964
965 lxc.apparmor.allow_nesting
If set to 1, the following changes occur. When generated
967 apparmor profiles are used, they will contain the necessary
968 changes to allow creating a nested container. In addition to the
969 usual mount points, /dev/.lxc/proc and /dev/.lxc/sys will con‐
970 tain procfs and sysfs mount points without the lxcfs overlays,
971 which, if generated apparmor profiles are being used, will not
972 be read/writable directly.
973
974 lxc.apparmor.raw
975 A list of raw AppArmor profile lines to append to the profile.
976 Only valid when using generated profiles.
977
978 SELINUX CONTEXT
979 If lxc was compiled and installed with SELinux support, and the host
980 system has SELinux enabled, then the SELinux context under which the
981 container should be run can be specified in the container configura‐
982 tion. The default is unconfined_t, which means that lxc will not
983 attempt to change contexts. See /usr/share/lxc/selinux/lxc.te for an
984 example policy and more information.
985
986 lxc.selinux.context
987 Specify the SELinux context under which the container should be
988 run or unconfined_t. For example
989
990 lxc.selinux.context = system_u:system_r:lxc_t:s0:c22
991
992 SECCOMP CONFIGURATION
993 A container can be started with a reduced set of available system calls
994 by loading a seccomp profile at startup. The seccomp configuration file
995 must begin with a version number on the first line, a policy type on
996 the second line, followed by the configuration.
997
998 Versions 1 and 2 are currently supported. In version 1, the policy is a
999 simple whitelist. The second line therefore must read "whitelist", with
1000 the rest of the file containing one (numeric) syscall number per line.
1001 Each syscall number is whitelisted, while every unlisted number is
blacklisted for use in the container.
1003
1004 In version 2, the policy may be blacklist or whitelist, supports per-
1005 rule and per-policy default actions, and supports per-architecture sys‐
1006 tem call resolution from textual names.
1007
1008 An example blacklist policy, in which all system calls are allowed
1009 except for mknod, which will simply do nothing and return 0 (success),
1010 looks like:
1011
1012 2
1013 blacklist
1014 mknod errno 0
1015 ioctl notify
1016
1017
1018 Specifying "errno" as action will cause LXC to register a seccomp fil‐
1019 ter that will cause a specific errno to be returned to the caller. The
1020 errno value can be specified after the "errno" action word.
1021
1022 Specifying "notify" as action will cause LXC to register a seccomp lis‐
1023 tener and retrieve a listener file descriptor from the kernel. When a
1024 syscall is made that is registered as "notify" the kernel will generate
1025 a poll event and send a message over the file descriptor. The caller
can read this message and inspect the syscall, including its arguments.
1027 Based on this information the caller is expected to send back a message
1028 informing the kernel which action to take. Until that message is sent
1029 the kernel will block the calling process. The format of the messages
to read and send is documented in seccomp itself.
1031
1032 lxc.seccomp.profile
1033 Specify a file containing the seccomp configuration to load
1034 before the container starts.
1035
1036 lxc.seccomp.allow_nesting
1037 If this flag is set to 1, then seccomp filters will be stacked
1038 regardless of whether a seccomp profile is already loaded. This
1039 allows nested containers to load their own seccomp profile. The
1040 default setting is 0.
1041
1042 lxc.seccomp.notify.proxy
1043 Specify a unix socket to which LXC will connect and forward sec‐
1044 comp events to. The path must be in the form
1045 unix:/path/to/socket or unix:@socket. The former specifies a
1046 path-bound unix domain socket while the latter specifies an
1047 abstract unix domain socket.
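
              For example (the socket names are hypothetical):

                     lxc.seccomp.notify.proxy = unix:/run/seccomp-agent.sock
                     # or, using an abstract unix domain socket:
                     # lxc.seccomp.notify.proxy = unix:@seccomp-agent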
1048
1049 lxc.seccomp.notify.cookie
1050 An additional string sent along with proxied seccomp notifica‐
1051 tion requests.
1052
1053 PR_SET_NO_NEW_PRIVS
1054 With PR_SET_NO_NEW_PRIVS active execve() promises not to grant privi‐
1055 leges to do anything that could not have been done without the execve()
1056 call (for example, rendering the set-user-ID and set-group-ID mode
1057 bits, and file capabilities non-functional). Once set, this bit cannot
1058 be unset. The setting of this bit is inherited by children created by
1059 fork() and clone(), and preserved across execve(). Note that
1060 PR_SET_NO_NEW_PRIVS is applied after the container has changed into its
intended AppArmor profile or SELinux context.
1062
1063 lxc.no_new_privs
1064 Specify whether the PR_SET_NO_NEW_PRIVS flag should be set for
1065 the container. Set to 1 to activate.
1066
1067 UID MAPPINGS
1068 A container can be started in a private user namespace with user and
1069 group id mappings. For instance, you can map userid 0 in the container
1070 to userid 200000 on the host. The root user in the container will be
1071 privileged in the container, but unprivileged on the host. Normally a
1072 system container will want a range of ids, so you would map, for
1073 instance, user and group ids 0 through 20,000 in the container to the
1074 ids 200,000 through 220,000.
1075
1076 lxc.idmap
1077 Four values must be provided. First a character, either 'u', or
1078 'g', to specify whether user or group ids are being mapped. Next
1079 is the first userid as seen in the user namespace of the con‐
1080 tainer. Next is the userid as seen on the host. Finally, a range
1081 indicating the number of consecutive ids to map.
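
              For example, to map container IDs 0 through 65535 onto host IDs
              starting at 200000 (the ranges are illustrative):

                     lxc.idmap = u 0 200000 65536
                     lxc.idmap = g 0 200000 65536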
1082
1083 CONTAINER HOOKS
1084 Container hooks are programs or scripts which can be executed at vari‐
1085 ous times in a container's lifetime.
1086
1087 When a container hook is executed, additional information is passed
1088 along. The lxc.hook.version argument can be used to determine if the
1089 following arguments are passed as command line arguments or through
1090 environment variables. The arguments are:
1091
1092 · Container name.
1093
1094 · Section (always 'lxc').
1095
1096 · The hook type (i.e. 'clone' or 'pre-mount').
1097
1098 · Additional arguments. In the case of the clone hook, any extra argu‐
1099 ments passed will appear as further arguments to the hook. In the
case of the stop hook, paths to file descriptors for each of the con‐
1101 tainer's namespaces along with their types are passed.
1102
1103 The following environment variables are set:
1104
1105 · LXC_CGNS_AWARE: indicator whether the container is cgroup namespace
1106 aware.
1107
1108 · LXC_CONFIG_FILE: the path to the container configuration file.
1109
1110 · LXC_HOOK_TYPE: the hook type (e.g. 'clone', 'mount', 'pre-mount').
1111 Note that the existence of this environment variable is conditional
1112 on the value of lxc.hook.version. If it is set to 1 then
1113 LXC_HOOK_TYPE will be set.
1114
1115 · LXC_HOOK_SECTION: the section type (e.g. 'lxc', 'net'). Note that the
1116 existence of this environment variable is conditional on the value of
1117 lxc.hook.version. If it is set to 1 then LXC_HOOK_SECTION will be
1118 set.
1119
1120 · LXC_HOOK_VERSION: the version of the hooks. This value is identical
1121 to the value of the container's lxc.hook.version config item. If it
1122 is set to 0 then old-style hooks are used. If it is set to 1 then
1123 new-style hooks are used.
1124
1125 · LXC_LOG_LEVEL: the container's log level.
1126
1127 · LXC_NAME: is the container's name.
1128
1129 · LXC_[NAMESPACE IDENTIFIER]_NS: path under /proc/PID/fd/ to a file
1130 descriptor referring to the container's namespace. For each preserved
1131 namespace type there will be a separate environment variable. These
1132 environment variables will only be set if lxc.hook.version is set to
1133 1.
1134
1135 · LXC_ROOTFS_MOUNT: the path to the mounted root filesystem.
1136
1137 · LXC_ROOTFS_PATH: this is the lxc.rootfs.path entry for the container.
1138 Note this is likely not where the mounted rootfs is to be found, use
1139 LXC_ROOTFS_MOUNT for that.
1140
1141 · LXC_SRC_NAME: in the case of the clone hook, this is the original
1142 container's name.
1143
1144 Standard output from the hooks is logged at debug level. Standard
1145 error is not logged, but can be captured by the hook redirecting its
1146 standard error to standard output.
1147
1148 lxc.hook.version
1149 To pass the arguments in new style via environment variables set
1150 to 1 otherwise set to 0 to pass them as arguments. This setting
1151 affects all hooks arguments that were traditionally passed as
1152 arguments to the script. Specifically, it affects the container
1153 name, section (e.g. 'lxc', 'net') and hook type (e.g. 'clone',
1154 'mount', 'pre-mount') arguments. If new-style hooks are used
1155 then the arguments will be available as environment variables.
1156 The container name will be set in LXC_NAME. (This is set inde‐
1157 pendently of the value used for this config item.) The section
1158 will be set in LXC_HOOK_SECTION and the hook type will be set in
1159 LXC_HOOK_TYPE. It also affects how the paths to file descrip‐
1160 tors referring to the container's namespaces are passed. If set
1161 to 1 then for each namespace a separate environment variable
1162 LXC_[NAMESPACE IDENTIFIER]_NS will be set. If set to 0 then the
1163 paths will be passed as arguments to the stop hook.
1164
1165 lxc.hook.pre-start
1166 A hook to be run in the host's namespace before the container
1167 ttys, consoles, or mounts are up.
1168
1169 lxc.hook.pre-mount
1170 A hook to be run in the container's fs namespace but before the
1171 rootfs has been set up. This allows for manipulation of the
1172 rootfs, i.e. to mount an encrypted filesystem. Mounts done in
1173 this hook will not be reflected on the host (apart from mounts
1174 propagation), so they will be automatically cleaned up when the
1175 container shuts down.
1176
1177 lxc.hook.mount
1178 A hook to be run in the container's namespace after mounting has
1179 been done, but before the pivot_root.
1180
1181 lxc.hook.autodev
1182 A hook to be run in the container's namespace after mounting has
1183 been done and after any mount hooks have run, but before the
1184 pivot_root, if lxc.autodev == 1. The purpose of this hook is to
1185 assist in populating the /dev directory of the container when
1186 using the autodev option for systemd based containers. The con‐
1187 tainer's /dev directory is relative to the ${LXC_ROOTFS_MOUNT}
1188 environment variable available when the hook is run.
1189
1190 lxc.hook.start-host
1191 A hook to be run in the host's namespace after the container has
1192 been setup, and immediately before starting the container init.
1193
1194 lxc.hook.start
1195 A hook to be run in the container's namespace immediately before
1196 executing the container's init. This requires the program to be
1197 available in the container.
1198
1199 lxc.hook.stop
1200 A hook to be run in the host's namespace with references to the
1201 container's namespaces after the container has been shut down.
1202 For each namespace an extra argument is passed to the hook con‐
1203 taining the namespace's type and a filename that can be used to
1204 obtain a file descriptor to the corresponding namespace, sepa‐
1205 rated by a colon. The type is the name as it would appear in the
1206 /proc/PID/ns directory. For instance for the mount namespace
1207 the argument usually looks like mnt:/proc/PID/fd/12.
1208
1209 lxc.hook.post-stop
1210 A hook to be run in the host's namespace after the container has
1211 been shut down.
1212
1213 lxc.hook.clone
1214 A hook to be run when the container is cloned to a new one. See
1215 lxc-clone(1) for more information.
1216
1217 lxc.hook.destroy
1218 A hook to be run when the container is destroyed.
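
       For example, hooks might be configured as follows (the script paths
       are hypothetical):

              lxc.hook.version = 1
              lxc.hook.pre-start = /var/lib/lxc/c1/pre-start.sh
              lxc.hook.post-stop = /var/lib/lxc/c1/post-stop.sh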
1219
1220 CONTAINER HOOKS ENVIRONMENT VARIABLES
1221 A number of environment variables are made available to the startup
1222 hooks to provide configuration information and assist in the function‐
1223 ing of the hooks. Not all variables are valid in all contexts. In par‐
1224 ticular, all paths are relative to the host system and, as such, not
1225 valid during the lxc.hook.start hook.
1226
1227 LXC_NAME
1228 The LXC name of the container. Useful for logging messages in
1229 common log environments. [-n]
1230
1231 LXC_CONFIG_FILE
1232 Host relative path to the container configuration file. This
allows the container to reference the original, top level, con‐
1234 figuration file for the container in order to locate any addi‐
1235 tional configuration information not otherwise made available.
1236 [-f]
1237
1238 LXC_CONSOLE
1239 The path to the console output of the container if not NULL.
1240 [-c] [lxc.console.path]
1241
1242 LXC_CONSOLE_LOGPATH
1243 The path to the console log output of the container if not NULL.
1244 [-L]
1245
1246 LXC_ROOTFS_MOUNT
1247 The mount location to which the container is initially bound.
1248 This will be the host relative path to the container rootfs for
1249 the container instance being started and is where changes should
1250 be made for that instance. [lxc.rootfs.mount]
1251
1252 LXC_ROOTFS_PATH
1253 The host relative path to the container root which has been
1254 mounted to the rootfs.mount location. [lxc.rootfs.path]
1255
1256 LXC_SRC_NAME
1257 Only for the clone hook. Is set to the original container name.
1258
1259 LXC_TARGET
1260 Only for the stop hook. Is set to "stop" for a container shut‐
1261 down or "reboot" for a container reboot.
1262
1263 LXC_CGNS_AWARE
1264 If unset, then this version of lxc is not aware of cgroup names‐
1265 paces. If set, it will be set to 1, and lxc is aware of cgroup
1266 namespaces. Note this does not guarantee that cgroup namespaces
1267 are enabled in the kernel. This is used by the lxcfs mount hook.
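
As an illustration, a mount hook (see lxc.hook.mount above) might use
these variables to drop a marker file into the container's root
filesystem. This is only a sketch; the marker file name and the use
of Python are assumptions.

      #!/usr/bin/env python3
      # Hypothetical lxc.hook.mount script: record the container name
      # inside its rootfs.  While the hook runs, LXC_ROOTFS_MOUNT is a
      # host-relative path to the mounted rootfs.
      import os

      name = os.environ.get("LXC_NAME", "unknown")
      rootfs = os.environ.get("LXC_ROOTFS_MOUNT")

      if rootfs:
          with open(os.path.join(rootfs, "container-name"), "w") as f:
              f.write(name + "\n")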
1268
1269 LOGGING
1270 Logging can be configured on a per-container basis. By default, depend‐
1271 ing upon how the lxc package was compiled, container startup is logged
1272 only at the ERROR level, and logged to a file named after the container
1273 (with '.log' appended) either under the container path, or under
1274 /var/log/lxc.
1275
1276 Both the default log level and the log file can be specified in the
1277 container configuration file, overriding the default behavior. Note
1278 that the configuration file entries can in turn be overridden by the
1279 command line options to lxc-start.
1280
1281 lxc.log.level
1282 The level at which to log. The log level is an integer in the
1283 range of 0..8 inclusive, where a lower number means more verbose
1284 debugging. In particular 0 = trace, 1 = debug, 2 = info, 3 =
1285 notice, 4 = warn, 5 = error, 6 = critical, 7 = alert, and 8 =
1286 fatal. If unspecified, the level defaults to 5 (error), so that
1287 only errors and above are logged.
1288
1289 Note that when a script (such as either a hook script or a net‐
1290 work interface up or down script) is called, the script's stan‐
1291 dard output is logged at level 1, debug.
1292
1293 lxc.log.file
1294 The file to which logging info should be written.
1295
1296 lxc.log.syslog
1297 Send logging info to syslog. It respects the log level defined
1298 in lxc.log.level. The argument should be the syslog facility to
1299 use, valid ones are: daemon, local0, local1, local2, local3,
1300 local4, local5, local6, local7.
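
As an illustration, the following snippet (with a placeholder log file
name) turns on debug level logging for a container:

      # log at debug level (1) instead of the default error level (5)
      lxc.log.level = 1
      lxc.log.file = /var/log/lxc/mycontainer.log
      # alternatively, send the messages to syslog's daemon facility
      # lxc.log.syslog = daemon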
1301
1302 AUTOSTART
1303 The autostart options support marking which containers should be auto-
1304 started and in what order. These options may be used by LXC tools
1305 directly or by external tooling provided by the distributions.
1306
1307 lxc.start.auto
1308 Whether the container should be auto-started. Valid values are
1309 0 (off) and 1 (on).
1310
1311 lxc.start.delay
1312 How long to wait (in seconds) after the container is started
1313 before starting the next one.
1314
1315 lxc.start.order
1316 An integer used to sort the containers when auto-starting a
1317 series of containers at once.
1318
1319 lxc.monitor.unshare
1320 If not zero, the mount namespace will be unshared from the host
1321 before initializing the container (before running any pre-start
1322 hooks). This requires the CAP_SYS_ADMIN capability at startup.
1323 Default is 0.
1324
1325 lxc.monitor.signal.pdeath
1326 Set the signal to be sent to the container's init when the lxc
1327 monitor exits. By default it is set to SIGKILL which will cause
1328 all container processes to be killed when the lxc monitor
1329 process dies. To ensure that containers stay alive even if lxc
1330 monitor dies, set this to 0.
1331
1332 lxc.group
1333 A multi-value key (can be used multiple times) to put the con‐
1334 tainer in a container group. Those groups can then be used
1335 (amongst other things) to start a series of related containers.
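
For example, the following lines (the group names are only examples)
mark a container to be auto-started, ask for a five second pause
before the next container is started, and place it in two groups:

      lxc.start.auto = 1
      lxc.start.delay = 5
      lxc.start.order = 10
      lxc.group = onboot
      lxc.group = webservers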
1336
1337 AUTOSTART AND SYSTEM BOOT
1338 Each container can be part of any number of groups or no group at all.
1339 Two groups are special. One is the NULL group, i.e. the container does
1340 not belong to any group. The other group is the "onboot" group.
1341
1342 When the system boots with the LXC service enabled, it will first
1343 attempt to boot any containers with lxc.start.auto == 1 that are mem‐
1344 bers of the "onboot" group. The startup will be in order of
1345 lxc.start.order. If an lxc.start.delay has been specified, that delay
1346 will be honored before attempting to start the next container to give
1347 the current container time to begin initialization and reduce overload‐
1348 ing the host system. After starting the members of the "onboot" group,
1349 the LXC system will proceed to boot containers with lxc.start.auto == 1
1350 which are not members of any group (the NULL group) and proceed as with
1351 the onboot group.
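
As an illustration, assume two containers, db and web (the names are
placeholders), with the following lines in their respective
configuration files. On boot, db is started in the first pass because
it is a member of the "onboot" group; web belongs to no group, so it
is only started in the second pass:

      # configuration of container "db"
      lxc.start.auto = 1
      lxc.start.delay = 10
      lxc.group = onboot

      # configuration of container "web"
      lxc.start.auto = 1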
1352
1353 CONTAINER ENVIRONMENT
1354 If you want to pass environment variables into the container (that is,
1355 environment variables which will be available to init and all of its
1356 descendants), you can use lxc.environment parameters to do so. Be care‐
1357 ful that you do not pass in anything sensitive; any process in the con‐
1358 tainer which doesn't have its environment scrubbed will have these
1359 variables available to it, and environment variables are always avail‐
1360 able via /proc/PID/environ.
1361
1362 This configuration parameter can be specified multiple times; once for
1363 each environment variable you wish to configure.
1364
1365 lxc.environment
1366 Specify an environment variable to pass into the container.
1367 Example:
1368
1369 lxc.environment = APP_ENV=production
1370 lxc.environment = SYSLOG_SERVER=192.0.2.42
1371
1372
1373 It is possible to inherit host environment variables by setting
1374 the name of the variable without a "=" sign. For example:
1375
1376 lxc.environment = PATH
1377
1378
1379 EXAMPLES
1380 In addition to the few examples given below, you will find some other
1381 examples of configuration files in /usr/share/doc/lxc/examples
1382
1383 NETWORK
1384 This configuration sets up a container to use a veth pair device with
1385 one side plugged into a bridge br0 (which the administrator has
1386 previously configured on the system). The virtual network device
1387 visible in the container is renamed to eth0.
1388
1389 lxc.uts.name = myhostname
1390 lxc.net.0.type = veth
1391 lxc.net.0.flags = up
1392 lxc.net.0.link = br0
1393 lxc.net.0.name = eth0
1394 lxc.net.0.hwaddr = 4a:49:43:49:79:bf
1395 lxc.net.0.ipv4.address = 10.2.3.5/24 10.2.3.255
1396 lxc.net.0.ipv6.address = 2003:db8:1:0:214:1234:fe0b:3597
1397
1398
1399 UID/GID MAPPING
1400 This configuration will map both user and group ids in the range 0-9999
1401 in the container to the ids 100000-109999 on the host.
1402
1403 lxc.idmap = u 0 100000 10000
1404 lxc.idmap = g 0 100000 10000
1405
1406
1407 CONTROL GROUP
1408 This configuration will set up several control groups for the
1409 application: cpuset.cpus restricts usage to the defined cpus,
1410 cpu.shares prioritizes the control group, and devices.allow makes
1411 the specified devices usable.
1412
1413 lxc.cgroup.cpuset.cpus = 0,1
1414 lxc.cgroup.cpu.shares = 1234
1415 lxc.cgroup.devices.deny = a
1416 lxc.cgroup.devices.allow = c 1:3 rw
1417 lxc.cgroup.devices.allow = b 8:0 rw
1418
1419
1420 COMPLEX CONFIGURATION
1421 This example shows a complex configuration: building a complex network
1422 stack, using control groups, setting a new hostname, mounting some
1423 locations and changing the root file system.
1424
1425 lxc.uts.name = complex
1426 lxc.net.0.type = veth
1427 lxc.net.0.flags = up
1428 lxc.net.0.link = br0
1429 lxc.net.0.hwaddr = 4a:49:43:49:79:bf
1430 lxc.net.0.ipv4.address = 10.2.3.5/24 10.2.3.255
1431 lxc.net.0.ipv6.address = 2003:db8:1:0:214:1234:fe0b:3597
1432 lxc.net.0.ipv6.address = 2003:db8:1:0:214:5432:feab:3588
1433 lxc.net.1.type = macvlan
1434 lxc.net.1.flags = up
1435 lxc.net.1.link = eth0
1436 lxc.net.1.hwaddr = 4a:49:43:49:79:bd
1437 lxc.net.1.ipv4.address = 10.2.3.4/24
1438 lxc.net.1.ipv4.address = 192.168.10.125/24
1439 lxc.net.1.ipv6.address = 2003:db8:1:0:214:1234:fe0b:3596
1440 lxc.net.2.type = phys
1441 lxc.net.2.flags = up
1442 lxc.net.2.link = dummy0
1443 lxc.net.2.hwaddr = 4a:49:43:49:79:ff
1444 lxc.net.2.ipv4.address = 10.2.3.6/24
1445 lxc.net.2.ipv6.address = 2003:db8:1:0:214:1234:fe0b:3297
1446 lxc.cgroup.cpuset.cpus = 0,1
1447 lxc.cgroup.cpu.shares = 1234
1448 lxc.cgroup.devices.deny = a
1449 lxc.cgroup.devices.allow = c 1:3 rw
1450 lxc.cgroup.devices.allow = b 8:0 rw
1451 lxc.mount.fstab = /etc/fstab.complex
1452 lxc.mount.entry = /lib /root/myrootfs/lib none ro,bind 0 0
1453 lxc.rootfs.path = dir:/mnt/rootfs.complex
1454 lxc.cap.drop = sys_module mknod setuid net_raw
1455 lxc.cap.drop = mac_override
1456
1457
1458 SEE ALSO
1459 chroot(1), pivot_root(8), fstab(5), capabilities(7)
1460
1461 SEE ALSO
1462 lxc(7), lxc-create(1), lxc-copy(1), lxc-destroy(1), lxc-start(1), lxc-
1463 stop(1), lxc-execute(1), lxc-console(1), lxc-monitor(1), lxc-wait(1),
1464 lxc-cgroup(1), lxc-ls(1), lxc-info(1), lxc-freeze(1), lxc-unfreeze(1),
1465 lxc-attach(1), lxc.conf(5)
1466
1467 AUTHOR
1468 Daniel Lezcano <daniel.lezcano@free.fr>
1469
1470
1471
1472 2020-01-29 lxc.container.conf(5)