LXC(7)                                                                  LXC(7)

NAME
       lxc - linux containers

QUICK START
       You are in a hurry and you do not want to read this man page. Ok,
       without warranty, here is a command to launch a shell inside a
       container with a predefined configuration template. It may work.

           /usr/bin/lxc-execute -n foo \
               -f /usr/share/doc/lxc/examples/lxc-macvlan.conf /bin/bash

OVERVIEW
       The container technology is actively being pushed into the mainstream
       Linux kernel. It provides resource management through the control
       groups (aka process containers) and resource isolation through the
       namespaces.

       The linux containers, lxc, aims to use these new functionalities to
       provide a userspace container object that provides full resource
       isolation and resource control for an application or a system.

       The first objective of this project is to make life easier for the
       kernel developers involved in the containers project, and especially
       to continue working on the new Checkpoint/Restart features. The lxc is
       small enough to easily manage a container with simple command lines
       and complete enough to be used for other purposes.

REQUIREMENTS
       The lxc relies on a set of functionalities provided by the kernel
       which need to be active. Depending on the missing functionalities, the
       lxc will work with a restricted set of features or will simply fail.

       The following list gives the kernel features to be enabled in the
       kernel to have a full-featured container:

           * General setup
             * Control Group support
               -> Namespace cgroup subsystem
               -> Freezer cgroup subsystem
               -> Cpuset support
               -> Simple CPU accounting cgroup subsystem
               -> Resource counters
                  -> Memory resource controllers for Control Groups
             * Group CPU scheduler
               -> Basis for grouping tasks (Control Groups)
           * Namespaces support
             -> UTS namespace
             -> IPC namespace
             -> User namespace
             -> Pid namespace
             -> Network namespace
           * Device Drivers
             * Character devices
               -> Support multiple instances of devpts
             * Network device support
               -> MAC-VLAN support
               -> Virtual ethernet pair device
           * Networking
             * Networking options
               -> 802.1d Ethernet Bridging
           * Security options
             -> File POSIX Capabilities

       The kernel version >= 2.6.27 shipped with the distros will work with
       lxc; it will offer fewer functionalities, but enough to be
       interesting. With the kernel 2.6.29, lxc is fully functional. The
       helper script lxc-checkconfig will give you information about your
       kernel configuration.

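       For example, to check the configuration of the running kernel (the
       output lists each required option and whether it is enabled; the exact
       format may differ between lxc versions):

           lxc-checkconfig
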
       Before using the lxc, your system should be configured with the file
       capabilities; otherwise you will need to run the lxc commands as root.

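       As a sketch, file capabilities can be granted with setcap. The exact
       capability set required by each command depends on your installation,
       so the set below is only an illustrative assumption:

           # illustrative only: the needed capabilities depend on your setup
           setcap cap_sys_admin,cap_net_admin=ep /usr/bin/lxc-execute
           getcap /usr/bin/lxc-execute
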
       The control group can be mounted anywhere, eg:

           mount -t cgroup cgroup /cgroup

       If you want to dedicate a specific cgroup mount point for lxc, that is
       to have different cgroups mounted at different places with different
       options but let lxc use one location, you can bind the mount point
       with the lxc name, eg:

           mount -t cgroup lxc /cgroup4lxc

       or

           mount -t cgroup -o ns,cpuset,freezer,devices lxc /cgroup4lxc

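       To make such a mount persistent across reboots, an equivalent
       /etc/fstab entry might look like the following sketch (adjust the
       mount point and the subsystem list to your needs):

           lxc   /cgroup4lxc   cgroup   ns,cpuset,freezer,devices   0 0
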
FUNCTIONAL SPECIFICATION
       A container is an object isolating some resources of the host, for
       the application or system running in it.

       The application / system will be launched inside a container specified
       by a configuration that is either initially created or passed as a
       parameter of the starting commands.

       How to run an application in a container?

       Before running an application, you should know which resources you
       want to isolate. The default configuration is to isolate the pids, the
       sysv ipc and the mount points. If you want to run a simple shell
       inside a container, a basic configuration is needed, especially if you
       want to share the rootfs. If you want to run an application like sshd,
       you should provide a new network stack and a new hostname. If you want
       to avoid conflicts with some files, eg. /var/run/httpd.pid, you should
       remount /var/run with an empty directory. If you want to avoid the
       conflicts in all the cases, you can specify a rootfs for the
       container. The rootfs can be a directory tree, previously bind mounted
       with the initial rootfs, so you can still use your distro but with
       your own /etc and /home.
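
       As an illustration, a minimal configuration covering the points above
       might look like the following sketch (the bridge name br0 and the
       mount entry are assumptions to adapt to your system; see lxc.conf(5)):

           # new hostname and a private network stack attached to a bridge
           lxc.utsname = foo
           lxc.network.type = veth
           lxc.network.link = br0
           lxc.network.flags = up
           # hide the host /var/run behind an empty tmpfs
           lxc.mount.entry = tmpfs /var/run tmpfs defaults 0 0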

       Here is an example of directory tree for sshd:

           [root@lxc sshd]$ tree -d rootfs

           rootfs
           |-- bin
           |-- dev
           |   |-- pts
           |   `-- shm
           |       `-- network
           |-- etc
           |   `-- ssh
           |-- lib
           |-- proc
           |-- root
           |-- sbin
           |-- sys
           |-- usr
           `-- var
               |-- empty
               |   `-- sshd
               |-- lib
               |   `-- empty
               |       `-- sshd
               `-- run
                   `-- sshd

       and the mount points file associated with it:

           [root@lxc sshd]$ cat fstab

           /lib  /home/root/sshd/rootfs/lib   none ro,bind 0 0
           /bin  /home/root/sshd/rootfs/bin   none ro,bind 0 0
           /usr  /home/root/sshd/rootfs/usr   none ro,bind 0 0
           /sbin /home/root/sshd/rootfs/sbin  none ro,bind 0 0

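       With the rootfs and fstab above, the sshd container could then be
       started with something like the following sketch, where lxc-sshd.conf
       is an assumed configuration file pointing lxc.rootfs and lxc.mount at
       this tree and fstab:

           lxc-execute -n sshd -f lxc-sshd.conf /usr/sbin/sshd
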
       How to run a system in a container?

       Running a system inside a container is paradoxically easier than
       running an application. Why? Because you don't have to care about
       which resources are to be isolated: everything needs to be isolated.
       The other resources are specified as being isolated but without
       configuration, because the container will set them up, eg. the ipv4
       address will be set up by the system container init scripts. Here is
       an example of the mount points file:

           [root@lxc debian]$ cat fstab

           /dev     /home/root/debian/rootfs/dev     none bind 0 0
           /dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0

       More information can be added to the container to facilitate the
       configuration. For example, you can make the resolv.conf file
       belonging to the host accessible from the container:

           /etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0

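       A system container set up this way can then be booted with lxc-start,
       which runs /sbin/init by default (a sketch; lxc-debian.conf is an
       assumed configuration file pointing lxc.rootfs at
       /home/root/debian/rootfs and lxc.mount at the fstab above):

           lxc-start -n debian -f lxc-debian.conf
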
CONTAINER LIFE CYCLE
       When the container is created, it contains the configuration
       information. When a process is launched, the container will be
       starting and running. When the last process running inside the
       container exits, the container is stopped.

       In case of failure when the container is initialized, it will pass
       through the aborting state.

          ---------
         | STOPPED |<----------------
          ---------                  |
              |                      |
            start                    |
              |                      |
              V                      |
          ----------                 |
         | STARTING |--error-        |
          ----------        |        |
              |             |        |
              V             V        |
          ---------     ----------   |
         | RUNNING |   | ABORTING |  |
          ---------     ----------   |
              |             |        |
         no process         |        |
              |             |        |
              V             |        |
          ----------        |        |
         | STOPPING |<-------        |
          ----------                 |
              |                      |
               -----------------------

CONFIGURATION
       The container is configured through a configuration file; the format
       of the configuration file is described in lxc.conf(5).

CREATING / DESTROYING CONTAINER (PERSISTENT CONTAINER)
       A persistent container object can be created via the lxc-create
       command. It takes a container name as parameter and an optional
       configuration file and template. The name is used by the different
       commands to refer to this container. The lxc-destroy command will
       destroy the container object.

           lxc-create -n foo
           lxc-destroy -n foo

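       For example, to create the container from an existing configuration
       file, or from a distribution template shipped with lxc (the file path
       and the template name below are illustrative):

           lxc-create -n foo -f /etc/lxc/foo.conf
           lxc-create -n foo -t debian
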
VOLATILE CONTAINER
       It is not mandatory to create a container object before starting it.
       The container can be directly started with a configuration file as
       parameter.

STARTING / STOPPING CONTAINER
       When the container has been created, it is ready to run an
       application / system. This is the purpose of the lxc-execute and
       lxc-start commands. If the container was not created before starting
       the application, the container will use the configuration file passed
       as parameter to the command, and if there is no such parameter either,
       then it will use a default isolation. If the application ends, the
       container will be stopped as well, but if needed the lxc-stop command
       can be used to kill the still running application.

       Running an application inside a container is not exactly the same
       thing as running a system. For this reason, there are two different
       commands to run an application in a container:

           lxc-execute -n foo [-f config] /bin/bash
           lxc-start -n foo [-f config] [/bin/bash]

       The lxc-execute command will run the specified command in the
       container via an intermediate process, lxc-init. After launching the
       specified command, lxc-init will wait for its end and for all other
       reparented processes (this allows daemons to be supported in the
       container). In other words, in the container, lxc-init has the pid 1
       and the first process of the application has the pid 2.

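       For example (illustrative; the exact process listing differs between
       systems):

           lxc-execute -n foo /bin/bash
           # then, from the shell running inside the container:
           ps    # lxc-init appears with pid 1, /bin/bash with pid 2
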
       The lxc-start command will directly run the specified command in the
       container. The pid of the first process is 1. If no command is
       specified, lxc-start will run /sbin/init.

       To summarize, lxc-execute is for running an application and lxc-start
       is better suited for running a system.

       If the application is no longer responding, is inaccessible or is not
       able to finish by itself, a wild lxc-stop command will kill all the
       processes in the container without pity.

           lxc-stop -n foo

CONNECT TO AN AVAILABLE TTY
       If the container is configured with ttys, it is possible to access it
       through them. It is up to the container to provide a set of available
       ttys to be used by the following command. When the tty is lost, it is
       possible to reconnect to it without logging in again.

           lxc-console -n foo -t 3

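       For this to work, the container configuration has to declare the ttys,
       for example with an entry such as:

           lxc.tty = 4
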
FREEZE / UNFREEZE CONTAINER
       Sometimes, it is useful to stop all the processes belonging to a
       container, eg. for job scheduling. The commands:

           lxc-freeze -n foo

       will put all the processes in an uninterruptible state and

           lxc-unfreeze -n foo

       will resume them.

       This feature is only available if the cgroup freezer is enabled in the
       kernel.

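       The current state can also be read directly from the freezer cgroup (a
       sketch, assuming the cgroup hierarchy is mounted on /cgroup):

           cat /cgroup/foo/freezer.state   # FROZEN while the container is frozen
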
GETTING INFORMATION ABOUT CONTAINER
       When there are a lot of containers, it is hard to follow what has been
       created or destroyed, what is running, or what pids are running inside
       a specific container. For this reason, the following commands may be
       useful:

           lxc-ls
           lxc-ps --name foo
           lxc-info -n foo

       lxc-ls lists the containers of the system. The command is a script
       built on top of ls, so it accepts the options of the ls command, eg:

           lxc-ls -C1

       will display the containers list in one column, or:

           lxc-ls -l

       will display the containers list and their permissions.

       lxc-ps will display the pids for a specific container. Like lxc-ls,
       lxc-ps is built on top of ps and accepts the same options, eg:

           lxc-ps --name foo --forest

       will display the process hierarchy for the processes belonging to the
       'foo' container, and:

           lxc-ps --lxc

       will display all the containers and their processes.

       lxc-info gives information about a specific container; at present,
       only the state of the container is displayed.

       Here is an example of how the combination of these commands allows
       listing all the containers and retrieving their state:

           for i in $(lxc-ls -1); do
               lxc-info -n $i
           done

       And displaying all the pids of all the containers:

           for i in $(lxc-ls -1); do
               lxc-ps --name $i --forest
           done

       lxc-netstat displays network information for a specific container.
       This command is built on top of the netstat command and will accept
       its options.

       The following command will display the socket information for the
       container 'foo':

           lxc-netstat -n foo -tano

MONITORING CONTAINER
       It is sometimes useful to track the states of a container, for example
       to monitor it or just to wait for a specific state in a script.

       The lxc-monitor command will monitor one or several containers. The
       parameter of this command accepts a regular expression, for example:

           lxc-monitor -n "foo|bar"

       will monitor the states of the containers named 'foo' and 'bar', and:

           lxc-monitor -n ".*"

       will monitor all the containers.

       For a container 'foo' starting, doing some work and exiting, the
       output will be in the form:

           'foo' changed state to [STARTING]
           'foo' changed state to [RUNNING]
           'foo' changed state to [STOPPING]
           'foo' changed state to [STOPPED]

       The lxc-wait command will wait for a specific state change and then
       exit. This is useful for scripting, to synchronize on the launch or
       the end of a container. The parameter is an ORed combination of
       different states. The following example shows how to wait for a
       container that was started in the background.

           # launch lxc-wait in background
           lxc-wait -n foo -s STOPPED &
           LXC_WAIT_PID=$!

           # this command goes in background
           lxc-execute -n foo mydaemon &

           # block until lxc-wait exits,
           # and lxc-wait exits when the container
           # is STOPPED
           wait $LXC_WAIT_PID
           echo "'foo' is finished"

SETTING THE CONTROL GROUP FOR CONTAINER
       The container is tied to the control groups: when a container is
       started, a control group is created and associated with it. The
       control group properties can be read and modified when the container
       is running by using the lxc-cgroup command.

       The lxc-cgroup command is used to get or set a control group subsystem
       value which is associated with a container. The subsystem name is
       handled by the user; the command won't do any syntax checking on the
       subsystem name, and if the subsystem name does not exist, the command
       will fail.

           lxc-cgroup -n foo cpuset.cpus

       will display the content of this subsystem.

           lxc-cgroup -n foo cpu.shares 512

       will set the subsystem to the specified value.

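       As another illustration, assuming the memory controller is enabled in
       the kernel and mounted, the memory available to the container could be
       limited with:

           lxc-cgroup -n foo memory.limit_in_bytes 256M
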
BUGS
       The lxc is still in development, so the command syntax and the API may
       change. The version 1.0.0 will be the frozen version.

SEE ALSO
       lxc(1), lxc-create(1), lxc-destroy(1), lxc-start(1), lxc-stop(1),
       lxc-execute(1), lxc-kill(1), lxc-console(1), lxc-monitor(1),
       lxc-wait(1), lxc-cgroup(1), lxc-ls(1), lxc-ps(1), lxc-info(1),
       lxc-freeze(1), lxc-unfreeze(1), lxc.conf(5)

AUTHOR
       Daniel Lezcano <daniel.lezcano@free.fr>


Version 0.7.2              Mon Jul 26 17:09:32 UTC 2010                 LXC(7)