icecream(7)                                                        icecream(7)

NAME

icecream - A distributed compile system

DESCRIPTION

Icecream is a distributed compile system for C and C++.

Icecream was created by SUSE and is based on ideas and code from distcc. Like distcc, it takes compile jobs from your build and distributes them to remote machines, allowing a parallel build across several machines. But unlike distcc, Icecream uses a central server that schedules the compile jobs to the fastest free server, which makes it dynamic. This advantage pays off mostly for shared computers: if you are the only user on X machines, you have full control over them anyway.

HOW TO USE ICECREAM

You need:

· One machine that runs the scheduler (icecc-scheduler -d)

· Many machines that run the daemon (iceccd -d)

If you want to compile using icecream, make sure $prefix/lib/icecc/bin is the first entry in your PATH, e.g. type export PATH=/usr/lib/icecc/bin:$PATH (hint: put this in ~/.bashrc or /etc/profile so you do not have to type it every time).

Then you just compile with make -j num, where num is the number of jobs you want to run in parallel. Do not exaggerate: too large a number can overload your machine or the compile cluster and actually make the build slower.

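A minimal session putting the two settings above together; the install prefix /usr/lib/icecc and the job count are examples, adjust them to your system:

```shell
# Put the icecc compiler wrappers first in PATH so gcc/g++ resolve to them
export PATH=/usr/lib/icecc/bin:$PATH
# Then build with a moderate number of parallel jobs
make -j8
```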
Warning

Never use icecream in untrusted environments. Run the daemons and the scheduler as an unprivileged user in such networks if you have to! But then you will have to rely on homogeneous networks (see below).

If you want an overview of your icecream compile cluster, or if you just want funny stats, you might want to run icemon.

USING ICECREAM IN HETEROGENEOUS ENVIRONMENTS

If you are running icecream daemons in the same icecream network but on machines with incompatible compiler versions, icecream needs to send your build environment to the remote machines (note: they all must be running as root if icecream was compiled without libcap-ng support. In the future icecream may gain the ability to know when machines cannot accept a different environment, but for now it is all or nothing).

Under normal circumstances this is handled transparently by the icecream daemon, which will prepare a tarball with the environment when needed. This is the recommended way, as the daemon will also automatically update the tarball whenever your compiler changes.

If you want to handle this manually for some reason, you have to tell icecream which environment you are using. Use icecc --build-native to create an archive file containing all the files necessary to set up the compiler environment. By default the file will have a random unique name like ddaea39ca1a7c88522b185eca04da2d8.tar.bz2. Rename it to something more descriptive for your convenience, e.g. i386-3.3.1.tar.bz2. Set ICECC_VERSION=filename_of_archive_containing_your_environment in the shell environment where you start the compile jobs, and the file will be transferred to the daemons where your compile jobs run and installed into a chroot environment, so the compile jobs are executed in an environment matching that of the client. This requires that the icecream daemon runs as root.

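The manual workflow described above can be sketched like this; the generated archive name is just the example from the text, yours will differ:

```shell
# Build an archive of the native compiler environment
icecc --build-native
# Rename the randomly named result to something descriptive
mv ddaea39ca1a7c88522b185eca04da2d8.tar.bz2 i386-3.3.1.tar.bz2
# Tell icecream to ship this environment to the daemons
export ICECC_VERSION=$PWD/i386-3.3.1.tar.bz2
make -j8
```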
CROSS-COMPILING USING ICECREAM

SUSE has quite a few good machines whose processors are not from Intel or AMD, so icecream is pretty good at using cross-compiler environments, similar to the above way of spreading compilers. There the ICECC_VERSION variable looks like <native_filename>(,<platform>:<cross_compiler_filename>)*, for example: /work/9.1-i386.tar.bz2,ia64:/work/9.1-cross-ia64.tar.bz2

How to package such a cross compiler is pretty straightforward if you look at what is inside the tarballs generated by icecc --build-native.

CROSS-COMPILING FOR EMBEDDED TARGETS USING ICECREAM

When building for embedded targets like ARM, you will often have a toolchain that runs on your host and produces code for the target. In these situations you can exploit the power of icecream as well.

Create symlinks pointing from where icecc is to the names of your cross compilers (e.g. arm-linux-g++ and arm-linux-gcc), and make sure that these symlinks are in the path, before the path of your toolchain. With $ICECC_CC and $ICECC_CXX you need to tell icecream which compilers to use for preprocessing and local compiling, e.g. set ICECC_CC=arm-linux-gcc and ICECC_CXX=arm-linux-g++.

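As a sketch of the symlink setup above; the directories /usr/lib/icecc/bin and /usr/local/bin are assumptions, use whatever matches your installation:

```shell
# Symlinks named after the cross compilers, pointing at icecc
ln -s /usr/lib/icecc/bin/icecc /usr/local/bin/arm-linux-gcc
ln -s /usr/lib/icecc/bin/icecc /usr/local/bin/arm-linux-g++
# The symlink directory must come before the real toolchain in PATH
export PATH=/usr/local/bin:$PATH
# Tell icecream which compilers to use locally
export ICECC_CC=arm-linux-gcc
export ICECC_CXX=arm-linux-g++
```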
As the next step you need to create a .tar.bz2 of your cross compiler; check the result of icecc --build-native to see what needs to be present.

Finally, set ICECC_VERSION and point it to the .tar.bz2 you have created. When you start compiling, your toolchain will be used.

Note

With ICECC_VERSION you point out on which platforms your toolchain runs; you do not indicate for which target code will be generated.

CROSS-COMPILING FOR MULTIPLE TARGETS IN THE SAME ENVIRONMENT USING ICECREAM

When working with toolchains for multiple targets, icecream can be configured to support multiple toolchains in the same environment.

Multiple toolchains can be configured by appending =<target> to the tarball filename in the ICECC_VERSION variable, where <target> is the cross compiler prefix. The ICECC_VERSION variable will then look like <native_filename>(,<platform>:<cross_compiler_filename>=<target>)*.

Below is an example of how to configure icecream to use two toolchains, /work/toolchain1/bin/arm-eabi-[gcc,g++] and /work/toolchain2/bin/arm-linux-androideabi-[gcc,g++], for the same host architecture:

· Create symbolic links with the cross compiler names (e.g. arm-eabi-[gcc,g++] and arm-linux-androideabi-[gcc,g++]) pointing to where the icecc binary is. Make sure these symbolic links are in the $PATH and before the path of the toolchains.

· Create a tarball file for each toolchain that you want to use with icecream. The icecc-create-env script can be used to create the tarball file for each toolchain, for example: icecc-create-env /work/toolchain1/bin/arm-eabi-gcc and icecc-create-env /work/toolchain2/bin/arm-linux-androideabi-gcc.

· Set ICECC_VERSION to point to the native tarball file and, for each tarball file created, to the toolchains (e.g. ICECC_VERSION=/work/i386-native.tar.gz,/work/arm-eabi-toolchain1.tar.gz=arm-eabi,/work/arm-linux-androideabi-toolchain2.tar.gz=arm-linux-androideabi).

With these steps icecream will use the /work/arm-eabi-toolchain1.tar.gz file for cross compilers with the prefix arm-eabi (e.g. arm-eabi-gcc and arm-eabi-g++), the /work/arm-linux-androideabi-toolchain2.tar.gz file for cross compilers with the prefix arm-linux-androideabi (e.g. arm-linux-androideabi-gcc and arm-linux-androideabi-g++), and the /work/i386-native.tar.gz file for compilers without a prefix, the native compilers.

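The steps above can be sketched as one shell session; paths follow the example in the text, and the symlink directory /usr/local/bin is an assumption:

```shell
# Symlinks for both toolchain prefixes, all pointing at icecc
ln -s "$(command -v icecc)" /usr/local/bin/arm-eabi-gcc
ln -s "$(command -v icecc)" /usr/local/bin/arm-eabi-g++
ln -s "$(command -v icecc)" /usr/local/bin/arm-linux-androideabi-gcc
ln -s "$(command -v icecc)" /usr/local/bin/arm-linux-androideabi-g++
# Package each toolchain
icecc-create-env /work/toolchain1/bin/arm-eabi-gcc
icecc-create-env /work/toolchain2/bin/arm-linux-androideabi-gcc
# Native tarball first, then one =<target> entry per toolchain
export ICECC_VERSION=/work/i386-native.tar.gz,/work/arm-eabi-toolchain1.tar.gz=arm-eabi,/work/arm-linux-androideabi-toolchain2.tar.gz=arm-linux-androideabi
```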
HOW TO COMBINE ICECREAM WITH CCACHE

The easiest way to use ccache with icecream is to set CCACHE_PREFIX to icecc (the actual icecream client wrapper):

export CCACHE_PREFIX=icecc

This will make ccache prefix any compilation command it needs to run with icecc, making it use icecream for the compilation (but not for preprocessing alone).

To actually use ccache, the mechanism is the same as when using icecream alone. Since ccache does not provide any symlinks in /opt/ccache/bin, you can create them manually:

mkdir /opt/ccache/bin
ln -s /usr/bin/ccache /opt/ccache/bin/gcc
ln -s /usr/bin/ccache /opt/ccache/bin/g++

And then compile with

export PATH=/opt/ccache/bin:$PATH

Note however that ccache is not really worth the trouble unless you are recompiling your project from scratch several times a day: it adds quite some overhead for comparing the preprocessor output, it uses quite some disk space, and I found a cache hit rate of 18% a bit too low, so I disabled it again.

DEBUG OUTPUT

You can use the environment variable ICECC_DEBUG to control whether icecream gives debug output. Set it to debug to get debug output. The other possible values are error, warning and info (the -v option for the daemon and scheduler raises the level by one per -v on the command line, so use -vvv for full debug).

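The two mechanisms above side by side, using the values and flags from the text:

```shell
# Client wrapper: log level via environment variable
export ICECC_DEBUG=debug
# Daemon and scheduler: one -v per level, -vvv for full debug
iceccd -d -vvv
icecc-scheduler -d -vvv
```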
AVOIDING OLD HOSTS

It is possible that compilation on some hosts fails because they are too old (typically the kernel on the remote host is too old for the glibc from the local host). Recent icecream versions should automatically detect this and avoid such hosts when compilation would fail. If some hosts are running old icecream versions and it is not possible to upgrade them for some reason, use

export ICECC_IGNORE_UNVERIFIED=1

SOME NUMBERS

Numbers from my test case (some STL C++ genetic algorithm):

· g++ on my machine: 1.6s

· g++ on a fast machine: 1.1s

· icecream using my machine as remote machine: 1.9s

· icecream using the fast machine: 1.8s

The icecream overhead is quite large, as you might notice, but the compiler cannot interleave preprocessing with compilation, the file needs to be read and written once more, and in between it is transferred over the network.

But even if the other computer is faster, using g++ on my local machine is faster. If you are (for whatever reason) alone in your network at some point, you lose all the advantages of distributed compiling and only add the overhead. So icecream has a special case for local compilations (the same special meaning that localhost has within $DISTCC_HOSTS). This brings compiling on my machine with icecream down to 1.7s (the overhead is actually less than 0.1s on average).

As the scheduler is aware of this, it will prefer your own computer if it is free and at least 70% as fast as the fastest available computer.

Keep in mind that this affects only the first compile job; the second one is distributed anyway. So if I had to compile two of my files, I would get:

· g++ -j1 on my machine: 3.2s

· g++ -j1 on the fast machine: 2.2s

· using icecream -j2 on my machine: max(1.7,1.8)=1.8s

· (using icecream -j2 on the other machine: max(1.1,1.8)=1.8s)

The math is a bit tricky and depends a lot on the current state of the compilation network, but make sure you are not blindly assuming make -j2 halves your compilation time.

WHAT IS THE BEST ENVIRONMENT FOR ICECREAM

In most requirements icecream is not special: no matter which distributed compile system you use, you will not have fun if your nodes are connected through a link of 10 MBit or less. Note that icecream compresses input and output files (using LZO), so you can reckon with about 1 MBit per compile job, i.e. more than make -j10 will not be possible without delays.

Remember that more machines are only good if you can use massive parallelization, but you will for sure get the best result if your submitting machine (the one on which you called g++) is fast enough to feed the others. Especially if your project consists of many easy-to-compile files, the preprocessing and file I/O will be enough work to require a quick machine.

The scheduler will try to give you the fastest machines available, so even if you add old machines, they will be used only in exceptional situations, but you can still have bad luck: the scheduler does not know how long a job will take before it has started. So if you have three machines and two quick-to-compile and one long-to-compile source file, you are not safe from a choice where everyone has to wait on the slow machine. Keep that in mind.

NETWORK SETUP FOR ICECREAM (FIREWALLS)

A short overview of the ports icecream requires:

· TCP/10245 on the daemon computers (required)

· TCP/8765 for the scheduler computer (required)

· TCP/8766 for the telnet interface to the scheduler (optional)

· UDP/8765 for broadcast to find the scheduler (optional)

If the monitor cannot find the scheduler, use ICECC_SCHEDULER=host icemon.

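On distributions using firewalld, the port list above could be opened like this; this is a sketch for one common firewall tool, not the only way:

```shell
# On each daemon machine
firewall-cmd --permanent --add-port=10245/tcp
# On the scheduler machine
firewall-cmd --permanent --add-port=8765/tcp
firewall-cmd --permanent --add-port=8766/tcp   # optional telnet interface
firewall-cmd --permanent --add-port=8765/udp   # optional scheduler broadcast
firewall-cmd --reload
```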
SEE ALSO

icecc-scheduler(1), iceccd(1), icemon(1)

AUTHOR

Stephan Kulow <coolo@suse.de>

Michael Matz <matz@suse.de>

Cornelius Schumacher <cschum@suse.de>

...and various other contributors.

April 21st, 2005                                                   icecream(7)