guestfs-performance(1)      Virtualization Support      guestfs-performance(1)

NAME

       guestfs-performance - engineering libguestfs for greatest performance

DESCRIPTION

       This page documents how to get the greatest performance out of
       libguestfs, especially when you expect to use libguestfs to
       manipulate thousands of virtual machines or disk images.

       Three main areas are covered.  Libguestfs runs an appliance (a small
       Linux distribution) inside qemu/KVM.  The first two areas are:
       minimizing the time taken to start this appliance, and reducing the
       number of times the appliance has to be started.  The third area is
       shortening the time taken for inspection of VMs.

BASELINE MEASUREMENTS

       Before making changes to how you use libguestfs, take baseline
       measurements.

   Baseline: Starting the appliance
       On an unloaded machine, time how long it takes to start up the
       appliance:

        time guestfish -a /dev/null run

       Run this command several times in a row and discard the first few
       runs, so that you are measuring a typical "hot cache" case.
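
       For example, a small loop takes several consecutive measurements in
       one go (a sketch; the first run or two may include the time taken to
       build and cache the appliance):

        for i in 1 2 3 4 5; do
            time guestfish -a /dev/null run
        done
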
       Side note for developers: There is a program called boot-benchmark in
       https://github.com/libguestfs/libguestfs-analysis-tools which does
       the same thing, but performs multiple runs and prints the mean and
       standard deviation.

       Explanation

       The guestfish command above starts up the libguestfs appliance on a
       null disk, and then immediately shuts it down.  The first time you
       run the command, it will create an appliance and cache it (usually
       under /var/tmp/.guestfs-*).  Subsequent runs should reuse the cached
       appliance.

       Expected results

       You should expect times of under 6 seconds.  If the times you see on
       an unloaded machine are above this, then see the section
       "TROUBLESHOOTING POOR PERFORMANCE" below.

   Baseline: Performing inspection of a guest
       For this test you will need an unloaded machine and at least one real
       guest or disk image.  If you are planning to use libguestfs against
       only X guests (eg. X = Windows), then using an X guest here would be
       most appropriate.  If you are planning to run libguestfs against a
       mix of guests, then use a mix of guests for testing here.

       Time how long it takes to perform inspection and mount the disks of
       the guest.  Use the first command if you will be using disk images,
       and the second command if you will be using libvirt.

        time guestfish --ro -a disk.img -i exit

        time guestfish --ro -d GuestName -i exit

       Run the command several times in a row and discard the first few
       runs, so that you are measuring a typical "hot cache" case.

       Explanation

       This command starts up the libguestfs appliance on the named disk
       image or libvirt guest, performs libguestfs inspection on it (see
       "INSPECTION" in guestfs(3)), mounts the guest’s disks, then discards
       all these results and shuts down.

       The first time you run the command, it will create an appliance and
       cache it (usually under /var/tmp/.guestfs-*).  Subsequent runs should
       reuse the cached appliance.

       Expected results

       You should expect times which are ≤ 5 seconds greater than those
       measured in the first baseline test above.  (For example, if the
       first baseline test ran in 5 seconds, then this test should run in
       ≤ 10 seconds.)

UNDERSTANDING THE APPLIANCE AND WHEN IT IS BUILT/CACHED

       The first time you use libguestfs, it will build and cache an
       appliance.  This is usually in /var/tmp/.guestfs-*, unless you have
       set $TMPDIR or $LIBGUESTFS_CACHEDIR, in which case it will be under
       that temporary directory.

       For more information about how the appliance is constructed, see
       "SUPERMIN APPLIANCES" in supermin(1).

       Every time libguestfs runs it will check that no host files used by
       the appliance have changed.  If any have, then the appliance is
       rebuilt.  This usually happens when a package is installed or updated
       on the host (eg. using programs like "yum" or "apt-get").  The reason
       for reconstructing the appliance is security: the new program that
       has been installed might contain a security fix, and so we want to
       include the fixed program in the appliance automatically.
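
       For example, you can examine or discard the cached appliance.  The
       path below is the default for a non-root user; adjust it if you have
       set $TMPDIR or $LIBGUESTFS_CACHEDIR:

        # Inspect the cached appliance:
        ls -lh /var/tmp/.guestfs-$(id -u)

        # Delete the cache, forcing a rebuild on the next run:
        rm -rf /var/tmp/.guestfs-$(id -u)
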
       These are the performance implications:

       •   The process of building (or rebuilding) the cached appliance is
           slow, and you can avoid this happening by using a fixed appliance
           (see below).

       •   If not using a fixed appliance, be aware that updating software
           on the host will cause a one-time rebuild of the appliance.

       •   /var/tmp (or $TMPDIR, $LIBGUESTFS_CACHEDIR) should be on a fast
           disk, and have plenty of space for the appliance.

USING A FIXED APPLIANCE

       To fully control when the appliance is built, you can build a fixed
       appliance.  This appliance should be stored on a fast local disk.

       To build the appliance, run the command:

        libguestfs-make-fixed-appliance <directory>

       replacing "<directory>" with the name of a directory where the
       appliance will be stored (normally you would name a subdirectory,
       for example: /usr/local/lib/guestfs/appliance or /dev/shm/appliance).

       Then set $LIBGUESTFS_PATH (and ensure this environment variable is
       set in your libguestfs program), or modify your program so it calls
       "guestfs_set_path".  For example:

        export LIBGUESTFS_PATH=/usr/local/lib/guestfs/appliance

       Now you can run libguestfs programs, virt tools, guestfish etc. as
       normal.  The programs will use your fixed appliance, and will never
       build, rebuild, or cache their own appliance.

       (For detailed information on this subject, see
       libguestfs-make-fixed-appliance(1).)

   Performance of the fixed appliance
       In our testing we did not find that using a fixed appliance gave any
       measurable performance benefit, even when the appliance was located
       in memory (ie. on /dev/shm).  However there are two points to
       consider:

       1.  Using a fixed appliance stops libguestfs from ever rebuilding
           the appliance, meaning that libguestfs will have more predictable
           start-up times.

       2.  The appliance is loaded on demand.  A simple test such as:

            time guestfish -a /dev/null run

           does not load very much of the appliance.  A real libguestfs
           program using complicated API calls would demand-load a lot more
           of the appliance.  Being able to store the appliance in a
           specified location makes the performance more predictable.
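
       If the fixed appliance lives on a disk, one way to make its load
       time more predictable is to pre-read it into the page cache (a
       sketch, assuming $LIBGUESTFS_PATH points at the fixed appliance
       directory):

        cat "$LIBGUESTFS_PATH"/* >/dev/null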

REDUCING THE NUMBER OF TIMES THE APPLIANCE IS LAUNCHED

       By far the most effective, though not always the simplest, way to
       get good performance is to ensure that the appliance is launched the
       minimum number of times.  This will probably involve changing your
       libguestfs application.

       Try to call "guestfs_launch" at most once per target virtual machine
       or disk image.

       Instead of using a separate instance of guestfish(1) for each of a
       series of changes to the same guest, use a single instance of
       guestfish and/or use the guestfish --listen option.
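
       For example, the remote control feature of guestfish(1) lets you
       make several changes to one guest with a single appliance launch
       (the image name and the commands are placeholders):

        eval "$(guestfish --listen -a disk.img)"
        guestfish --remote run
        guestfish --remote -- mount /dev/sda1 /
        guestfish --remote -- touch /example
        guestfish --remote exit
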
       Consider writing your program as a daemon which holds a guest open
       while making a series of changes.  Or marshal all the operations you
       want to perform before opening the guest.

       You can also try adding disks from multiple guests to a single
       appliance.  Before trying this, note the following points:

       1.  Adding multiple guests to one appliance is a security problem
           because it may allow one guest to interfere with the disks of
           another guest.  Only do it if you trust all the guests, or if
           you can group guests by trust.

       2.  There is a hard limit to the number of disks you can add to a
           single appliance.  Call "guestfs_max_disks" in guestfs(3) to get
           this limit.  For further information see "LIMITS" in guestfs(3).

       3.  Using libguestfs this way is complicated.  Disks can have
           unexpected interactions: for example, if two guests use the same
           UUID for a filesystem (because they were cloned), or have volume
           groups with the same name (but see "guestfs_lvm_set_filter").

       virt-df(1) adds multiple disks by default, so the source code for
       this program would be a good place to start.
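
       For example, a minimal sketch which examines the filesystems of two
       (trusted) guests using a single appliance launch (the image paths
       are placeholders):

        guestfish --ro \
          -a /var/lib/libvirt/images/guest1.img \
          -a /var/lib/libvirt/images/guest2.img \
          run : list-filesystems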

SHORTENING THE TIME TAKEN FOR INSPECTION OF VMs

       The main advice is obvious: do not perform inspection (which is
       expensive) unless you need the results.

       If you have previously performed inspection on the guest, then it
       may be safe to cache and reuse the results from last time.
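
       For example, a minimal sketch which caches the inspected root device
       per disk image, keyed on the image's modification time (the path and
       the invalidation test are simplistic placeholders):

        img=/var/lib/libvirt/images/guest.img
        cache=/var/tmp/inspect-$(basename "$img").root
        if [ ! -f "$cache" ] || [ "$img" -nt "$cache" ]; then
            guestfish --ro -a "$img" run : inspect-os > "$cache"
        fi
        root=$(cat "$cache")
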
       Some disks don’t need to be inspected at all: for example, if you
       are creating a disk image, or if the disk image is not a VM, or if
       the disk image has a known layout.

       Even when basic inspection ("guestfs_inspect_os") is required,
       auxiliary inspection operations may be avoided (see the sketch after
       this list):

       •   Mounting disks is only necessary to get further filesystem
           information.

       •   Listing applications ("guestfs_inspect_list_applications") is an
           expensive operation on Linux, but almost free on Windows.

       •   Generating a guest icon ("guestfs_inspect_get_icon") is cheap on
           Linux but expensive on Windows.
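
       For example, to get just the OS type and distro without mounting
       anything, a guestfish remote session can run only the calls needed
       (a sketch; disk.img is a placeholder and a single root filesystem is
       assumed):

        eval "$(guestfish --ro -a disk.img --listen)"
        guestfish --remote run
        root=$(guestfish --remote inspect-os)
        guestfish --remote inspect-get-type "$root"
        guestfish --remote inspect-get-distro "$root"
        guestfish --remote exit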

PARALLEL APPLIANCES

       Libguestfs appliances are mostly I/O bound and you can launch
       multiple appliances in parallel.  Provided there is enough free
       memory, there should be little difference in launching 1 appliance
       vs N appliances in parallel.

       On a 2-core (4-thread) laptop with 16 GB of RAM, using the (not
       especially realistic) test Perl script below, the following plot
       shows excellent scalability when running between 1 and 20 appliances
       in parallel:

         12 ++---+----+----+----+-----+----+----+----+----+---++
            +    +    +    +    +     +    +    +    +    +    *
            |                                                  |
            |                                               *  |
         11 ++                                                ++
            |                                                  |
            |                                                  |
            |                                          *  *    |
         10 ++                                                ++
            |                                        *         |
            |                                                  |
        s   |                                                  |
          9 ++                                                ++
        e   |                                                  |
            |                                     *            |
        c   |                                                  |
          8 ++                                  *             ++
        o   |                                *                 |
            |                                                  |
        n 7 ++                                                ++
            |                              *                   |
        d   |                           *                      |
            |                                                  |
        s 6 ++                                                ++
            |                      *  *                        |
            |                   *                              |
            |                                                  |
          5 ++                                                ++
            |                                                  |
            |                 *                                |
            |            * *                                   |
          4 ++                                                ++
            |                                                  |
            |                                                  |
            +    *  * *    +    +     +    +    +    +    +    +
          3 ++-*-+----+----+----+-----+----+----+----+----+---++
            0    2    4    6    8     10   12   14   16   18   20
                      number of parallel appliances

       It is possible to run many more than 20 appliances in parallel, but
       if you are using the libvirt backend then you should be aware that
       out of the box libvirt limits the number of client connections to
       20.
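
       If you hit that limit, it can be raised in the libvirt daemon
       configuration (a sketch; whether this applies, and which file to
       edit, depends on whether you use the system or session daemon):

        # In /etc/libvirt/libvirtd.conf, then restart libvirtd:
        max_clients = 50
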
       The simple Perl script below was used to collect the data for the
       plot above, but there is much more information on this subject,
       including more advanced test scripts and graphs, available in the
       following blog postings:

       http://rwmj.wordpress.com/2013/02/25/multiple-libguestfs-appliances-in-parallel-part-1/
       http://rwmj.wordpress.com/2013/02/25/multiple-libguestfs-appliances-in-parallel-part-2/
       http://rwmj.wordpress.com/2013/02/25/multiple-libguestfs-appliances-in-parallel-part-3/
       http://rwmj.wordpress.com/2013/02/25/multiple-libguestfs-appliances-in-parallel-part-4/

        #!/usr/bin/env perl

        use strict;
        use threads;
        use warnings;
        use Sys::Guestfs;
        use Time::HiRes qw(time);

        sub test {
            my $g = Sys::Guestfs->new;
            $g->add_drive_ro ("/dev/null");
            $g->launch ();

            # You could add some work for libguestfs to do here.

            $g->close ();
        }

        # Get everything into cache.
        test (); test (); test ();

        for my $nr_threads (1..20) {
            my $start_t = time ();
            my @threads;
            foreach (1..$nr_threads) {
                push @threads, threads->create (\&test);
            }
            foreach (@threads) {
                $_->join ();
                if (my $err = $_->error ()) {
                    die "launch failed with $nr_threads threads: $err";
                }
            }
            my $end_t = time ();
            printf ("%d %.2f\n", $nr_threads, $end_t - $start_t);
        }

TROUBLESHOOTING POOR PERFORMANCE

   Ensure hardware virtualization is available
       Use /proc/cpuinfo to ensure that hardware virtualization is
       available.  Note that you may need to enable it in your BIOS.
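
       For example, an empty result from the following command means that
       the CPU flags for Intel VT-x ("vmx") and AMD-V ("svm") are both
       absent:

        grep -oE 'vmx|svm' /proc/cpuinfo | sort -u
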
       Hardware virt is not usually available inside VMs, and libguestfs
       will run slowly inside another virtual machine whatever you do.
       Nested virtualization does not work well in our experience, and is
       certainly no substitute for running libguestfs on baremetal.
   Ensure KVM is available
       Ensure that KVM is enabled and available to the user that will run
       libguestfs.  It should be safe to set 0666 permissions on /dev/kvm,
       and most distributions now do this.
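
       For example, to check that the device exists and is readable and
       writable by you:

        ls -l /dev/kvm

        # If the device is missing, the KVM module may not be loaded:
        lsmod | grep kvm
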
   Processors to avoid
       Avoid processors that don’t have hardware virtualization, and some
       processors which are simply very slow (AMD Geode being a great
       example).

   Xen dom0
       In Xen, dom0 is a virtual machine, and so hardware virtualization
       is not available.

   Use libguestfs ≥ 1.34 and qemu ≥ 2.7
       During the libguestfs 1.33 development cycle, we spent a large
       amount of time concentrating on boot performance, and added some
       patches to libguestfs, qemu and Linux which in some cases can reduce
       boot times to well under 1 second.  You may therefore get much
       better performance by moving to the versions of libguestfs or qemu
       mentioned in the heading.
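
       To check which versions are installed (the qemu binary name varies
       by distribution, eg. qemu-kvm or qemu-system-x86_64):

        guestfish --version
        qemu-system-x86_64 --version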

DETAILED ANALYSIS

   Boot analysis
       In https://github.com/libguestfs/libguestfs-analysis-tools there is
       a program called "boot-analysis".  This program is able to produce a
       very detailed breakdown of the boot steps (eg. qemu, BIOS, kernel,
       libguestfs init script), and can measure how long it takes to
       perform each step.

   Detailed timings using ts
       Use the ts(1) command (from moreutils) to show detailed timings:

        $ guestfish -a /dev/null run -v |& ts -i '%.s'
        0.000022 libguestfs: launch: program=guestfish
        0.000134 libguestfs: launch: version=1.29.31fedora=23,release=2.fc23,libvirt
        0.000044 libguestfs: launch: backend registered: unix
        0.000035 libguestfs: launch: backend registered: uml
        0.000035 libguestfs: launch: backend registered: libvirt
        0.000032 libguestfs: launch: backend registered: direct
        0.000030 libguestfs: launch: backend=libvirt
        0.000031 libguestfs: launch: tmpdir=/tmp/libguestfsw18rBQ
        0.000029 libguestfs: launch: umask=0002
        0.000031 libguestfs: launch: euid=1000
        0.000030 libguestfs: libvirt version = 1002012 (1.2.12)
        [etc]

       Each timestamp is in seconds, showing the increment since the
       previous line.

   Detailed timings using SystemTap
       You can use SystemTap (stap(1)) to get detailed timings from
       libguestfs programs.

       Save the following script as time.stap:

        global last;

        function display_time () {
              now = gettimeofday_us ();
              delta = 0;
              if (last > 0)
                    delta = now - last;
              last = now;

              printf ("%d (+%d):", now, delta);
        }

        probe begin {
              last = 0;
              printf ("ready\n");
        }

        /* Display all calls to static markers. */
        probe process("/usr/lib*/libguestfs.so.0")
                  .provider("guestfs").mark("*") ? {
              display_time();
              printf ("\t%s %s\n", $$name, $$parms);
        }

        /* Display all calls to guestfs_* functions. */
        probe process("/usr/lib*/libguestfs.so.0")
                  .function("guestfs_[a-z]*") ? {
              display_time();
              printf ("\t%s %s\n", probefunc(), $$parms);
        }

       Run it as root in one window:

        # stap time.stap
        ready

       It prints "ready" when SystemTap has loaded the program.  Run your
       libguestfs program, guestfish or a virt tool in another window.  For
       example:

        $ guestfish -a /dev/null run

       In the stap window you will see a large amount of output, with the
       time taken for each step shown (microseconds in parentheses).  For
       example:

        xxxx (+0):     guestfs_create
        xxxx (+29):    guestfs_set_pgroup g=0x17a9de0 pgroup=0x1
        xxxx (+9):     guestfs_add_drive_opts_argv g=0x17a9de0 [...]
        xxxx (+8):     guestfs_int_safe_strdup g=0x17a9de0 str=0x7f8a153bed5d
        xxxx (+19):    guestfs_int_safe_malloc g=0x17a9de0 nbytes=0x38
        xxxx (+5):     guestfs_int_safe_strdup g=0x17a9de0 str=0x17a9f60
        xxxx (+10):    guestfs_launch g=0x17a9de0
        xxxx (+4):     launch_start
        [etc]

       You will need to consult, and even modify, the source of libguestfs
       to fully understand the output.

   Detailed debugging using gdb
       You can attach to the appliance BIOS/kernel using gdb.  If you know
       what you're doing, this can be a useful way to diagnose boot
       regressions.

       Firstly, you have to change qemu so it runs with the "-S" and "-s"
       options.  These options cause qemu to pause at boot and allow you to
       attach a debugger.  Read qemu(1) for further information.
       Libguestfs invokes qemu several times (to scan the help output and
       so on) and you only want the final invocation of qemu to use these
       options, so use a qemu wrapper script like this:

        #!/bin/bash -

        # Set this to point to the real qemu binary.
        qemu=/usr/bin/qemu-kvm

        if [ "$1" != "-global" ]; then
            # Scanning help output etc.
            exec $qemu "$@"
        else
            # Really running qemu.
            exec $qemu -S -s "$@"
        fi

       Now run guestfish or another libguestfs tool with the qemu wrapper
       (see "QEMU WRAPPERS" in guestfs(3) to understand what this is
       doing):

        LIBGUESTFS_HV=/path/to/qemu-wrapper guestfish -a /dev/null -v run

       This should pause just after qemu launches.  In another window,
       attach to qemu using gdb:

        $ gdb
        (gdb) set architecture i8086
        The target architecture is assumed to be i8086
        (gdb) target remote :1234
        Remote debugging using :1234
        0x0000fff0 in ?? ()
        (gdb) cont

       At this point you can use standard gdb techniques, eg. hitting "^C"
       to interrupt the boot and "bt" to get a stack trace, setting
       breakpoints, etc.  Note that when you are past the BIOS and into the
       Linux kernel, you'll want to change the architecture back to 32 or
       64 bit.

PERFORMANCE REGRESSIONS IN OTHER PROGRAMS

       Sometimes performance regressions happen in other programs (eg.
       qemu, the kernel) that cause problems for libguestfs.

       In https://github.com/libguestfs/libguestfs-analysis-tools there is
       a script, boot-benchmark/boot-benchmark-range.pl, which can be used
       to benchmark libguestfs across a range of git commits in another
       project to find out if any commit is causing a slowdown (or
       speedup).

       To find out how to use this script, consult the manual:

        ./boot-benchmark/boot-benchmark-range.pl --man

SEE ALSO

       supermin(1), guestfish(1), guestfs(3), guestfs-examples(3),
       guestfs-internals(1), libguestfs-make-fixed-appliance(1), stap(1),
       qemu(1), gdb(1), http://libguestfs.org/.

AUTHORS

       Richard W.M. Jones ("rjones at redhat dot com")


COPYRIGHT

       Copyright (C) 2012-2020 Red Hat Inc.

LICENSE

       This library is free software; you can redistribute it and/or modify
       it under the terms of the GNU Lesser General Public License as
       published by the Free Software Foundation; either version 2 of the
       License, or (at your option) any later version.

       This library is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
       Lesser General Public License for more details.

       You should have received a copy of the GNU Lesser General Public
       License along with this library; if not, write to the Free Software
       Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
       02110-1301 USA

BUGS

       To get a list of bugs against libguestfs, use this link:
       https://bugzilla.redhat.com/buglist.cgi?component=libguestfs&product=Virtualization+Tools

       To report a new bug against libguestfs, use this link:
       https://bugzilla.redhat.com/enter_bug.cgi?component=libguestfs&product=Virtualization+Tools

       When reporting a bug, please supply:

       •   The version of libguestfs.

       •   Where you got libguestfs (eg. which Linux distro, compiled from
           source, etc.)

       •   Describe the bug accurately and give a way to reproduce it.

       •   Run libguestfs-test-tool(1) and paste the complete, unedited
           output into the bug report.

libguestfs-1.48.3                 2022-05-26            guestfs-performance(1)