guestfs-faq(1)              Virtualization Support              guestfs-faq(1)

NAME

6       guestfs-faq - libguestfs Frequently Asked Questions (FAQ)
7

ABOUT LIBGUESTFS

9   What is libguestfs?
10       libguestfs is a way to create, access and modify disk images.  You can
11       look inside disk images, modify the files they contain, create them
12       from scratch, resize them, and much more.  It’s especially useful from
13       scripts and programs and from the command line.
14
15       libguestfs is a C library (hence "lib-"), and a set of tools built on
16       this library, and bindings for many common programming languages.
17
18       For more information about what libguestfs can do read the introduction
19       on the home page (http://libguestfs.org).
20
21   What are the virt tools?
22       Virt tools (website: http://virt-tools.org) are a whole set of
23       virtualization management tools aimed at system administrators.  Some
24       of them come from libguestfs, some from libvirt and many others from
25       other open source projects.  So virt tools is a superset of libguestfs.
26       However libguestfs comes with many important tools.  See
27       http://libguestfs.org for a full list.
28
29   Does libguestfs need { libvirt / KVM / Red Hat / Fedora }?
30       No!
31
32       libvirt is not a requirement for libguestfs.
33
34       libguestfs works with any disk image, including ones created in VMware,
35       KVM, qemu, VirtualBox, Xen, and many other hypervisors, and ones which
36       you have created from scratch.
37
38       Red Hat sponsors (ie. pays for) development of libguestfs and a huge
39       number of other open source projects.  But you can run libguestfs and
40       the virt tools on many different Linux distros and Mac OS X.  We try
41       our best to support all Linux distros as first-class citizens.  Some
42       virt tools have been ported to Windows.
43
44   How does libguestfs compare to other tools?
45       vs. kpartx
46           Libguestfs takes a different approach from kpartx.  kpartx needs
47           root, and mounts filesystems on the host kernel (which can be
48           insecure - see guestfs-security(1)).  Libguestfs isolates your host
49           kernel from guests, is more flexible, scriptable, supports LVM,
50           doesn't require root, is isolated from other processes, and cleans
51           up after itself.  Libguestfs is more than just file access because
52           you can use it to create images from scratch.
53
54       vs. vdfuse
55           vdfuse is like kpartx but for VirtualBox images.  See the kpartx
56           comparison above.  You can use libguestfs on the partition files
57           exposed by vdfuse, although it’s not necessary since libguestfs can
58           access VirtualBox images directly.
59
60       vs. qemu-nbd
61           NBD (Network Block Device) is a protocol for exporting block
62           devices over the network.  qemu-nbd is an NBD server which can
63           handle any disk format supported by qemu (eg. raw, qcow2).  You can
64           use libguestfs and qemu-nbd or nbdkit together to access block
65           devices over the network, for example: "guestfish -a nbd://remote"
66
67       vs. mounting filesystems in the host
68           Mounting guest filesystems in the host is insecure and should be
69           avoided completely for untrusted guests.  Use libguestfs to provide
70           a layer of protection against filesystem exploits.  See also
71           guestmount(1).
72
73       vs. parted
74           Libguestfs supports LVM.  Libguestfs uses parted and provides most
75           parted features through the libguestfs API.
76

GETTING HELP AND REPORTING BUGS

78   How do I know what version I'm using?
79       The simplest method is:
80
81        guestfish --version
82
83       Libguestfs development happens along an unstable branch and we
84       periodically create a stable branch which we backport stable patches
85       to.  To find out more, read "LIBGUESTFS VERSION NUMBERS" in guestfs(3).
86
87   How can I get help?
88   What mailing lists or chat rooms are available?
89       If you are a Red Hat customer using Red Hat Enterprise Linux, please
90       contact Red Hat Support: http://redhat.com/support
91
92       There is a mailing list, mainly for development, but users are also
93       welcome to ask questions about libguestfs and the virt tools:
94       https://lists.libguestfs.org
95
96       You can also talk to us on IRC channel "#guestfs" on Libera Chat.
97       We're not always around, so please stay in the channel after asking
98       your question and someone will get back to you.
99
100       For other virt tools (not ones supplied with libguestfs) there is a
101       general virt tools mailing list:
102       https://www.redhat.com/mailman/listinfo/virt-tools-list
103
104   How do I report bugs?
105       Please use the following link to enter a bug in Bugzilla:
106
107       https://bugzilla.redhat.com/enter_bug.cgi?component=libguestfs&product=Virtualization+Tools
108
109       Include as much detail as you can and a way to reproduce the problem.
110
111       Include the full output of libguestfs-test-tool(1).
112
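       For example, one way to capture the complete output (while still
       watching it scroll past) is to pipe it through tee; the log file name
       here is just an example:

        libguestfs-test-tool 2>&1 | tee /tmp/test-tool.log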

COMMON PROBLEMS

114       See also "LIBGUESTFS GOTCHAS" in guestfs(3) for some "gotchas" with
115       using the libguestfs API.
116
117   "Could not allocate dynamic translator buffer"
118       This obscure error is in fact an SELinux failure.  You have to enable
119       the following SELinux boolean:
120
121        setsebool -P virt_use_execmem=on
122
123       For more information see
124       https://bugzilla.redhat.com/show_bug.cgi?id=806106.
125
126   "child process died unexpectedly"
127       [This error message was changed in libguestfs 1.21.18 to something more
128       explanatory.]
129
130       This error indicates that qemu failed or the host kernel could not
131       boot.  To get further information about the failure, you have to run:
132
133        libguestfs-test-tool
134
135       If, after using this, you still don’t understand the failure, contact
136       us (see previous section).
137
138   libguestfs: error: cannot find any suitable libguestfs supermin, fixed or
139       old-style appliance on LIBGUESTFS_PATH
140   febootstrap-supermin-helper: ext2: parent directory not found
141   supermin-helper: ext2: parent directory not found
142       [This issue is fixed permanently in libguestfs ≥ 1.26.]
143
144       If you see any of these errors on Debian/Ubuntu, you need to run the
145       following command:
146
147        sudo update-guestfs-appliance
148
149   "Permission denied" when running libguestfs as root
150       You get a permission denied error when opening a disk image, even
151       though you are running libguestfs as root.
152
153       This is caused by libvirt, and so only happens when using the libvirt
154       backend.  When run as root, libvirt decides to run the qemu appliance
155       as user "qemu.qemu".  Unfortunately this usually means that qemu cannot
156       open disk images, especially if those disk images are owned by root, or
157       are present in directories which require root access.
158
159       There is a bug open against libvirt to fix this:
160       https://bugzilla.redhat.com/show_bug.cgi?id=1045069
161
162       You can work around this by one of the following methods:
163
164       •   Switch to the direct backend:
165
166            export LIBGUESTFS_BACKEND=direct
167
168       •   Don’t run libguestfs as root.
169
170       •   Chmod the disk image and any parent directories so that the qemu
171           user can access them.
172
173       •   (Nasty) Edit /etc/libvirt/qemu.conf and change the "user" setting.
174
175   execl: /init: Permission denied
176       Note: If this error happens when you are using a distro package of
177       libguestfs (eg. from Fedora, Debian, etc) then file a bug against the
178       distro.  This is not an error which normal users should ever see if the
179       distro package has been prepared correctly.
180
181       This error happens during the supermin boot phase of starting the
182       appliance:
183
184        supermin: mounting new root on /root
185        supermin: chroot
186        execl: /init: Permission denied
187        supermin: debug: listing directory /
188        [...followed by a lot of debug output...]
189
190       This is a complicated bug related to supermin(1) appliances.  The
191       appliance is constructed by copying files like /bin/bash and many
192       libraries from the host.  The file "hostfiles" lists the files that
193       should be copied from the host into the appliance.  If some files don't
194       exist on the host then they are missed out, but if these files are
195       needed in order to (eg) run /bin/bash then you'll see the above error.
196
197       Diagnosing the problem involves studying the libraries needed by
198       /bin/bash, ie:
199
200        ldd /bin/bash
201
202       comparing that with "hostfiles", with the files actually available in
203       the host filesystem, and with the debug output printed in the error
204       message.  Once you've worked out which file is missing, install that
205       file using your package manager and try again.
206
207       You should also check that files like /init and /bin/bash (in the
208       appliance) are executable.  The debug output shows file modes.
209

DOWNLOADING, INSTALLING, COMPILING LIBGUESTFS

211   Where can I get the latest binaries for ...?
212       Fedora ≥ 11
213           Use:
214
215            yum install '*guestf*'
216
217           For the latest builds, see:
218           http://koji.fedoraproject.org/koji/packageinfo?packageID=8391
219
220       Red Hat Enterprise Linux
221           RHEL 6
222           RHEL 7
223               It is part of the default install.  On RHEL 6 and 7 (only) you
224               have to install "libguestfs-winsupport" to get Windows guest
225               support.
226
227       Debian and Ubuntu
228           For libguestfs < 1.26, after installing libguestfs you need to do:
229
230            sudo update-guestfs-appliance
231
232           (This script has been removed on Debian/Ubuntu with libguestfs ≥
233           1.26 and instead the appliance is built on demand.)
234
235           On Ubuntu only:
236
237            sudo chmod 0644 /boot/vmlinuz*
238
239           You may need to add yourself to the "kvm" group:
240
241            sudo usermod -a -G kvm yourlogin
242
243           Debian Squeeze (6)
244               Hilko Bengen has built libguestfs in squeeze backports:
245               http://packages.debian.org/search?keywords=guestfs&searchon=names&section=all&suite=squeeze-backports
246
247           Debian Wheezy and later (7+)
248               Hilko Bengen supports libguestfs on Debian.  Official Debian
249               packages are available:
250               http://packages.debian.org/search?keywords=libguestfs
251
252           Ubuntu
253               We don’t have a full time Ubuntu maintainer, and the packages
254               supplied by Canonical (which are outside our control) are
255               sometimes broken.
256
257               Canonical decided to change the permissions on the kernel so
258               that it's not readable except by root.  This is completely
259               stupid, but they won't change it
260               (https://bugs.launchpad.net/ubuntu/+source/linux/+bug/759725).
261               So every user should do this:
262
263                sudo chmod 0644 /boot/vmlinuz*
264
265               Ubuntu 12.04
266                   libguestfs in this version of Ubuntu works, but you need to
267                   update febootstrap and seabios to the latest versions.
268
269                   You need febootstrap ≥ 3.14-2 from:
270                   http://packages.ubuntu.com/precise/febootstrap
271
272                   After installing or updating febootstrap, rebuild the
273                   appliance:
274
275                    sudo update-guestfs-appliance
276
277                   You need seabios ≥ 0.6.2-0ubuntu2.1 or ≥ 0.6.2-0ubuntu3
278                   from: http://packages.ubuntu.com/precise-updates/seabios or
279                   http://packages.ubuntu.com/quantal/seabios
280
281                   Also you need to do (see above):
282
283                    sudo chmod 0644 /boot/vmlinuz*
284
285       Gentoo
286           Libguestfs was added to Gentoo in 2012-07 by Andreis Vinogradovs
287           (libguestfs) and Maxim Koltsov (mainly hivex).  Do:
288
289            emerge libguestfs
290
291       Mageia
292           Libguestfs was added to Mageia in 2013-08. Do:
293
294            urpmi libguestfs
295
296       SuSE
297           Libguestfs was added to SuSE in 2012 by Olaf Hering.
298
299       ArchLinux
300           Libguestfs was added to the AUR in 2010.
301
302       Other Linux distro
303           Compile from source (next section).
304
305       Other non-Linux distro
306           You'll have to compile from source, and port it.
307
308   How can I compile and install libguestfs from source?
309       You can compile libguestfs from git or a source tarball.  Read the
310       README file before starting.
311
312       Git: https://github.com/libguestfs/libguestfs Source tarballs:
313       http://libguestfs.org/download
314
315       Don’t run "make install"!  Use the "./run" script instead (see README).
316
317   How can I compile and install libguestfs if my distro doesn't have new
318       enough qemu/supermin/kernel?
319       Libguestfs needs supermin 5.  If supermin 5 hasn't been ported to your
320       distro, then see the question below.
321
322       First compile qemu, supermin and/or the kernel from source.  You do not
323       need to "make install" them.
324
325       In the libguestfs source directory, create two files.  "localconfigure"
326       should contain:
327
328        source localenv
329        #export PATH=/tmp/qemu/x86_64-softmmu:$PATH
330        ./configure --prefix /usr "$@"
331
332       Make "localconfigure" executable.
333
334       "localenv" should contain:
335
336        #export SUPERMIN=/tmp/supermin/src/supermin
337        #export LIBGUESTFS_HV=/tmp/qemu/x86_64-softmmu/qemu-system-x86_64
338        #export SUPERMIN_KERNEL=/tmp/linux/arch/x86/boot/bzImage
339        #export SUPERMIN_KERNEL_VERSION=4.XX.0
340        #export SUPERMIN_MODULES=/tmp/lib/modules/4.XX.0
341
342       Uncomment and adjust these lines as required to use the alternate
343       programs you have compiled.
344
345       Use "./localconfigure" instead of "./configure", but otherwise you
346       compile libguestfs as usual.
347
348       Don’t run "make install"!  Use the "./run" script instead (see README).
349
350   How can I compile and install libguestfs without supermin?
351       If supermin 5 supports your distro, but you don’t happen to have a new
352       enough supermin installed, then see the previous question.
353
354       If supermin 5 doesn't support your distro at all, you will need to use
355       the "fixed appliance method" where you use a pre-compiled binary
356       appliance.  To build libguestfs without supermin, you need to pass
       "--disable-appliance --disable-daemon" to either ./autogen.sh or
       ./configure (depending whether you are building respectively from git
359       or from tarballs).  Then, when using libguestfs, you must set the
360       "LIBGUESTFS_PATH" environment variable to the directory of a pre-
361       compiled appliance, as also described in "FIXED APPLIANCE" in
362       guestfs-internals(1).
363
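       A minimal sketch of the above, assuming a tarball build and a prebuilt
       appliance unpacked into /usr/local/lib/guestfs/appliance (both paths
       are only examples):

        ./configure --disable-appliance --disable-daemon
        make
        # at run time, point libguestfs at the prebuilt appliance:
        export LIBGUESTFS_PATH=/usr/local/lib/guestfs/appliance
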
364       For pre-compiled appliances, see also:
365       http://libguestfs.org/download/binaries/appliance/.
366
367       Patches to port supermin to more Linux distros are welcome.
368
369   How can I add support for sVirt?
370       Note for Fedora/RHEL users: This configuration is the default starting
371       with Fedora 18 and RHEL 7.  If you find any problems, please let us
372       know or file a bug.
373
374       SVirt provides a hardened appliance using SELinux, making it very hard
375       for a rogue disk image to "escape" from the confinement of libguestfs
376       and damage the host (it's fair to say that even in standard libguestfs
377       this would be hard, but sVirt provides an extra layer of protection for
378       the host and more importantly protects virtual machines on the same
379       host from each other).
380
381       Currently to enable sVirt you will need libvirt ≥ 0.10.2 (1.0 or later
382       preferred), libguestfs ≥ 1.20, and the SELinux policies from recent
383       Fedora.  If you are not running Fedora 18+, you will need to make
384       changes to your SELinux policy - contact us on the mailing list.
385
386       Once you have the requirements, do:
387
388        ./configure --with-default-backend=libvirt       # libguestfs >= 1.22
389        ./configure --with-default-attach-method=libvirt # libguestfs <= 1.20
390        make
391
392       Set SELinux to Enforcing mode, and sVirt should be used automatically.
393
394       All, or almost all, features of libguestfs should work under sVirt.
395       There is one known shortcoming: virt-rescue(1) will not use libvirt
396       (hence sVirt), but falls back to direct launch of qemu.  So you won't
397       currently get the benefit of sVirt protection when using virt-rescue.
398
       You can check whether sVirt is being used by enabling libvirtd logging
       (see /etc/libvirt/libvirtd.conf), killing and restarting libvirtd, and
       checking the log files for "Setting SELinux context on ..." messages.
402
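       For example, assuming your "log_outputs" setting sends messages to
       /var/log/libvirt/libvirtd.log (the path depends on your
       configuration):

        grep 'Setting SELinux context' /var/log/libvirt/libvirtd.log
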
403       In theory sVirt should support AppArmor, but we have not tried it.  It
404       will almost certainly require patching libvirt and writing an AppArmor
405       policy.
406
407   Libguestfs has a really long list of dependencies!
408       The base library doesn't depend on very much, but there are three
409       causes of the long list of other dependencies:
410
411       1.  Libguestfs has to be able to read and edit many different disk
412           formats.  For example, XFS support requires XFS tools.
413
414       2.  There are language bindings for many different languages, all
415           requiring their own development tools.  All language bindings
416           (except C) are optional.
417
418       3.  There are some optional library features which can be disabled.
419
420       Since libguestfs ≥ 1.26 it is possible to split up the appliance
421       dependencies (item 1 in the list above) and thus have (eg)
422       "libguestfs-xfs" as a separate subpackage for processing XFS disk
423       images.  We encourage downstream packagers to start splitting the base
424       libguestfs package into smaller subpackages.
425
426   Errors during launch on Fedora ≥ 18, RHEL ≥ 7
427       In Fedora ≥ 18 and RHEL ≥ 7, libguestfs uses libvirt to manage the
428       appliance.  Previously (and upstream) libguestfs runs qemu directly:
429
430        ┌──────────────────────────────────┐
431        │ libguestfs                       │
432        ├────────────────┬─────────────────┤
433        │ direct backend │ libvirt backend │
434        └────────────────┴─────────────────┘
435               ↓                  ↓
436           ┌───────┐         ┌──────────┐
437           │ qemu  │         │ libvirtd │
438           └───────┘         └──────────┘
439
440                              ┌───────┐
441                              │ qemu  │
442                              └───────┘
443
444           upstream          Fedora 18+
445           non-Fedora         RHEL 7+
446           non-RHEL
447
448       The libvirt backend is more sophisticated, supporting SELinux/sVirt
449       (see above) and more.  It is, however, more complex and so less robust.
450
451       If you have permissions problems using the libvirt backend, you can
452       switch to the direct backend by setting this environment variable:
453
454        export LIBGUESTFS_BACKEND=direct
455
456       before running any libguestfs program or virt tool.
457
458   How can I switch to a fixed / prebuilt appliance?
459       This may improve the stability and performance of libguestfs on Fedora
460       and RHEL.
461
462       Any time after installing libguestfs, run the following commands as
463       root:
464
465        mkdir -p /usr/local/lib/guestfs/appliance
466        libguestfs-make-fixed-appliance /usr/local/lib/guestfs/appliance
467        ls -l /usr/local/lib/guestfs/appliance
468
469       Now set the following environment variable before using libguestfs or
470       any virt tool:
471
472        export LIBGUESTFS_PATH=/usr/local/lib/guestfs/appliance
473
474       Of course you can change the path to any directory you want.  You can
475       share the appliance across machines that have the same architecture
476       (eg. all x86-64), but note that libvirt will prevent you from sharing
477       the appliance across NFS because of permissions problems (so either
478       switch to the direct backend or don't use NFS).
479
480   How can I speed up libguestfs builds?
481       By far the most important thing you can do is to install and properly
482       configure Squid.  Note that the default configuration that ships with
483       Squid is rubbish, so configuring it is not optional.
484
485       A very good place to start with Squid configuration is here:
486       https://fedoraproject.org/wiki/Extras/MockTricks#Using_Squid_to_Speed_Up_Mock_package_downloads
487
488       Make sure Squid is running, and that the environment variables
489       $http_proxy and $ftp_proxy are pointing to it.
490
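       For example, assuming Squid is running locally on its default port
       (3128):

        export http_proxy=http://localhost:3128
        export ftp_proxy=http://localhost:3128
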
491       With Squid running and correctly configured, appliance builds should be
492       reduced to a few minutes.
493
   How can I speed up libguestfs builds (Debian)?
496       Hilko Bengen suggests using "approx" which is a Debian archive proxy
497       (http://packages.debian.org/approx).  This tool is documented on Debian
498       in the approx(8) manual page.
499

SPEED, DISK SPACE USED BY LIBGUESTFS

501       Note: Most of the information in this section has moved:
502       guestfs-performance(1).
503
504   Upload or write seem very slow.
505       If the underlying disk is not fully allocated (eg. sparse raw or qcow2)
506       then writes can be slow because the host operating system has to do
507       costly disk allocations while you are writing. The solution is to use a
508       fully allocated format instead, ie. non-sparse raw, or qcow2 with the
509       "preallocation=metadata" option.
510
511   Libguestfs uses too much disk space!
512       libguestfs caches a large-ish appliance in:
513
514        /var/tmp/.guestfs-<UID>
515
516       If the environment variable "TMPDIR" is defined, then
517       $TMPDIR/.guestfs-<UID> is used instead.
518
519       It is safe to delete this directory when you are not using libguestfs.
520
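       For example, when no libguestfs program is running:

        rm -rf /var/tmp/.guestfs-*
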
521   virt-sparsify seems to make the image grow to the full size of the virtual
522       disk
523       If the input to virt-sparsify(1) is raw, then the output will be raw
524       sparse.  Make sure you are measuring the output with a tool which
525       understands sparseness such as "du -sh".  It can make a huge
526       difference:
527
528        $ ls -lh test1.img
529        -rw-rw-r--. 1 rjones rjones 100M Aug  8 08:08 test1.img
530        $ du -sh test1.img
531        3.6M   test1.img
532
533       (Compare the apparent size 100M vs the actual size 3.6M)
534
535       If all this confuses you, use a non-sparse output format by specifying
536       the --convert option, eg:
537
538        virt-sparsify --convert qcow2 disk.raw disk.qcow2
539
540   Why doesn't virt-resize work on the disk image in-place?
541       Resizing a disk image is very tricky -- especially making sure that you
542       don't lose data or break the bootloader.  The current method
543       effectively creates a new disk image and copies the data plus
544       bootloader from the old one.  If something goes wrong, you can always
545       go back to the original.
546
547       If we were to make virt-resize work in-place then there would have to
548       be limitations: for example, you wouldn't be allowed to move existing
549       partitions (because moving data across the same disk is most likely to
550       corrupt data in the event of a power failure or crash), and LVM would
551       be very difficult to support (because of the almost arbitrary mapping
552       between LV content and underlying disk blocks).
553
554       Another method we have considered is to place a snapshot over the
555       original disk image, so that the original data is untouched and only
556       differences are recorded in the snapshot.  You can do this today using
       "qemu-img create" + "virt-resize", but qemu currently isn't smart
       enough to notice when a block written to the snapshot is identical to
       the block already in the backing disk, so you will find that this
       doesn't save you any space or time.
561
562       In summary, this is a hard problem, and what we have now mostly works
563       so we are reluctant to change it.
564
565   Why doesn't virt-sparsify work on the disk image in-place?
566       In libguestfs ≥ 1.26, virt-sparsify can now work on disk images in
567       place.  Use:
568
569        virt-sparsify --in-place disk.img
570
571       But first you should read "IN-PLACE SPARSIFICATION" in
572       virt-sparsify(1).
573

PROBLEMS OPENING DISK IMAGES

575   Remote libvirt guests cannot be opened.
576       Opening remote libvirt guests is not supported at this time.  For
577       example this won't work:
578
579        guestfish -c qemu://remote/system -d Guest
580
581       To open remote disks you have to export them somehow, then connect to
582       the export.  For example if you decided to use NBD:
583
584        remote$ qemu-nbd -t -p 10809 guest.img
585         local$ guestfish -a nbd://remote:10809 -i
586
587       Other possibilities include ssh (if qemu is recent enough), NFS or
588       iSCSI.  See "REMOTE STORAGE" in guestfs(3).
589
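       For example, if your qemu has the ssh block driver, something like
       this may work (the remote path is only a placeholder):

        guestfish -a ssh://user@remote/var/lib/libvirt/images/guest.img -i
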
590   How can I open this strange disk source?
591       You have a disk image located inside another system that requires
592       access via a library / HTTP / REST / proprietary API, or is compressed
593       or archived in some way.  (One example would be remote access to
594       OpenStack glance images without actually downloading them.)
595
596       We have a sister project called nbdkit
597       (https://github.com/libguestfs/nbdkit).  This project lets you turn any
598       disk source into an NBD server.  Libguestfs can access NBD servers
599       directly, eg:
600
601        guestfish -a nbd://remote
602
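       For example, a rough sketch that serves a local image with the nbdkit
       file plugin and opens it over NBD (in real use you would pick a plugin
       matching your disk source):

        nbdkit file disk.img
        guestfish -a nbd://localhost -i
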
603       nbdkit is liberally licensed, so you can link it to or include it in
604       proprietary libraries and code.  It also has a simple, stable plugin
605       API so you can easily write plugins against the API which will continue
606       to work in future.
607
608   Error opening VMDK disks: "uses a vmdk feature which is not supported by
609       this qemu version: VMDK version 3"
610       Qemu (and hence libguestfs) only supports certain VMDK disk images.
611       Others won't work, giving this or similar errors.
612
613       Ideally someone would fix qemu to support the latest VMDK features, but
614       in the meantime you have three options:
615
616       1.  If the guest is hosted on a live, reachable ESX server, then locate
617           and download the disk image called somename-flat.vmdk.  Despite the
618           name, this is a raw disk image, and can be opened by anything.
619
620           If you have a recent enough version of qemu and libguestfs, then
621           you may be able to access this disk image remotely using either
622           HTTPS or ssh.  See "REMOTE STORAGE" in guestfs(3).
623
624       2.  Use VMware’s proprietary vdiskmanager tool to convert the image to
625           raw format.
626
627       3.  Use nbdkit with the proprietary VDDK plugin to live export the disk
628           image as an NBD source.  This should allow you to read and write
629           the VMDK file.
630
631   UFS disks (as used by BSD) cannot be opened.
632       The UFS filesystem format has many variants, and these are not self-
633       identifying.  The Linux kernel has to be told which variant of UFS it
634       has to use, which libguestfs cannot know.
635
636       You have to pass the right "ufstype" mount option when mounting these
637       filesystems.
638
639       See https://www.kernel.org/doc/Documentation/filesystems/ufs.txt
640
641   Windows ReFS
642       Windows ReFS is Microsoft’s ZFS/Btrfs copy.  This filesystem has not
643       yet been reverse engineered and implemented in the Linux kernel, and
644       therefore libguestfs doesn't support it.  At the moment it seems to be
645       very rare "in the wild".
646
647   Non-ASCII characters don’t appear on VFAT filesystems.
648       Typical symptoms of this problem:
649
650       •   You get an error when you create a file where the filename contains
651           non-ASCII characters, particularly non 8-bit characters from Asian
652           languages (Chinese, Japanese, etc).  The filesystem is VFAT.
653
654       •   When you list a directory from a VFAT filesystem, filenames appear
655           as question marks.
656
657       This is a design flaw of the GNU/Linux system.
658
659       VFAT stores long filenames as UTF-16 characters.  When opening or
660       returning filenames, the Linux kernel has to translate these to some
661       form of 8 bit string.  UTF-8 would be the obvious choice, except for
662       Linux users who persist in using non-UTF-8 locales (the user’s locale
663       is not known to the kernel because it’s a function of libc).
664
665       Therefore you have to tell the kernel what translation you want done
666       when you mount the filesystem.  The two methods are the "iocharset"
667       parameter (which is not relevant to libguestfs) and the "utf8" flag.
668
669       So to use a VFAT filesystem you must add the "utf8" flag when mounting.
670       From guestfish, use:
671
672        ><fs> mount-options utf8 /dev/sda1 /
673
674       or on the guestfish command line:
675
676        guestfish [...] -m /dev/sda1:/:utf8
677
678       or from the API:
679
680        guestfs_mount_options (g, "utf8", "/dev/sda1", "/");
681
682       The kernel will then translate filenames to and from UTF-8 strings.
683
684       We considered adding this mount option transparently, but unfortunately
685       there are several problems with doing that:
686
       •   On some Linux systems, the "utf8" mount option doesn't work.  We
           don't know precisely which systems are affected or why, but this
           was reliably reported by one user.
690
691       •   It would prevent you from using the "iocharset" parameter because
692           it is incompatible with "utf8".  It is probably not a good idea to
693           use this parameter, but we don't want to prevent it.
694
695   Non-ASCII characters appear as underscore (_) on ISO9660 filesystems.
696       The filesystem was not prepared correctly with mkisofs or genisoimage.
697       Make sure the filesystem was created using Joliet and/or Rock Ridge
698       extensions.  libguestfs does not require any special mount options to
699       handle the filesystem.
700
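       For example, when creating an image with genisoimage, the -J (Joliet)
       and -R (Rock Ridge) options enable these extensions:

        genisoimage -J -R -o output.iso /path/to/files
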
701   Cannot open Windows guests which use NTFS.
702       You see errors like:
703
704        mount: unknown filesystem type 'ntfs'
705
706       On Red Hat Enterprise Linux or CentOS < 7.2, you have to install the
707       libguestfs-winsupport package.  In RHEL ≥ 7.2, "libguestfs-winsupport"
708       is part of the base RHEL distribution, but see the next question.
709
710   "mount: unsupported filesystem type" with NTFS in RHEL ≥ 7.2
711       In RHEL 7.2 we were able to add "libguestfs-winsupport" to the base
712       RHEL distribution, but we had to disable the ability to use it for
713       opening and editing filesystems.  It is only supported when used with
714       virt-v2v(1).  If you try to use guestfish(1) or guestmount(1) or some
715       other programs on an NTFS filesystem, you will see the error:
716
717        mount: unsupported filesystem type
718
719       This is not a supported configuration, and it will not be made to work
720       in RHEL.  Don't bother to open a bug about it, as it will be
721       immediately "CLOSED -> WONTFIX".
722
723       You may compile your own libguestfs removing this restriction, but that
724       won't be endorsed or supported by Red Hat.
725
726   Cannot open or inspect RHEL 7 guests.
727   Cannot open Linux guests which use XFS.
728       RHEL 7 guests, and any other guests that use XFS, can be opened by
729       libguestfs, but you have to install the "libguestfs-xfs" package.
730

USING LIBGUESTFS IN YOUR OWN PROGRAMS

732   The API has hundreds of methods, where do I start?
733       We recommend you start by reading the API overview: "API OVERVIEW" in
734       guestfs(3).
735
736       Although the API overview covers the C API, it is still worth reading
737       even if you are going to use another programming language, because the
738       API is the same, just with simple logical changes to the names of the
739       calls:
740
741                         C  guestfs_ln_sf (g, target, linkname);
742                    Python  g.ln_sf (target, linkname);
743                     OCaml  g#ln_sf target linkname;
744                      Perl  $g->ln_sf (target, linkname);
745         Shell (guestfish)  ln-sf target linkname
746                       PHP  guestfs_ln_sf ($g, $target, $linkname);
747
748       Once you're familiar with the API overview, you should look at this
749       list of starting points for other language bindings: "USING LIBGUESTFS
750       WITH OTHER PROGRAMMING LANGUAGES" in guestfs(3).
751
752   Can I use libguestfs in my proprietary / closed source / commercial
753       program?
754       In general, yes.  However this is not legal advice - read the license
755       that comes with libguestfs, and if you have specific questions contact
756       a lawyer.
757
758       In the source tree the license is in the file "COPYING.LIB" (LGPLv2+
759       for the library and bindings) and "COPYING" (GPLv2+ for the standalone
760       programs).
761

DEBUGGING LIBGUESTFS

763   Help, it’s not working!
764       If no libguestfs program seems to work at all, run the program below
765       and paste the complete, unedited output into an email to "libguestfs" @
766       "redhat.com":
767
768        libguestfs-test-tool
769
770       If a particular operation fails, supply all the information in this
771       checklist, in an email to "libguestfs" @ "redhat.com":
772
773       1.  What are you trying to do?
774
775       2.  What exact command(s) did you run?
776
777       3.  What was the precise error or output of these commands?
778
779       4.  Enable debugging, run the commands again, and capture the complete
780           output.  Do not edit the output.
781
782            export LIBGUESTFS_DEBUG=1
783            export LIBGUESTFS_TRACE=1
784
785       5.  Include the version of libguestfs, the operating system version,
786           and how you installed libguestfs (eg. from source, "yum install",
787           etc.)
788
789   How do I debug when using any libguestfs program or tool (eg. virt-
790       customize or virt-df)?
791       There are two "LIBGUESTFS_*" environment variables you can set in order
792       to get more information from libguestfs.
793
794       "LIBGUESTFS_TRACE"
795           Set this to 1 and libguestfs will print out each command / API call
796           in a format which is similar to guestfish commands.
797
798       "LIBGUESTFS_DEBUG"
799           Set this to 1 in order to enable massive amounts of debug messages.
800           If you think there is some problem inside the libguestfs appliance,
801           then you should use this option.
802
803       To set these from the shell, do this before running the program:
804
805        export LIBGUESTFS_TRACE=1
806        export LIBGUESTFS_DEBUG=1
807
808       For csh/tcsh the equivalent commands would be:
809
810        setenv LIBGUESTFS_TRACE 1
811        setenv LIBGUESTFS_DEBUG 1
812
813       For further information, see: "ENVIRONMENT VARIABLES" in guestfs(3).
814
815   How do I debug when using guestfish?
816       You can use the same environment variables above.  Alternatively use
817       the guestfish options -x (to trace commands) or -v (to get the full
818       debug output), or both.
819
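       For example (the disk image name is only a placeholder):

        guestfish -x -v --ro -a disk.img -i ls /
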
820       For further information, see: guestfish(1).
821
822   How do I debug when using the API?
823       Call "guestfs_set_trace" in guestfs(3) to enable command traces, and/or
824       "guestfs_set_verbose" in guestfs(3) to enable debug messages.
825
826       For best results, call these functions as early as possible, just after
827       creating the guestfs handle if you can, and definitely before calling
828       launch.
829
830   How do I capture debug output and put it into my logging system?
831       Use the event API.  For examples, see: "SETTING CALLBACKS TO HANDLE
832       EVENTS" in guestfs(3) and the examples/debug-logging.c program in the
833       libguestfs sources.
834
835   Digging deeper into the appliance boot process.
836       Enable debugging and then read this documentation on the appliance boot
837       process: guestfs-internals(1).
838
839   libguestfs hangs or fails during run/launch.
840       Enable debugging and look at the full output.  If you cannot work out
841       what is going on, file a bug report, including the complete output of
842       libguestfs-test-tool(1).
843
844   Debugging libvirt
845       If you are using the libvirt backend, and libvirt is failing, then you
846       can enable debugging by editing /etc/libvirt/libvirtd.conf.
847
848       If you are running as non-root, then you have to edit a different file.
849       Create ~/.config/libvirt/libvirtd.conf containing:
850
851        log_level=1
852        log_outputs="1:file:/tmp/libvirtd.log"
853
854       Kill any session (non-root) libvirtd that is running, and next time you
855       run the libguestfs command, you should see a large amount of useful
856       debugging information from libvirtd in /tmp/libvirtd.log
857
858   Broken kernel, or trying a different kernel.
859       You can choose a different kernel for the appliance by setting some
860       supermin environment variables:
861
862        export SUPERMIN_KERNEL_VERSION=4.8.0-1.fc25.x86_64
863        export SUPERMIN_KERNEL=/boot/vmlinuz-$SUPERMIN_KERNEL_VERSION
864        export SUPERMIN_MODULES=/lib/modules/$SUPERMIN_KERNEL_VERSION
865        rm -rf /var/tmp/.guestfs-*
866        libguestfs-test-tool
867
868   Broken qemu, or trying a different qemu.
869       You can choose a different qemu by setting the hypervisor environment
870       variable:
871
872        export LIBGUESTFS_HV=/path/to/qemu-system-x86_64
873        libguestfs-test-tool
874

DESIGN/INTERNALS OF LIBGUESTFS

876       See also guestfs-internals(1).
877
878   Why don’t you do everything through the FUSE / filesystem interface?
879       We offer a command called guestmount(1) which lets you mount guest
880       filesystems on the host.  This is implemented as a FUSE module.  Why
881       don't we just implement the whole of libguestfs using this mechanism,
882       instead of having the large and rather complicated API?
883
884       The reasons are twofold.  Firstly, libguestfs offers API calls for
885       doing things like creating and deleting partitions and logical volumes,
886       which don't fit into a filesystem model very easily.  Or rather, you
887       could fit them in: for example, creating a partition could be mapped to
888       "mkdir /fs/hda1" but then you'd have to specify some method to choose
889       the size of the partition (maybe "echo 100M > /fs/hda1/.size"), and the
890       partition type, start and end sectors etc., but once you've done that
891       the filesystem-based API starts to look more complicated than the call-
892       based API we currently have.
893
894       The second reason is for efficiency.  FUSE itself is reasonably
895       efficient, but it does make lots of small, independent calls into the
896       FUSE module.  In guestmount these have to be translated into messages
897       to the libguestfs appliance which has a big overhead (in time and round
898       trips).  For example, reading a file in 64 KB chunks is inefficient
899       because each chunk would turn into a single round trip.  In the
900       libguestfs API it is much more efficient to download an entire file or
901       directory through one of the streaming calls like "guestfs_download" or
902       "guestfs_tar_out".
903
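       For example, the equivalent streaming calls from guestfish (the paths
       are only placeholders):

        ><fs> download /var/log/messages /tmp/messages
        ><fs> tar-out /home /tmp/home.tar
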
904   Why don’t you do everything through GVFS?
905       The problems are similar to the problems with FUSE.
906
907       GVFS is a better abstraction than POSIX/FUSE.  There is an FTP backend
908       for GVFS, which is encouraging because FTP is conceptually similar to
909       the libguestfs API.  However the GVFS FTP backend makes multiple
910       simultaneous connections in order to keep interactivity, which we can't
911       easily do with libguestfs.
912
913   Why can I write to the disk, even though I added it read-only?
914   Why does "--ro" appear to have no effect?
915       When you add a disk read-only, libguestfs places a writable overlay on
916       top of the underlying disk.  Writes go into this overlay, and are
917       discarded when the handle is closed (or "guestfish" etc. exits).
918
919       There are two reasons for doing it this way: Firstly read-only disks
920       aren't possible in many cases (eg. IDE simply doesn't support them, so
921       you couldn't have an IDE-emulated read-only disk, although this is not
922       common in real libguestfs installations).
923
924       Secondly and more importantly, even if read-only disks were possible,
925       you wouldn't want them.  Mounting any filesystem that has a journal,
926       even "mount -o ro", causes writes to the filesystem because the journal
927       has to be replayed and metadata updated.  If the disk was truly read-
928       only, you wouldn't be able to mount a dirty filesystem.
929
930       To make it usable, we create the overlay as a place to temporarily
931       store these writes, and then we discard it afterwards.  This ensures
932       that the underlying disk is always untouched.
933
934       Note also that there is a regression test for this when building
935       libguestfs (in "tests/qemu").  This is one reason why it’s important
936       for packagers to run the test suite.
937
938   Does "--ro" make all disks read-only?
939       No!  The "--ro" option only affects disks added on the command line,
940       ie. using "-a" and "-d" options.
941
       In guestfish, if you use the "add" command, then the disk is added
       read-write (unless you specify the "readonly:true" flag explicitly
       with the command).
945
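       For example, to add a disk read-only from within guestfish (the image
       name is a placeholder):

        ><fs> add disk.img readonly:true
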
946   Can I use "guestfish --ro" as a way to backup my virtual machines?
947       Usually this is not a good idea.  The question is answered in more
948       detail in this mailing list posting:
949       https://www.redhat.com/archives/libguestfs/2010-August/msg00024.html
950
951       See also the next question.
952
953   Why can’t I run fsck on a live filesystem using "guestfish --ro"?
954       This command will usually not work:
955
956        guestfish --ro -a /dev/vg/my_root_fs run : fsck /dev/sda
957
958       The reason for this is that qemu creates a snapshot over the original
959       filesystem, but it doesn't create a strict point-in-time snapshot.
960       Blocks of data on the underlying filesystem are read by qemu at
961       different times as the fsck operation progresses, with host writes in
962       between.  The result is that fsck sees massive corruption (imaginary,
963       not real!) and fails.
964
965       What you have to do is to create a point-in-time snapshot.  If it’s a
966       logical volume, use an LVM2 snapshot.  If the filesystem is located
967       inside something like a btrfs/ZFS file, use a btrfs/ZFS snapshot, and
968       then run the fsck on the snapshot.  In practice you don't need to use
969       libguestfs for this -- just run /sbin/fsck directly.
970
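       For example, with an LVM2 logical volume (the names and size are only
       examples):

        lvcreate --snapshot --size 1G --name my_root_snap /dev/vg/my_root_fs
        fsck -n /dev/vg/my_root_snap
        lvremove /dev/vg/my_root_snap
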
971       Creating point-in-time snapshots of host devices and files is outside
972       the scope of libguestfs, although libguestfs can operate on them once
973       they are created.
974
975   What’s the difference between guestfish and virt-rescue?
976       A lot of people are confused by the two superficially similar tools we
977       provide:
978
979        $ guestfish --ro -a guest.img
980        ><fs> run
981        ><fs> fsck /dev/sda1
982
983        $ virt-rescue --ro guest.img
984        ><rescue> /sbin/fsck /dev/sda1
985
986       And the related question which then arises is why you can’t type in
987       full shell commands with all the --options in guestfish (but you can in
988       virt-rescue(1)).
989
990       guestfish(1) is a program providing structured access to the guestfs(3)
991       API.  It happens to be a nice interactive shell too, but its primary
992       purpose is structured access from shell scripts.  Think of it more like
993       a language binding, like Python and other bindings, but for shell.  The
994       key differentiating factor of guestfish (and the libguestfs API in
995       general) is the ability to automate changes.
996
997       virt-rescue(1) is a free-for-all freeform way to boot the libguestfs
998       appliance and make arbitrary changes to your VM. It’s not structured,
999       you can't automate it, but for making quick ad-hoc fixes to your
1000       guests, it can be quite useful.
1001
1002       But, libguestfs also has a "backdoor" into the appliance allowing you
1003       to send arbitrary shell commands.  It’s not as flexible as virt-rescue,
1004       because you can't interact with the shell commands, but here it is
1005       anyway:
1006
1007        ><fs> debug sh "cmd arg1 arg2 ..."
1008
1009       Note that you should not rely on this.  It could be removed or changed
1010       in future. If your program needs some operation, please add it to the
1011       libguestfs API instead.
1012
1013   What’s the deal with "guestfish -i"?
1014   Why does virt-cat only work on a real VM image, but virt-df works on any
1015       disk image?
1016   What does "no root device found in this operating system image" mean?
1017       These questions are all related at a fundamental level which may not be
1018       immediately obvious.
1019
1020       At the guestfs(3) API level, a "disk image" is just a pile of
1021       partitions and filesystems.
1022
1023       In contrast, when the virtual machine boots, it mounts those
1024       filesystems into a consistent hierarchy such as:
1025
        /          (/dev/sda2)
        ├── /boot  (/dev/sda1)
        ├── /home  (/dev/vg_external/Homes)
        ├── /usr   (/dev/vg_os/lv_usr)
        └── /var   (/dev/vg_os/lv_var)
1035
1036       (or drive letters on Windows).
1037
1038       The API first of all sees the disk image at the "pile of filesystems"
1039       level.  But it also has a way to inspect the disk image to see if it
1040       contains an operating system, and how the disks are mounted when the
1041       operating system boots: "INSPECTION" in guestfs(3).
1042
1043       Users expect some tools (like virt-cat(1)) to work with VM paths:
1044
1045        virt-cat fedora.img /var/log/messages
1046
1047       How does virt-cat know that /var is a separate partition?  The trick is
1048       that virt-cat performs inspection on the disk image, and uses that to
1049       translate the path correctly.
1050
1051       Some tools (including virt-cat(1), virt-edit(1), virt-ls(1)) use
1052       inspection to map VM paths.  Other tools, such as virt-df(1) and
1053       virt-filesystems(1) operate entirely at the raw "big pile of
1054       filesystems" level of the libguestfs API, and don't use inspection.
1055
1056       guestfish(1) is in an interesting middle ground.  If you use the -a and
1057       -m command line options, then you have to tell guestfish exactly how to
1058       add disk images and where to mount partitions. This is the raw API
1059       level.
1060
1061       If you use the -i option, libguestfs performs inspection and mounts the
1062       filesystems for you.
1063
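       For example, these two commands open the same Fedora image at the two
       different levels (the device names are only examples):

        # raw level: you say exactly what to add and where to mount it
        guestfish --ro -a fedora.img -m /dev/sda2:/ -m /dev/sda1:/boot

        # inspection level: libguestfs works out the mounts for you
        guestfish --ro -i -a fedora.img
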
1064       The error "no root device found in this operating system image" is
1065       related to this.  It means inspection was unable to locate an operating
1066       system within the disk image you gave it.  You might see this from
1067       programs like virt-cat if you try to run them on something which is
1068       just a disk image, not a virtual machine disk image.
1069
1070   What do these "debug*" and "internal-*" functions do?
1071       There are some functions which are used for debugging and internal
1072       purposes which are not part of the stable API.
1073
1074       The "debug*" (or "guestfs_debug*") functions, primarily "guestfs_debug"
1075       in guestfs(3) and a handful of others, are used for debugging
1076       libguestfs.  Although they are not part of the stable API and thus may
1077       change or be removed at any time, some programs may want to call these
1078       while waiting for features to be added to libguestfs.
1079
1080       The "internal-*" (or "guestfs_internal_*") functions are purely to be
1081       used by libguestfs itself.  There is no reason for programs to call
1082       them, and programs should not try to use them.  Using them will often
1083       cause bad things to happen, as well as not being part of the documented
1084       stable API.
1085

DEVELOPERS

1087   Where do I send patches?
1088       Please send patches to the libguestfs mailing list
1089       https://lists.libguestfs.org.  You don't have to be subscribed, but
1090       there will be a delay until your posting is manually approved.
1091
1092       Please don’t use github pull requests - they will be ignored.  The
1093       reasons are (a) we want to discuss and dissect patches on the mailing
1094       list, and (b) github pull requests turn into merge commits but we
1095       prefer to have a linear history.
1096
1097   How do I propose a feature?
1098       Large new features that you intend to contribute should be discussed on
1099       the mailing list first (https://lists.libguestfs.org).  This avoids
1100       disappointment and wasted work if we don't think the feature would fit
1101       into the libguestfs project.
1102
1103       If you want to suggest a useful feature but don’t want to write the
1104       code, you can file a bug (see "GETTING HELP AND REPORTING BUGS") with
1105       "RFE: " at the beginning of the Summary line.
1106
1107   Who can commit to libguestfs git?
1108       About 5 people have commit access to github.  Patches should be posted
1109       on the list first and ACKed.  The policy for ACKing and pushing patches
1110       is outlined here:
1111
1112       https://www.redhat.com/archives/libguestfs/2012-January/msg00023.html
1113
1114   Can I fork libguestfs?
1115       Of course you can.  Git makes it easy to fork libguestfs.  Github makes
1116       it even easier.  It’s nice if you tell us on the mailing list about
1117       forks and the reasons for them.
1118

MISCELLANEOUS QUESTIONS

1120   Can I monitor the live disk activity of a virtual machine using libguestfs?
1121       A common request is to be able to use libguestfs to monitor the live
1122       disk activity of a guest, for example, to get notified every time a
1123       guest creates a new file.  Libguestfs does not work in the way some
1124       people imagine, as you can see from this diagram:
1125
1126                   ┌─────────────────────────────────────┐
1127                   │ monitoring program using libguestfs │
1128                   └─────────────────────────────────────┘
1129
1130        ┌───────────┐    ┌──────────────────────┐
1131        │ live VM   │    │ libguestfs appliance │
1132        ├───────────┤    ├──────────────────────┤
1133        │ kernel (1)│    │ appliance kernel (2) │
1134        └───────────┘    └──────────────────────┘
1135             ↓                      ↓ (r/o connection)
1136             ┌──────────────────────┐
                 │      disk image      │
1138             └──────────────────────┘
1139
1140       This scenario is safe (as long as you set the "readonly" flag when
1141       adding the drive).  However the libguestfs appliance kernel (2) does
1142       not see all the changes made to the disk image, for two reasons:
1143
1144       i.  The VM kernel (1) can cache data in memory, so it doesn't appear in
1145           the disk image.
1146
1147       ii. The libguestfs appliance kernel (2) doesn't expect that the disk
1148           image is changing underneath it, so its own cache is not magically
1149           updated even when the VM kernel (1) does update the disk image.
1150
1151       The only supported solution is to restart the entire libguestfs
1152       appliance whenever you want to look at changes in the disk image.  At
1153       the API level that corresponds to calling "guestfs_shutdown" followed
1154       by "guestfs_launch", which is a heavyweight operation (see also
       guestfs-performance(1)).
1156
1157       There are some unsupported hacks you can try if relaunching the
1158       appliance is really too costly:
1159
1160       •   Call "guestfs_drop_caches (g, 3)".  This causes all cached data
           held by the libguestfs appliance kernel (2) to be discarded, so it
1162           goes back to the disk image.
1163
1164           However this on its own is not sufficient, because qemu also caches
1165           some data.  You will also need to patch libguestfs to (re-)enable
1166           the "cache=none" mode.  See:
1167           https://rwmj.wordpress.com/2013/09/02/new-in-libguestfs-allow-cache-mode-to-be-selected/
1168
1169       •   Use a tool like virt-bmap instead.
1170
1171       •   Run an agent inside the guest.
1172
1173       Nothing helps if the guest is making more fundamental changes (eg.
1174       deleting filesystems).  For those kinds of things you must relaunch the
1175       appliance.
1176
1177       (Note there is a third problem that you need to use consistent
1178       snapshots to really examine live disk images, but that’s a general
1179       problem with using libguestfs against any live disk image.)
1180

SEE ALSO

1182       guestfish(1), guestfs(3), http://libguestfs.org/.
1183

AUTHORS

1185       Richard W.M. Jones ("rjones at redhat dot com")
1186
COPYRIGHT

       Copyright (C) 2012-2023 Red Hat Inc.
1189

LICENSE

1191       This library is free software; you can redistribute it and/or modify it
1192       under the terms of the GNU Lesser General Public License as published
1193       by the Free Software Foundation; either version 2 of the License, or
1194       (at your option) any later version.
1195
1196       This library is distributed in the hope that it will be useful, but
1197       WITHOUT ANY WARRANTY; without even the implied warranty of
1198       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
1199       Lesser General Public License for more details.
1200
1201       You should have received a copy of the GNU Lesser General Public
1202       License along with this library; if not, write to the Free Software
1203       Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
1204       02110-1301 USA
1205

BUGS

1207       To get a list of bugs against libguestfs, use this link:
1208       https://bugzilla.redhat.com/buglist.cgi?component=libguestfs&product=Virtualization+Tools
1209
1210       To report a new bug against libguestfs, use this link:
1211       https://bugzilla.redhat.com/enter_bug.cgi?component=libguestfs&product=Virtualization+Tools
1212
1213       When reporting a bug, please supply:
1214
1215       •   The version of libguestfs.
1216
1217       •   Where you got libguestfs (eg. which Linux distro, compiled from
1218           source, etc)
1219
1220       •   Describe the bug accurately and give a way to reproduce it.
1221
1222       •   Run libguestfs-test-tool(1) and paste the complete, unedited output
1223           into the bug report.
1224
1225
1226
libguestfs-1.51.9                 2023-12-09                    guestfs-faq(1)