xl(1)                                 Xen                                xl(1)

NAME

       xl - Xen management tool, based on libxenlight

SYNOPSIS

       xl subcommand [args]

DESCRIPTION

       The xl program is the new tool for managing Xen guest domains. The
       program can be used to create, pause, and shut down domains. It can
       also be used to list current domains, enable or pin VCPUs, and attach
       or detach virtual block devices.

       The basic structure of every xl command is almost always:

         xl subcommand [OPTIONS] domain-id

       Where subcommand is one of the subcommands listed below, domain-id is
       the numeric domain id, or the domain name (which will be internally
       translated to domain id), and OPTIONS are subcommand specific options.
       There are a few exceptions to this rule in the cases where the
       subcommand in question acts on all domains, the entire machine, or
       directly on the Xen hypervisor.  Those exceptions will be clear for
       each of those subcommands.

NOTES

       start the script /etc/init.d/xencommons at boot time
           Most xl operations rely upon xenstored and xenconsoled: make sure
           you start the script /etc/init.d/xencommons at boot time to
           initialize all the daemons needed by xl.

       set up a xenbr0 bridge in dom0
           In the most common network configuration, you need to set up a
           bridge in dom0 named xenbr0 in order to have a working network in
           the guest domains.  Please refer to the documentation of your
           Linux distribution to learn how to set up the bridge.

       autoballoon
           If you specify the amount of memory dom0 has, by passing dom0_mem
           to Xen, it is highly recommended to disable autoballoon: edit
           /etc/xen/xl.conf and set autoballoon=0.

       run xl as root
           Most xl commands require root privileges to run, due to the
           communication channels used to talk to the hypervisor.  Running
           as non-root will return an error.

GLOBAL OPTIONS

       Some global options are always available:

       -v  Verbose.

       -N  Dry run: do not actually execute the command.

       -f  Force execution: xl will refuse to run some commands if it
           detects that xend is also running; this option forces the
           execution of those commands, even though it is unsafe.

       -t  Always use carriage-return-based overwriting for displaying
           progress messages without scrolling the screen.  Without -t, this
           is done only if stderr is a tty.

DOMAIN SUBCOMMANDS

       The following subcommands manipulate domains directly.  As stated
       previously, most commands take domain-id as the first parameter.

       button-press domain-id button
           This command is deprecated. Please use "xl trigger" instead.

           Indicate an ACPI button press to the domain, where button can be
           'power' or 'sleep'. This command is only available for HVM
           domains.

       create [configfile] [OPTIONS]
           The create subcommand takes a config file as its first argument:
           see xl.cfg(5) for full details of the file format and possible
           options.  If configfile is missing, xl creates the domain
           assuming the default values for every option.

           configfile has to be an absolute path to a file.

           Create will return as soon as the domain is started.  This does
           not mean the guest OS in the domain has actually booted, or is
           available for input.

           If the -F option is specified, create will start the domain and
           not return until its death.

           OPTIONS

           -q, --quiet
               No console output.

           -f=FILE, --defconfig=FILE
               Use the given configuration file.

           -p  Leave the domain paused after it is created.

           -F  Run in foreground until death of the domain.

           -V, --vncviewer
               Attach to the domain's VNC server, forking a vncviewer
               process.

           -A, --vncviewer-autopass
               Pass the VNC password to vncviewer via stdin.

           -c  Attach console to the domain as soon as it has started.  This
               is useful for determining issues with crashing domains and
               just as a general convenience, since you often want to watch
               the domain boot.

           key=value
               It is possible to pass key=value pairs on the command line to
               provide options as if they were written in the configuration
               file; these override whatever is in the configfile.

               NB: Many config options require characters such as quotes or
               brackets which are interpreted by the shell (and often
               discarded) before being passed to xl, resulting in xl being
               unable to parse the value correctly.  A simple work-around is
               to put all extra options within a single set of quotes,
               separated by semicolons.  (See below for an example.)

           EXAMPLES

           with config file
                 xl create DebianLenny

               This creates a domain with the file /etc/xen/DebianLenny, and
               returns as soon as it is run.

           with extra parameters
                 xl create hvm.cfg 'cpus="0-3"; pci=["01:05.1","01:05.2"]'

               This creates a domain with the file hvm.cfg, but additionally
               pins it to cpus 0-3, and passes through two PCI devices.

       config-update domain-id [configfile] [OPTIONS]
           Update the saved configuration for a running domain. This has no
           immediate effect but will be applied when the guest is next
           restarted. This command is useful to ensure that runtime
           modifications made to the guest will be preserved when the guest
           is restarted.

           Since Xen 4.5 xl has improved capabilities to handle dynamic
           domain configuration changes and will preserve any changes made
           at runtime when necessary. Therefore it should not normally be
           necessary to use this command any more.

           configfile has to be an absolute path to a file.

           OPTIONS

           -f=FILE, --defconfig=FILE
               Use the given configuration file.

           key=value
               It is possible to pass key=value pairs on the command line to
               provide options as if they were written in the configuration
               file; these override whatever is in the configfile.  Please
               see the note under create on handling special characters when
               passing key=value pairs on the command line.

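           For example, a key=value pair can be used to record a new memory
           size in the saved configuration (domain name "guest1" and the
           value are hypothetical; the change only takes effect when the
           guest is next restarted):

```shell
# Hypothetical example: store memory=2048 in the saved configuration of
# domain "guest1"; the running domain is not affected until its next restart.
xl config-update guest1 'memory=2048'
```
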
       console [OPTIONS] domain-id
           Attach to the console of a domain specified by domain-id.  If
           you've set up your domains to have a traditional login console
           this will look much like a normal text login screen.

           Use the key combination Ctrl+] to detach from the domain console.

           OPTIONS

           -t [pv|serial]
               Connect to a PV console or connect to an emulated serial
               console.  PV consoles are the only consoles available for PV
               domains, while HVM domains can have both. If this option is
               not specified it defaults to emulated serial for HVM guests
               and PV console for PV guests.

           -n NUM
               Connect to console number NUM. Console numbers start from 0.

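           For example, assuming an HVM domain named "guest1" (hypothetical
           name), the first emulated serial console can be attached
           explicitly:

```shell
# Hypothetical example: attach to emulated serial console 0 of HVM domain
# "guest1"; press Ctrl+] to detach again without disturbing the guest.
xl console -t serial -n 0 guest1
```
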
       destroy [OPTIONS] domain-id
           Immediately terminate the domain specified by domain-id.  This
           doesn't give the domain OS any chance to react, and is the
           equivalent of ripping the power cord out on a physical machine.
           In most cases you will want to use the shutdown command instead.

           OPTIONS

           -f  Allow domain 0 to be destroyed.  Because a domain cannot
               destroy itself, this is only possible when using a
               disaggregated toolstack, and is most useful when using a
               hardware domain separated from domain 0.

       domid domain-name
           Converts a domain name to a domain id.

       domname domain-id
           Converts a domain id to a domain name.

       rename domain-id new-name
           Change the domain name of a domain specified by domain-id to
           new-name.

       dump-core domain-id [filename]
           Dumps the virtual machine's memory for the specified domain to
           the filename specified, without pausing the domain.  The dump
           file will be written to a distribution specific directory for
           dump files, for example: /var/lib/xen/dump/dump.

       help [--long]
           Displays the short help message (i.e. common commands) by
           default.

           If the --long option is specified, it displays the complete set
           of xl subcommands, grouped by function.

       list [OPTIONS] [domain-id ...]
           Displays information about one or more domains.  If no domains
           are specified it displays information about all domains.

           OPTIONS

           -l, --long
               The output for xl list is not the table view shown below, but
               instead presents the data as a JSON data structure.

           -Z, --context
               Also displays the security labels.

           -v, --verbose
               Also displays the domain UUIDs, the shutdown reason and
               security labels.

           -c, --cpupool
               Also displays the cpupool the domain belongs to.

           -n, --numa
               Also displays the domain NUMA node affinity.

           EXAMPLE

           An example format for the list is as follows:

               Name                                        ID   Mem VCPUs      State   Time(s)
               Domain-0                                     0   750     4     r-----   11794.3
               win                                          1  1019     1     r-----       0.3
               linux                                        2  2048     2     r-----    5624.2

           Name is the name of the domain.  ID the numeric domain id.  Mem
           is the desired amount of memory to allocate to the domain
           (although it may not be the currently allocated amount).  VCPUs
           is the number of virtual CPUs allocated to the domain.  State is
           the run state (see below).  Time is the total run time of the
           domain as accounted for by Xen.

           STATES

           The State field lists 6 states for a Xen domain, and which ones
           the current domain is in.

           r - running
               The domain is currently running on a CPU.

           b - blocked
               The domain is blocked, and not running or runnable.  This can
               be because the domain is waiting on IO (a traditional wait
               state) or has gone to sleep because there was nothing else
               for it to do.

           p - paused
               The domain has been paused, usually occurring through the
               administrator running xl pause.  When in a paused state the
               domain will still consume allocated resources (like memory),
               but will not be eligible for scheduling by the Xen
               hypervisor.

           s - shutdown
               The guest OS has shut down (SCHEDOP_shutdown has been called)
               but the domain is not dying yet.

           c - crashed
               The domain has crashed, which is always a violent ending.
               Usually this state only occurs if the domain has been
               configured not to restart on a crash.  See xl.cfg(5) for more
               info.

           d - dying
               The domain is in the process of dying, but hasn't completely
               shut down or crashed.

           NOTES

               The Time column is deceptive.  Virtual IO (network and block
               devices) used by the domains requires coordination by
               Domain0, which means that Domain0 is actually charged for
               much of the time that a DomainU is doing IO.  Use of this
               time value to determine relative utilizations by domains is
               thus very unreliable, as a high IO workload may show as less
               utilized than a high CPU workload.  Consider yourself warned.

       mem-set domain-id mem
           Set the target for the domain's balloon driver.

           The default unit is kiB.  Add 't' for TiB, 'g' for GiB, 'm' for
           MiB, 'k' for kiB, and 'b' for bytes (e.g., `2048m` for 2048 MiB).

           This must be less than the initial maxmem parameter in the
           domain's configuration.

           Note that this operation requests the guest operating system's
           balloon driver to reach the target amount of memory.  The guest
           may fail to reach that amount of memory for any number of
           reasons, including:

           ·   The guest doesn't have a balloon driver installed

           ·   The guest's balloon driver is buggy

           ·   The guest's balloon driver cannot create free guest memory
               due to guest memory pressure

           ·   The guest's balloon driver cannot allocate memory from Xen
               because of hypervisor memory pressure

           ·   The guest administrator has disabled the balloon driver

           Warning: There is no good way to know in advance how small a
           mem-set will make a domain unstable and cause it to crash.  Be
           very careful when using this command on running domains.

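           For example, using the unit suffixes described above (the domain
           name "guest1" is hypothetical):

```shell
# Hypothetical example: ask the balloon driver of domain "guest1" to shrink
# the domain to a 1 GiB target.  The guest may take time to reach this
# target, or may never reach it, for the reasons listed above.
xl mem-set guest1 1g
```
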
       mem-max domain-id mem
           Specify the limit Xen will place on the amount of memory a guest
           may allocate.

           The default unit is kiB.  Add 't' for TiB, 'g' for GiB, 'm' for
           MiB, 'k' for kiB, and 'b' for bytes (e.g., `2048m` for 2048 MiB).

           NB that users normally shouldn't need this command; xl mem-set
           will set this as appropriate automatically.

           mem can't be set lower than the current memory target for
           domain-id.  It is allowed to be higher than the configured
           maximum memory size of the domain (maxmem parameter in the
           domain's configuration). Note however that the initial maxmem
           value is still used as an upper limit for xl mem-set.  Also note
           that calling xl mem-set will reset this value.

           The domain will not receive any signal regarding the changed
           memory limit.

       migrate [OPTIONS] domain-id host
           Migrate a domain to another host machine. By default xl relies on
           ssh as a transport mechanism between the two hosts.

           OPTIONS

           -s sshcommand
               Use <sshcommand> instead of ssh.  String will be passed to
               sh.  If empty, run <host> instead of ssh <host> xl
               migrate-receive [-d -e].

           -e  On the new <host>, do not wait in the background for the
               death of the domain. See the corresponding option of the
               create subcommand.

           -C config
               Send the specified <config> file instead of the file used on
               creation of the domain.

           --debug
               Display a huge (!) amount of debug information during the
               migration process.

           -p  Leave the domain on the receive side paused after migration.

           -D  Preserve the domain-id in the domain configuration that is
               transferred, such that it will be identical on the
               destination host, unless that configuration is overridden
               using the -C option. Note that it is not possible to use this
               option for a 'localhost' migration.

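           For example, assuming a domain named "guest1" and a destination
           host "xenhost2" reachable over ssh (both names hypothetical):

```shell
# Hypothetical example: migrate domain "guest1" to host "xenhost2" over the
# default ssh transport, leaving it paused on the receiving side so it can
# be inspected before being unpaused there.
xl migrate -p guest1 xenhost2
```
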
       remus [OPTIONS] domain-id host
           Enable Remus HA or COLO HA for domain. By default xl relies on
           ssh as a transport mechanism between the two hosts.

           NOTES

               Remus support in xl is still in the experimental
               (proof-of-concept) phase.  Disk replication support is
               limited to DRBD disks.

               COLO support in xl is still in the experimental
               (proof-of-concept) phase. All options are subject to change
               in the future.

           COLO disk configuration looks like:

             disk = ['...,colo,colo-host=xxx,colo-port=xxx,colo-export=xxx,active-disk=xxx,hidden-disk=xxx...']

           The supported options are:

           colo-host   : Secondary host's IP address.
           colo-port   : Secondary host's port; an nbd server is run on the
           secondary host, and the nbd server listens on this port.
           colo-export : The nbd server's disk export name on the secondary
           host.
           active-disk : The secondary guest's writes are buffered to this
           disk, and it is used by the secondary.
           hidden-disk : The primary's modified contents are buffered in
           this disk, and it is used by the secondary.

           COLO network configuration looks like:

             vif = [ '...,forwarddev=xxx,...']

           The supported options are:

           forwarddev : Forward devices for the primary and the secondary;
           they are directly connected.

           OPTIONS

           -i MS
               Checkpoint domain memory every MS milliseconds (default
               200ms).

           -u  Disable memory checkpoint compression.

           -s sshcommand
               Use <sshcommand> instead of ssh.  String will be passed to
               sh.  If empty, run <host> instead of ssh <host> xl
               migrate-receive -r [-e].

           -e  On the new <host>, do not wait in the background for the
               death of the domain.  See the corresponding option of the
               create subcommand.

           -N netbufscript
               Use <netbufscript> to set up network buffering instead of the
               default script (/etc/xen/scripts/remus-netbuf-setup).

           -F  Run Remus in unsafe mode. Use this option with caution, as
               failover may not work as intended.

           -b  Replicate memory checkpoints to /dev/null (blackhole).
               Generally useful for debugging. Requires enabling unsafe
               mode.

           -n  Disable network output buffering. Requires enabling unsafe
               mode.

           -d  Disable disk replication. Requires enabling unsafe mode.

           -c  Enable COLO HA. This conflicts with -i and -b, and memory
               checkpoint compression must be disabled.

           -p  Use the userspace COLO Proxy. This option must be used in
               conjunction with -c.

       pause domain-id
           Pause a domain.  When in a paused state the domain will still
           consume allocated resources (such as memory), but will not be
           eligible for scheduling by the Xen hypervisor.

       reboot [OPTIONS] domain-id
           Reboot a domain.  This acts just as if the domain had the reboot
           command run from the console.  The command returns as soon as it
           has executed the reboot action, which may be significantly
           earlier than when the domain actually reboots.

           For HVM domains this requires PV drivers to be installed in your
           guest OS. If PV drivers are not present but you have configured
           the guest OS to behave appropriately you may be able to use the
           -F option to trigger a reset button press.

           The behavior of what happens to a domain when it reboots is set
           by the on_reboot parameter of the domain configuration file when
           the domain was created.

           OPTIONS

           -F  If the guest does not support PV reboot control then fall
               back to sending an ACPI power event (equivalent to the reset
               option to trigger).

               You should ensure that the guest is configured to behave as
               expected in response to this event.

       restore [OPTIONS] [configfile] checkpointfile
           Build a domain from an xl save state file.  See save for more
           info.

           OPTIONS

           -p  Do not unpause the domain after restoring it.

           -e  Do not wait in the background for the death of the domain on
               the new host.  See the corresponding option of the create
               subcommand.

           -d  Enable debug messages.

           -V, --vncviewer
               Attach to the domain's VNC server, forking a vncviewer
               process.

           -A, --vncviewer-autopass
               Pass the VNC password to vncviewer via stdin.

       save [OPTIONS] domain-id checkpointfile [configfile]
           Saves a running domain to a state file so that it can be restored
           later.  Once saved, the domain will no longer be running on the
           system, unless the -c or -p options are used.  xl restore
           restores from this checkpoint file.  Passing a config file
           argument allows the user to manually select the VM config file
           used to create the domain.

           -c  Leave the domain running after creating the snapshot.

           -p  Leave the domain paused after creating the snapshot.

           -D  Preserve the domain-id in the domain configuration that is
               embedded in the state file, such that it will be identical
               when the domain is restored, unless that configuration is
               overridden. (See the restore operation above.)

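           For example, a save/restore round trip for a hypothetical domain
           named "guest1" (the state-file path is chosen here purely for
           illustration):

```shell
# Hypothetical example: save domain "guest1" to a state file (the domain
# stops running), then rebuild it later from that file with xl restore.
xl save guest1 /var/lib/xen/save/guest1.chk
xl restore /var/lib/xen/save/guest1.chk
```
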
522       sharing [domain-id]
523           Display the number of shared pages for a specified domain. If no
524           domain is specified it displays information about all domains.
525
       shutdown [OPTIONS] -a|domain-id
           Gracefully shuts down a domain.  This coordinates with the domain
           OS to perform a graceful shutdown, so there is no guarantee that
           it will succeed, and it may take a variable length of time
           depending on what services must be shut down in the domain.

           For HVM domains this requires PV drivers to be installed in your
           guest OS. If PV drivers are not present but you have configured
           the guest OS to behave appropriately you may be able to use the
           -F option to trigger a power button press.

           The command returns immediately after signaling the domain unless
           the -w flag is used.

           The behavior of what happens to a domain when it shuts down is
           set by the on_shutdown parameter of the domain configuration file
           when the domain was created.

           OPTIONS

           -a, --all
               Shutdown all guest domains.  Often used when doing a complete
               shutdown of a Xen system.

           -w, --wait
               Wait for the domain to complete shutdown before returning.
               If given once, the wait is for domain shutdown or domain
               death.  If given multiple times, the wait is for domain death
               only.

           -F  If the guest does not support PV shutdown control then fall
               back to sending an ACPI power event (equivalent to the power
               option to trigger).

               You should ensure that the guest is configured to behave as
               expected in response to this event.

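           For example (the domain name "guest1" is hypothetical):

```shell
# Hypothetical example: gracefully shut down domain "guest1" and wait for
# the shutdown to complete before returning.
xl shutdown -w guest1

# Shut down all guest domains, e.g. before powering off the Xen host.
xl shutdown -a -w
```
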
       sysrq domain-id letter
           Send a <Magic System Request> to the domain; each type of request
           is represented by a different letter.  It can be used to send
           SysRq requests to Linux guests: see sysrq.txt in your Linux
           kernel sources for more information.  It requires PV drivers to
           be installed in your guest OS.

       trigger domain-id nmi|reset|init|power|sleep|s3resume [VCPU]
           Send a trigger to a domain, where the trigger can be: nmi, reset,
           init, power or sleep.  Optionally a specific vcpu number can be
           passed as an argument.  This command is only available for HVM
           domains.

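           For example, assuming an HVM domain named "guest1" (hypothetical
           name):

```shell
# Hypothetical example: inject an NMI into vCPU 0 of HVM domain "guest1",
# e.g. to invoke the guest kernel's NMI handler while debugging a hang.
xl trigger guest1 nmi 0
```
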
       unpause domain-id
           Moves a domain out of the paused state.  This will allow a
           previously paused domain to now be eligible for scheduling by the
           Xen hypervisor.

       vcpu-set domain-id vcpu-count
           Enables the vcpu-count virtual CPUs for the domain in question.
           Like mem-set, this command can only allocate up to the maximum
           virtual CPU count configured at boot for the domain.

           If the vcpu-count is smaller than the current number of active
           VCPUs, the highest numbered VCPUs will be hotplug removed.  This
           may be important for pinning purposes.

           Attempting to set the VCPUs to a number larger than the initially
           configured VCPU count is an error.  Trying to set VCPUs to < 1
           will be quietly ignored.

           Some guests may need to actually bring the newly added CPU online
           after vcpu-set; see the SEE ALSO section for information.

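           For example, assuming a domain named "guest1" that was configured
           with four VCPUs (both details hypothetical):

```shell
# Hypothetical example: shrink domain "guest1" to 2 online vCPUs; the
# highest numbered vCPUs (2 and 3) are hotplug removed.
xl vcpu-set guest1 2

# Later, re-enable all 4 vCPUs (up to the boot-time maximum).
xl vcpu-set guest1 4
```
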
       vcpu-list [domain-id]
           Lists VCPU information for a specific domain.  If no domain is
           specified, VCPU information for all domains will be provided.

       vcpu-pin [-f|--force] domain-id vcpu cpus hard cpus soft
           Set hard and soft affinity for a vcpu of <domain-id>. Normally
           VCPUs can float between available CPUs whenever Xen deems a
           different run state is appropriate.

           Hard affinity can be used to restrict this, by ensuring certain
           VCPUs can only run on certain physical CPUs. Soft affinity
           specifies a preferred set of CPUs. Soft affinity needs special
           support in the scheduler, which is only provided in credit1.

           The keyword all can be used to apply the hard and soft affinity
           masks to all the VCPUs in the domain. The symbol '-' can be used
           to leave either hard or soft affinity alone.

           For example:

            xl vcpu-pin 0 3 - 6-9

           will set soft affinity for vCPU 3 of domain 0 to pCPUs 6,7,8 and
           9, leaving its hard affinity untouched. On the other hand:

            xl vcpu-pin 0 3 3,4 6-9

           will set both hard and soft affinity, the former to pCPUs 3 and
           4, the latter to pCPUs 6,7,8, and 9.

           Specifying -f or --force will remove a temporary pinning done by
           the operating system (normally this should be done by the
           operating system).  In case a temporary pinning is active for a
           vcpu, the affinity of this vcpu can't be changed without this
           option.

       vm-list
           Prints information about guests. This list excludes information
           about service or auxiliary domains such as dom0 and stubdoms.

           EXAMPLE

           An example format for the list is as follows:

               UUID                                  ID    name
               59e1cf6c-6ab9-4879-90e7-adc8d1c63bf5  2    win
               50bc8f75-81d0-4d53-b2e6-95cb44e2682e  3    linux

       vncviewer [OPTIONS] domain-id
           Attach to the domain's VNC server, forking a vncviewer process.

           OPTIONS

           --autopass
               Pass the VNC password to vncviewer via stdin.

XEN HOST SUBCOMMANDS

       debug-keys keys
           Send debug keys to Xen. It is the same as pressing the Xen
           "conswitch" (Ctrl-A by default) three times and then pressing
           "keys".

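           For example, the 'h' debug key asks the hypervisor to print a
           summary of the available debug keys to the Xen console:

```shell
# Send the 'h' debug key to Xen; the resulting key listing appears in the
# hypervisor message buffer and can be read back with "xl dmesg".
xl debug-keys h
xl dmesg
```
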
       set-parameters params
           Set hypervisor parameters as specified in params. This allows
           some boot parameters of the hypervisor to be modified in the
           running system.

       dmesg [OPTIONS]
           Reads the Xen message buffer, similar to dmesg on a Linux system.
           The buffer contains informational, warning, and error messages
           created during Xen's boot process.  If you are having problems
           with Xen, this is one of the first places to look as part of
           problem determination.

           OPTIONS

           -c, --clear
               Clears Xen's message buffer.

       info [OPTIONS]
           Print information about the Xen host in name : value format.
           When reporting a Xen bug, please provide this information as part
           of the bug report. See
           https://wiki.xenproject.org/wiki/Reporting_Bugs_against_Xen_Project
           on how to report Xen bugs.

           Sample output looks as follows:

            host                   : scarlett
            release                : 3.1.0-rc4+
            version                : #1001 SMP Wed Oct 19 11:09:54 UTC 2011
            machine                : x86_64
            nr_cpus                : 4
            nr_nodes               : 1
            cores_per_socket       : 4
            threads_per_core       : 1
            cpu_mhz                : 2266
            hw_caps                : bfebfbff:28100800:00000000:00003b40:009ce3bd:00000000:00000001:00000000
            virt_caps              : hvm hvm_directio
            total_memory           : 6141
            free_memory            : 4274
            free_cpus              : 0
            outstanding_claims     : 0
            xen_major              : 4
            xen_minor              : 2
            xen_extra              : -unstable
            xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
            xen_scheduler          : credit
            xen_pagesize           : 4096
            platform_params        : virt_start=0xffff800000000000
            xen_changeset          : Wed Nov 02 17:09:09 2011 +0000 24066:54a5e994a241
            xen_commandline        : com1=115200,8n1 guest_loglvl=all dom0_mem=750M console=com1
            cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
            cc_compile_by          : sstabellini
            cc_compile_domain      : uk.xensource.com
            cc_compile_date        : Tue Nov  8 12:03:05 UTC 2011
            xend_config_format     : 4

713           FIELDS
714
715           Not all fields will be explained here, but some of the less obvious
716           ones deserve explanation:
717
718           hw_caps
719               A vector showing what hardware capabilities are supported by
720               your processor.  This is equivalent to, though more cryptic
721               than, the flags field in /proc/cpuinfo on a normal Linux
722               machine: both derive from the feature bits returned by the
723               CPUID instruction on x86 platforms.
724
725           free_memory
726               Available memory (in MB): memory that is neither allocated
727               to Xen or any domain, nor claimed for domains.
728
729           outstanding_claims
730               When a claim call is made (see xl.conf(5)), a reservation
731               for a specific number of pages is set and a global value is
732               incremented. This global value (outstanding_claims) is then
733               reduced as the domain's memory is populated and eventually
734               reaches zero. Most of the time the value will be zero, but
735               if you are launching multiple guests and claim_mode is
736               enabled, this value can increase or decrease. Note that the
737               value also affects free_memory, which reflects the free memory
738               in the hypervisor minus the outstanding pages claimed for
739               guests.  See the xl claims subcommand for a detailed listing.
740
741           xen_caps
742               The Xen version and architecture.  Architecture values can be
743               one of: x86_32, x86_32p (i.e. PAE enabled), x86_64, ia64.
744
745           xen_changeset
746               The Xen mercurial changeset id.  Very useful for determining
747               exactly what version of code your Xen system was built from.
748
749           OPTIONS
750
751           -n, --numa
752               List host NUMA topology information
753
754       top Executes the xentop(1) command, which provides real time monitoring
755           of domains.  Xentop has a curses interface, and is reasonably self
756           explanatory.
757
758       uptime
759           Prints the current uptime of all running domains.
760
761       claims
762           Prints information about outstanding claims by the guests. This
763           provides the outstanding claims and currently populated memory
764           count for the guests.  Added up, these values equal the global
765           outstanding claim value reported as outstanding_claims by xl
766           info.  The Mem column shows the cumulative value of outstanding
767           claims plus the total amount of memory currently allocated to
768           the guest.
769
770           EXAMPLE
771
772           An example format for the list is as follows:
773
774            Name                                        ID   Mem VCPUs      State   Time(s)  Claimed
775            Domain-0                                     0  2047     4     r-----      19.7     0
776            OL5                                          2  2048     1     --p---       0.0   847
777            OL6                                          3  1024     4     r-----       5.9     0
778            Windows_XP                                   4  2047     1     --p---       0.0  1989
779
780           Here it can be seen that the OL5 guest still has 847MB of
781           claimed memory (out of the total 2048MB, of which 1201MB has
782           been allocated to the guest).
783

SCHEDULER SUBCOMMANDS

785       Xen ships with a number of domain schedulers, which can be set at boot
786       time with the sched= parameter on the Xen command line.  By default
787       credit is used for scheduling.
788
789       sched-credit [OPTIONS]
790           Set or get credit (aka credit1) scheduler parameters.  The credit
791           scheduler is a proportional fair share CPU scheduler built from the
792           ground up to be work conserving on SMP hosts.
793
794           Each domain (including Domain0) is assigned a weight and a cap.
795
796           OPTIONS
797
798           -d DOMAIN, --domain=DOMAIN
799               Specify domain for which scheduler parameters are to be
800               modified or retrieved.  Mandatory for modifying scheduler
801               parameters.
802
803           -w WEIGHT, --weight=WEIGHT
804               A domain with a weight of 512 will get twice as much CPU as a
805               domain with a weight of 256 on a contended host. Legal weights
806               range from 1 to 65535 and the default is 256.
807
808           -c CAP, --cap=CAP
809               The cap optionally fixes the maximum amount of CPU a domain
810               will be able to consume, even if the host system has idle CPU
811               cycles. The cap is expressed in percentage of one physical CPU:
812               100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, etc.
813               The default, 0, means there is no upper cap.
814
815               NB: Many systems have features that will scale down the
816               computing power of a cpu that is not 100% utilized.  This can
817               be in the operating system, but can also sometimes be below the
818               operating system in the BIOS.  If you set a cap such that
819               individual cores are running at less than 100%, this may have
820               an impact on the performance of your workload over and above
821               the impact of the cap. For example, if your processor runs at
822               2GHz, and you cap a vm at 50%, the power management system may
823               also reduce the clock speed to 1GHz; the effect will be that
824               your VM gets 25% of the available power (50% of 1GHz) rather
825               than 50% (50% of 2GHz).  If you are not getting the performance
826               you expect, look at performance and cpufreq options in your
827               operating system and your BIOS.
828
829           -p CPUPOOL, --cpupool=CPUPOOL
830               Restrict output to domains in the specified cpupool.
831
832           -s, --schedparam
833               Specify that pool-wide scheduler parameters be listed or set.
834
835           -t TSLICE, --tslice_ms=TSLICE
836               Timeslice tells the scheduler how long to allow VMs to run
837               before pre-empting.  The default is 30ms.  Valid ranges are 1ms
838               to 1000ms.  The length of the timeslice (in ms) must be higher
839               than the length of the ratelimit (see below).
840
841           -r RLIMIT, --ratelimit_us=RLIMIT
842               Ratelimit attempts to limit the number of schedules per second.
843               It sets a minimum amount of time (in microseconds) a VM must
844               run before we will allow a higher-priority VM to pre-empt it.
845               The default value is 1000 microseconds (1ms).  Valid range is
846               100 to 500000 (500ms).  The ratelimit length must be lower than
847               the timeslice length.
848
849           -m DELAY, --migration_delay_us=DELAY
850               Migration delay specifies how long a vCPU, after it stops
851               running, should be considered "cache-hot". Basically, if
852               fewer than DELAY microseconds have passed since the vCPU last
853               ran on a CPU, most of its working set is likely still in that
854               CPU's cache, and therefore the vCPU is not migrated.
855
856               Default is 0. Maximum is 100 ms. This can be effective at
857               preventing vCPUs from bouncing among CPUs too quickly, but,
858               at the same time, the scheduler stops being fully work-
859               conserving.
859
860           COMBINATION
861
862           The following is the effect of combining the above options:
863
864           <nothing>             : List all domain params and sched
865                                   params from all pools
866           -d [domid]            : List domain params for domain [domid]
867           -d [domid] [params]   : Set domain params for domain [domid]
868           -p [pool]             : List all domains and sched params
869                                   for [pool]
870           -s                    : List sched params for poolid 0
871           -s [params]           : Set sched params for poolid 0
872           -p [pool] -s          : List sched params for [pool]
873           -p [pool] -s [params] : Set sched params for [pool]
874           -p [pool] -d...       : Illegal
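
              For instance, assuming a domain named "mydomain" exists, its
              credit scheduler parameters could be inspected and adjusted
              like this (the domain name is illustrative):

               xl sched-credit -d mydomain
               xl sched-credit -d mydomain -w 512 -c 100
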
875       sched-credit2 [OPTIONS]
876           Set or get credit2 scheduler parameters.  The credit2 scheduler is
877           a proportional fair share CPU scheduler built from the ground up to
878           be work conserving on SMP hosts.
879
880           Each domain (including Domain0) is assigned a weight.
881
882           OPTIONS
883
884           -d DOMAIN, --domain=DOMAIN
885               Specify domain for which scheduler parameters are to be
886               modified or retrieved.  Mandatory for modifying scheduler
887               parameters.
888
889           -w WEIGHT, --weight=WEIGHT
890               A domain with a weight of 512 will get twice as much CPU as a
891               domain with a weight of 256 on a contended host. Legal weights
892               range from 1 to 65535 and the default is 256.
893
894           -p CPUPOOL, --cpupool=CPUPOOL
895               Restrict output to domains in the specified cpupool.
896
897           -s, --schedparam
898               Specify that pool-wide scheduler parameters be listed or set.
899
900           -r RLIMIT, --ratelimit_us=RLIMIT
901               Attempts to limit the rate of context switching. It is
902               basically the same as --ratelimit_us in sched-credit.
903
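              As with sched-credit, a domain's weight can be listed or
              changed; for example (the domain name is illustrative):

               xl sched-credit2 -d mydomain -w 512
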
904       sched-rtds [OPTIONS]
905           Set or get rtds (Real Time Deferrable Server) scheduler parameters.
906           This real-time scheduler applies the preemptive Global Earliest
907           Deadline First real-time scheduling algorithm to schedule VCPUs
908           in the system.  Each VCPU has a dedicated period, budget and
909           extratime.  While scheduled, a VCPU burns its budget.  A VCPU
910           has its budget replenished at the beginning of each period;
911           unused budget is discarded at the end of each period.  A VCPU
912           with extratime set gets extra time from the unreserved system
913           resource.
913
914           OPTIONS
915
916           -d DOMAIN, --domain=DOMAIN
917               Specify domain for which scheduler parameters are to be
918               modified or retrieved.  Mandatory for modifying scheduler
919               parameters.
920
921           -v VCPUID/all, --vcpuid=VCPUID/all
922               Specify vcpu for which scheduler parameters are to be modified
923               or retrieved.
924
925           -p PERIOD, --period=PERIOD
926               Period of time, in microseconds, over which to replenish the
927               budget.
928
929           -b BUDGET, --budget=BUDGET
930               Amount of time, in microseconds, that the VCPU will be allowed
931               to run every period.
932
933           -e Extratime, --extratime=Extratime
934               Binary flag to decide if the VCPU will be allowed to get extra
935               time from the unreserved system resource.
936
937           -c CPUPOOL, --cpupool=CPUPOOL
938               Restrict output to domains in the specified cpupool.
939
940           EXAMPLE
941
942               1) Use -v all to see the budget and period of all the VCPUs of
943               all the domains:
944
945                   xl sched-rtds -v all
946                   Cpupool Pool-0: sched=RTDS
947                   Name                        ID VCPU    Period    Budget  Extratime
948                   Domain-0                     0    0     10000      4000        yes
949                   vm1                          2    0       300       150        yes
950                   vm1                          2    1       400       200        yes
951                   vm1                          2    2     10000      4000        yes
952                   vm1                          2    3      1000       500        yes
953                   vm2                          4    0     10000      4000        yes
954                   vm2                          4    1     10000      4000        yes
955
956               Without any arguments, it will output the default scheduling
957               parameters for each domain:
958
959                   xl sched-rtds
960                   Cpupool Pool-0: sched=RTDS
961                   Name                        ID    Period    Budget  Extratime
962                   Domain-0                     0     10000      4000        yes
963                   vm1                          2     10000      4000        yes
964                   vm2                          4     10000      4000        yes
965
966               2) Use, for instance, -d vm1, -v all to see the budget and
967               period of all VCPUs of a specific domain (vm1):
968
969                   xl sched-rtds -d vm1 -v all
970                   Name                        ID VCPU    Period    Budget  Extratime
971                   vm1                          2    0       300       150        yes
972                   vm1                          2    1       400       200        yes
973                   vm1                          2    2     10000      4000        yes
974                   vm1                          2    3      1000       500        yes
975
976               To see the parameters of a subset of the VCPUs of a domain,
977               use:
978
979                   xl sched-rtds -d vm1 -v 0 -v 3
980                   Name                        ID VCPU    Period    Budget  Extratime
981                   vm1                          2    0       300       150        yes
982                   vm1                          2    3      1000       500        yes
983
984               If no -v is specified, the default scheduling parameters for
985               the domain are shown:
986
987                   xl sched-rtds -d vm1
988                   Name                        ID    Period    Budget  Extratime
989                   vm1                          2     10000      4000        yes
990
991               3) Users can set the budget and period of multiple VCPUs of a
992               specific domain with only one command, e.g., "xl sched-rtds -d
993               vm1 -v 0 -p 100 -b 50 -e 1 -v 3 -p 300 -b 150 -e 0".
994
995               To change the parameters of all the VCPUs of a domain, use -v
996               all, e.g., "xl sched-rtds -d vm1 -v all -p 500 -b 250 -e 1".
997

CPUPOOLS COMMANDS

999       Xen can group the physical CPUs of a server into cpu-pools. Each
1000       physical CPU is assigned to at most one cpu-pool. Domains are each
1001       restricted to a single cpu-pool. Scheduling does not cross cpu-pool
1002       boundaries, so each cpu-pool has its own scheduler.  Physical CPUs
1003       and domains can be moved from one cpu-pool to another only by an
1004       explicit command.  Cpu-pools can be specified either by name or by
1005       id.
1005
1006       cpupool-create [OPTIONS] [configfile] [variable=value ...]
1007           Create a cpu pool based on a config from a configfile or command-line
1008           parameters.  Variable settings from the configfile may be altered
1009           by specifying new or additional assignments on the command line.
1010
1011           See the xlcpupool.cfg(5) manpage for more information.
1012
1013           OPTIONS
1014
1015           -f=FILE, --defconfig=FILE
1016               Use the given configuration file.
1017
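               For example, a minimal configuration file (the pool name,
               CPUs and file name are illustrative; see xlcpupool.cfg(5)
               for the full syntax) might look like this:

                # mypool.cfg
                name = "mypool"
                sched = "credit"
                cpus = ["4", "5"]

               and be used to create the pool with:

                xl cpupool-create -f mypool.cfg
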
1018       cpupool-list [OPTIONS] [cpu-pool]
1019           List CPU pools on the host.
1020
1021           OPTIONS
1022
1023           -c, --cpus
1024               If this option is specified, xl prints a list of CPUs used by
1025               cpu-pool.
1026
1027       cpupool-destroy cpu-pool
1028           Deactivates a cpu pool.  This is possible only if no domain is
1029           active in the cpu-pool.
1030
1031       cpupool-rename cpu-pool <newname>
1032           Renames a cpu-pool to newname.
1033
1034       cpupool-cpu-add cpu-pool cpus|node:nodes
1035           Adds one or more CPUs or NUMA nodes to cpu-pool. CPUs and NUMA
1036           nodes can be specified as single CPU/node IDs or as ranges.
1037
1038           For example:
1039
1040            (a) xl cpupool-cpu-add mypool 4
1041            (b) xl cpupool-cpu-add mypool 1,5,10-16,^13
1042            (c) xl cpupool-cpu-add mypool node:0,nodes:2-3,^10-12,8
1043
1044           means adding CPU 4 to mypool, in (a); adding CPUs
1045           1,5,10,11,12,14,15 and 16, in (b); and adding all the CPUs of NUMA
1046           nodes 0, 2 and 3, plus CPU 8, but keeping out CPUs 10,11,12, in
1047           (c).
1048
1049           All the specified CPUs that can be added to the cpupool will be
1050           added to it. If some CPUs can't (e.g., because they're already part
1051           of another cpupool), an error is reported about each one of them.
1052
1053       cpupool-cpu-remove cpu-pool cpus|node:nodes
1054           Removes one or more CPUs or NUMA nodes from cpu-pool. CPUs and NUMA
1055           nodes can be specified as single CPU/node IDs or as ranges, using
1056           the exact same syntax as in cpupool-cpu-add above.
1057
1058       cpupool-migrate domain-id cpu-pool
1059           Moves a domain specified by domain-id or domain-name into a cpu-
1060           pool.  Domain-0 can't be moved to another cpu-pool.
1061
1062       cpupool-numa-split
1063           Splits up the machine into one cpu-pool per numa node.
1064
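               As an illustration, a possible workflow (pool and domain
               names are hypothetical) is to split the machine along NUMA
               nodes, inspect the result, and move a domain into one of
               the new pools:

                xl cpupool-numa-split
                xl cpupool-list -c
                xl cpupool-migrate mydomain Pool-node1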

VIRTUAL DEVICE COMMANDS

1066       Most virtual devices can be added and removed while guests are running,
1067       assuming that the necessary support exists in the guest OS.  The effect
1068       to the guest OS is much the same as any hotplug event.
1069
1070   BLOCK DEVICES
1071       block-attach domain-id disc-spec-component(s) ...
1072           Create a new virtual block device and attach it to the specified
1073           domain.  A disc specification is in the same format used for the
1074           disk variable in the domain config file. See
1075           xl-disk-configuration(5). This will trigger a hotplug event for the
1076           guest.
1077
1078           Note that only PV block devices are supported by block-attach.
1079           Requests to attach emulated devices (e.g., vdev=hdc) will result in
1080           only the PV view being available to the guest.
1081
1082       block-detach domain-id devid [OPTIONS]
1083           Detach a domain's virtual block device. devid may be the symbolic
1084           name or the numeric device id given to the device by domain 0.  You
1085           will need to run xl block-list to determine that number.
1086
1087           Detaching the device requires the cooperation of the domain.  If
1088           the domain fails to release the device (perhaps because the domain
1089           is hung or is still using the device), the detach will fail.
1090
1091           OPTIONS
1092
1093           --force
1094               If this parameter is specified the device will be forcefully
1095               detached, which may cause IO errors in the domain.
1096
1097       block-list domain-id
1098           List virtual block devices for a domain.
1099
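               Combining the subcommands above: a raw disk image could be
               attached, listed and later detached like this (the domain
               name and image path are illustrative):

                xl block-attach mydomain 'format=raw, vdev=xvdb, access=rw, target=/path/to/disk.img'
                xl block-list mydomain
                xl block-detach mydomain xvdb
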
1100       cd-insert domain-id virtualdevice target
1101           Insert a cdrom into a guest domain's existing virtual cd drive. The
1102           virtual drive must already exist but can be empty. How the device
1103           should be presented to the guest domain is specified by the
1104           virtualdevice parameter; for example "hdc". Parameter target is the
1105           target path in the backend domain (usually domain 0) to be
1106           exported; can be a block device or a file etc.  See target in
1107           xl-disk-configuration(5).
1108
1109           Only works with HVM domains.
1110
1111       cd-eject domain-id virtualdevice
1112           Eject a cdrom from a guest domain's virtual cd drive, specified by
1113           virtualdevice. Only works with HVM domains.
1114
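               For instance, an ISO image could be inserted into and then
               ejected from an HVM guest's virtual drive like this (the
               domain name and image path are illustrative):

                xl cd-insert mydomain hdc /path/to/image.iso
                xl cd-eject mydomain hdc
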
1115   NETWORK DEVICES
1116       network-attach domain-id network-device
1117           Creates a new network device in the domain specified by domain-id.
1118           network-device describes the device to attach, using the same
1119           format as the vif string in the domain config file. See xl.cfg(5)
1120           and xl-network-configuration(5) for more information.
1121
1122           Note that only attaching PV network interfaces is supported.
1123
1124       network-detach domain-id devid|mac
1125           Removes the network device from the domain specified by domain-id.
1126           devid is the virtual interface device number within the domain
1127           (i.e. the 3 in vif22.3). Alternatively, the mac address can be used
1128           to select the virtual interface to detach.
1129
1130       network-list domain-id
1131           List virtual network interfaces for a domain.
1132
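               For example, a PV interface on the xenbr0 bridge could be
               added to a domain and later removed by its device number
               (the domain name is illustrative):

                xl network-attach mydomain 'bridge=xenbr0'
                xl network-list mydomain
                xl network-detach mydomain 0
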
1133   CHANNEL DEVICES
1134       channel-list domain-id
1135           List virtual channel interfaces for a domain.
1136
1137   VIRTUAL TRUSTED PLATFORM MODULE (vTPM) DEVICES
1138       vtpm-attach domain-id vtpm-device
1139           Creates a new vtpm (virtual Trusted Platform Module) device in the
1140           domain specified by domain-id. vtpm-device describes the device to
1141           attach, using the same format as the vtpm string in the domain
1142           config file.  See xl.cfg(5) for more information.
1143
1144       vtpm-detach domain-id devid|uuid
1145           Removes the vtpm device from the domain specified by domain-id.
1146           devid is the numeric device id given to the virtual Trusted
1147           Platform Module device. You will need to run xl vtpm-list to
1148           determine that number. Alternatively, the uuid of the vtpm can be
1149           used to select the virtual device to detach.
1150
1151       vtpm-list domain-id
1152           List virtual Trusted Platform Modules for a domain.
1153
1154   VDISPL DEVICES
1155       vdispl-attach domain-id vdispl-device
1156           Creates a new vdispl device in the domain specified by domain-id.
1157           vdispl-device describes the device to attach, using the same format
1158           as the vdispl string in the domain config file. See xl.cfg(5) for
1159           more information.
1160
1161           NOTES
1162
1163               As the semicolon is used as a separator in the
1164               vdispl-device string, quote or escape it when invoking xl
1165               from the shell.
1165
1166               EXAMPLE
1167
1168                   xl vdispl-attach DomU
1169                   connectors='id0:1920x1080;id1:800x600;id2:640x480'
1170
1171                   or
1172
1173                   xl vdispl-attach DomU
1174                   connectors=id0:1920x1080\;id1:800x600\;id2:640x480
1175
1176       vdispl-detach domain-id dev-id
1177           Removes the vdispl device specified by dev-id from the domain
1178           specified by domain-id.
1179
1180       vdispl-list domain-id
1181           List virtual displays for a domain.
1182
1183   VSND DEVICES
1184       vsnd-attach domain-id vsnd-item vsnd-item ...
1185           Creates a new vsnd device in the domain specified by domain-id.
1186           vsnd-item's describe the vsnd device to attach, using the same
1187           format as the VSND_ITEM_SPEC string in the domain config file. See
1188           xl.cfg(5) for more information.
1189
1190           EXAMPLE
1191
1192               xl vsnd-attach DomU 'CARD, short-name=Main,
1193               sample-formats=s16_le;s8;u32_be' 'PCM, name=Main' 'STREAM,
1194               id=0, type=p' 'STREAM, id=1, type=c, channels-max=2'
1195
1196       vsnd-detach domain-id dev-id
1197           Removes the vsnd device specified by dev-id from the domain
1198           specified by domain-id.
1199
1200       vsnd-list domain-id
1201           List vsnd devices for a domain.
1202
1203   KEYBOARD DEVICES
1204       vkb-attach domain-id vkb-device
1205           Creates a new keyboard device in the domain specified by domain-id.
1206           vkb-device describes the device to attach, using the same format as
1207           the VKB_SPEC_STRING string in the domain config file. See xl.cfg(5)
1208           for more information.
1209
1210       vkb-detach domain-id devid
1211           Removes the keyboard device from the domain specified by domain-id.
1212           devid is the virtual interface device number within the domain.
1213
1214       vkb-list domain-id
1215           List virtual keyboard devices for a domain.
1216

PCI PASS-THROUGH

1218       pci-assignable-list
1219           List all the assignable PCI devices.  These are devices in the
1220           system which are configured to be available for passthrough and are
1221           bound to a suitable PCI backend driver in domain 0 rather than a
1222           real driver.
1223
1224       pci-assignable-add BDF
1225           Make the device at PCI Bus/Device/Function BDF assignable to
1226           guests.  This will bind the device to the pciback driver and assign
1227           it to the "quarantine domain".  If it is already bound to a driver,
1228           it will first be unbound, and the original driver stored so that it
1229           can be re-bound to the same driver later if desired.  If the device
1230           is already bound, it will assign it to the quarantine domain and
1231           return success.
1232
1233           CAUTION: This will make the device unusable by Domain 0 until it is
1234           returned with pci-assignable-remove.  Care should therefore be
1235           taken not to do this on a device critical to domain 0's operation,
1236           such as storage controllers, network interfaces, or GPUs that are
1237           currently being used.
1238
1239       pci-assignable-remove [-r] BDF
1240           Make the device at PCI Bus/Device/Function BDF not assignable to
1241           guests.  This will at least unbind the device from pciback, and re-
1242           assign it from the "quarantine domain" back to domain 0.  If the -r
1243           option is specified, it will also attempt to re-bind the device to
1244           its original driver, making it usable by Domain 0 again.  If the
1245           device is not bound to pciback, it will return success.
1246
1247           Note that this functionality will work even for devices which were
1248           not made assignable by pci-assignable-add.  This can be used to
1249           allow dom0 to access devices which were automatically quarantined
1250           by Xen after domain destruction as a result of Xen's
1251           iommu=quarantine command-line default.
1252
1253           As always, this should only be done if you trust the guest, or are
1254           confident that the particular device you're re-assigning to dom0
1255           will cancel all in-flight DMA on FLR.
1256
1257       pci-attach domain-id BDF
1258           Hot-plug a new pass-through pci device to the specified domain.
1259           BDF is the PCI Bus/Device/Function of the physical device to pass-
1260           through.
1261
1262       pci-detach [OPTIONS] domain-id BDF
1263           Hot-unplug a previously assigned pci device from a domain. BDF is
1264           the PCI Bus/Device/Function of the physical device to be removed
1265           from the guest domain.
1266
1267           OPTIONS
1268
1269           -f  If this parameter is specified, xl will forcefully remove the
1270               device even without the guest domain's cooperation.
1271
1272       pci-list domain-id
1273           List pass-through pci devices for a domain.
1274
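               Putting the above together, a device could be made
               assignable, passed through to a guest, and later returned
               to domain 0 like this (the BDF and domain name are
               illustrative):

                xl pci-assignable-add 0000:03:00.0
                xl pci-attach mydomain 0000:03:00.0
                xl pci-detach mydomain 0000:03:00.0
                xl pci-assignable-remove -r 0000:03:00.0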

USB PASS-THROUGH

1276       usbctrl-attach domain-id usbctrl-device
1277           Create a new USB controller in the domain specified by domain-id.
1278           usbctrl-device describes the device to attach, using the form
1279           "KEY=VALUE KEY=VALUE ..." where KEY=VALUE has the same meaning as
1280           the usbctrl description in the domain config file.  See xl.cfg(5)
1281           for more information.
1282
1283       usbctrl-detach domain-id devid
1284           Destroy a USB controller from the specified domain.  devid is the
1285           devid of the USB controller.
1286
1287       usbdev-attach domain-id usbdev-device
1288           Hot-plug a new pass-through USB device to the domain specified by
1289           domain-id. usbdev-device describes the device to attach, using the form
1290           "KEY=VALUE KEY=VALUE ..." where KEY=VALUE has the same meaning as
1291           the usbdev description in the domain config file.  See xl.cfg(5)
1292           for more information.
1293
1294       usbdev-detach domain-id controller=devid port=number
1295           Hot-unplug a previously assigned USB device from a domain.
1296           controller=devid and port=number identify the USB controller
1297           and port in the guest domain that the USB device is attached to.
1298
1299       usb-list domain-id
1300           List pass-through usb devices for a domain.
1301
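               For example, a USB controller could be created in a guest
               and a host device attached to it by host bus and address
               (all values are illustrative):

                xl usbctrl-attach mydomain version=2 ports=4
                xl usbdev-attach mydomain hostbus=1 hostaddr=3
                xl usb-list mydomain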

DEVICE-MODEL CONTROL

1303       qemu-monitor-command domain-id command
1304           Issue a monitor command to the device model of the domain specified
1305           by domain-id. command can be any valid command qemu understands.
1306           This can be e.g. used to add non-standard devices or devices with
1307           non-standard parameters to a domain. The output of the command is
1308           printed to stdout.
1309
1310           Warning: This qemu monitor access is provided for convenience when
1311           debugging, troubleshooting, and experimenting.  Its use is not
1312           supported by the Xen Project.
1313
1314           Specifically, not all information displayed by the qemu monitor
1315           will necessarily be accurate or complete, because in a Xen system
1316           qemu does not have a complete view of the guest.
1317
1318           Furthermore, modifying the guest's setup via the qemu monitor may
1319           conflict with the Xen toolstack's assumptions.  Resulting problems
1320           may include, but are not limited to: guest crashes; toolstack error
1321           messages; inability to migrate the guest; and security
1322           vulnerabilities which are not covered by the Xen Project security
1323           response policy.
1324
1325           EXAMPLE
1326
1327           Obtain information about USB devices connected to a domain via
1328           the device model (only!):
1329
1330            xl qemu-monitor-command vm1 'info usb'
1331             Device 0.2, Port 5, Speed 480 Mb/s, Product Mass Storage
1332

FLASK

1334       FLASK is a security framework that defines a mandatory access control
1335       policy providing fine-grained controls over Xen domains, allowing the
1336       policy writer to define what interactions between domains, devices, and
1337       the hypervisor are permitted. Some examples of what you can do using
1338       XSM/FLASK:
1339        - Prevent two domains from communicating via event channels or grants
1340        - Control which domains can use device passthrough (and which devices)
1341        - Restrict or audit operations performed by privileged domains
1342        - Prevent a privileged domain from arbitrarily mapping pages from
1343          other domains.
1345
1346       You can find more details on how to use FLASK and an example security
1347       policy here:
1348       <https://xenbits.xenproject.org/docs/unstable/misc/xsm-flask.txt>
1349
1350       getenforce
1351           Determine if the FLASK security module is loaded and enforcing its
1352           policy.
1353
1354       setenforce 1|0|Enforcing|Permissive
1355           Enable or disable enforcing of the FLASK access controls. The
1356           default is permissive, but this can be changed to enforcing by
1357           specifying "flask=enforcing" or "flask=late" on the hypervisor's
1358           command line.
1359
1360       loadpolicy policy-file
1361           Load FLASK policy from the given policy file. The initial policy is
1362           provided to the hypervisor as a multiboot module; this command
1363           allows runtime updates to the policy. Loading new security policy
1364           will reset runtime changes to device labels.
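
               EXAMPLE

               A typical sequence to check, update, and then enforce the policy
               (the policy file path is illustrative; use the one shipped by
               your distribution):

                xl getenforce
                xl loadpolicy /boot/xenpolicy-4.14
                xl setenforce Enforcing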
1365

PLATFORM SHARED RESOURCE MONITORING/CONTROL

1367       Intel Haswell and later server platforms offer shared resource
1368       monitoring and control technologies. The availability of these
1369       technologies and the hardware capabilities can be shown with
1370       psr-hwinfo.
1371
1372       See <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html> for
1373       more information.
1374
1375       psr-hwinfo [OPTIONS]
1376           Show Platform Shared Resource (PSR) hardware information.
1377
1378           OPTIONS
1379
1380           -m, --cmt
1381               Show Cache Monitoring Technology (CMT) hardware information.
1382
1383           -a, --cat
1384               Show Cache Allocation Technology (CAT) hardware information.
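
               EXAMPLE

               Show only the CMT hardware information (the output depends on
               the platform):

                xl psr-hwinfo --cmt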
1385
1386   CACHE MONITORING TECHNOLOGY
1387       Intel Haswell and later server platforms offer monitoring capability
1388       in each logical processor to measure a specific platform shared
1389       resource metric, for example, L3 cache occupancy. In the Xen
1390       implementation, the monitoring granularity is the domain level. To
1391       monitor a specific domain, attach the domain id to the monitoring
1392       service. When the domain no longer needs to be monitored, detach the
1393       domain id from the monitoring service.
1394
1395       Intel Broadwell and later server platforms also offer total/local
1396       memory bandwidth monitoring. Xen supports per-domain monitoring for
1397       these two additional monitoring types. Both memory bandwidth monitoring
1398       and L3 cache occupancy monitoring share the same underlying
1399       monitoring service. Once a domain is attached to the monitoring
1400       service, monitoring data can be shown for any of these monitoring
1401       types.
1402
1403       There is currently no cache monitoring or memory bandwidth monitoring
1404       on the L2 cache.
1405
1406       psr-cmt-attach domain-id
1407           attach: Attach the platform shared resource monitoring service to a
1408           domain.
1409
1410       psr-cmt-detach domain-id
1411           detach: Detach the platform shared resource monitoring service from
1412           a domain.
1413
1414       psr-cmt-show psr-monitor-type [domain-id]
1415           Show monitoring data for a certain domain or all domains. Current
1416           supported monitor types are:
1417            - "cache-occupancy": showing the L3 cache occupancy (KB).
1418            - "total-mem-bandwidth": showing the total memory bandwidth (KB/s).
1419            - "local-mem-bandwidth": showing the local memory bandwidth (KB/s).
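
               EXAMPLE

               A minimal monitoring session for a hypothetical domain vm1:

                xl psr-cmt-attach vm1
                xl psr-cmt-show cache-occupancy vm1
                xl psr-cmt-detach vm1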
1420
1421   CACHE ALLOCATION TECHNOLOGY
1422       Intel Broadwell and later server platforms offer capabilities to
1423       configure and make use of the Cache Allocation Technology (CAT)
1424       mechanisms, which enable more cache resources (i.e. L3/L2 cache) to be
1425       made available for high priority applications. In the Xen
1426       implementation, CAT is used to control cache allocation on a per-VM
1427       basis. To enforce cache allocation for a specific domain, set capacity
1428       bitmasks (CBM) for the domain.
1429
1430       Intel Broadwell and later server platforms also offer Code/Data
1431       Prioritization (CDP) for cache allocations, which supports specifying
1432       separate code and data caches for applications. CDP is used on a
1433       per-VM basis in the Xen implementation. To specify a code or data CBM
1434       for a domain, the CDP feature must be enabled and a CBM type option
1435       must be given when setting the CBM; the type options (code and data)
1436       are mutually exclusive. There is currently no CDP support on L2.
1437
1438       psr-cat-set [OPTIONS] domain-id cbm
1439           Set cache capacity bitmasks (CBM) for a domain. For how to specify
1440           cbm please refer to
1441           <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>.
1442
1443           OPTIONS
1444
1445           -s SOCKET, --socket=SOCKET
1446               Specify the socket to process, otherwise all sockets are
1447               processed.
1448
1449           -l LEVEL, --level=LEVEL
1450               Specify the cache level to process, otherwise the last level
1451               cache (L3) is processed.
1452
1453           -c, --code
1454               Set code CBM when CDP is enabled.
1455
1456           -d, --data
1457               Set data CBM when CDP is enabled.
1458
1459       psr-cat-show [OPTIONS] [domain-id]
1460           Show CAT settings for a certain domain or all domains.
1461
1462           OPTIONS
1463
1464           -l LEVEL, --level=LEVEL
1465               Specify the cache level to process, otherwise the last level
1466               cache (L3) is processed.
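
               EXAMPLE

               Set an L3 capacity bitmask for a hypothetical domain vm1 on
               socket 0, then verify it (the bitmask 0xf is illustrative; valid
               values depend on the hardware):

                xl psr-cat-set -s 0 vm1 0xf
                xl psr-cat-show vm1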
1467
1468   Memory Bandwidth Allocation
1469       Intel Skylake and later server platforms offer capabilities to
1470       configure and make use of the Memory Bandwidth Allocation (MBA)
1471       mechanisms, which provide the OS/VMM the ability to slow misbehaving
1472       apps/VMs by using a credit-based throttling mechanism. In the Xen
1473       implementation, MBA is used to control memory bandwidth on a per-VM
1474       basis. To enforce a bandwidth limit on a specific domain, set a
1475       throttling value (THRTL) for the domain.
1476
1477       psr-mba-set [OPTIONS] domain-id thrtl
1478           Set throttling value (THRTL) for a domain. For how to specify thrtl
1479           please refer to
1480           <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>.
1481
1482           OPTIONS
1483
1484           -s SOCKET, --socket=SOCKET
1485               Specify the socket to process, otherwise all sockets are
1486               processed.
1487
1488       psr-mba-show [domain-id]
1489           Show MBA settings for a certain domain or all domains. In linear
1490           mode, the decimal value is shown; in non-linear mode, the
1491           hexadecimal value is shown.
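
               EXAMPLE

               Set a throttling value for a hypothetical domain vm1 on socket
               0, then verify it (the value 10 is illustrative; valid values
               depend on the hardware):

                xl psr-mba-set -s 0 vm1 10
                xl psr-mba-show vm1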
1492

IGNORED FOR COMPATIBILITY WITH XM

1494       xl is mostly command-line compatible with the old xm utility used with
1495       the old Python xend.  For compatibility, the following options are
1496       ignored:
1497
1498       xl migrate --live
1499

SEE ALSO

1501       The following man pages:
1502
1503       xl.cfg(5), xlcpupool.cfg(5), xentop(1), xl-disk-configuration(5),
1504       xl-network-configuration(5)
1505
1506       And the following documents on the xenproject.org website:
1507
1508       <https://xenbits.xenproject.org/docs/unstable/misc/xsm-flask.txt>
1509       <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>
1510
1511       For systems that don't automatically bring the CPU online:
1512
1513       <https://wiki.xenproject.org/wiki/Paravirt_Linux_CPU_Hotplug>
1514

BUGS

1516       Send bug reports to xen-devel@lists.xenproject.org; see
1517       https://wiki.xenproject.org/wiki/Reporting_Bugs_against_Xen_Project
1518       for how to report bugs.
1519
1520
1521
4.14.1                            2021-03-18                             xl(1)