xl(1)                                 Xen                                xl(1)

NAME

       xl - Xen management tool, based on LibXenlight

SYNOPSIS

       xl subcommand [args]

DESCRIPTION

       The xl program is the tool for managing Xen guest domains. The
       program can be used to create, pause, and shut down domains. It can
       also be used to list current domains, enable or pin VCPUs, and
       attach or detach virtual block devices.

       The basic structure of every xl command is almost always:

         xl subcommand [OPTIONS] domain-id

       where subcommand is one of the subcommands listed below, domain-id
       is the numeric domain id or the domain name (which will be
       internally translated to a domain id), and OPTIONS are
       subcommand-specific options.  There are a few exceptions to this
       rule, in cases where the subcommand in question acts on all domains,
       the entire machine, or directly on the Xen hypervisor.  Those
       exceptions are noted for each such subcommand.

NOTES

       start the script /etc/init.d/xencommons at boot time
           Most xl operations rely upon xenstored and xenconsoled: make
           sure you start the script /etc/init.d/xencommons at boot time to
           initialize all the daemons needed by xl.

       set up a xenbr0 bridge in dom0
           In the most common network configuration, you need to set up a
           bridge named xenbr0 in dom0 in order to have working networking
           in the guest domains.  Please refer to your Linux distribution's
           documentation for how to set up the bridge.

       autoballoon
           If you specify the amount of memory dom0 has by passing dom0_mem
           to Xen, it is highly recommended to disable autoballooning: edit
           /etc/xen/xl.conf and set autoballoon to 0.

       run xl as root
           Most xl commands require root privileges to run, due to the
           communication channels used to talk to the hypervisor.  Running
           as a non-root user will return an error.

GLOBAL OPTIONS

       Some global options are always available:

       -v  Verbose.

       -N  Dry run: do not actually execute the command.

       -f  Force execution.  xl will refuse to run some commands if it
           detects that xend is also running; this option forces the
           execution of those commands, even though it is unsafe.

       -t  Always use carriage-return-based overwriting for displaying
           progress messages without scrolling the screen.  Without -t,
           this is done only if stderr is a tty.

DOMAIN SUBCOMMANDS

       The following subcommands manipulate domains directly.  As stated
       previously, most commands take domain-id as the first parameter.

       button-press domain-id button
           This command is deprecated. Please use "xl trigger" instead.

           Indicate an ACPI button press to the domain, where button can be
           'power' or 'sleep'. This command is only available for HVM
           domains.

       create [configfile] [OPTIONS]
           The create subcommand takes a config file as its first argument:
           see xl.cfg(5) for full details of the file format and possible
           options.  If configfile is missing, xl creates the domain
           assuming the default values for every option.

           configfile has to be an absolute path to a file.

           create will return as soon as the domain is started.  This does
           not mean the guest OS in the domain has actually booted, or is
           available for input.

           If the -F option is specified, create will start the domain and
           not return until its death.

           OPTIONS

           -q, --quiet
               No console output.

           -f=FILE, --defconfig=FILE
               Use the given configuration file.

           -p  Leave the domain paused after it is created.

           -F  Run in the foreground until the death of the domain.

           -V, --vncviewer
               Attach to the domain's VNC server, forking a vncviewer
               process.

           -A, --vncviewer-autopass
               Pass the VNC password to vncviewer via stdin.

           -c  Attach a console to the domain as soon as it has started.
               This is useful for diagnosing issues with crashing domains,
               and is a general convenience since you often want to watch
               the domain boot.

           key=value
               It is possible to pass key=value pairs on the command line
               to provide options as if they were written in the
               configuration file; these override whatever is in the
               configfile.

               NB: Many config options require characters such as quotes or
               brackets which are interpreted by the shell (and often
               discarded) before being passed to xl, resulting in xl being
               unable to parse the value correctly.  A simple work-around
               is to put all extra options within a single set of quotes,
               separated by semicolons.  (See below for an example.)

           EXAMPLES

           with config file
                 xl create DebianLenny

               This creates a domain from the file /etc/xen/DebianLenny,
               and returns as soon as it is run.

           with extra parameters
                 xl create hvm.cfg 'cpus="0-3"; pci=["01:05.1","01:05.2"]'

               This creates a domain from the file hvm.cfg, but
               additionally pins it to cpus 0-3, and passes through two PCI
               devices.

       config-update domain-id [configfile] [OPTIONS]
           Update the saved configuration for a running domain. This has no
           immediate effect but will be applied when the guest is next
           restarted. This command is useful to ensure that runtime
           modifications made to the guest will be preserved when the guest
           is restarted.

           Since Xen 4.5 xl has improved capabilities to handle dynamic
           domain configuration changes and will preserve any changes made
           at runtime when necessary. Therefore it should not normally be
           necessary to use this command any more.

           configfile has to be an absolute path to a file.

           OPTIONS

           -f=FILE, --defconfig=FILE
               Use the given configuration file.

           key=value
               It is possible to pass key=value pairs on the command line
               to provide options as if they were written in the
               configuration file; these override whatever is in the
               configfile.  Please see the note under create on handling
               special characters when passing key=value pairs on the
               command line.

       console [OPTIONS] domain-id
           Attach to the console of a domain specified by domain-id.  If
           you've set up your domains to have a traditional login console,
           this will look much like a normal text login screen.

           Use the key combination Ctrl+] to detach from the domain
           console.

           OPTIONS

           -t [pv|serial]
               Connect to a PV console or to an emulated serial console.
               PV consoles are the only consoles available for PV domains,
               while HVM domains can have both. If this option is not
               specified it defaults to emulated serial for HVM guests and
               PV console for PV guests.

           -n NUM
               Connect to console number NUM. Console numbers start from 0.

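           For example, to attach to the PV console of a domain named
           guest1 (an illustrative domain name), one might run:

            xl console -t pv guest1
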
       destroy [OPTIONS] domain-id
           Immediately terminate the domain specified by domain-id.  This
           doesn't give the domain OS any chance to react, and is the
           equivalent of pulling the power cord on a physical machine.  In
           most cases you will want to use the shutdown command instead.

           OPTIONS

           -f  Allow domain 0 to be destroyed.  Because a domain cannot
               destroy itself, this is only possible when using a
               disaggregated toolstack, and is most useful when using a
               hardware domain separated from domain 0.

       domid domain-name
           Converts a domain name to a domain id.

       domname domain-id
           Converts a domain id to a domain name.

       rename domain-id new-name
           Change the name of the domain specified by domain-id to
           new-name.

       dump-core domain-id [filename]
           Dumps the virtual machine's memory for the specified domain to
           the filename specified, without pausing the domain.  The dump
           file will be written to a distribution-specific directory for
           dump files, for example: /var/lib/xen/dump/dump.

       help [--long]
           Displays the short help message (i.e. common commands) by
           default.

           If the --long option is specified, it displays the complete set
           of xl subcommands, grouped by function.

       list [OPTIONS] [domain-id ...]
           Displays information about one or more domains.  If no domains
           are specified it displays information about all domains.

           OPTIONS

           -l, --long
               The output for xl list is not the table view shown below,
               but instead presents the data as a JSON data structure.

           -Z, --context
               Also displays the security labels.

           -v, --verbose
               Also displays the domain UUIDs, the shutdown reason and
               security labels.

           -c, --cpupool
               Also displays the cpupool the domain belongs to.

           -n, --numa
               Also displays the domain NUMA node affinity.

           EXAMPLE

           An example format for the list is as follows:

               Name                                        ID   Mem VCPUs      State   Time(s)
               Domain-0                                     0   750     4     r-----   11794.3
               win                                          1  1019     1     r-----       0.3
               linux                                        2  2048     2     r-----    5624.2

           Name is the name of the domain.  ID is the numeric domain id.
           Mem is the desired amount of memory to allocate to the domain
           (although it may not be the currently allocated amount).  VCPUs
           is the number of virtual CPUs allocated to the domain.  State is
           the run state (see below).  Time is the total run time of the
           domain as accounted for by Xen.

           STATES

           The State field lists the 6 possible states for a Xen domain,
           and which ones the current domain is in.

           r - running
               The domain is currently running on a CPU.

           b - blocked
               The domain is blocked, and not running or runnable.  This
               can be because the domain is waiting on IO (a traditional
               wait state) or has gone to sleep because there was nothing
               else for it to do.

           p - paused
               The domain has been paused, usually through the
               administrator running xl pause.  When in a paused state the
               domain will still consume allocated resources (like memory),
               but will not be eligible for scheduling by the Xen
               hypervisor.

           s - shutdown
               The guest OS has shut down (SCHEDOP_shutdown has been
               called) but the domain is not dying yet.

           c - crashed
               The domain has crashed, which is always a violent ending.
               Usually this state only occurs if the domain has been
               configured not to restart on a crash.  See xl.cfg(5) for
               more info.

           d - dying
               The domain is in the process of dying, but hasn't completely
               shut down or crashed.

           NOTES

               The Time column is deceptive.  Virtual IO (network and block
               devices) used by the domains requires coordination by
               Domain0, which means that Domain0 is actually charged for
               much of the time that a DomainU is doing IO.  Use of this
               time value to determine relative utilization by domains is
               thus very unreliable, as a high IO workload may show as less
               utilized than a high CPU workload.  Consider yourself
               warned.

       mem-set domain-id mem
           Set the target for the domain's balloon driver.

           The default unit is kiB.  Add 't' for TiB, 'g' for GiB, 'm' for
           MiB, 'k' for kiB, and 'b' for bytes (e.g., 2048m for 2048 MiB).

           This must be less than the initial maxmem parameter in the
           domain's configuration.

           Note that this operation requests the guest operating system's
           balloon driver to reach the target amount of memory.  The guest
           may fail to reach that amount of memory for any number of
           reasons, including:

           ·   The guest doesn't have a balloon driver installed

           ·   The guest's balloon driver is buggy

           ·   The guest's balloon driver cannot create free guest memory
               due to guest memory pressure

           ·   The guest's balloon driver cannot allocate memory from Xen
               because of hypervisor memory pressure

           ·   The guest administrator has disabled the balloon driver

           Warning: There is no good way to know in advance how small a
           mem-set will make a domain unstable and cause it to crash.  Be
           very careful when using this command on running domains.

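           For example, to ask the balloon driver of domain 3 to target
           1 GiB (the domain id and size here are illustrative):

            xl mem-set 3 1g
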
       mem-max domain-id mem
           Specify the limit Xen will place on the amount of memory a guest
           may allocate.

           The default unit is kiB.  Add 't' for TiB, 'g' for GiB, 'm' for
           MiB, 'k' for kiB, and 'b' for bytes (e.g., 2048m for 2048 MiB).

           Note that users normally shouldn't need this command; xl mem-set
           will set this as appropriate automatically.

           mem can't be set lower than the current memory target for
           domain-id.  It is allowed to be higher than the configured
           maximum memory size of the domain (the maxmem parameter in the
           domain's configuration). Note however that the initial maxmem
           value is still used as an upper limit for xl mem-set.  Also note
           that calling xl mem-set will reset this value.

           The domain will not receive any signal regarding the changed
           memory limit.

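           For example, to raise the memory limit of domain 3 to 2 GiB
           (values here are illustrative):

            xl mem-max 3 2g
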
       migrate [OPTIONS] domain-id host
           Migrate a domain to another host machine. By default xl relies
           on ssh as a transport mechanism between the two hosts.

           OPTIONS

           -s sshcommand
               Use <sshcommand> instead of ssh.  The string will be passed
               to sh.  If empty, run <host> instead of ssh <host> xl
               migrate-receive [-d -e].

           -e  On the new <host>, do not wait in the background for the
               death of the domain. See the corresponding option of the
               create subcommand.

           -C config
               Send the specified <config> file instead of the file used on
               creation of the domain.

           --debug
               Display a huge (!) amount of debug information during the
               migration process.

           -p  Leave the domain on the receiving side paused after
               migration.

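           For example, to migrate the domain guest1 to another host over
           ssh (the domain and host names are illustrative):

            xl migrate guest1 host2.example.com
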
       remus [OPTIONS] domain-id host
           Enable Remus HA or COLO HA for a domain. By default xl relies on
           ssh as a transport mechanism between the two hosts.

           NOTES

               Remus support in xl is still in the experimental
               (proof-of-concept) phase.  Disk replication support is
               limited to DRBD disks.

               COLO support in xl is still in the experimental
               (proof-of-concept) phase. All options are subject to change
               in the future.

           COLO disk configuration looks like:

             disk = ['...,colo,colo-host=xxx,colo-port=xxx,colo-export=xxx,active-disk=xxx,hidden-disk=xxx...']

           The supported options are:

           colo-host   : Secondary host's IP address.

           colo-port   : Secondary host's port.  We will run an nbd server
                         on the secondary host, and the nbd server will
                         listen on this port.

           colo-export : Nbd server's disk export name on the secondary
                         host.

           active-disk : The secondary guest's writes will be buffered to
                         this disk, and it's used by the secondary.

           hidden-disk : The primary's modified contents will be buffered
                         in this disk, and it's used by the secondary.

           COLO network configuration looks like:

             vif = [ '...,forwarddev=xxx,...']

           The supported options are:

           forwarddev  : Forward devices for the primary and the secondary;
                         they are directly connected.

           OPTIONS

           -i MS
               Checkpoint domain memory every MS milliseconds (default
               200ms).

           -u  Disable memory checkpoint compression.

           -s sshcommand
               Use <sshcommand> instead of ssh.  The string will be passed
               to sh.  If empty, run <host> instead of ssh <host> xl
               migrate-receive -r [-e].

           -e  On the new <host>, do not wait in the background for the
               death of the domain.  See the corresponding option of the
               create subcommand.

           -N netbufscript
               Use <netbufscript> to set up network buffering instead of
               the default script (/etc/xen/scripts/remus-netbuf-setup).

           -F  Run Remus in unsafe mode. Use this option with caution, as
               failover may not work as intended.

           -b  Replicate memory checkpoints to /dev/null (blackhole).
               Generally useful for debugging. Requires enabling unsafe
               mode.

           -n  Disable network output buffering. Requires enabling unsafe
               mode.

           -d  Disable disk replication. Requires enabling unsafe mode.

           -c  Enable COLO HA. This conflicts with -i and -b, and memory
               checkpoint compression must be disabled.

           -p  Use the userspace COLO Proxy. This option must be used in
               conjunction with -c.

       pause domain-id
           Pause a domain.  When in a paused state the domain will still
           consume allocated resources (such as memory), but will not be
           eligible for scheduling by the Xen hypervisor.

       reboot [OPTIONS] domain-id
           Reboot a domain.  This acts just as if the domain had the reboot
           command run from the console.  The command returns as soon as it
           has executed the reboot action, which may be significantly
           earlier than when the domain actually reboots.

           For HVM domains this requires PV drivers to be installed in your
           guest OS. If PV drivers are not present but you have configured
           the guest OS to behave appropriately you may be able to use the
           -F option to trigger a reset button press.

           The behavior of what happens to a domain when it reboots is set
           by the on_reboot parameter of the domain configuration file when
           the domain was created.

           OPTIONS

           -F  If the guest does not support PV reboot control then fall
               back to sending an ACPI power event (equivalent to the reset
               option to trigger).

               You should ensure that the guest is configured to behave as
               expected in response to this event.

       restore [OPTIONS] [configfile] checkpointfile
           Build a domain from an xl save state file.  See save for more
           info.

           OPTIONS

           -p  Do not unpause the domain after restoring it.

           -e  Do not wait in the background for the death of the domain on
               the new host.  See the corresponding option of the create
               subcommand.

           -d  Enable debug messages.

           -V, --vncviewer
               Attach to the domain's VNC server, forking a vncviewer
               process.

           -A, --vncviewer-autopass
               Pass the VNC password to vncviewer via stdin.

       save [OPTIONS] domain-id checkpointfile [configfile]
           Saves a running domain to a state file so that it can be
           restored later.  Once saved, the domain will no longer be
           running on the system, unless the -c or -p options are used.
           xl restore restores from this checkpoint file.  Passing a config
           file argument allows the user to manually select the VM config
           file used to create the domain.

           -c  Leave the domain running after creating the snapshot.

           -p  Leave the domain paused after creating the snapshot.

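           For example, to save the domain guest1 to a checkpoint file and
           restore it later (the domain name and filename are
           illustrative):

            xl save guest1 guest1.chk
            xl restore guest1.chk
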
       sharing [domain-id]
           Display the number of shared pages for a specified domain. If no
           domain is specified it displays information about all domains.

       shutdown [OPTIONS] -a|domain-id
           Gracefully shuts down a domain.  This coordinates with the
           domain OS to perform a graceful shutdown, so there is no
           guarantee that it will succeed, and it may take a variable
           length of time depending on what services must be shut down in
           the domain.

           For HVM domains this requires PV drivers to be installed in your
           guest OS. If PV drivers are not present but you have configured
           the guest OS to behave appropriately you may be able to use the
           -F option to trigger a power button press.

           The command returns immediately after signaling the domain
           unless the -w flag is used.

           The behavior of what happens to a domain when it shuts down is
           set by the on_shutdown parameter of the domain configuration
           file when the domain was created.

           OPTIONS

           -a, --all
               Shutdown all guest domains.  Often used when doing a
               complete shutdown of a Xen system.

           -w, --wait
               Wait for the domain to complete shutdown before returning.

           -F  If the guest does not support PV shutdown control then fall
               back to sending an ACPI power event (equivalent to the power
               option to trigger).

               You should ensure that the guest is configured to behave as
               expected in response to this event.

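           For example, to shut down the domain guest1 and wait until the
           shutdown has completed (the name is illustrative):

            xl shutdown -w guest1
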
       sysrq domain-id letter
           Send a <Magic System Request> to the domain; each type of
           request is represented by a different letter.  It can be used to
           send SysRq requests to Linux guests: see sysrq.txt in your Linux
           kernel sources for more information.  It requires PV drivers to
           be installed in your guest OS.

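           For example, to ask a Linux guest named guest1 (an illustrative
           name) to perform an emergency filesystem sync, corresponding to
           the Linux SysRq 's' request:

            xl sysrq guest1 s
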
       trigger domain-id nmi|reset|init|power|sleep|s3resume [VCPU]
           Send a trigger to a domain, where the trigger can be: nmi,
           reset, init, power, sleep or s3resume.  Optionally a specific
           vcpu number can be passed as an argument.  This command is only
           available for HVM domains.

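           For example, to indicate an ACPI power button press to the HVM
           domain guest1 (an illustrative name):

            xl trigger guest1 power
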
       unpause domain-id
           Moves a domain out of the paused state.  This will allow a
           previously paused domain to now be eligible for scheduling by
           the Xen hypervisor.

       vcpu-set domain-id vcpu-count
           Enables the vcpu-count virtual CPUs for the domain in question.
           Like mem-set, this command can only allocate up to the maximum
           virtual CPU count configured at boot for the domain.

           If the vcpu-count is smaller than the current number of active
           VCPUs, the highest-numbered VCPUs will be hotplug removed.  This
           may be important for pinning purposes.

           Attempting to set the VCPUs to a number larger than the
           initially configured VCPU count is an error.  Trying to set
           VCPUs to < 1 will be quietly ignored.

           Some guests may need to actually bring the newly added CPU
           online after vcpu-set; see the SEE ALSO section for information.

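           For example, to leave only 2 VCPUs online in the domain guest1
           (the name and count are illustrative):

            xl vcpu-set guest1 2
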
       vcpu-list [domain-id]
           Lists VCPU information for a specific domain.  If no domain is
           specified, VCPU information for all domains will be provided.

       vcpu-pin [-f|--force] domain-id vcpu cpus-hard cpus-soft
           Set hard and soft affinity for a vcpu of <domain-id>. Normally
           VCPUs can float between available CPUs whenever Xen deems a
           different run state is appropriate.

           Hard affinity can be used to restrict this, by ensuring certain
           VCPUs can only run on certain physical CPUs. Soft affinity
           specifies a preferred set of CPUs. Soft affinity needs special
           support in the scheduler, which is only provided in credit1.

           The keyword all can be used to apply the hard and soft affinity
           masks to all the VCPUs in the domain. The symbol '-' can be used
           to leave either hard or soft affinity alone.

           For example:

            xl vcpu-pin 0 3 - 6-9

           will set soft affinity for vCPU 3 of domain 0 to pCPUs 6,7,8 and
           9, leaving its hard affinity untouched. On the other hand:

            xl vcpu-pin 0 3 3,4 6-9

           will set both hard and soft affinity, the former to pCPUs 3 and
           4, the latter to pCPUs 6,7,8 and 9.

           Specifying -f or --force will remove a temporary pinning done by
           the operating system (normally this should be done by the
           operating system).  In case a temporary pinning is active for a
           vcpu, the affinity of this vcpu can't be changed without this
           option.

       vm-list
           Prints information about guests. This list excludes information
           about service or auxiliary domains such as dom0 and stubdoms.

           EXAMPLE

           An example format for the list is as follows:

               UUID                                  ID    name
               59e1cf6c-6ab9-4879-90e7-adc8d1c63bf5  2     win
               50bc8f75-81d0-4d53-b2e6-95cb44e2682e  3     linux

       vncviewer [OPTIONS] domain-id
           Attach to the domain's VNC server, forking a vncviewer process.

           OPTIONS

           --autopass
               Pass the VNC password to vncviewer via stdin.

XEN HOST SUBCOMMANDS

       debug-keys keys
           Send debug keys to Xen. This is the same as pressing the Xen
           "conswitch" (Ctrl-A by default) three times and then pressing
           "keys".

       set-parameters params
           Set hypervisor parameters as specified in params. This allows
           some boot parameters of the hypervisor to be modified in the
           running system.

       dmesg [OPTIONS]
           Reads the Xen message buffer, similar to dmesg on a Linux
           system.  The buffer contains informational, warning, and error
           messages created during Xen's boot process.  If you are having
           problems with Xen, this is one of the first places to look as
           part of problem determination.

           OPTIONS

           -c, --clear
               Clears Xen's message buffer.

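           For example, to capture the current Xen message buffer to a file
           for inclusion in a bug report (the filename is illustrative):

            xl dmesg > xen-dmesg.log
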
       info [OPTIONS]
           Print information about the Xen host in name : value format.
           When reporting a Xen bug, please provide this information as
           part of the bug report. See
           https://wiki.xenproject.org/wiki/Reporting_Bugs_against_Xen_Project
           on how to report Xen bugs.

           Sample output looks as follows:

            host                   : scarlett
            release                : 3.1.0-rc4+
            version                : #1001 SMP Wed Oct 19 11:09:54 UTC 2011
            machine                : x86_64
            nr_cpus                : 4
            nr_nodes               : 1
            cores_per_socket       : 4
            threads_per_core       : 1
            cpu_mhz                : 2266
            hw_caps                : bfebfbff:28100800:00000000:00003b40:009ce3bd:00000000:00000001:00000000
            virt_caps              : hvm hvm_directio
            total_memory           : 6141
            free_memory            : 4274
            free_cpus              : 0
            outstanding_claims     : 0
            xen_major              : 4
            xen_minor              : 2
            xen_extra              : -unstable
            xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
            xen_scheduler          : credit
            xen_pagesize           : 4096
            platform_params        : virt_start=0xffff800000000000
            xen_changeset          : Wed Nov 02 17:09:09 2011 +0000 24066:54a5e994a241
            xen_commandline        : com1=115200,8n1 guest_loglvl=all dom0_mem=750M console=com1
            cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
            cc_compile_by          : sstabellini
            cc_compile_domain      : uk.xensource.com
            cc_compile_date        : Tue Nov  8 12:03:05 UTC 2011
            xend_config_format     : 4

           FIELDS

           Not all fields will be explained here, but some of the less
           obvious ones deserve explanation:

           hw_caps
               A vector showing what hardware capabilities are supported by
               your processor.  This is equivalent to, though more cryptic
               than, the flags field in /proc/cpuinfo on a normal Linux
               machine: they both derive from the feature bits returned by
               the cpuid instruction on x86 platforms.

           free_memory
               Available memory (in MB) not allocated to Xen, or any other
               domains, or claimed for domains.

           outstanding_claims
               When a claim call is made (see xl.conf(5)) a reservation for
               a specific amount of pages is set, and a global value is
               also incremented. This global value (outstanding_claims) is
               then reduced as the domain's memory is populated and
               eventually reaches zero. Most of the time the value will be
               zero, but if you are launching multiple guests, and
               claim_mode is enabled, this value can increase/decrease.
               Note that the value also affects free_memory, as that will
               reflect the free memory in the hypervisor minus the
               outstanding pages claimed for guests.  See xl info claims
               parameter for a detailed listing.

           xen_caps
               The Xen version and architecture.  Architecture values can
               be one of: x86_32, x86_32p (i.e. PAE enabled), x86_64, ia64.

           xen_changeset
               The Xen mercurial changeset id.  Very useful for determining
               exactly what version of code your Xen system was built from.

           OPTIONS

           -n, --numa
               List host NUMA topology information.

       top Executes the xentop(1) command, which provides real time monitoring
           of domains.  Xentop has a curses interface, and is reasonably self
           explanatory.

       uptime
           Prints the current uptime of the running domains.

       claims
           Prints information about outstanding claims by the guests. This
           provides the outstanding claims and currently populated memory
           count for the guests.  These values added up reflect the global
           outstanding claim value, which is reported as the
           outstanding_claims value of xl info.  The Mem column shows the
           sum of the outstanding claims and the total amount of memory
           currently allocated to the guest.

           EXAMPLE

           An example format for the list is as follows:

            Name                                        ID   Mem VCPUs      State   Time(s)  Claimed
            Domain-0                                     0  2047     4     r-----      19.7     0
            OL5                                          2  2048     1     --p---       0.0   847
            OL6                                          3  1024     4     r-----       5.9     0
            Windows_XP                                   4  2047     1     --p---       0.0  1989

           Here it can be seen that the OL5 guest still has 847MB of claimed
           memory (out of the total 2048MB, of which 1201MB has been
           allocated to the guest).

SCHEDULER SUBCOMMANDS

       Xen ships with a number of domain schedulers, which can be set at boot
       time with the sched= parameter on the Xen command line.  By default
       credit is used for scheduling.

       sched-credit [OPTIONS]
           Set or get credit (aka credit1) scheduler parameters.  The credit
           scheduler is a proportional fair share CPU scheduler built from the
           ground up to be work conserving on SMP hosts.

           Each domain (including Domain0) is assigned a weight and a cap.

           OPTIONS

           -d DOMAIN, --domain=DOMAIN
               Specify the domain for which scheduler parameters are to be
               modified or retrieved.  Mandatory for modifying scheduler
               parameters.

           -w WEIGHT, --weight=WEIGHT
               A domain with a weight of 512 will get twice as much CPU as a
               domain with a weight of 256 on a contended host. Legal weights
               range from 1 to 65535 and the default is 256.

           -c CAP, --cap=CAP
               The cap optionally fixes the maximum amount of CPU a domain
               will be able to consume, even if the host system has idle CPU
               cycles. The cap is expressed as a percentage of one physical
               CPU: 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs,
               etc.  The default, 0, means there is no upper cap.

               NB: Many systems have features that will scale down the
               computing power of a cpu that is not 100% utilized.  This can
               be in the operating system, but can also sometimes be below the
               operating system in the BIOS.  If you set a cap such that
               individual cores are running at less than 100%, this may have
               an impact on the performance of your workload over and above
               the impact of the cap. For example, if your processor runs at
               2GHz, and you cap a vm at 50%, the power management system may
               also reduce the clock speed to 1GHz; the effect will be that
               your VM gets 25% of the available power (50% of 1GHz) rather
               than 50% (50% of 2GHz).  If you are not getting the performance
               you expect, look at performance and cpufreq options in your
               operating system and your BIOS.

           -p CPUPOOL, --cpupool=CPUPOOL
               Restrict output to domains in the specified cpupool.

           -s, --schedparam
               Specify to list or set pool-wide scheduler parameters.

           -t TSLICE, --tslice_ms=TSLICE
               The timeslice tells the scheduler how long to allow VMs to run
               before pre-empting.  The default is 30ms.  The valid range is
               1ms to 1000ms.  The length of the timeslice (in ms) must be
               higher than the length of the ratelimit (see below).

           -r RLIMIT, --ratelimit_us=RLIMIT
               The ratelimit attempts to limit the number of schedules per
               second.  It sets a minimum amount of time (in microseconds) a
               VM must run before we will allow a higher-priority VM to
               pre-empt it.  The default value is 1000 microseconds (1ms).
               The valid range is 100 to 500000 (500ms).  The ratelimit
               length must be lower than the timeslice length.

           -m DELAY, --migration_delay_us=DELAY
               The migration delay specifies for how long a vCPU, after it
               stops running, should be considered "cache-hot". Basically, if
               less than DELAY microseconds have passed since the vCPU last
               ran on a CPU, it is likely that most of the vCPU's working set
               is still in that CPU's cache, and therefore the vCPU is not
               migrated.

               Default is 0. Maximum is 100 ms. This can be effective at
               preventing vCPUs from bouncing among CPUs too quickly, but, at
               the same time, the scheduler stops being fully work-conserving.
           COMBINATION

           The following is the effect of combining the above options:

           <nothing>             : List all domain params and sched params from all pools
           -d [domid]            : List domain params for domain [domid]
           -d [domid] [params]   : Set domain params for domain [domid]
           -p [pool]             : List all domains and sched params for [pool]
           -s                    : List sched params for poolid 0
           -s [params]           : Set sched params for poolid 0
           -p [pool] -s          : List sched params for [pool]
           -p [pool] -s [params] : Set sched params for [pool]
           -p [pool] -d...       : Illegal
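
           For example, to give a (hypothetical) domain named "guest1" twice
           the default weight and cap it at half of one physical CPU:

            xl sched-credit -d guest1 -w 512 -c 50

           Running "xl sched-credit -d guest1" afterwards lists the
           parameters now in effect for that domain.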
       sched-credit2 [OPTIONS]
           Set or get credit2 scheduler parameters.  The credit2 scheduler is
           a proportional fair share CPU scheduler built from the ground up to
           be work conserving on SMP hosts.

           Each domain (including Domain0) is assigned a weight.

           OPTIONS

           -d DOMAIN, --domain=DOMAIN
               Specify the domain for which scheduler parameters are to be
               modified or retrieved.  Mandatory for modifying scheduler
               parameters.

           -w WEIGHT, --weight=WEIGHT
               A domain with a weight of 512 will get twice as much CPU as a
               domain with a weight of 256 on a contended host. Legal weights
               range from 1 to 65535 and the default is 256.

           -p CPUPOOL, --cpupool=CPUPOOL
               Restrict output to domains in the specified cpupool.

           -s, --schedparam
               Specify to list or set pool-wide scheduler parameters.

           -r RLIMIT, --ratelimit_us=RLIMIT
               Attempts to limit the rate of context switching. It is
               basically the same as --ratelimit_us in sched-credit.

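           For example, assuming a domain named "guest1", its credit2 weight
           can be doubled with:

            xl sched-credit2 -d guest1 -w 512
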
       sched-rtds [OPTIONS]
           Set or get rtds (Real Time Deferrable Server) scheduler parameters.
           This rt scheduler applies the Preemptive Global Earliest Deadline
           First real-time scheduling algorithm to schedule VCPUs in the
           system.  Each VCPU has a dedicated period, budget and extratime.
           While scheduled, a VCPU burns its budget.  A VCPU has its budget
           replenished at the beginning of each period; unused budget is
           discarded at the end of each period.  A VCPU with extratime set
           gets extra time from the unreserved system resource.

           OPTIONS

           -d DOMAIN, --domain=DOMAIN
               Specify the domain for which scheduler parameters are to be
               modified or retrieved.  Mandatory for modifying scheduler
               parameters.

           -v VCPUID/all, --vcpuid=VCPUID/all
               Specify the vcpu for which scheduler parameters are to be
               modified or retrieved.

           -p PERIOD, --period=PERIOD
               Period of time, in microseconds, over which to replenish the
               budget.

           -b BUDGET, --budget=BUDGET
               Amount of time, in microseconds, that the VCPU will be allowed
               to run every period.

           -e Extratime, --extratime=Extratime
               Binary flag to decide if the VCPU will be allowed to get extra
               time from the unreserved system resource.

           -c CPUPOOL, --cpupool=CPUPOOL
               Restrict output to domains in the specified cpupool.

           EXAMPLE

               1) Use -v all to see the budget and period of all the VCPUs of
               all the domains:

                   xl sched-rtds -v all
                   Cpupool Pool-0: sched=RTDS
                   Name                        ID VCPU    Period    Budget  Extratime
                   Domain-0                     0    0     10000      4000        yes
                   vm1                          2    0       300       150        yes
                   vm1                          2    1       400       200        yes
                   vm1                          2    2     10000      4000        yes
                   vm1                          2    3      1000       500        yes
                   vm2                          4    0     10000      4000        yes
                   vm2                          4    1     10000      4000        yes

               Without any arguments, it will output the default scheduling
               parameters for each domain:

                   xl sched-rtds
                   Cpupool Pool-0: sched=RTDS
                   Name                        ID    Period    Budget  Extratime
                   Domain-0                     0     10000      4000        yes
                   vm1                          2     10000      4000        yes
                   vm2                          4     10000      4000        yes

               2) Use, for instance, -d vm1, -v all to see the budget and
               period of all VCPUs of a specific domain (vm1):

                   xl sched-rtds -d vm1 -v all
                   Name                        ID VCPU    Period    Budget  Extratime
                   vm1                          2    0       300       150        yes
                   vm1                          2    1       400       200        yes
                   vm1                          2    2     10000      4000        yes
                   vm1                          2    3      1000       500        yes

               To see the parameters of a subset of the VCPUs of a domain,
               use:

                   xl sched-rtds -d vm1 -v 0 -v 3
                   Name                        ID VCPU    Period    Budget  Extratime
                   vm1                          2    0       300       150        yes
                   vm1                          2    3      1000       500        yes

               If no -v is specified, the default scheduling parameters for
               the domain are shown:

                   xl sched-rtds -d vm1
                   Name                        ID    Period    Budget  Extratime
                   vm1                          2     10000      4000        yes

               3) Users can set the budget and period of multiple VCPUs of a
               specific domain with only one command, e.g., "xl sched-rtds -d
               vm1 -v 0 -p 100 -b 50 -e 1 -v 3 -p 300 -b 150 -e 0".

               To change the parameters of all the VCPUs of a domain, use -v
               all, e.g., "xl sched-rtds -d vm1 -v all -p 500 -b 250 -e 1".


CPUPOOLS COMMANDS

       Xen can group the physical cpus of a server into cpu-pools. Each
       physical CPU is assigned to at most one cpu-pool. Domains are each
       restricted to a single cpu-pool. Scheduling does not cross cpu-pool
       boundaries, so each cpu-pool has its own scheduler.  Physical cpus and
       domains can be moved from one cpu-pool to another only by an explicit
       command.  Cpu-pools can be specified either by name or by id.

       cpupool-create [OPTIONS] [configfile] [variable=value ...]
           Create a cpu pool based on a config from a configfile or
           command-line parameters.  Variable settings from the configfile
           may be altered by specifying new or additional assignments on the
           command line.

           See the xlcpupool.cfg(5) manpage for more information.

           OPTIONS

           -f=FILE, --defconfig=FILE
               Use the given configuration file.

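           For example, a minimal configuration file (names hypothetical)
           such as:

            name = "mypool"
            sched = "credit"
            cpus = ["4", "5", "6", "7"]

           saved as mypool.cfg, can be used to create the pool with:

            xl cpupool-create -f mypool.cfg
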
       cpupool-list [OPTIONS] [cpu-pool]
           List CPU pools on the host.

           OPTIONS

           -c, --cpus
               If this option is specified, xl prints a list of CPUs used by
               cpu-pool.

       cpupool-destroy cpu-pool
           Deactivates a cpu pool.  This is possible only if no domain is
           active in the cpu-pool.

       cpupool-rename cpu-pool <newname>
           Renames a cpu-pool to newname.

       cpupool-cpu-add cpu-pool cpus|node:nodes
           Adds one or more CPUs or NUMA nodes to cpu-pool. CPUs and NUMA
           nodes can be specified as single CPU/node IDs or as ranges.

           For example:

            (a) xl cpupool-cpu-add mypool 4
            (b) xl cpupool-cpu-add mypool 1,5,10-16,^13
            (c) xl cpupool-cpu-add mypool node:0,nodes:2-3,^10-12,8

           means adding CPU 4 to mypool, in (a); adding CPUs
           1,5,10,11,12,14,15 and 16, in (b); and adding all the CPUs of NUMA
           nodes 0, 2 and 3, plus CPU 8, but keeping out CPUs 10,11,12, in
           (c).

           All the specified CPUs that can be added to the cpupool will be
           added to it. If some CPUs can't be (e.g., because they're already
           part of another cpupool), an error is reported for each of them.

       cpupool-cpu-remove cpu-pool cpus|node:nodes
           Removes one or more CPUs or NUMA nodes from cpu-pool. CPUs and NUMA
           nodes can be specified as single CPU/node IDs or as ranges, using
           the exact same syntax as in cpupool-cpu-add above.

       cpupool-migrate domain-id cpu-pool
           Moves the domain specified by domain-id or domain-name into a cpu-
           pool.  Domain-0 can't be moved to another cpu-pool.

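           For example, assuming a pool named "mypool" and a domain named
           "guest1" (both hypothetical), the domain can be moved into that
           pool with:

            xl cpupool-migrate guest1 mypool
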
       cpupool-numa-split
           Splits up the machine into one cpu-pool per numa node.


VIRTUAL DEVICE COMMANDS

       Most virtual devices can be added and removed while guests are running,
       assuming that the necessary support exists in the guest OS.  The effect
       on the guest OS is much the same as any hotplug event.

   BLOCK DEVICES
       block-attach domain-id disc-spec-component(s) ...
           Create a new virtual block device and attach it to the specified
           domain.  A disc specification is in the same format used for the
           disk variable in the domain config file. See
           xl-disk-configuration(5). This will trigger a hotplug event for the
           guest.

           Note that only PV block devices are supported by block-attach.
           Requests to attach emulated devices (e.g., vdev=hdc) will result in
           only the PV view being available to the guest.

       block-detach domain-id devid [OPTIONS]
           Detach a domain's virtual block device. devid may be the symbolic
           name or the numeric device id given to the device by domain 0.  You
           will need to run xl block-list to determine that number.

           Detaching the device requires the cooperation of the domain.  If
           the domain fails to release the device (perhaps because the domain
           is hung or is still using the device), the detach will fail.

           OPTIONS

           --force
               If this parameter is specified the device will be forcefully
               detached, which may cause IO errors in the domain.

       block-list domain-id
           List virtual block devices for a domain.

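           For example, assuming a guest named "guest1" and a dom0 volume
           /dev/vg0/data (both hypothetical), a disk can be attached, listed
           and detached with:

            xl block-attach guest1 '/dev/vg0/data,raw,xvdb,w'
            xl block-list guest1
            xl block-detach guest1 xvdb
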
       cd-insert domain-id virtualdevice target
           Insert a cdrom into a guest domain's existing virtual cd drive. The
           virtual drive must already exist but can be empty. How the device
           should be presented to the guest domain is specified by the
           virtualdevice parameter; for example "hdc". The target parameter is
           the target path in the backend domain (usually domain 0) to be
           exported; it can be a block device, a file, etc.  See target in
           xl-disk-configuration(5).

           Only works with HVM domains.

       cd-eject domain-id virtualdevice
           Eject a cdrom from a guest domain's virtual cd drive, specified by
           virtualdevice. Only works with HVM domains.

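           For example, assuming an HVM guest named "guest1" with a virtual
           drive hdc, and a (hypothetical) ISO image in dom0, the medium can
           be swapped with:

            xl cd-insert guest1 hdc /var/lib/xen/images/install.iso
            xl cd-eject guest1 hdc
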
   NETWORK DEVICES
       network-attach domain-id network-device
           Creates a new network device in the domain specified by domain-id.
           network-device describes the device to attach, using the same
           format as the vif string in the domain config file. See xl.cfg(5)
           and xl-network-configuration(5) for more information.

           Note that only attaching PV network interfaces is supported.

       network-detach domain-id devid|mac
           Removes the network device from the domain specified by domain-id.
           devid is the virtual interface device number within the domain
           (i.e. the 3 in vif22.3). Alternatively, the mac address can be used
           to select the virtual interface to detach.

       network-list domain-id
           List virtual network interfaces for a domain.

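           For example, assuming a guest named "guest1" and the usual xenbr0
           bridge in dom0, an interface can be added, listed and removed
           with:

            xl network-attach guest1 bridge=xenbr0
            xl network-list guest1
            xl network-detach guest1 0
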
   CHANNEL DEVICES
       channel-list domain-id
           List virtual channel interfaces for a domain.

   VIRTUAL TRUSTED PLATFORM MODULE (vTPM) DEVICES
       vtpm-attach domain-id vtpm-device
           Creates a new vtpm (virtual Trusted Platform Module) device in the
           domain specified by domain-id. vtpm-device describes the device to
           attach, using the same format as the vtpm string in the domain
           config file.  See xl.cfg(5) for more information.

       vtpm-detach domain-id devid|uuid
           Removes the vtpm device from the domain specified by domain-id.
           devid is the numeric device id given to the virtual Trusted
           Platform Module device. You will need to run xl vtpm-list to
           determine that number. Alternatively, the uuid of the vtpm can be
           used to select the virtual device to detach.

       vtpm-list domain-id
           List virtual Trusted Platform Modules for a domain.

   VDISPL DEVICES
       vdispl-attach domain-id vdispl-device
           Creates a new vdispl device in the domain specified by domain-id.
           vdispl-device describes the device to attach, using the same format
           as the vdispl string in the domain config file. See xl.cfg(5) for
           more information.

           NOTES

               As the vdispl-device string uses semicolons as separators,
               quote or escape them when invoking from the shell.

               EXAMPLE

                   xl vdispl-attach DomU
                   connectors='id0:1920x1080;id1:800x600;id2:640x480'

                   or

                   xl vdispl-attach DomU
                   connectors=id0:1920x1080\;id1:800x600\;id2:640x480

       vdispl-detach domain-id dev-id
           Removes the vdispl device specified by dev-id from the domain
           specified by domain-id.

       vdispl-list domain-id
           List virtual displays for a domain.

   VSND DEVICES
       vsnd-attach domain-id vsnd-item vsnd-item ...
           Creates a new vsnd device in the domain specified by domain-id.
           vsnd-item's describe the vsnd device to attach, using the same
           format as the VSND_ITEM_SPEC string in the domain config file. See
           xl.cfg(5) for more information.

           EXAMPLE

               xl vsnd-attach DomU 'CARD, short-name=Main,
               sample-formats=s16_le;s8;u32_be' 'PCM, name=Main' 'STREAM,
               id=0, type=p' 'STREAM, id=1, type=c, channels-max=2'

       vsnd-detach domain-id dev-id
           Removes the vsnd device specified by dev-id from the domain
           specified by domain-id.

       vsnd-list domain-id
           List vsnd devices for a domain.

   KEYBOARD DEVICES
       vkb-attach domain-id vkb-device
           Creates a new keyboard device in the domain specified by domain-id.
           vkb-device describes the device to attach, using the same format as
           the VKB_SPEC_STRING string in the domain config file. See xl.cfg(5)
           for more information.

       vkb-detach domain-id devid
           Removes the keyboard device from the domain specified by domain-id.
           devid is the virtual interface device number within the domain.

       vkb-list domain-id
           List virtual keyboard devices for a domain.


PCI PASS-THROUGH

       pci-assignable-list
           List all the assignable PCI devices.  These are devices in the
           system which are configured to be available for passthrough and are
           bound to a suitable PCI backend driver in domain 0 rather than a
           real driver.

       pci-assignable-add BDF
           Make the device at PCI Bus/Device/Function BDF assignable to
           guests.  This will bind the device to the pciback driver and assign
           it to the "quarantine domain".  If it is already bound to a driver,
           it will first be unbound, and the original driver stored so that it
           can be re-bound to the same driver later if desired.  If the device
           is already bound, it will assign it to the quarantine domain and
           return success.

           CAUTION: This will make the device unusable by Domain 0 until it is
           returned with pci-assignable-remove.  Care should therefore be
           taken not to do this on a device critical to domain 0's operation,
           such as storage controllers, network interfaces, or GPUs that are
           currently being used.

       pci-assignable-remove [-r] BDF
           Make the device at PCI Bus/Device/Function BDF not assignable to
           guests.  This will at least unbind the device from pciback, and re-
           assign it from the "quarantine domain" back to domain 0.  If the -r
           option is specified, it will also attempt to re-bind the device to
           its original driver, making it usable by Domain 0 again.  If the
           device is not bound to pciback, it will return success.

           Note that this functionality will work even for devices which were
           not made assignable by pci-assignable-add.  This can be used to
           allow dom0 to access devices which were automatically quarantined
           by Xen after domain destruction as a result of Xen's
           iommu=quarantine command-line default.

           As always, this should only be done if you trust the guest, or are
           confident that the particular device you're re-assigning to dom0
           will cancel all in-flight DMA on FLR.

       pci-attach domain-id BDF
           Hot-plug a new pass-through pci device to the specified domain.
           BDF is the PCI Bus/Device/Function of the physical device to pass
           through.

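           For example, assuming a device at BDF 0000:03:00.0 and a guest
           named "guest1" (both hypothetical), the device can be made
           assignable and then passed through with:

            xl pci-assignable-add 0000:03:00.0
            xl pci-attach guest1 0000:03:00.0
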
       pci-detach [OPTIONS] domain-id BDF
           Hot-unplug a previously assigned pci device from a domain. BDF is
           the PCI Bus/Device/Function of the physical device to be removed
           from the guest domain.

           OPTIONS

           -f  If this parameter is specified, xl will forcefully remove the
               device even without the guest domain's cooperation.

       pci-list domain-id
           List pass-through pci devices for a domain.


USB PASS-THROUGH

       usbctrl-attach domain-id usbctrl-device
           Create a new USB controller in the domain specified by domain-id.
           usbctrl-device describes the device to attach, using the form
           "KEY=VALUE KEY=VALUE ..." where each KEY=VALUE has the same meaning
           as in the usbctrl description in the domain config file.  See
           xl.cfg(5) for more information.

       usbctrl-detach domain-id devid
           Destroy a USB controller in the specified domain.  devid is the
           devid of the USB controller.

       usbdev-attach domain-id usbdev-device
           Hot-plug a new pass-through USB device to the domain specified by
           domain-id. usbdev-device describes the device to attach, using the
           form "KEY=VALUE KEY=VALUE ..." where each KEY=VALUE has the same
           meaning as in the usbdev description in the domain config file.
           See xl.cfg(5) for more information.

       usbdev-detach domain-id controller=devid port=number
           Hot-unplug a previously assigned USB device from a domain.
           controller=devid and port=number specify the USB controller and
           port in the guest domain to which the USB device is attached.

       usb-list domain-id
           List pass-through usb devices for a domain.

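           For example, assuming a guest named "guest1" and a host USB device
           at bus 1, device address 3 (both hypothetical), a controller can
           be added and the device passed through with:

            xl usbctrl-attach guest1 version=2 ports=4
            xl usbdev-attach guest1 hostbus=1 hostaddr=3
            xl usb-list guest1
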

DEVICE-MODEL CONTROL

       qemu-monitor-command domain-id command
           Issue a monitor command to the device model of the domain specified
           by domain-id. command can be any valid command qemu understands.
           This can be used, e.g., to add non-standard devices or devices with
           non-standard parameters to a domain. The output of the command is
           printed to stdout.

           Warning: This qemu monitor access is provided for convenience when
           debugging, troubleshooting, and experimenting.  Its use is not
           supported by the Xen Project.

           Specifically, not all information displayed by the qemu monitor
           will necessarily be accurate or complete, because in a Xen system
           qemu does not have a complete view of the guest.

           Furthermore, modifying the guest's setup via the qemu monitor may
           conflict with the Xen toolstack's assumptions.  Resulting problems
           may include, but are not limited to: guest crashes; toolstack error
           messages; inability to migrate the guest; and security
           vulnerabilities which are not covered by the Xen Project security
           response policy.

           EXAMPLE

           Obtain information about USB devices connected to a domain via the
           device model (only!):

            xl qemu-monitor-command vm1 'info usb'
             Device 0.2, Port 5, Speed 480 Mb/s, Product Mass Storage


FLASK

1321       FLASK is a security framework that defines a mandatory access control
1322       policy providing fine-grained controls over Xen domains, allowing the
1323       policy writer to define what interactions between domains, devices, and
1324       the hypervisor are permitted. Some example of what you can do using
1325       XSM/FLASK:
1326        - Prevent two domains from communicating via event channels or grants
1327        - Control which domains can use device passthrough (and which devices)
1328        - Restrict or audit operations performed by privileged domains
1329        - Prevent a privileged domain from arbitrarily mapping pages from
1330       other
1331          domains.
1332
1333       You can find more details on how to use FLASK and an example security
1334       policy here:
1335       <https://xenbits.xenproject.org/docs/unstable/misc/xsm-flask.txt>

       getenforce
           Determine if the FLASK security module is loaded and enforcing its
           policy.

       setenforce 1|0|Enforcing|Permissive
           Enable or disable enforcing of the FLASK access controls. The
           default is permissive, but this can be changed to enforcing by
           specifying "flask=enforcing" or "flask=late" on the hypervisor's
           command line.

       loadpolicy policy-file
           Load FLASK policy from the given policy file. The initial policy is
           provided to the hypervisor as a multiboot module; this command
           allows runtime updates to the policy. Loading new security policy
           will reset runtime changes to device labels.

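       For example, assuming a FLASK-enabled hypervisor, the current
       enforcing state can be queried and then switched at runtime:

         xl getenforce
         xl setenforce Enforcing
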

PLATFORM SHARED RESOURCE MONITORING/CONTROL

       Intel Haswell and later server platforms offer shared resource
       monitoring and control technologies. The availability of these
       technologies and the hardware capabilities can be shown with psr-
       hwinfo.

       See <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html> for
       more information.

       psr-hwinfo [OPTIONS]
           Show Platform Shared Resource (PSR) hardware information.

           OPTIONS

           -m, --cmt
               Show Cache Monitoring Technology (CMT) hardware information.

           -a, --cat
               Show Cache Allocation Technology (CAT) hardware information.

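       For example, to query only the CMT hardware capabilities (the output
       depends on the platform):

         xl psr-hwinfo --cmt
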
   CACHE MONITORING TECHNOLOGY
       Intel Haswell and later server platforms offer monitoring capability in
       each logical processor to measure a specific platform shared resource
       metric, for example, L3 cache occupancy. In the Xen implementation, the
       monitoring granularity is the domain level. To monitor a specific
       domain, attach the domain id to the monitoring service. When the domain
       no longer needs to be monitored, detach the domain id from the
       monitoring service.

       Intel Broadwell and later server platforms also offer total/local
       memory bandwidth monitoring. Xen supports per-domain monitoring for
       these two additional monitoring types. Both memory bandwidth monitoring
       and L3 cache occupancy monitoring share the same underlying monitoring
       service. Once a domain is attached to the monitoring service,
       monitoring data can be shown for any of these monitoring types.

       There is currently no cache monitoring or memory bandwidth monitoring
       on the L2 cache.

       psr-cmt-attach domain-id
           attach: Attach the platform shared resource monitoring service to a
           domain.

       psr-cmt-detach domain-id
           detach: Detach the platform shared resource monitoring service from
           a domain.

       psr-cmt-show psr-monitor-type [domain-id]
           Show monitoring data for a certain domain or all domains. Currently
           supported monitor types are:
            - "cache-occupancy": show the L3 cache occupancy (KB).
            - "total-mem-bandwidth": show the total memory bandwidth (KB/s).
            - "local-mem-bandwidth": show the local memory bandwidth (KB/s).

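       For example, to monitor domain 1 (an illustrative domain id), attach
       it to the monitoring service, display its L3 cache occupancy, and
       detach it again:

         xl psr-cmt-attach 1
         xl psr-cmt-show cache-occupancy 1
         xl psr-cmt-detach 1
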
   CACHE ALLOCATION TECHNOLOGY
       Intel Broadwell and later server platforms offer capabilities to
       configure and make use of the Cache Allocation Technology (CAT)
       mechanisms, which enable more cache resources (i.e. L3/L2 cache) to be
       made available for high priority applications. In the Xen
       implementation, CAT is used to control cache allocation on a per-VM
       basis. To enforce cache allocation for a specific domain, set capacity
       bitmasks (CBM) for the domain.

       Intel Broadwell and later server platforms also offer Code/Data
       Prioritization (CDP) for cache allocations, which supports specifying
       code or data cache for applications. CDP is used on a per-VM basis in
       the Xen implementation. To specify code or data CBM for the domain, the
       CDP feature must be enabled and CBM type options need to be specified
       when setting CBM; the type options (code and data) are mutually
       exclusive. There is no CDP support on L2 so far.

       psr-cat-set [OPTIONS] domain-id cbm
           Set cache capacity bitmasks (CBM) for a domain. For how to specify
           cbm please refer to
           <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>.

           OPTIONS

           -s SOCKET, --socket=SOCKET
               Specify the socket to process, otherwise all sockets are
               processed.

           -l LEVEL, --level=LEVEL
               Specify the cache level to process, otherwise the last level
               cache (L3) is processed.

           -c, --code
               Set code CBM when CDP is enabled.

           -d, --data
               Set data CBM when CDP is enabled.

       psr-cat-show [OPTIONS] [domain-id]
           Show CAT settings for a certain domain or all domains.

           OPTIONS

           -l LEVEL, --level=LEVEL
               Specify the cache level to process, otherwise the last level
               cache (L3) is processed.

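       For example, to restrict domain 1 (an illustrative domain id) to a
       portion of the L3 cache on socket 0, set a capacity bitmask and then
       display the resulting settings (the bitmask value is illustrative and
       must be valid for the platform's CBM length):

         xl psr-cat-set -s 0 1 0xff
         xl psr-cat-show 1
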
   Memory Bandwidth Allocation
       Intel Skylake and later server platforms offer capabilities to
       configure and make use of the Memory Bandwidth Allocation (MBA)
       mechanisms, which provide OS/VMMs the ability to slow down misbehaving
       apps/VMs by using a credit-based throttling mechanism. In the Xen
       implementation, MBA is used to control memory bandwidth on a per-VM
       basis. To enforce bandwidth limits on a specific domain, set a
       throttling value (THRTL) for the domain.

       psr-mba-set [OPTIONS] domain-id thrtl
           Set the throttling value (THRTL) for a domain. For how to specify
           thrtl please refer to
           <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>.

           OPTIONS

           -s SOCKET, --socket=SOCKET
               Specify the socket to process, otherwise all sockets are
               processed.

       psr-mba-show [domain-id]
           Show MBA settings for a certain domain or all domains. For linear
           mode, the decimal value is shown. For non-linear mode, the
           hexadecimal value is shown.

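       For example, to throttle memory bandwidth for domain 1 and then
       display its MBA settings (the domain id and throttling value are
       illustrative; valid thrtl values depend on the platform and mode):

         xl psr-mba-set 1 10
         xl psr-mba-show 1
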

IGNORED FOR COMPATIBILITY WITH XM

       xl is mostly command-line compatible with the old xm utility used with
       the old Python xend.  For compatibility, the following options are
       ignored:

       xl migrate --live


SEE ALSO

       The following man pages:

       xl.cfg(5), xlcpupool.cfg(5), xentop(1), xl-disk-configuration(5),
       xl-network-configuration(5)

       And the following documents on the xenproject.org website:

       <https://xenbits.xenproject.org/docs/unstable/misc/xsm-flask.txt>
       <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>

       For systems that don't automatically bring the CPU online:

       <https://wiki.xenproject.org/wiki/Paravirt_Linux_CPU_Hotplug>


BUGS

       Send bugs to xen-devel@lists.xenproject.org; see
       <https://wiki.xenproject.org/wiki/Reporting_Bugs_against_Xen_Project>
       for how to send bug reports.



4.13.0                            2020-04-14                             xl(1)