xl(1)                                 Xen                                xl(1)
2
3
NAME
6 xl - Xen management tool, based on libxenlight

SYNOPSIS
9 xl subcommand [args]

DESCRIPTION
12 The xl program is the new tool for managing Xen guest domains. The
program can be used to create, pause, and shut down domains. It can also
14 be used to list current domains, enable or pin VCPUs, and attach or
15 detach virtual block devices.
16
17 The basic structure of every xl command is almost always:
18
19 xl subcommand [OPTIONS] domain-id
20
21 Where subcommand is one of the subcommands listed below, domain-id is
22 the numeric domain id, or the domain name (which will be internally
23 translated to domain id), and OPTIONS are subcommand specific options.
24 There are a few exceptions to this rule in the cases where the
25 subcommand in question acts on all domains, the entire machine, or
26 directly on the Xen hypervisor. Those exceptions will be clear for
27 each of those subcommands.

PREREQUISITES
30 start the script /etc/init.d/xencommons at boot time
31 Most xl operations rely upon xenstored and xenconsoled: make sure
32 you start the script /etc/init.d/xencommons at boot time to
33 initialize all the daemons needed by xl.
34
35 setup a xenbr0 bridge in dom0
In the most common network configuration, you need to set up a
bridge in dom0 named xenbr0 in order to have a working network in
the guest domains. Please refer to the documentation of your Linux
distribution to know how to set up the bridge.
40
41 autoballoon
42 If you specify the amount of memory dom0 has, passing dom0_mem to
43 Xen, it is highly recommended to disable autoballoon. Edit
44 /etc/xen/xl.conf and set it to 0.
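
A minimal sketch of the relevant xl.conf line (assuming dom0 was
booted with dom0_mem=2048M on the Xen command line):

    # /etc/xen/xl.conf
    autoballoon=0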
45
46 run xl as root
47 Most xl commands require root privileges to run due to the
48 communications channels used to talk to the hypervisor. Running as
49 non root will return an error.

GLOBAL OPTIONS
52 Some global options are always available:
53
54 -v Verbose.
55
56 -N Dry run: do not actually execute the command.
57
-f Force execution: xl will refuse to run some commands if it detects
that xend is also running; this option will force the execution of
those commands, even though it is unsafe.
61
62 -t Always use carriage-return-based overwriting for displaying
63 progress messages without scrolling the screen. Without -t, this
64 is done only if stderr is a tty.
65
66 -T Include timestamps and pid of the xl process in output.

DOMAIN SUBCOMMANDS
69 The following subcommands manipulate domains directly. As stated
70 previously, most commands take domain-id as the first parameter.
71
72 button-press domain-id button
73 This command is deprecated. Please use "xl trigger" instead.
74
75 Indicate an ACPI button press to the domain, where button can be
76 'power' or 'sleep'. This command is only available for HVM domains.
77
78 create [configfile] [OPTIONS]
79 The create subcommand takes a config file as its first argument:
80 see xl.cfg(5) for full details of the file format and possible
81 options. If configfile is missing xl creates the domain assuming
82 the default values for every option.
83
84 configfile has to be an absolute path to a file.
85
86 Create will return as soon as the domain is started. This does not
87 mean the guest OS in the domain has actually booted, or is
88 available for input.
89
90 If the -F option is specified, create will start the domain and not
91 return until its death.
92
93 OPTIONS
94
95 -q, --quiet
96 No console output.
97
98 -f=FILE, --defconfig=FILE
99 Use the given configuration file.
100
101 -p Leave the domain paused after it is created.
102
103 -F Run in foreground until death of the domain.
104
105 -V, --vncviewer
106 Attach to domain's VNC server, forking a vncviewer process.
107
108 -A, --vncviewer-autopass
109 Pass the VNC password to vncviewer via stdin.
110
111 -c Attach console to the domain as soon as it has started. This
112 is useful for determining issues with crashing domains and just
113 as a general convenience since you often want to watch the
114 domain boot.
115
116 key=value
117 It is possible to pass key=value pairs on the command line to
118 provide options as if they were written in the configuration
119 file; these override whatever is in the configfile.
120
121 NB: Many config options require characters such as quotes or
122 brackets which are interpreted by the shell (and often
123 discarded) before being passed to xl, resulting in xl being
124 unable to parse the value correctly. A simple work-around is
125 to put all extra options within a single set of quotes,
126 separated by semicolons. (See below for an example.)
127
128 EXAMPLES
129
130 with extra parameters
131 xl create hvm.cfg 'cpus="0-3"; pci=["01:05.1","01:05.2"]'
132
133 This creates a domain with the file hvm.cfg, but additionally
134 pins it to cpus 0-3, and passes through two PCI devices.
135
136 config-update domain-id [configfile] [OPTIONS]
137 Update the saved configuration for a running domain. This has no
138 immediate effect but will be applied when the guest is next
139 restarted. This command is useful to ensure that runtime
140 modifications made to the guest will be preserved when the guest is
141 restarted.
142
143 Since Xen 4.5 xl has improved capabilities to handle dynamic domain
144 configuration changes and will preserve any changes made at runtime
145 when necessary. Therefore it should not normally be necessary to
146 use this command any more.
147
148 configfile has to be an absolute path to a file.
149
150 OPTIONS
151
152 -f=FILE, --defconfig=FILE
153 Use the given configuration file.
154
155 key=value
156 It is possible to pass key=value pairs on the command line to
157 provide options as if they were written in the configuration
158 file; these override whatever is in the configfile. Please see
159 the note under create on handling special characters when
160 passing key=value pairs on the command line.
161
162 console [OPTIONS] domain-id
163 Attach to the console of a domain specified by domain-id. If
164 you've set up your domains to have a traditional login console this
165 will look much like a normal text login screen.
166
167 Use the key combination Ctrl+] to detach from the domain console.
168
169 OPTIONS
170
171 -t [pv|serial]
172 Connect to a PV console or connect to an emulated serial
173 console. PV consoles are the only consoles available for PV
174 domains while HVM domains can have both. If this option is not
175 specified it defaults to emulated serial for HVM guests and PV
176 console for PV guests.
177
178 -n NUM
179 Connect to console number NUM. Console numbers start from 0.
180
181 destroy [OPTIONS] domain-id
182 Immediately terminate the domain specified by domain-id. This
183 doesn't give the domain OS any chance to react, and is the
184 equivalent of ripping the power cord out on a physical machine. In
185 most cases you will want to use the shutdown command instead.
186
187 OPTIONS
188
189 -f Allow domain 0 to be destroyed. Because a domain cannot
190 destroy itself, this is only possible when using a
191 disaggregated toolstack, and is most useful when using a
192 hardware domain separated from domain 0.
193
194 domid domain-name
195 Converts a domain name to a domain id.
196
197 domname domain-id
198 Converts a domain id to a domain name.
199
200 rename domain-id new-name
201 Change the domain name of a domain specified by domain-id to new-
202 name.
203
204 dump-core domain-id [filename]
205 Dumps the virtual machine's memory for the specified domain to the
206 filename specified, without pausing the domain. The dump file will
207 be written to a distribution specific directory for dump files, for
208 example: /var/lib/xen/dump/dump.
209
210 help [--long]
211 Displays the short help message (i.e. common commands) by default.
212
213 If the --long option is specified, it displays the complete set of
214 xl subcommands, grouped by function.
215
216 list [OPTIONS] [domain-id ...]
217 Displays information about one or more domains. If no domains are
218 specified it displays information about all domains.
219
220 OPTIONS
221
222 -l, --long
223 The output for xl list is not the table view shown below, but
224 instead presents the data as a JSON data structure.
225
226 -Z, --context
227 Also displays the security labels.
228
229 -v, --verbose
230 Also displays the domain UUIDs, the shutdown reason and
231 security labels.
232
233 -c, --cpupool
234 Also displays the cpupool the domain belongs to.
235
236 -n, --numa
237 Also displays the domain NUMA node affinity.
238
239 EXAMPLE
240
241 An example format for the list is as follows:
242
243 Name ID Mem VCPUs State Time(s)
244 Domain-0 0 750 4 r----- 11794.3
245 win 1 1019 1 r----- 0.3
246 linux 2 2048 2 r----- 5624.2
247
248 Name is the name of the domain. ID the numeric domain id. Mem is
249 the desired amount of memory to allocate to the domain (although it
250 may not be the currently allocated amount). VCPUs is the number of
251 virtual CPUs allocated to the domain. State is the run state (see
252 below). Time is the total run time of the domain as accounted for
253 by Xen.
254
255 STATES
256
257 The State field lists 6 states for a Xen domain, and which ones the
258 current domain is in.
259
260 r - running
261 The domain is currently running on a CPU.
262
263 b - blocked
264 The domain is blocked, and not running or runnable. This can
265 be because the domain is waiting on IO (a traditional wait
266 state) or has gone to sleep because there was nothing else for
267 it to do.
268
269 p - paused
270 The domain has been paused, usually occurring through the
271 administrator running xl pause. When in a paused state the
272 domain will still consume allocated resources (like memory),
273 but will not be eligible for scheduling by the Xen hypervisor.
274
275 s - shutdown
276 The guest OS has shut down (SCHEDOP_shutdown has been called)
277 but the domain is not dying yet.
278
279 c - crashed
280 The domain has crashed, which is always a violent ending.
281 Usually this state only occurs if the domain has been
282 configured not to restart on a crash. See xl.cfg(5) for more
283 info.
284
285 d - dying
286 The domain is in the process of dying, but hasn't completely
287 shut down or crashed.
288
289 NOTES
290
291 The Time column is deceptive. Virtual IO (network and block
292 devices) used by the domains requires coordination by Domain0,
293 which means that Domain0 is actually charged for much of the
294 time that a DomainU is doing IO. Use of this time value to
295 determine relative utilizations by domains is thus very
296 unreliable, as a high IO workload may show as less utilized
297 than a high CPU workload. Consider yourself warned.
298
299 mem-set domain-id mem
300 Set the target for the domain's balloon driver.
301
302 The default unit is kiB. Add 't' for TiB, 'g' for GiB, 'm' for
303 MiB, 'k' for kiB, and 'b' for bytes (e.g., `2048m` for 2048 MiB).
304
305 This must be less than the initial maxmem parameter in the domain's
306 configuration.
307
308 Note that this operation requests the guest operating system's
309 balloon driver to reach the target amount of memory. The guest may
310 fail to reach that amount of memory for any number of reasons,
311 including:
312
313 • The guest doesn't have a balloon driver installed
314
315 • The guest's balloon driver is buggy
316
317 • The guest's balloon driver cannot create free guest memory due
318 to guest memory pressure
319
320 • The guest's balloon driver cannot allocate memory from Xen
321 because of hypervisor memory pressure
322
323 • The guest administrator has disabled the balloon driver
324
325 Warning: There is no good way to know in advance how small of a
326 mem-set will make a domain unstable and cause it to crash. Be very
327 careful when using this command on running domains.
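
For example, to ask the balloon driver of a hypothetical guest named
"vm1" to shrink the domain to 1024 MiB:

    xl mem-set vm1 1024m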
328
329 mem-max domain-id mem
330 Specify the limit Xen will place on the amount of memory a guest
331 may allocate.
332
333 The default unit is kiB. Add 't' for TiB, 'g' for GiB, 'm' for
334 MiB, 'k' for kiB, and 'b' for bytes (e.g., `2048m` for 2048 MiB).
335
336 NB that users normally shouldn't need this command; xl mem-set will
337 set this as appropriate automatically.
338
339 mem can't be set lower than the current memory target for domain-
340 id. It is allowed to be higher than the configured maximum memory
341 size of the domain (maxmem parameter in the domain's
342 configuration). Note however that the initial maxmem value is still
343 used as an upper limit for xl mem-set. Also note that calling xl
344 mem-set will reset this value.
345
346 The domain will not receive any signal regarding the changed memory
347 limit.
348
349 migrate [OPTIONS] domain-id host
350 Migrate a domain to another host machine. By default xl relies on
351 ssh as a transport mechanism between the two hosts.
352
353 OPTIONS
354
355 -s sshcommand
356 Use <sshcommand> instead of ssh. String will be passed to sh.
357 If empty, run <host> instead of ssh <host> xl migrate-receive
358 [-d -e].
359
360 -e On the new <host>, do not wait in the background for the death
361 of the domain. See the corresponding option of the create
362 subcommand.
363
364 -C config
365 Send the specified <config> file instead of the file used on
366 creation of the domain.
367
368 --debug
Display a huge (!) amount of debug information during the
370 migration process.
371
372 -p Leave the domain on the receive side paused after migration.
373
-D Preserve the domain-id in the domain configuration that is
375 transferred such that it will be identical on the destination
376 host, unless that configuration is overridden using the -C
377 option. Note that it is not possible to use this option for a
378 'localhost' migration.
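
For example, a minimal invocation (assuming a guest "vm1" and a
destination host "host2" reachable over ssh):

    xl migrate vm1 host2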
379
380 remus [OPTIONS] domain-id host
381 Enable Remus HA or COLO HA for domain. By default xl relies on ssh
382 as a transport mechanism between the two hosts.
383
384 NOTES
385
Remus support in xl is still in the experimental (proof-of-concept)
phase. Disk replication support is limited to DRBD disks.
388
COLO support in xl is still in the experimental (proof-of-concept)
phase. All options are subject to change in the future.
391
392 COLO disk configuration looks like:
393
394 disk = ['...,colo,colo-host=xxx,colo-port=xxx,colo-export=xxx,active-disk=xxx,hidden-disk=xxx...']
395
396 The supported options are:
397
colo-host   : Secondary host's IP address.
colo-port   : Secondary host's port. An NBD server runs on the
secondary host and listens on this port.
colo-export : The NBD server's disk export name on the secondary host.
active-disk : The secondary guest's writes are buffered to this disk,
and it is used by the secondary.
hidden-disk : The primary's modified contents are buffered in this
disk, and it is used by the secondary.
406
407 COLO network configuration looks like:
408
409 vif = [ '...,forwarddev=xxx,...']
410
411 The supported options are:
412
forwarddev : Forward devices for the primary and the secondary;
they are directly connected.
415
416 OPTIONS
417
418 -i MS
419 Checkpoint domain memory every MS milliseconds (default 200ms).
420
421 -u Disable memory checkpoint compression.
422
423 -s sshcommand
424 Use <sshcommand> instead of ssh. String will be passed to sh.
425 If empty, run <host> instead of ssh <host> xl migrate-receive
426 -r [-e].
427
428 -e On the new <host>, do not wait in the background for the death
429 of the domain. See the corresponding option of the create
430 subcommand.
431
432 -N netbufscript
433 Use <netbufscript> to setup network buffering instead of the
434 default script (/etc/xen/scripts/remus-netbuf-setup).
435
436 -F Run Remus in unsafe mode. Use this option with caution as
437 failover may not work as intended.
438
439 -b Replicate memory checkpoints to /dev/null (blackhole).
440 Generally useful for debugging. Requires enabling unsafe mode.
441
442 -n Disable network output buffering. Requires enabling unsafe
443 mode.
444
445 -d Disable disk replication. Requires enabling unsafe mode.
446
447 -c Enable COLO HA. This conflicts with -i and -b, and memory
448 checkpoint compression must be disabled.
449
450 -p Use userspace COLO Proxy. This option must be used in
451 conjunction with -c.
452
453 pause domain-id
454 Pause a domain. When in a paused state the domain will still
455 consume allocated resources (such as memory), but will not be
456 eligible for scheduling by the Xen hypervisor.
457
458 reboot [OPTIONS] domain-id
459 Reboot a domain. This acts just as if the domain had the reboot
460 command run from the console. The command returns as soon as it
461 has executed the reboot action, which may be significantly earlier
462 than when the domain actually reboots.
463
464 For HVM domains this requires PV drivers to be installed in your
465 guest OS. If PV drivers are not present but you have configured the
466 guest OS to behave appropriately you may be able to use the -F
467 option to trigger a reset button press.
468
469 The behavior of what happens to a domain when it reboots is set by
470 the on_reboot parameter of the domain configuration file when the
471 domain was created.
472
473 OPTIONS
474
-F If the guest does not support PV reboot control then fall back
to sending an ACPI power event (equivalent to the reset option
of the trigger subcommand).
478
479 You should ensure that the guest is configured to behave as
480 expected in response to this event.
481
482 restore [OPTIONS] [configfile] checkpointfile
483 Build a domain from an xl save state file. See save for more info.
484
485 OPTIONS
486
487 -p Do not unpause the domain after restoring it.
488
489 -e Do not wait in the background for the death of the domain on
490 the new host. See the corresponding option of the create
491 subcommand.
492
493 -d Enable debug messages.
494
495 -V, --vncviewer
496 Attach to the domain's VNC server, forking a vncviewer process.
497
498 -A, --vncviewer-autopass
499 Pass the VNC password to vncviewer via stdin.
500
501 save [OPTIONS] domain-id checkpointfile [configfile]
502 Saves a running domain to a state file so that it can be restored
503 later. Once saved, the domain will no longer be running on the
504 system, unless the -c or -p options are used. xl restore restores
505 from this checkpoint file. Passing a config file argument allows
506 the user to manually select the VM config file used to create the
507 domain.
508
509 -c Leave the domain running after creating the snapshot.
510
511 -p Leave the domain paused after creating the snapshot.
512
-D Preserve the domain-id in the domain configuration that is
514 embedded in the state file such that it will be identical when
515 the domain is restored, unless that configuration is
516 overridden. (See the restore operation above).
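
For example, a save/restore round trip for a hypothetical guest
"vm1" (the state file path is arbitrary):

    xl save vm1 /var/lib/xen/save/vm1.chk
    xl restore /var/lib/xen/save/vm1.chk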
517
518 sharing [domain-id]
519 Display the number of shared pages for a specified domain. If no
520 domain is specified it displays information about all domains.
521
522 shutdown [OPTIONS] -a|domain-id
Gracefully shuts down a domain. This coordinates with the domain
OS to perform a graceful shutdown, so there is no guarantee that it
will succeed, and it may take a variable length of time depending
on what services must be shut down in the domain.
527
528 For HVM domains this requires PV drivers to be installed in your
529 guest OS. If PV drivers are not present but you have configured the
530 guest OS to behave appropriately you may be able to use the -F
531 option to trigger a power button press.
532
533 The command returns immediately after signaling the domain unless
534 the -w flag is used.
535
The behavior of what happens to a domain when it shuts down is set
by the on_shutdown parameter of the domain configuration file when
the domain was created.
539
540 OPTIONS
541
542 -a, --all
543 Shutdown all guest domains. Often used when doing a complete
544 shutdown of a Xen system.
545
546 -w, --wait
547 Wait for the domain to complete shutdown before returning. If
548 given once, the wait is for domain shutdown or domain death.
549 If given multiple times, the wait is for domain death only.
550
-F If the guest does not support PV shutdown control then fall back
to sending an ACPI power event (equivalent to the power option
of the trigger subcommand).
554
555 You should ensure that the guest is configured to behave as
556 expected in response to this event.
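
For example, to shut down a hypothetical guest "vm1" and wait for it
to complete, or to take down all guests at once:

    xl shutdown -w vm1
    xl shutdown -a -w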
557
558 sysrq domain-id letter
Send a magic system request to the domain; each type of request
is represented by a different letter. It can be used to send SysRq
requests to Linux guests: see sysrq.txt in your Linux kernel
sources for more information. It requires PV drivers to be
installed in your guest OS.
564
565 trigger domain-id nmi|reset|init|power|sleep|s3resume [VCPU]
566 Send a trigger to a domain, where the trigger can be: nmi, reset,
567 init, power or sleep. Optionally a specific vcpu number can be
568 passed as an argument. This command is only available for HVM
569 domains.
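
For example, to send an ACPI power button press to a hypothetical
HVM guest "vm1":

    xl trigger vm1 power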
570
571 unpause domain-id
572 Moves a domain out of the paused state. This will allow a
573 previously paused domain to now be eligible for scheduling by the
574 Xen hypervisor.
575
576 vcpu-set domain-id vcpu-count
577 Enables the vcpu-count virtual CPUs for the domain in question.
578 Like mem-set, this command can only allocate up to the maximum
579 virtual CPU count configured at boot for the domain.
580
If the vcpu-count is smaller than the current number of active
VCPUs, the highest-numbered VCPUs will be hotplug removed. This
may be important for pinning purposes.
584
585 Attempting to set the VCPUs to a number larger than the initially
586 configured VCPU count is an error. Trying to set VCPUs to < 1 will
587 be quietly ignored.
588
Some guests may need to actually bring the newly added CPU online
after vcpu-set; see the SEE ALSO section for more information.
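
For example, to bring a hypothetical guest "vm1" down to 2 VCPUs
(the highest-numbered VCPUs are removed first):

    xl vcpu-set vm1 2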
591
592 vcpu-list [domain-id]
593 Lists VCPU information for a specific domain. If no domain is
594 specified, VCPU information for all domains will be provided.
595
596 vcpu-pin [-f|--force] domain-id vcpu cpus hard cpus soft
597 Set hard and soft affinity for a vcpu of <domain-id>. Normally
598 VCPUs can float between available CPUs whenever Xen deems a
599 different run state is appropriate.
600
601 Hard affinity can be used to restrict this, by ensuring certain
602 VCPUs can only run on certain physical CPUs. Soft affinity
603 specifies a preferred set of CPUs. Soft affinity needs special
604 support in the scheduler, which is only provided in credit1.
605
606 The keyword all can be used to apply the hard and soft affinity
607 masks to all the VCPUs in the domain. The symbol '-' can be used to
608 leave either hard or soft affinity alone.
609
610 For example:
611
612 xl vcpu-pin 0 3 - 6-9
613
614 will set soft affinity for vCPU 3 of domain 0 to pCPUs 6,7,8 and 9,
615 leaving its hard affinity untouched. On the other hand:
616
617 xl vcpu-pin 0 3 3,4 6-9
618
619 will set both hard and soft affinity, the former to pCPUs 3 and 4,
620 the latter to pCPUs 6,7,8, and 9.
621
Specifying -f or --force will remove a temporary pinning done by
the operating system (normally removing such a pinning should be
left to the operating system). If a temporary pinning is active for
a vcpu, the affinity of this vcpu can't be changed without this
option.
626
627 vm-list
628 Prints information about guests. This list excludes information
629 about service or auxiliary domains such as dom0 and stubdoms.
630
631 EXAMPLE
632
633 An example format for the list is as follows:
634
635 UUID ID name
636 59e1cf6c-6ab9-4879-90e7-adc8d1c63bf5 2 win
637 50bc8f75-81d0-4d53-b2e6-95cb44e2682e 3 linux
638
639 vncviewer [OPTIONS] domain-id
640 Attach to the domain's VNC server, forking a vncviewer process.
641
642 OPTIONS
643
644 --autopass
645 Pass the VNC password to vncviewer via stdin.

XEN HOST SUBCOMMANDS
648 debug-keys keys
649 Send debug keys to Xen. It is the same as pressing the Xen
650 "conswitch" (Ctrl-A by default) three times and then pressing
651 "keys".
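
For example, 'h' asks the hypervisor to print its debug-key help to
the Xen console:

    xl debug-keys h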
652
653 set-parameters params
654 Set hypervisor parameters as specified in params. This allows for
some boot parameters of the hypervisor to be modified in the
running system.
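
For example, a sketch that raises the hypervisor log level (assuming
loglvl is runtime-modifiable on your version of Xen):

    xl set-parameters loglvl=all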
657
658 dmesg [OPTIONS]
659 Reads the Xen message buffer, similar to dmesg on a Linux system.
660 The buffer contains informational, warning, and error messages
661 created during Xen's boot process. If you are having problems with
662 Xen, this is one of the first places to look as part of problem
663 determination.
664
665 OPTIONS
666
667 -c, --clear
668 Clears Xen's message buffer.
669
670 info [OPTIONS]
671 Print information about the Xen host in name : value format. When
672 reporting a Xen bug, please provide this information as part of the
673 bug report. See
674 https://wiki.xenproject.org/wiki/Reporting_Bugs_against_Xen_Project
675 on how to report Xen bugs.
676
677 Sample output looks as follows:
678
679 host : scarlett
680 release : 3.1.0-rc4+
681 version : #1001 SMP Wed Oct 19 11:09:54 UTC 2011
682 machine : x86_64
683 nr_cpus : 4
684 nr_nodes : 1
685 cores_per_socket : 4
686 threads_per_core : 1
687 cpu_mhz : 2266
688 hw_caps : bfebfbff:28100800:00000000:00003b40:009ce3bd:00000000:00000001:00000000
689 virt_caps : hvm hvm_directio
690 total_memory : 6141
691 free_memory : 4274
692 free_cpus : 0
693 outstanding_claims : 0
694 xen_major : 4
695 xen_minor : 2
696 xen_extra : -unstable
697 xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
698 xen_scheduler : credit
699 xen_pagesize : 4096
700 platform_params : virt_start=0xffff800000000000
701 xen_changeset : Wed Nov 02 17:09:09 2011 +0000 24066:54a5e994a241
702 xen_commandline : com1=115200,8n1 guest_loglvl=all dom0_mem=750M console=com1
703 cc_compiler : gcc version 4.4.5 (Debian 4.4.5-8)
704 cc_compile_by : sstabellini
705 cc_compile_domain : uk.xensource.com
706 cc_compile_date : Tue Nov 8 12:03:05 UTC 2011
707 xend_config_format : 4
708
709 FIELDS
710
711 Not all fields will be explained here, but some of the less obvious
712 ones deserve explanation:
713
714 hw_caps
715 A vector showing what hardware capabilities are supported by
716 your processor. This is equivalent to, though more cryptic,
717 the flags field in /proc/cpuinfo on a normal Linux machine:
they both derive from the feature bits returned by the CPUID
instruction on x86 platforms.
720
721 free_memory
722 Available memory (in MB) not allocated to Xen, or any other
723 domains, or claimed for domains.
724
725 outstanding_claims
726 When a claim call is done (see xl.conf(5)) a reservation for a
727 specific amount of pages is set and also a global value is
728 incremented. This global value (outstanding_claims) is then
729 reduced as the domain's memory is populated and eventually
730 reaches zero. Most of the time the value will be zero, but if
731 you are launching multiple guests, and claim_mode is enabled,
732 this value can increase/decrease. Note that the value also
733 affects the free_memory - as it will reflect the free memory in
734 the hypervisor minus the outstanding pages claimed for guests.
735 See xl info claims parameter for detailed listing.
736
737 xen_caps
738 The Xen version and architecture. Architecture values can be
739 one of: x86_32, x86_32p (i.e. PAE enabled), x86_64, ia64.
740
741 xen_changeset
742 The Xen mercurial changeset id. Very useful for determining
743 exactly what version of code your Xen system was built from.
744
745 OPTIONS
746
747 -n, --numa
748 List host NUMA topology information
749
750 top Executes the xentop(1) command, which provides real time monitoring
751 of domains. Xentop has a curses interface, and is reasonably self
752 explanatory.
753
754 uptime
755 Prints the current uptime of the domains running.
756
757 claims
758 Prints information about outstanding claims by the guests. This
759 provides the outstanding claims and currently populated memory
760 count for the guests. These values added up reflect the global
761 outstanding claim value, which is provided via the info argument,
outstanding_claims value. The Mem column shows the sum of the
outstanding claims and the total amount of memory currently
allocated to the guest.
765
766 EXAMPLE
767
768 An example format for the list is as follows:
769
770 Name ID Mem VCPUs State Time(s) Claimed
771 Domain-0 0 2047 4 r----- 19.7 0
772 OL5 2 2048 1 --p--- 0.0 847
773 OL6 3 1024 4 r----- 5.9 0
774 Windows_XP 4 2047 1 --p--- 0.0 1989
775
In which it can be seen that the OL5 guest still has 847MB of
claimed memory (out of the total 2048MB, of which 1201MB has been
allocated to the guest).

SCHEDULER SUBCOMMANDS
781 Xen ships with a number of domain schedulers, which can be set at boot
782 time with the sched= parameter on the Xen command line. By default
783 credit is used for scheduling.
784
785 sched-credit [OPTIONS]
786 Set or get credit (aka credit1) scheduler parameters. The credit
787 scheduler is a proportional fair share CPU scheduler built from the
788 ground up to be work conserving on SMP hosts.
789
790 Each domain (including Domain0) is assigned a weight and a cap.
791
792 OPTIONS
793
794 -d DOMAIN, --domain=DOMAIN
795 Specify domain for which scheduler parameters are to be
796 modified or retrieved. Mandatory for modifying scheduler
797 parameters.
798
799 -w WEIGHT, --weight=WEIGHT
800 A domain with a weight of 512 will get twice as much CPU as a
801 domain with a weight of 256 on a contended host. Legal weights
802 range from 1 to 65535 and the default is 256.
803
804 -c CAP, --cap=CAP
805 The cap optionally fixes the maximum amount of CPU a domain
806 will be able to consume, even if the host system has idle CPU
807 cycles. The cap is expressed in percentage of one physical CPU:
808 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, etc.
809 The default, 0, means there is no upper cap.
810
811 NB: Many systems have features that will scale down the
812 computing power of a cpu that is not 100% utilized. This can
813 be in the operating system, but can also sometimes be below the
814 operating system in the BIOS. If you set a cap such that
815 individual cores are running at less than 100%, this may have
816 an impact on the performance of your workload over and above
817 the impact of the cap. For example, if your processor runs at
818 2GHz, and you cap a vm at 50%, the power management system may
819 also reduce the clock speed to 1GHz; the effect will be that
820 your VM gets 25% of the available power (50% of 1GHz) rather
821 than 50% (50% of 2GHz). If you are not getting the performance
822 you expect, look at performance and cpufreq options in your
823 operating system and your BIOS.
824
825 -p CPUPOOL, --cpupool=CPUPOOL
826 Restrict output to domains in the specified cpupool.
827
828 -s, --schedparam
829 Specify to list or set pool-wide scheduler parameters.
830
831 -t TSLICE, --tslice_ms=TSLICE
832 Timeslice tells the scheduler how long to allow VMs to run
833 before pre-empting. The default is 30ms. Valid ranges are 1ms
834 to 1000ms. The length of the timeslice (in ms) must be higher
835 than the length of the ratelimit (see below).
836
837 -r RLIMIT, --ratelimit_us=RLIMIT
838 Ratelimit attempts to limit the number of schedules per second.
839 It sets a minimum amount of time (in microseconds) a VM must
840 run before we will allow a higher-priority VM to pre-empt it.
841 The default value is 1000 microseconds (1ms). Valid range is
842 100 to 500000 (500ms). The ratelimit length must be lower than
843 the timeslice length.
844
845 -m DELAY, --migration_delay_us=DELAY
Migration delay specifies for how long a vCPU, after it stopped
running, should be considered "cache-hot". Basically, if less than
DELAY microseconds have passed since the vCPU last ran on a CPU, it
is likely that most of the vCPU's working set is still in that
CPU's cache, and therefore the vCPU is not migrated.

The default is 0. The maximum is 100 ms. This can be effective at
preventing vCPUs from bouncing among CPUs too quickly, but, at the
same time, the scheduler stops being fully work-conserving.
855
856 COMBINATION
857
858 The following is the effect of combining the above options:
859
<nothing>             : List all domain params and sched params from all pools
-d [domid]            : List domain params for domain [domid]
-d [domid] [params]   : Set domain params for domain [domid]
-p [pool]             : List all domains and sched params for [pool]
-s                    : List sched params for poolid 0
-s [params]           : Set sched params for poolid 0
-p [pool] -s          : List sched params for [pool]
-p [pool] -s [params] : Set sched params for [pool]
-p [pool] -d...       : Illegal
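
EXAMPLE

A minimal sketch (assuming a guest named "vm1"): give it twice the
default weight and cap it at one full physical CPU:

    xl sched-credit -d vm1 -w 512 -c 100
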
871 sched-credit2 [OPTIONS]
872 Set or get credit2 scheduler parameters. The credit2 scheduler is
873 a proportional fair share CPU scheduler built from the ground up to
874 be work conserving on SMP hosts.
875
876 Each domain (including Domain0) is assigned a weight.
877
878 OPTIONS
879
880 -d DOMAIN, --domain=DOMAIN
881 Specify domain for which scheduler parameters are to be
882 modified or retrieved. Mandatory for modifying scheduler
883 parameters.
884
885 -w WEIGHT, --weight=WEIGHT
886 A domain with a weight of 512 will get twice as much CPU as a
887 domain with a weight of 256 on a contended host. Legal weights
888 range from 1 to 65535 and the default is 256.
889
890 -p CPUPOOL, --cpupool=CPUPOOL
891 Restrict output to domains in the specified cpupool.
892
893 -s, --schedparam
894 Specify to list or set pool-wide scheduler parameters.
895
896 -r RLIMIT, --ratelimit_us=RLIMIT
897 Attempts to limit the rate of context switching. It is
basically the same as --ratelimit_us in sched-credit.
899
900 sched-rtds [OPTIONS]
901 Set or get rtds (Real Time Deferrable Server) scheduler parameters.
902 This rt scheduler applies Preemptive Global Earliest Deadline First
903 real-time scheduling algorithm to schedule VCPUs in the system.
904 Each VCPU has a dedicated period, budget and extratime. While
905 scheduled, a VCPU burns its budget. A VCPU has its budget
replenished at the beginning of each period; unused budget is
907 discarded at the end of each period. A VCPU with extratime set
908 gets extra time from the unreserved system resource.
909
910 OPTIONS
911
912 -d DOMAIN, --domain=DOMAIN
913 Specify domain for which scheduler parameters are to be
914 modified or retrieved. Mandatory for modifying scheduler
915 parameters.
916
917 -v VCPUID/all, --vcpuid=VCPUID/all
918 Specify vcpu for which scheduler parameters are to be modified
919 or retrieved.
920
921 -p PERIOD, --period=PERIOD
922 Period of time, in microseconds, over which to replenish the
923 budget.
924
925 -b BUDGET, --budget=BUDGET
926 Amount of time, in microseconds, that the VCPU will be allowed
927 to run every period.
928
929 -e Extratime, --extratime=Extratime
930 Binary flag to decide if the VCPU will be allowed to get extra
931 time from the unreserved system resource.
932
933 -c CPUPOOL, --cpupool=CPUPOOL
934 Restrict output to domains in the specified cpupool.
935
936 EXAMPLE
937
938 1) Use -v all to see the budget and period of all the VCPUs of
939 all the domains:
940
941 xl sched-rtds -v all
942 Cpupool Pool-0: sched=RTDS
943 Name ID VCPU Period Budget Extratime
944 Domain-0 0 0 10000 4000 yes
945 vm1 2 0 300 150 yes
946 vm1 2 1 400 200 yes
947 vm1 2 2 10000 4000 yes
948 vm1 2 3 1000 500 yes
949 vm2 4 0 10000 4000 yes
950 vm2 4 1 10000 4000 yes
951
952 Without any arguments, it will output the default scheduling
953 parameters for each domain:
954
955 xl sched-rtds
956 Cpupool Pool-0: sched=RTDS
957 Name ID Period Budget Extratime
958 Domain-0 0 10000 4000 yes
959 vm1 2 10000 4000 yes
960 vm2 4 10000 4000 yes
961
962 2) Use, for instance, -d vm1, -v all to see the budget and
963 period of all VCPUs of a specific domain (vm1):
964
965 xl sched-rtds -d vm1 -v all
966 Name ID VCPU Period Budget Extratime
967 vm1 2 0 300 150 yes
968 vm1 2 1 400 200 yes
969 vm1 2 2 10000 4000 yes
970 vm1 2 3 1000 500 yes
971
972 To see the parameters of a subset of the VCPUs of a domain,
973 use:
974
975 xl sched-rtds -d vm1 -v 0 -v 3
976 Name ID VCPU Period Budget Extratime
977 vm1 2 0 300 150 yes
978 vm1 2 3 1000 500 yes
979
980 If no -v is specified, the default scheduling parameters for
981 the domain are shown:
982
983 xl sched-rtds -d vm1
984 Name ID Period Budget Extratime
985 vm1 2 10000 4000 yes
986
987 3) Users can set the budget and period of multiple VCPUs of a
988 specific domain with only one command, e.g., "xl sched-rtds -d
989 vm1 -v 0 -p 100 -b 50 -e 1 -v 3 -p 300 -b 150 -e 0".
990
991 To change the parameters of all the VCPUs of a domain, use -v
992 all, e.g., "xl sched-rtds -d vm1 -v all -p 500 -b 250 -e 1".

CPUPOOLS COMMANDS
Xen can group the physical cpus of a server in cpu-pools. Each physical
CPU is assigned to at most one cpu-pool. Domains are each restricted to
997 a single cpu-pool. Scheduling does not cross cpu-pool boundaries, so
998 each cpu-pool has its own scheduler. Physical cpus and domains can be
999 moved from one cpu-pool to another only by an explicit command. Cpu-
1000 pools can be specified either by name or by id.
1001
1002 cpupool-create [OPTIONS] [configfile] [variable=value ...]
Create a cpu pool based on a config from a configfile or command-line
1004 parameters. Variable settings from the configfile may be altered
1005 by specifying new or additional assignments on the command line.
1006
1007 See the xlcpupool.cfg(5) manpage for more information.
1008
1009 OPTIONS
1010
1011 -f=FILE, --defconfig=FILE
1012 Use the given configuration file.
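
For example, a hypothetical sketch (pool name, scheduler and CPU
list are illustrative; see xlcpupool.cfg(5) for the keys):

    # /etc/xen/mypool.cfg
    name = "mypool"
    sched = "credit2"
    cpus = "4-7"

    xl cpupool-create /etc/xen/mypool.cfg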
1013
1014 cpupool-list [OPTIONS] [cpu-pool]
1015 List CPU pools on the host.
1016
1017 OPTIONS
1018
1019 -c, --cpus
1020 If this option is specified, xl prints a list of CPUs used by
1021 cpu-pool.
1022
1023 cpupool-destroy cpu-pool
1024 Deactivates a cpu pool. This is possible only if no domain is
1025 active in the cpu-pool.
1026
cpupool-rename cpu-pool newname
1028 Renames a cpu-pool to newname.
1029
1030 cpupool-cpu-add cpu-pool cpus|node:nodes
1031 Adds one or more CPUs or NUMA nodes to cpu-pool. CPUs and NUMA
1032 nodes can be specified as single CPU/node IDs or as ranges.
1033
1034 For example:
1035
1036 (a) xl cpupool-cpu-add mypool 4
1037 (b) xl cpupool-cpu-add mypool 1,5,10-16,^13
1038 (c) xl cpupool-cpu-add mypool node:0,nodes:2-3,^10-12,8
1039
1040 means adding CPU 4 to mypool, in (a); adding CPUs
1041 1,5,10,11,12,14,15 and 16, in (b); and adding all the CPUs of NUMA
1042 nodes 0, 2 and 3, plus CPU 8, but keeping out CPUs 10,11,12, in
1043 (c).
1044
All the specified CPUs that can be added to the cpupool will be
added to it. If some CPUs can't be (e.g., because they're already
part of another cpupool), an error is reported for each of them.
1048
1049 cpupool-cpu-remove cpu-pool cpus|node:nodes
1050 Removes one or more CPUs or NUMA nodes from cpu-pool. CPUs and NUMA
1051 nodes can be specified as single CPU/node IDs or as ranges, using
1052 the exact same syntax as in cpupool-cpu-add above.
1053
1054 cpupool-migrate domain-id cpu-pool
1055 Moves a domain specified by domain-id or domain-name into a cpu-
1056 pool. Domain-0 can't be moved to another cpu-pool.
1057
1058 cpupool-numa-split
1059 Splits up the machine into one cpu-pool per numa node.

VIRTUAL DEVICE COMMANDS
1062 Most virtual devices can be added and removed while guests are running,
1063 assuming that the necessary support exists in the guest OS. The effect
1064 to the guest OS is much the same as any hotplug event.
1065
1066 BLOCK DEVICES
1067 block-attach domain-id disc-spec-component(s) ...
1068 Create a new virtual block device and attach it to the specified
1069 domain. A disc specification is in the same format used for the
1070 disk variable in the domain config file. See
1071 xl-disk-configuration(5). This will trigger a hotplug event for the
1072 guest.
1073
1074 Note that only PV block devices are supported by block-attach.
Requests to attach emulated devices (e.g., vdev=hdc) will result in
1076 only the PV view being available to the guest.
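
For example, to attach a dom0 block device (a hypothetical LVM
volume) to guest "vm1" as writable xvdb:

    xl block-attach vm1 '/dev/vg0/vm1-data,raw,xvdb,rw'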
1077
1078 block-detach [OPTIONS] domain-id devid
1079 Detach a domain's virtual block device. devid may be the symbolic
1080 name or the numeric device id given to the device by domain 0. You
1081 will need to run xl block-list to determine that number.
1082
1083 Detaching the device requires the cooperation of the domain. If
1084 the domain fails to release the device (perhaps because the domain
1085 is hung or is still using the device), the detach will fail.
1086
1087 OPTIONS
1088
1089 --force
1090 If this parameter is specified the device will be forcefully
1091 detached, which may cause IO errors in the domain and possibly
a guest crash.
1093
1094 block-list domain-id
1095 List virtual block devices for a domain.
1096
1097 cd-insert domain-id virtualdevice target
1098 Insert a cdrom into a guest domain's existing virtual cd drive. The
1099 virtual drive must already exist but can be empty. How the device
1100 should be presented to the guest domain is specified by the
1101 virtualdevice parameter; for example "hdc". Parameter target is the
1102 target path in the backend domain (usually domain 0) to be
1103 exported; can be a block device or a file etc. See target in
1104 xl-disk-configuration(5).
1105
1106 Only works with HVM domains.
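
For example, to insert a hypothetical ISO image into the virtual
drive "hdc" of guest "vm1", and later eject it again:

    xl cd-insert vm1 hdc /srv/iso/install.iso
    xl cd-eject vm1 hdc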
1107
1108 cd-eject domain-id virtualdevice
1109 Eject a cdrom from a guest domain's virtual cd drive, specified by
1110 virtualdevice. Only works with HVM domains.
1111
1112 NETWORK DEVICES
1113 network-attach domain-id network-device
1114 Creates a new network device in the domain specified by domain-id.
1115 network-device describes the device to attach, using the same
1116 format as the vif string in the domain config file. See xl.cfg(5)
1117 and xl-network-configuration(5) for more information.
1118
1119 Note that only attaching PV network interfaces is supported.
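
For example, to hot-plug a PV NIC on bridge xenbr0 with a fixed MAC
address (both values illustrative) into guest "vm1":

    xl network-attach vm1 'bridge=xenbr0,mac=00:16:3e:01:02:03'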
1120
1121 network-detach domain-id devid|mac
1122 Removes the network device from the domain specified by domain-id.
1123 devid is the virtual interface device number within the domain
1124 (i.e. the 3 in vif22.3). Alternatively, the mac address can be used
1125 to select the virtual interface to detach.
1126
1127 network-list domain-id
1128 List virtual network interfaces for a domain.
1129
1130 CHANNEL DEVICES
1131 channel-list domain-id
1132 List virtual channel interfaces for a domain.
1133
1134 VIRTUAL TRUSTED PLATFORM MODULE (vTPM) DEVICES
1135 vtpm-attach domain-id vtpm-device
1136 Creates a new vtpm (virtual Trusted Platform Module) device in the
1137 domain specified by domain-id. vtpm-device describes the device to
1138 attach, using the same format as the vtpm string in the domain
1139 config file. See xl.cfg(5) for more information.
1140
1141 vtpm-detach domain-id devid|uuid
1142 Removes the vtpm device from the domain specified by domain-id.
1143 devid is the numeric device id given to the virtual Trusted
1144 Platform Module device. You will need to run xl vtpm-list to
1145 determine that number. Alternatively, the uuid of the vtpm can be
1146 used to select the virtual device to detach.
1147
1148 vtpm-list domain-id
1149 List virtual Trusted Platform Modules for a domain.
1150
1151 VDISPL DEVICES
1152 vdispl-attach domain-id vdispl-device
1153 Creates a new vdispl device in the domain specified by domain-id.
1154 vdispl-device describes the device to attach, using the same format
1155 as the vdispl string in the domain config file. See xl.cfg(5) for
1156 more information.
1157
1158 NOTES
1159
As a semicolon is used as a separator within the vdispl-device
string, quote or escape it when invoking xl from the shell.
1162
1163 EXAMPLE
1164
xl vdispl-attach DomU \
    connectors='id0:1920x1080;id1:800x600;id2:640x480'

or

xl vdispl-attach DomU \
    connectors=id0:1920x1080\;id1:800x600\;id2:640x480
1172
1173 vdispl-detach domain-id dev-id
1174 Removes the vdispl device specified by dev-id from the domain
1175 specified by domain-id.
1176
1177 vdispl-list domain-id
1178 List virtual displays for a domain.
1179
1180 VSND DEVICES
1181 vsnd-attach domain-id vsnd-item vsnd-item ...
1182 Creates a new vsnd device in the domain specified by domain-id.
1183 vsnd-item's describe the vsnd device to attach, using the same
1184 format as the VSND_ITEM_SPEC string in the domain config file. See
1185 xl.cfg(5) for more information.
1186
1187 EXAMPLE
1188
xl vsnd-attach DomU \
    'CARD, short-name=Main, sample-formats=s16_le;s8;u32_be' \
    'PCM, name=Main' 'STREAM, id=0, type=p' \
    'STREAM, id=1, type=c, channels-max=2'
1192
1193 vsnd-detach domain-id dev-id
1194 Removes the vsnd device specified by dev-id from the domain
1195 specified by domain-id.
1196
1197 vsnd-list domain-id
1198 List vsnd devices for a domain.
1199
1200 KEYBOARD DEVICES
1201 vkb-attach domain-id vkb-device
1202 Creates a new keyboard device in the domain specified by domain-id.
1203 vkb-device describes the device to attach, using the same format as
1204 the VKB_SPEC_STRING string in the domain config file. See xl.cfg(5)
1205 for more information.
1206
1207 vkb-detach domain-id devid
1208 Removes the keyboard device from the domain specified by domain-id.
devid is the virtual interface device number within the domain.
1210
1211 vkb-list domain-id
List virtual keyboard devices for a domain.

PCI PASS-THROUGH
1215 pci-assignable-list [-n]
1216 List all the BDF of assignable PCI devices. See
1217 xl-pci-configuration(5) for more information. If the -n option is
1218 specified then any name supplied when the device was made
1219 assignable will also be displayed.
1220
1221 These are devices in the system which are configured to be
1222 available for passthrough and are bound to a suitable PCI backend
1223 driver in domain 0 rather than a real driver.
1224
1225 pci-assignable-add [-n NAME] BDF
1226 Make the device at BDF assignable to guests. See
1227 xl-pci-configuration(5) for more information. If the -n option is
supplied then the assignable device entry will be named with the
given NAME.
1230
1231 This will bind the device to the pciback driver and assign it to
1232 the "quarantine domain". If it is already bound to a driver, it
1233 will first be unbound, and the original driver stored so that it
1234 can be re-bound to the same driver later if desired. If the device
1235 is already bound, it will assign it to the quarantine domain and
1236 return success.
1237
1238 CAUTION: This will make the device unusable by Domain 0 until it is
1239 returned with pci-assignable-remove. Care should therefore be
1240 taken not to do this on a device critical to domain 0's operation,
1241 such as storage controllers, network interfaces, or GPUs that are
1242 currently being used.
1243
1244 pci-assignable-remove [-r] BDF|NAME
1245 Make a device non-assignable to guests. The device may be
1246 identified either by its BDF or the NAME supplied when the device
1247 was made assignable. See xl-pci-configuration(5) for more
1248 information.
1249
1250 This will at least unbind the device from pciback, and re-assign it
1251 from the "quarantine domain" back to domain 0. If the -r option is
1252 specified, it will also attempt to re-bind the device to its
1253 original driver, making it usable by Domain 0 again. If the device
1254 is not bound to pciback, it will return success.
1255
1256 Note that this functionality will work even for devices which were
1257 not made assignable by pci-assignable-add. This can be used to
1258 allow dom0 to access devices which were automatically quarantined
1259 by Xen after domain destruction as a result of Xen's
1260 iommu=quarantine command-line default.
1261
1262 As always, this should only be done if you trust the guest, or are
1263 confident that the particular device you're re-assigning to dom0
1264 will cancel all in-flight DMA on FLR.
1265
1266 pci-attach domain-id PCI_SPEC_STRING
1267 Hot-plug a new pass-through pci device to the specified domain. See
1268 xl-pci-configuration(5) for more information.
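
For example, a sketch of the whole flow for the device at BDF
01:05.1: make it assignable, then hot-plug it into a hypothetical
guest "vm1":

    xl pci-assignable-add 01:05.1
    xl pci-attach vm1 01:05.1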
1269
1270 pci-detach [OPTIONS] domain-id PCI_SPEC_STRING
1271 Hot-unplug a pci device that was previously passed through to a
1272 domain. See xl-pci-configuration(5) for more information.
1273
1274 OPTIONS
1275
-f If this parameter is specified, xl is going to forcefully
remove the device even without the guest domain's collaboration.
1278
1279 pci-list domain-id
1280 List the BDF of pci devices passed through to a domain.

USB PASS-THROUGH
1283 usbctrl-attach domain-id usbctrl-device
1284 Create a new USB controller in the domain specified by domain-id,
usbctrl-device describes the device to attach, using the form
1286 "KEY=VALUE KEY=VALUE ..." where KEY=VALUE has the same meaning as
1287 the usbctrl description in the domain config file. See xl.cfg(5)
1288 for more information.
1289
1290 usbctrl-detach domain-id devid
1291 Destroy a USB controller from the specified domain. devid is devid
1292 of the USB controller.
1293
1294 usbdev-attach domain-id usbdev-device
1295 Hot-plug a new pass-through USB device to the domain specified by
domain-id, usbdev-device describes the device to attach, using the form
1297 "KEY=VALUE KEY=VALUE ..." where KEY=VALUE has the same meaning as
1298 the usbdev description in the domain config file. See xl.cfg(5)
1299 for more information.
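
For example, to create a USB 2.0 controller with 4 ports in a
hypothetical guest "vm1" and pass through the host device at bus 2,
address 5 (values illustrative):

    xl usbctrl-attach vm1 version=2 ports=4
    xl usbdev-attach vm1 hostbus=2 hostaddr=5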
1300
1301 usbdev-detach domain-id controller=devid port=number
1302 Hot-unplug a previously assigned USB device from a domain.
controller=devid and port=number identify the USB controller:port
in the guest domain to which the USB device is attached.
1305
1306 usb-list domain-id
1307 List pass-through usb devices for a domain.

DEVICE-MODEL INTERACTION
1310 qemu-monitor-command domain-id command
1311 Issue a monitor command to the device model of the domain specified
1312 by domain-id. command can be any valid command qemu understands.
1313 This can be e.g. used to add non-standard devices or devices with
1314 non-standard parameters to a domain. The output of the command is
1315 printed to stdout.
1316
1317 Warning: This qemu monitor access is provided for convenience when
1318 debugging, troubleshooting, and experimenting. Its use is not
1319 supported by the Xen Project.
1320
1321 Specifically, not all information displayed by the qemu monitor
1322 will necessarily be accurate or complete, because in a Xen system
1323 qemu does not have a complete view of the guest.
1324
1325 Furthermore, modifying the guest's setup via the qemu monitor may
1326 conflict with the Xen toolstack's assumptions. Resulting problems
1327 may include, but are not limited to: guest crashes; toolstack error
1328 messages; inability to migrate the guest; and security
1329 vulnerabilities which are not covered by the Xen Project security
1330 response policy.
1331
1332 EXAMPLE
1333
Obtain information about USB devices connected to a domain via the
device model (only!):
1336
1337 xl qemu-monitor-command vm1 'info usb'
1338 Device 0.2, Port 5, Speed 480 Mb/s, Product Mass Storage

FLASK
1341 FLASK is a security framework that defines a mandatory access control
1342 policy providing fine-grained controls over Xen domains, allowing the
1343 policy writer to define what interactions between domains, devices, and
the hypervisor are permitted. Some examples of what you can do using
XSM/FLASK:
1346 - Prevent two domains from communicating via event channels or grants
1347 - Control which domains can use device passthrough (and which devices)
1348 - Restrict or audit operations performed by privileged domains
- Prevent a privileged domain from arbitrarily mapping pages from
other domains.
1352
1353 You can find more details on how to use FLASK and an example security
1354 policy here:
1355 <https://xenbits.xenproject.org/docs/unstable/misc/xsm-flask.txt>
1356
1357 getenforce
1358 Determine if the FLASK security module is loaded and enforcing its
1359 policy.
1360
1361 setenforce 1|0|Enforcing|Permissive
1362 Enable or disable enforcing of the FLASK access controls. The
1363 default is permissive, but this can be changed to enforcing by
1364 specifying "flask=enforcing" or "flask=late" on the hypervisor's
1365 command line.
1366
1367 loadpolicy policy-file
1368 Load FLASK policy from the given policy file. The initial policy is
1369 provided to the hypervisor as a multiboot module; this command
1370 allows runtime updates to the policy. Loading new security policy
1371 will reset runtime changes to device labels.
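
For example, to check the current FLASK status and then switch to
enforcing mode at runtime:

    xl getenforce
    xl setenforce Enforcing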

PLATFORM SHARED RESOURCE MONITORING/CONTROL
1374 Intel Haswell and later server platforms offer shared resource
1375 monitoring and control technologies. The availability of these
1376 technologies and the hardware capabilities can be shown with psr-
1377 hwinfo.
1378
1379 See <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html> for
1380 more information.
1381
1382 psr-hwinfo [OPTIONS]
1383 Show Platform Shared Resource (PSR) hardware information.
1384
1385 OPTIONS
1386
1387 -m, --cmt
1388 Show Cache Monitoring Technology (CMT) hardware information.
1389
1390 -a, --cat
1391 Show Cache Allocation Technology (CAT) hardware information.
1392
1393 CACHE MONITORING TECHNOLOGY
Intel Haswell and later server platforms offer monitoring capability in
each logical processor to measure specific platform shared resource
metrics, for example, L3 cache occupancy. In the Xen implementation,
the monitoring granularity is the domain level. To monitor a specific
domain, just attach the domain id to the monitoring service. When the
domain doesn't need to be monitored any more, detach the domain id
from the monitoring service.
1401
1402 Intel Broadwell and later server platforms also offer total/local
1403 memory bandwidth monitoring. Xen supports per-domain monitoring for
1404 these two additional monitoring types. Both memory bandwidth monitoring
and L3 cache occupancy monitoring share the same underlying
monitoring service. Once a domain is attached to the monitoring
1407 service, monitoring data can be shown for any of these monitoring
1408 types.
1409
1410 There is no cache monitoring and memory bandwidth monitoring on L2
1411 cache so far.
1412
1413 psr-cmt-attach domain-id
1414 attach: Attach the platform shared resource monitoring service to a
1415 domain.
1416
1417 psr-cmt-detach domain-id
1418 detach: Detach the platform shared resource monitoring service from
1419 a domain.
1420
1421 psr-cmt-show psr-monitor-type [domain-id]
1422 Show monitoring data for a certain domain or all domains. Current
1423 supported monitor types are:
- "cache-occupancy": showing the L3 cache occupancy (KB).
- "total-mem-bandwidth": showing the total memory bandwidth (KB/s).
- "local-mem-bandwidth": showing the local memory bandwidth (KB/s).
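
For example, a sketch that attaches the monitoring service to the
domain with id 1 and shows its L3 cache occupancy:

    xl psr-cmt-attach 1
    xl psr-cmt-show cache-occupancy 1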
1427
1428 CACHE ALLOCATION TECHNOLOGY
1429 Intel Broadwell and later server platforms offer capabilities to
1430 configure and make use of the Cache Allocation Technology (CAT)
1431 mechanisms, which enable more cache resources (i.e. L3/L2 cache) to be
1432 made available for high priority applications. In the Xen
1433 implementation, CAT is used to control cache allocation on VM basis. To
1434 enforce cache on a specific domain, just set capacity bitmasks (CBM)
1435 for the domain.
1436
1437 Intel Broadwell and later server platforms also offer Code/Data
1438 Prioritization (CDP) for cache allocations, which support specifying
1439 code or data cache for applications. CDP is used on a per VM basis in
1440 the Xen implementation. To specify code or data CBM for the domain, CDP
1441 feature must be enabled and CBM type options need to be specified when
1442 setting CBM, and the type options (code and data) are mutually
1443 exclusive. There is no CDP support on L2 so far.
1444
1445 psr-cat-set [OPTIONS] domain-id cbm
Set cache capacity bitmasks (CBM) for a domain. For how to specify
1447 cbm please refer to
1448 <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>.
1449
1450 OPTIONS
1451
1452 -s SOCKET, --socket=SOCKET
1453 Specify the socket to process, otherwise all sockets are
1454 processed.
1455
1456 -l LEVEL, --level=LEVEL
1457 Specify the cache level to process, otherwise the last level
1458 cache (L3) is processed.
1459
1460 -c, --code
1461 Set code CBM when CDP is enabled.
1462
1463 -d, --data
1464 Set data CBM when CDP is enabled.
1465
1466 psr-cat-show [OPTIONS] [domain-id]
1467 Show CAT settings for a certain domain or all domains.
1468
1469 OPTIONS
1470
1471 -l LEVEL, --level=LEVEL
1472 Specify the cache level to process, otherwise the last level
1473 cache (L3) is processed.
1474
1475 Memory Bandwidth Allocation
1476 Intel Skylake and later server platforms offer capabilities to
1477 configure and make use of the Memory Bandwidth Allocation (MBA)
mechanisms, which provide OS/VMMs the ability to slow down misbehaving
1479 apps/VMs by using a credit-based throttling mechanism. In the Xen
1480 implementation, MBA is used to control memory bandwidth on VM basis. To
1481 enforce bandwidth on a specific domain, just set throttling value
1482 (THRTL) for the domain.
1483
1484 psr-mba-set [OPTIONS] domain-id thrtl
1485 Set throttling value (THRTL) for a domain. For how to specify thrtl
1486 please refer to
1487 <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>.
1488
1489 OPTIONS
1490
1491 -s SOCKET, --socket=SOCKET
1492 Specify the socket to process, otherwise all sockets are
1493 processed.
1494
1495 psr-mba-show [domain-id]
Show MBA settings for a certain domain or all domains. For linear
mode, it shows the decimal value. For non-linear mode, it shows the
hexadecimal value.

ENVIRONMENT VARIABLES
1501 LIBXL_DISK_BACKEND_UNTRUSTED
1502 Set this environment variable to "1" to suggest to the guest that
1503 the disk backend shouldn't be trusted. If the variable is absent or
1504 set to "0", the backend will be trusted.
1505
1506 LIBXL_NIC_BACKEND_UNTRUSTED
1507 Set this environment variable to "1" to suggest to the guest that
1508 the network backend shouldn't be trusted. If the variable is absent
1509 or set to "0", the backend will be trusted.
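
For example, to create a guest while telling it not to trust its
disk and network backends (the config file name is illustrative):

    LIBXL_DISK_BACKEND_UNTRUSTED=1 LIBXL_NIC_BACKEND_UNTRUSTED=1 \
        xl create guest.cfg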

IGNORED FOR COMPATIBILITY WITH XM
1512 xl is mostly command-line compatible with the old xm utility used with
1513 the old Python xend. For compatibility, the following options are
1514 ignored:
1515
1516 xl migrate --live

SEE ALSO
1519 The following man pages:
1520
xl.cfg(5), xlcpupool.cfg(5), xentop(1), xl-disk-configuration(5),
xl-network-configuration(5)
1523
1524 And the following documents on the xenproject.org website:
1525
1526 <https://xenbits.xenproject.org/docs/unstable/misc/xsm-flask.txt>
1527 <https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>
1528
1529 For systems that don't automatically bring the CPU online:
1530
1531 <https://wiki.xenproject.org/wiki/Paravirt_Linux_CPU_Hotplug>

BUGS
1534 Send bugs to xen-devel@lists.xenproject.org, see
1535 https://wiki.xenproject.org/wiki/Reporting_Bugs_against_Xen_Project on
1536 how to send bug reports.
1537
1538
1539
4.16.3                            2022-12-19                             xl(1)