xl.conf(5)                              Xen                              xl.conf(5)

NAME
       /etc/xen/xl.conf - XL Global/Host Configuration

DESCRIPTION
       The xl.conf file allows configuration of hostwide "xl" toolstack
       options.

       For details of per-domain configuration options please see xl.cfg(5).

SYNTAX
       The config file consists of a series of "KEY=VALUE" pairs.

       A value "VALUE" is one of:

       "STRING"
           A string, surrounded by either single or double quotes.

       NUMBER
           A number, in either decimal, octal (using a 0 prefix) or
           hexadecimal (using a "0x" prefix).

       BOOLEAN
           A "NUMBER" interpreted as "False" (0) or "True" (any other
           value).

       [ VALUE, VALUE, ... ]
           A list of "VALUES" of the above types. Lists are homogeneous and
           are not nested.

       The semantics of each "KEY" defines which form of "VALUE" is
       required.

OPTIONS
       autoballoon="off"|"on"|"auto"
           If set to "on" then "xl" will automatically reduce the amount of
           memory assigned to domain 0 in order to free memory for new
           domains.

           If set to "off" then "xl" will not automatically reduce the
           amount of domain 0 memory.

           If set to "auto" then auto-ballooning will be disabled if the
           "dom0_mem" option was provided on the Xen command line.

           It is strongly recommended to set this to "off" (or "auto") if
           you use the "dom0_mem" hypervisor command line option to reduce
           the amount of memory given to domain 0 by default.

           Default: "auto"
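
           For example, when booting Xen with a fixed domain 0 allocation
           on the hypervisor command line (the size below is illustrative):

               dom0_mem=2048M,max:2048M

           a matching xl.conf entry would be:

               autoballoon="off"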

       run_hotplug_scripts=BOOLEAN
           If disabled, hotplug scripts will be called from udev, as in
           previous releases. With the default setting, hotplug scripts are
           launched by xl directly.

           Default: 1

       lockfile="PATH"
           Sets the path to the lock file used by xl to serialise certain
           operations (primarily domain creation).

           Default: "/var/lock/xl"

       max_grant_frames=NUMBER
           Sets the default value for the "max_grant_frames" domain config
           value.

           Default: 32 on hosts with up to 16TB of memory, 64 on hosts
           with more than 16TB

       max_maptrack_frames=NUMBER
           Sets the default value for the "max_maptrack_frames" domain
           config value.

           Default: 1024

       vif.default.script="PATH"
           Configures the default hotplug script used by virtual network
           devices.

           The old "vifscript" option is deprecated and should not be used.

           Default: "/etc/xen/scripts/vif-bridge"

       vif.default.bridge="NAME"
           Configures the default bridge to set for virtual network
           devices.

           The old "defaultbridge" option is deprecated and should not be
           used.

           Default: "xenbr0"

       vif.default.backend="NAME"
           Configures the default backend domain to use for virtual
           network devices.

           Default: 0

       vif.default.gatewaydev="NAME"
           Configures the default gateway device to set for virtual
           network devices.

           Default: "None"
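
       The vif.default.* options can be combined; for example (the bridge
       name is illustrative and site-specific):

           vif.default.script="/etc/xen/scripts/vif-bridge"
           vif.default.bridge="br0"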

       remus.default.netbufscript="PATH"
           Configures the default script used by Remus to set up network
           buffering.

           Default: "/etc/xen/scripts/remus-netbuf-setup"

       colo.default.proxyscript="PATH"
           Configures the default script used by COLO to set up the
           colo-proxy.

           Default: "/etc/xen/scripts/colo-proxy-setup"

       output_format="json|sxp"
           Configures the default output format used by xl when printing
           "machine readable" information. The default is to use the JSON
           <http://www.json.org/> syntax. However, for compatibility with
           the previous "xm" toolstack this can be configured to use the
           old SXP (S-Expression-like) syntax instead.

           Default: "json"
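
           For example, to keep the old xm-style machine-readable output:

               output_format="sxp"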

       blkdev_start="NAME"
           Configures the name of the first block device to be used for
           temporary block device allocations by the toolstack.

           Default: "xvda"

       claim_mode=BOOLEAN
           If this option is enabled then, when a guest is created, there
           will be a guarantee that there is memory available for the
           guest. This is particularly important on hosts whose memory is
           over-provisioned by guests that use tmem and have self-ballooning
           enabled (the default), because the self-balloon mechanism can
           deflate/inflate the balloon quickly and the amount of free
           memory (which "xl info" can show) is stale the moment it is
           printed. When claim is enabled, a reservation for the amount of
           memory (see 'memory' in xl.cfg(5)) is set, which is then reduced
           as the domain's memory is populated and eventually reaches zero.
           The free memory reported by "xl info" is the hypervisor's free
           heap memory minus the outstanding claims value.

           If the reservation cannot be met, guest creation fails
           immediately instead of taking seconds or minutes (depending on
           the size of the guest) while the guest is populated.

           Note that to enable tmem-type guests, one needs to provide
           "tmem" on both the Xen hypervisor command line and the Linux
           kernel command line.

           Default: 1

           0  No claim is made. Memory population during guest creation
              will be attempted as normal and may fail due to memory
              exhaustion.

           1  Normal memory and the freeable pool of ephemeral pages (tmem)
              are used when calculating whether there is enough memory free
              to launch a guest. This guarantees immediate feedback on
              whether the guest can be launched due to memory exhaustion
              (which can otherwise take a long time to find out when
              launching massively huge guests).
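
           For example, to disable claims and fall back to plain memory
           population during guest creation:

               claim_mode=0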

       vm.cpumask="CPULIST"
       vm.hvm.cpumask="CPULIST"
       vm.pv.cpumask="CPULIST"
           Global masks that are applied when creating guests and pinning
           vcpus, to indicate which cpus they are allowed to run on.
           Specifically, "vm.cpumask" applies to all guest types,
           "vm.hvm.cpumask" applies to both HVM and PVH guests and
           "vm.pv.cpumask" applies to PV guests.

           The hard affinity of a guest's vcpus is logical-ANDed with the
           respective mask. If the resulting affinity mask is empty, the
           operation will fail.

           Use --ignore-global-affinity-masks to skip applying the global
           masks.

           The default value for these masks is all 1's, i.e. all cpus are
           allowed.

           Due to bug(s), these options may not interact well with other
           options concerning CPU affinity. One example is CPU pools.
           Users should always double check that the required affinity has
           taken effect.
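
           For example, to restrict all guests to cpus 0-7, and PV guests
           further to cpus 4-7 (the ranges are illustrative):

               vm.cpumask="0-7"
               vm.pv.cpumask="4-7"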

SEE ALSO
       xl(1)
       xl.cfg(5)
       http://www.json.org/

4.12.1                              2019-12-11                           xl.conf(5)