xVM(5)                Standards, Environments, and Macros               xVM(5)


NAME

       xVM, xvm - Solaris x86 virtual machine monitor

DESCRIPTION

       The Solaris xVM software (hereafter xVM) is a virtualization
       system, or virtual machine monitor, for the Solaris operating
       system on x86 systems. Solaris xVM enables you to run multiple
       virtual machines simultaneously on a single physical machine,
       often referred to as the host. Each virtual machine, known as a
       domain, runs its own complete and distinct operating system
       instance, which includes its own I/O devices. This differs from
       zone-based virtualization, in which all zones share the same
       kernel. (See zones(5).)

       Each domain has a name and a UUID. Domains can be renamed but
       typically retain the same UUID. A domain ID is an integer that is
       specific to a running instance. It changes on every boot of the
       guest domain. Non-running domains do not have a domain ID.

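       For example, virsh(1M) can list the running domains along with
       their current domain IDs. The guest domain name and the output
       shown here are illustrative:

         # virsh list
          Id Name                 State
         ----------------------------------
           0 Domain-0             running
           7 mydomain             running
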
       The xVM hypervisor is responsible for controlling and executing
       each of the domains and runs with full privileges. xVM also
       includes control tools for management of the domains. These tools
       run under a specialized control domain, often referred to as
       domain 0.

       To run Solaris xVM, select the entry labelled Solaris xVM in the
       grub(5) menu at boot time. This boots the hypervisor, which in
       turn boots the control domain. The control domain is a version of
       Solaris modified to run under the xVM hypervisor. At the point at
       which the control domain is running, the control tools are
       enabled. In most other respects, the domain 0 instance behaves
       just like a "normal" instance of the Solaris operating system.

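       Such a menu entry loads the hypervisor itself (xen.gz) and then
       the i86xpv kernel of the control domain. The following entry is
       only a sketch; the exact title and paths depend on the release,
       so consult your menu.lst rather than copying this verbatim:

         title Solaris xVM
         kernel$ /boot/$ISADIR/xen.gz
         module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix
         module$ /platform/i86pc/$ISADIR/boot_archive
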
       The xVM hypervisor delegates control of the physical devices
       present on the machine to domain 0. Thus, by default, only
       domain 0 has access to such devices. The other guest domains
       running on the host are presented with virtualized devices with
       which they interact as they would physical devices.

       The xVM hypervisor schedules running domains (including domain 0)
       onto the set of physical CPUs as configured. The xVM scheduler is
       constrained by domain configuration (the number of virtual CPUs
       allocated, and so forth) as well as any run-time configuration,
       such as pinning virtual CPUs to physical CPUs.

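       For example, run-time pinning can be done with the legacy xm(1M)
       utility. The following pins virtual CPU 0 of a hypothetical guest
       domain named mydomain to physical CPUs 0 and 1:

         # xm vcpu-pin mydomain 0 0-1
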
       The default domain scheduler in xVM is the credit scheduler. This
       is a work-conserving, fair-share domain scheduler; it balances
       virtual CPUs of domains across the allowed set of physical CPUs
       according to workload.

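       Under the credit scheduler, each domain's share of CPU is
       controlled by a weight and an optional cap, both of which can be
       inspected and changed with the xm sched-credit command. For
       example, to give the hypothetical domain mydomain twice the
       default weight of 256:

         # xm sched-credit -d mydomain -w 512
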
       xVM supports two modes of virtualization. In the first, the
       operating system is aware that it is running under xVM. This mode
       is called paravirtualization. In the second mode, called fully
       virtualized, the guest operating system is not aware that it is
       running under xVM. A fully virtualized guest domain is sometimes
       referred to as a Hardware-assisted Virtual Machine (HVM). A
       variation on a Hardware-assisted Virtual Machine is to use
       paravirtualized drivers for improved performance. This variation
       is abbreviated as HVM + PV.

       With paravirtualization, each device, such as a networking
       interface, is presented as a fully virtual interface, and specific
       drivers are required for it. Each virtual device is associated
       with a physical device, and the driver is split into two. A
       frontend driver runs in the guest domain and communicates over a
       virtual data interface with a backend driver. The backend driver
       currently runs in domain 0 and communicates with both the frontend
       driver and the physical device underneath it. Thus, a guest domain
       can make use of a network card on the host, store data on a host
       disk drive, and so forth.

       Solaris xVM currently supports two main split drivers used for
       I/O. Networking is done by means of the xnb (xVM Networking
       Backend) drivers. Guest domains, whether running Solaris or
       another operating system, use xnb to transmit and receive
       networking traffic. Typically, a physical NIC, either shared or
       dedicated, is used for communicating with the guest domains.
       Solaris xVM provides networking access to guest domains by means
       of MAC-based virtual network switching.

       Block I/O is provided by the xdb (xVM Disk Backend) driver, which
       provides virtual disk access to guest domains. In the control
       domain, the disk storage can be in a file, a ZFS volume, or a
       physical device.

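       For example, a ZFS volume suitable for use as a guest domain's
       virtual disk can be created with zfs(1M). The pool and volume
       names used here are hypothetical:

         # zfs create -V 10G pool/xvm/mydomain-root
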
       Using ZFS volumes as the virtual disks for your guest domains
       allows you to take snapshots of the storage. As such, you can keep
       known-good snapshots of the guest domain OS installation, and
       revert to a snapshot (using zfs rollback) if the domain has a
       problem. The zfs clone command can be used for quick provisioning
       of new domains. For example, you might install Solaris as a guest
       domain, run sys-unconfig(1M), then clone that disk image for use
       in new Solaris domains. Installing Solaris in this way requires
       only a configuration step, rather than a full install.

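       A minimal sketch of that workflow, again with hypothetical
       dataset names, takes a known-good snapshot, clones it for a new
       domain, and can later revert the original volume to the snapshot:

         # zfs snapshot pool/xvm/mydomain-root@install
         # zfs clone pool/xvm/mydomain-root@install pool/xvm/newdomain-root
         # zfs rollback pool/xvm/mydomain-root@install
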
       When running as a guest domain, Solaris xVM uses the xnf and xdf
       drivers to talk to the relevant backend drivers. In addition to
       these drivers, the Solaris console is virtualized when running as
       a guest domain; the console driver interacts with the
       xenconsoled(1M) daemon running in domain 0 to provide console
       access.

       A given system can have both paravirtualized and fully virtualized
       domains running simultaneously. The control domain must always run
       in paravirtualized mode, because it must work closely with the
       hypervisor layer.

       As guest domains do not share a kernel, xVM does not require that
       every guest domain run Solaris. For paravirtualized mode, the only
       requirement is that the operating system be modified to support
       the virtual device interfaces.

       Fully-virtualized guest domains are supported under xVM with the
       assistance of virtualization extensions available on some x86
       CPUs. These extensions must be present and enabled; some BIOS
       versions disable the extensions by default.

       In paravirtualized mode, Solaris identifies the platform as
       i86xpv, as seen in the output of the following uname(1) command:

         % uname -i
         i86xpv

       Generally, applications do not need to be aware of the platform
       type. It is recommended that any ISA identification required by an
       application use the -p option (for ISA or processor type) to
       uname, rather than the -i option, as shown above. On x86
       platforms, regardless of whether Solaris is running under xVM,
       uname -p always returns i386.

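       For example:

         % uname -p
         i386
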
       You can examine the domcaps driver to determine whether a Solaris
       instance is running as domain 0:

         # cat /dev/xen/domcaps
         control_d

       Note that the domcaps file might also contain additional
       information. However, the first token is always control_d when
       running as domain 0.

       xVM hosts provide a service management facility (SMF) service (see
       smf(5)) with the FMRI:

         svc:/system/xvm/domains:default

       ...to control auto-shutdown and auto-start of domains. By default,
       all running domains are shut down when the host is brought down by
       means of init 6 or a similar command. This shutdown is analogous
       to entering:

         # virsh shutdown mydomain

       A domain can have the setting:

         on_xend_stop=ignore

       ...in which case the domain is not shut down even if the host is.
       Such a setting is the effective equivalent of:

         # virsh destroy mydomain

       If a domain has the setting:

         on_xend_start=start

       ...then the domain is automatically booted when the xVM host
       boots. Disabling the SMF service by means of svcadm(1M) disables
       this behavior for all domains.

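       For example:

         # svcadm disable svc:/system/xvm/domains:default
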
       Solaris xVM is partly based on the work of the open source Xen
       community.

   Control Tools
       The control tools are the utilities shipped with Solaris xVM that
       enable you to manage xVM domains. These tools interact with the
       daemons that support xVM: xend(1M), xenconsoled(1M), and
       xenstored(1M), each described in its own man page. The daemons
       are, in turn, managed by SMF.

       You install new guest domains with the command-line utility
       virt-install(1M) or the graphical interface, virt-manager.

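       A representative paravirtualized installation might look like the
       following. The domain name, volume path, and install source are
       hypothetical, and the supported options vary between releases, so
       see virt-install(1M) before relying on this sketch:

         # virt-install --name mydomain --ram 1024 --paravirt \
               --file /dev/zvol/dsk/pool/xvm/mydomain-root \
               --location nfs:installserver:/export/solaris --nographics
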
       The main interface for command and control of both xVM and guest
       domains is virsh(1M). Users should use virsh(1M) wherever
       possible, as it provides a generic and stable interface for
       controlling virtualized operating systems. However, some xVM
       operations are not yet implemented by virsh. In those cases, the
       legacy utility xm(1M) can be used for detailed control.

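       For example, booting a configured guest domain and attaching to
       its console are both available through virsh (mydomain is again a
       hypothetical domain name):

         # virsh start mydomain
         # virsh console mydomain
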
       The configuration of each domain is stored by xend(1M) after the
       domain has been created by means of virt-install, and can be
       viewed using the virsh dumpxml or xm list -l commands. Direct
       modification of this configuration data is not recommended; the
       command-line utility interfaces should be used instead.

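       For example, to display the stored configuration of the
       hypothetical domain mydomain in each of the two formats:

         # virsh dumpxml mydomain
         # xm list -l mydomain
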
       Solaris xVM supports live migration of domains by means of the xm
       migrate command. This allows a guest domain to keep running while
       the host running the domain transfers ownership to another
       physical host over the network. The remote host must also be
       running xVM or a compatible version of Xen, and must accept
       migration connections from the current host (see xend(1M)).

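       For example, to migrate the hypothetical domain mydomain to the
       host remotehost while the domain continues to run:

         # xm migrate --live mydomain remotehost
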
       For migration to work, both hosts must be able to access the
       storage used by the domU (for example, over iSCSI or NFS), and
       have enough resources available to run the migrated domain. Both
       hosts must currently reside on the same subnet for the guest
       domain networking to continue working properly. Also, both hosts
       should have similar hardware. For example, migrating from a host
       with AMD processors to one with Intel processors can cause
       problems.

       Note that the communication channel for migration is not a secure
       connection.

GLOSSARY

       backend

           The half of a split virtual driver that complements the
           frontend driver. The backend driver provides an interface
           between the virtual device and an underlying physical device.

       control domain

           The special guest domain running the control tools and given
           direct hardware access by the hypervisor. Often referred to
           as domain 0.

       domain 0

           See control domain.

       frontend

           A virtual device and its associated driver in a guest domain.
           A frontend driver communicates with a backend driver hosted
           in a different guest domain.

       full virtualization

           Running an unmodified operating system with hardware
           assistance.

       guest domain

           A virtual machine instance running on a host. Unless the
           guest is domain 0, such a domain is also called a domain-U,
           where U stands for "unprivileged".

       host

           The physical machine running xVM.

       Hardware-assisted Virtual Machine (HVM)

           A fully-virtualized guest domain.

       node

           The name used by the virsh(1M) utility for a host.

       virtual machine monitor (VMM)

           Hypervisor software, such as xVM, that manages multiple
           domains.

SEE ALSO

       uname(1), dladm(1M), svcadm(1M), sys-unconfig(1M), virsh(1M),
       virt-install(1M), xend(1M), xenconsoled(1M), xenstored(1M),
       xm(1M), zfs(1M), attributes(5), grub(5), live_upgrade(5), smf(5),
       zones(5)

NOTES

       Any physical Ethernet datalink (as shown by dladm show-phys) can
       be used to network guest domains.

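       For example, to list the physical Ethernet datalinks available on
       the host:

         # dladm show-phys
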
SunOS 5.11                        14 Jan 2009                           xVM(5)