commit 10e724d5a1ba43b93cbfada88116f0410d4bb8ba
Author: W. David Ashley <w.david.ashley(a)gmail.com>
Date: Sun Jul 5 12:42:46 2015 -0500
Domains Chapter
General
- fixed some spelling and indentation issues
Boot Modes section
- converted to python
- added example 26
en-US/Guest_Domains.xml | 827 ++++++++++++++++++-----------------
en-US/extras/Domains-Example-26.xml | 12 +
2 files changed, 430 insertions(+), 409 deletions(-)
---
diff --git a/en-US/Guest_Domains.xml b/en-US/Guest_Domains.xml
index 8039ccb..ce78e1e 100644
--- a/en-US/Guest_Domains.xml
+++ b/en-US/Guest_Domains.xml
@@ -77,17 +77,17 @@
</para>
<example>
- <title>Fetching a domain object from an ID</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-1.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ <title>Fetching a domain object from an ID</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-1.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
</example>
<example>
- <title>Fetching a domain object from an name</title>
+ <title>Fetching a domain object from a name</title>
<programlisting language="Python"><xi:include
href="extras/Domains-Example-2.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
</example>
<example>
- <title>Fetching a domain object from an UUID</title>
+ <title>Fetching a domain object from a UUID</title>
<programlisting language="Python"><xi:include
href="extras/Domains-Example-3.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
</example>
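The three lookup examples above can be summarized in one sketch. The dispatcher below is a hypothetical helper (not part of the libvirt API) showing how `lookupByID`, `lookupByName`, and `lookupByUUIDString` on an open connection `conn` cover the three cases:

```python
# Hypothetical dispatcher over the three domain-lookup calls
# (cf. Examples 1-3); assumes the libvirt-python bindings, where
# `conn` is an open virConnect object.
def fetch_domain(conn, key):
    """Fetch a virDomain by ID (int), UUID (36-char string), or name."""
    if isinstance(key, int):
        return conn.lookupByID(key)           # running domains only
    if isinstance(key, str) and len(key) == 36:
        return conn.lookupByUUIDString(key)   # stable across reboots
    return conn.lookupByName(key)             # human-readable handle
```

The UUID form is usually preferred in management applications, since IDs change on every boot and names can be reused.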
@@ -101,7 +101,7 @@
<title>Listing Domains</title>
<para>
- The libvirt cklasses expose two lists of domains, the first
+ The libvirt classes expose two lists of domains, the first
contains running domains, while the second contains
inactive, persistent domains. The lists are intended to
be non-overlapping, exclusive sets, though there is always
@@ -298,458 +298,467 @@
</section>
<section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Provisioning-apis-persistent">
- <title>Defining and Booting a Persistent Guest
Domain</title>
-
- <para>
- Before a persistent domain can be booted, it must have its
configuration
- defined. This again requires a connection to libvirt and a string
containing
- the XML document describing the required guest configuration. The
- <literal>virDomain</literal> object obtained from
defining the guest,
- can then be used to boot it.
- </para>
-
- <example>
- <title>Defining and Booting a Persistent Guest
Domain</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-8.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
-
- </section>
-
- </section>
-
- <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Provisioning-Provisioning_Techniques">
- <title>New Guest Domain Provisioning Techniques</title>
-
- <para>
- This section will first illustrate two configurations that
- allow for a provisioning approach that is comparable to those
- used for physical machines. It then outlines a third option
- which is specific to virtualized hardware, but has some
- interesting benefits. For the purposes of illustration, the
- examples that follow will use an XML configuration that sets
- up a KVM fully virtualized guest, with a single disk and
- network interface and a video card using VNC for display.
- </para>
-
- <programlisting language="XML"><xi:include
href="extras/Domains-Example-9.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- <important>
- <para>
- Be careful in the choice of initial memory allocation, since
- too low a value may cause mysterious crashes and installation
- failures. Some operating systems need as much as 600 MB of memory
- for initial installation, though this can often be reduced
- post-install.
- </para>
- </important>
-
- <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Provisioning-ISO">
- <title>CDROM/ISO image provisioning</title>
-
- <para>
- All full virtualization technologies have support for emulating
- a CDROM device in a guest domain, making this an obvious choice
- for provisioning new guest domains. It is, however, fairly rare
- to find a hypervisor which provides CDROM devices for
paravirtualized
- guests.
- </para>
-
- <para>
- The first obvious change required to the XML configuration to
- support CDROM installation, is to add a CDROM device. A guest
- domains' CDROM device can be pointed to either a host CDROM
- device, or to a ISO image file. The next change is to determine
- what the BIOS boot order should be, with there being two
- possible options. If the hard disk is listed ahead of the
- CDROM device, then the CDROM media won't be booted unless
- the first boot sector on the hard disk is blank. If the
- CDROM device is listed ahead of the hard disk, then it will
- be necessary to alter the guest config after install to
- make it boot off the installed disk. While both can be made
- to work, the first option is easiest to implement.
- </para>
-
- <para>
- The guest configuration shown earlier would have the following
- XML chunk inserted:
- </para>
-
- <programlisting language="XML"><xi:include
href="extras/Domains-Example-10.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
-
- <para>
- NB, this assumes the hard disk boot sector is blank initially,
- so that the first boot attempt falls through to the CD-ROM drive.
- It will also need a CD-ROM drive device added.
- </para>
-
- <programlisting language="XML"><xi:include
href="extras/Domains-Example-11.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
-
- <para>
- With the configuration determined, it is now possible
- to provision the guest. This is an easy process, simply
- requiring a persistent guest to be defined, and then
- booted.
- </para>
-
- <example>
- <title>Defining and Booting a Persistent Guest
Domain</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-12.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
-
- <para>
- If it was not possible to guarantee that the boot
- sector of the hard disk is blank, then provisioning
- would have been a two step process. First a transient
- guest would have been booted using CD-ROM drive as the
- primary boot device. Once that completed, then
- a persistent configuration for the guest would be
- defined to boot off the hard disk.
- </para>
-
- </section>
-
- <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Provisioning-PXE">
- <title>PXE Boot Provisioning</title>
+ <title>Defining and Booting a Persistent Guest
Domain</title>
- <para>
- Some newer full virtualization technologies provide a BIOS that
- is able to use the PXE boot protocol to boot off the network. If
- an environment already has a PXE boot provisioning server deployed,
- this is a desirable method to use for guest domains.
- </para>
+ <para>
+ Before a persistent domain can be booted, it must have its
configuration
+ defined. This again requires a connection to libvirt and a string
containing
+ the XML document describing the required guest configuration.
The
+ <literal>virDomain</literal> object obtained from
defining the guest,
+ can then be used to boot it.
+ </para>
- <para>
- PXE booting a guest obviously requires that the guest has a
- network device configured. The LAN that this network card is
- attached to, also needs a PXE / TFTP server available.
- The next change is to determine
- what the BIOS boot order should be, with there being two
- possible options. If the hard disk is listed ahead of the
- network device, then the network card won't PXE boot unless
- the first boot sector on the hard disk is blank. If the
- network device is listed ahead of the hard disk, then it will
- be necessary to alter the guest config after install to
- make it boot off the installed disk. While both can be made
- to work, the first option is easiest to implement.
- </para>
+ <example>
+ <title>Defining and Booting a Persistent Guest
Domain</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-8.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
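The define-then-boot sequence described above can be sketched in a few lines. This is a minimal illustration, assuming `conn` is an open libvirt connection and `xml` is a guest description string; the helper name is hypothetical:

```python
# Minimal sketch of defining then booting a persistent guest
# (cf. Example 8): defineXML() persists the config, create() boots it.
def define_and_boot(conn, xml):
    """Define a persistent domain from XML, then start it."""
    dom = conn.defineXML(xml)   # make the configuration persistent
    if dom is None:
        raise RuntimeError("failed to define guest from XML")
    dom.create()                # boot the defined domain
    return dom
```

Unlike `createXML`, the domain defined here survives shutdown and host reboots until it is explicitly undefined.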
- <para>
- The guest configuration shown earlier would have the following
- XML chunk inserted:
- </para>
+ </section>
- <programlisting language="XML"><xi:include
href="extras/Domains-Example-13.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </section>
+
+ <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Provisioning-Provisioning_Techniques">
+ <title>New Guest Domain Provisioning Techniques</title>
<para>
- NB, this assumes the hard disk boot sector is blank initially,
- so that the first boot attempt falls through to the NIC.
- With the configuration determined, it is now possible
- to provision the guest. This is an easy process, simply
- requiring a persistent guest to be defined, and then
- booted.
+ This section will first illustrate two configurations that
+ allow for a provisioning approach that is comparable to those
+ used for physical machines. It then outlines a third option
+ which is specific to virtualized hardware, but has some
+ interesting benefits. For the purposes of illustration, the
+ examples that follow will use an XML configuration that sets
+ up a KVM fully virtualized guest, with a single disk and
+ network interface and a video card using VNC for display.
</para>
- <example>
+ <programlisting language="XML"><xi:include
href="extras/Domains-Example-9.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ <important>
+ <para>
+ Be careful in the choice of initial memory allocation, since
+ too low a value may cause mysterious crashes and installation
+ failures. Some operating systems need as much as 600 MB of
memory
+ for initial installation, though this can often be reduced
+ post-install.
+ </para>
+ </important>
+
+ <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Provisioning-ISO">
+ <title>CDROM/ISO image provisioning</title>
+
+ <para>
+ All full virtualization technologies have support for emulating
+ a CDROM device in a guest domain, making this an obvious choice
+ for provisioning new guest domains. It is, however, fairly rare
+ to find a hypervisor which provides CDROM devices for
paravirtualized
+ guests.
+ </para>
+
+ <para>
+ The first obvious change required to the XML configuration to
+ support CDROM installation is to add a CDROM device. A guest
+ domain's CDROM device can be pointed to either a host CDROM
+ device, or to an ISO image file. The next change is to determine
+ what the BIOS boot order should be, with there being two
+ possible options. If the hard disk is listed ahead of the
+ CDROM device, then the CDROM media won't be booted unless
+ the first boot sector on the hard disk is blank. If the
+ CDROM device is listed ahead of the hard disk, then it will
+ be necessary to alter the guest config after install to
+ make it boot off the installed disk. While both can be made
+ to work, the first option is easiest to implement.
+ </para>
+
+ <para>
+ The guest configuration shown earlier would have the following
+ XML chunk inserted:
+ </para>
+
+ <programlisting language="XML"><xi:include
href="extras/Domains-Example-10.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+
+ <para>
+ NB, this assumes the hard disk boot sector is blank initially,
+ so that the first boot attempt falls through to the CD-ROM
drive.
+ It will also need a CD-ROM drive device added.
+ </para>
+
+ <programlisting language="XML"><xi:include
href="extras/Domains-Example-11.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+
+ <para>
+ With the configuration determined, it is now possible
+ to provision the guest. This is an easy process, simply
+ requiring a persistent guest to be defined, and then
+ booted.
+ </para>
+
+ <example>
+ <title>Defining and Booting a Persistent Guest
Domain</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-12.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
+
+ <para>
+ If it was not possible to guarantee that the boot
+ sector of the hard disk is blank, then provisioning
+ would have been a two step process. First a transient
+ guest would have been booted using the CD-ROM drive as the
+ primary boot device. Once that completed, then
+ a persistent configuration for the guest would be
+ defined to boot off the hard disk.
+ </para>
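The two-step fallback just described can be sketched as follows. Both helpers are hypothetical (the boot-device rewrite uses plain ElementTree, not a libvirt call), and the wait between the steps is elided:

```python
# Sketch of two-step CD-ROM provisioning: boot a transient guest from
# CD-ROM first, then define a persistent config that boots from disk.
import xml.etree.ElementTree as ET

def set_boot_device(config_xml, dev):
    """Return the guest XML with its <boot dev='...'/> element set to dev."""
    root = ET.fromstring(config_xml)
    os_elem = root.find("os")
    boot = os_elem.find("boot")
    if boot is None:
        boot = ET.SubElement(os_elem, "boot")
    boot.set("dev", dev)
    return ET.tostring(root, encoding="unicode")

def provision_two_step(conn, base_xml):
    """Install from CD-ROM as a transient guest, then persist a disk-boot config."""
    conn.createXML(set_boot_device(base_xml, "cdrom"), 0)   # step 1: transient
    # ... wait for the installer to finish and the guest to power off ...
    return conn.defineXML(set_boot_device(base_xml, "hd"))  # step 2: persistent
```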
+
+ </section>
+
+ <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Provisioning-PXE">
<title>PXE Boot Provisioning</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-14.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
- <para>
- If it was not possible to guarantee that the boot
- sector of the hard disk is blank, then provisioning
- would have been a two step process. First a transient
- guest would have been booted using network as the
- primary boot device. Once that completed, then
- a persistent configuration for the guest would be
- defined to boot off the hard disk.
- </para>
- </section>
+ <para>
+ Some newer full virtualization technologies provide a BIOS that
+ is able to use the PXE boot protocol to boot off the network. If
+ an environment already has a PXE boot provisioning server
deployed,
+ this is a desirable method to use for guest domains.
+ </para>
- <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Provisioning-Kernel">
- <title>Direct Kernel Boot Provisioning</title>
+ <para>
+ PXE booting a guest obviously requires that the guest has a
+ network device configured. The LAN that this network card is
+ attached to also needs a PXE / TFTP server available.
+ The next change is to determine
+ what the BIOS boot order should be, with there being two
+ possible options. If the hard disk is listed ahead of the
+ network device, then the network card won't PXE boot unless
+ the first boot sector on the hard disk is blank. If the
+ network device is listed ahead of the hard disk, then it will
+ be necessary to alter the guest config after install to
+ make it boot off the installed disk. While both can be made
+ to work, the first option is easiest to implement.
+ </para>
- <para>
- Paravirtualization technologies emulate a fairly restrictive
- set of hardware, often making it impossible to use the provisioning
- options just outlined. For such scenarios it is often possible to
- boot a new guest domain directly from an kernel and initrd image
- stored on the host file system. This has one interesting advantage,
- which is that it is possible to directly set kernel command line
- boot arguments, making it very easy to do fully automated
- installation. This advantage can be compelling enough that this
- technique is used even for fully virtualized guest domains with
- CD-ROM drive/PXE support.
- </para>
+ <para>
+ The guest configuration shown earlier would have the following
+ XML chunk inserted:
+ </para>
- <para>
- The one complication with direct kernel booting is that provisioning
- becomes a two step process. For the first step, it is necessary to
- configure the guest XML configuration to point to a kernel/initrd.
- </para>
+ <programlisting language="XML"><xi:include
href="extras/Domains-Example-13.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- <example>
- <title>Kernel Boot Provisioning XML</title>
- <programlisting language="XML"><xi:include
href="extras/Domains-Example-15.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
+ <para>
+ NB, this assumes the hard disk boot sector is blank initially,
+ so that the first boot attempt falls through to the NIC.
+ With the configuration determined, it is now possible
+ to provision the guest. This is an easy process, simply
+ requiring a persistent guest to be defined, and then
+ booted.
+ </para>
- <para>
- Notice how the kernel command line provides the URL of download
- site containing the distro install tree matching the kernel/initrd.
- This allows the installer to automatically download all its
resources
- without prompting the user for install URL. It could also be used to
- provide a kickstart file for completely unattended installation.
- Finally, this command line also tells the kernel to activate both
- the first serial port and the VGA card as consoles, with the latter
- being the default. Having kernel messages duplicated on the serial
- port in this manner can be a useful debugging avenue. Of course
- valid command line arguments vary according to the particular kernel
- being booted. Consult the kernel vendor/distributor's
documentation
- for valid options.
- </para>
+ <example>
+ <title>PXE Boot Provisioning</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-14.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
- <para>
- The last XML configuration detail before starting the guest, is to
- change the 'on_reboot' element action to be
'destroy'. This ensures
- that when the guest installer finishes and requests a reboot, the
- guest is instead powered off. This allows the management application
- to change the configuration to make it boot off, just installed, the
- hard disk again. The provisioning process can be started now by
- creating a transient guest with the first XML configuration
- </para>
+ <para>
+ If it was not possible to guarantee that the boot
+ sector of the hard disk is blank, then provisioning
+ would have been a two step process. First a transient
+ guest would have been booted using the network as the
+ primary boot device. Once that completed, then
+ a persistent configuration for the guest would be
+ defined to boot off the hard disk.
+ </para>
+ </section>
- <example>
- <title>Kernel Boot Provisioning</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-16.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
+ <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Provisioning-Kernel">
+ <title>Direct Kernel Boot Provisioning</title>
- <para>
- Once this guest shuts down, the second phase of the provisioning
- process can be started. For this phase, the 'OS' element
will
- have the kernel/initrd/cmdline elements removed, and replaced
- by either a reference to a host side bootloader, or a BIOS
- boot setup. The former is used for Xen paravirtualized guests,
- while the latter is used for fully virtualized guests.
- </para>
+ <para>
+ Paravirtualization technologies emulate a fairly restrictive
+ set of hardware, often making it impossible to use the
provisioning
+ options just outlined. For such scenarios it is often possible
to
+ boot a new guest domain directly from a kernel and initrd image
+ stored on the host file system. This has one interesting
advantage,
+ which is that it is possible to directly set kernel command line
+ boot arguments, making it very easy to do fully automated
+ installation. This advantage can be compelling enough that this
+ technique is used even for fully virtualized guest domains with
+ CD-ROM drive/PXE support.
+ </para>
- <para>
- The phase 2 configuration for a Xen paravirtualized guest
- would thus look like:
- </para>
+ <para>
+ The one complication with direct kernel booting is that
provisioning
+ becomes a two step process. For the first step, it is necessary
to
+ configure the guest XML configuration to point to a
kernel/initrd.
+ </para>
- <programlisting language="XML"><xi:include
href="extras/Domains-Example-17.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ <example>
+ <title>Kernel Boot Provisioning XML</title>
+ <programlisting language="XML"><xi:include
href="extras/Domains-Example-15.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
- <para>
- while a fully-virtualized guest would use:
- </para>
+ <para>
+ Notice how the kernel command line provides the URL of a download
+ site containing the distro install tree matching the
kernel/initrd.
+ This allows the installer to automatically download all its
resources
+ without prompting the user for an install URL. It could also be used
to
+ provide a kickstart file for completely unattended installation.
+ Finally, this command line also tells the kernel to activate
both
+ the first serial port and the VGA card as consoles, with the
latter
+ being the default. Having kernel messages duplicated on the
serial
+ port in this manner can be a useful debugging avenue. Of course
+ valid command line arguments vary according to the particular
kernel
+ being booted. Consult the kernel vendor/distributor's
documentation
+ for valid options.
+ </para>
+
+ <para>
+ The last XML configuration detail before starting the guest is
to
+ change the 'on_reboot' element action to be
'destroy'. This ensures
+ that when the guest installer finishes and requests a reboot,
the
+ guest is instead powered off. This allows the management
application
+ to change the configuration to make it boot off the just-installed
+ hard disk again. The provisioning process can now be started by
+ creating a transient guest with the first XML configuration.
+ </para>
- <programlisting language="XML"><xi:include
href="extras/Domains-Example-18.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ <example>
+ <title>Kernel Boot Provisioning</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-16.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
- <para>
- With the second phase configuration determined, the guest can
- be recreated, this time using a persistent configuration
- </para>
+ <para>
+ Once this guest shuts down, the second phase of the provisioning
+ process can be started. For this phase, the 'OS' element
will
+ have the kernel/initrd/cmdline elements removed, and replaced
+ by either a reference to a host side bootloader, or a BIOS
+ boot setup. The former is used for Xen paravirtualized guests,
+ while the latter is used for fully virtualized guests.
+ </para>
+
+ <para>
+ The phase 2 configuration for a Xen paravirtualized guest
+ would thus look like:
+ </para>
+
+ <programlisting language="XML"><xi:include
href="extras/Domains-Example-17.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+
+ <para>
+ while a fully-virtualized guest would use:
+ </para>
- <example>
- <title>Kernel Boot Provisioning for a Persistent Guest
Domain</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-19.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
+ <programlisting language="XML"><xi:include
href="extras/Domains-Example-18.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ <para>
+ With the second phase configuration determined, the guest can
+ be recreated, this time using a persistent configuration.
+ </para>
+
+ <example>
+ <title>Kernel Boot Provisioning for a Persistent Guest
Domain</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-19.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
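The two-phase flow above can be sketched with a helper that derives the phase-2 configuration from the phase-1 one. Both function names are hypothetical; only the `<os>` element editing and the createXML/defineXML calls reflect the text:

```python
# Hypothetical helpers for two-phase direct kernel boot: phase 1 boots a
# transient guest from kernel/initrd; phase 2 strips those elements so
# the persistent guest boots from the installed disk.
import xml.etree.ElementTree as ET

def to_phase2(config_xml):
    """Return phase-2 guest XML with direct-kernel-boot elements removed."""
    root = ET.fromstring(config_xml)
    os_elem = root.find("os")
    for tag in ("kernel", "initrd", "cmdline"):
        elem = os_elem.find(tag)
        if elem is not None:
            os_elem.remove(elem)
    return ET.tostring(root, encoding="unicode")

def provision_direct_kernel(conn, phase1_xml):
    """Phase 1: transient install boot; phase 2: persistent disk boot."""
    conn.createXML(phase1_xml, 0)   # installer runs, then powers off
    # ... wait for the transient guest to disappear ...
    dom = conn.defineXML(to_phase2(phase1_xml))
    dom.create()
    return dom
```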
+
+ </section>
</section>
</section>
- </section>
- <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Stopping">
- <title>Stopping</title>
- <para>
- Stopping refers to the process of halting a running guest. A guest can be
stopped by two methods:
- shutdown and destroy.
- </para>
- <para>
- The <literal>shutdown</literal> method is a clean stop process,
which sends a signal to the guest domain operating system
- asking it to shut down immediately. The guest will only be stopped once the
operating system
- has successfuly shut down. The shutdown process is analagous to running a
shutdown command on
- a physical machine. There is also a
<literal>shutdownFlags</literal> method which can, depending
- on what the guest OS supports, can shudoown the domain and leave the objct in
a usable state.
- </para>
- <para>
- The <literal>destroy</literal> method immediately terminates the
guest domain. The destroy process is analogous to pulling
- the plug on a physical machine.
- </para>
- </section>
+ <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Stopping">
+ <title>Stopping</title>
+ <para>
+ Stopping refers to the process of halting a running guest. A guest can be
stopped by two methods:
+ shutdown and destroy.
+ </para>
+ <para>
+ The <literal>shutdown</literal> method is a clean stop process, which sends a signal to the guest domain operating system
+ asking it to shut down immediately. The guest will only be stopped once the operating system
+ has successfully shut down. The shutdown process is analogous to running a shutdown command on
+ a physical machine. There is also a <literal>shutdownFlags</literal> method which, depending
+ on what the guest OS supports, can shut down the domain and leave the object in a usable state.
+ </para>
+ <para>
+ The <literal>destroy</literal> method immediately terminates
the guest domain. The destroy process is analogous to pulling
+ the plug on a physical machine.
+ </para>
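A common pattern combining the two stop methods is to attempt the clean `shutdown` first and fall back to `destroy` only if the guest OS does not power off in time. The helper and its timeout below are illustrative, not part of the libvirt API:

```python
# Sketch: try a clean shutdown(), fall back to destroy() (hard stop).
import time

def stop_guest(dom, timeout_s=30):
    """Attempt a clean shutdown; destroy the domain if it doesn't stop."""
    dom.shutdown()                  # signal the guest OS to power down
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if not dom.isActive():
            return "shutdown"       # clean stop succeeded
        time.sleep(1)
    dom.destroy()                   # hard stop: pull the virtual plug
    return "destroyed"
```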
+ </section>
- <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Save">
- <title>Suspend / Resume and Save / Restore</title>
+ <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Save">
+ <title>Suspend / Resume and Save / Restore</title>
- <para>
- the <literal>suspend</literal> and
<literal>resume</literal> methods refer to the process of taking a running
guest and
- temporarily saving its memory state. At a later time, it is possible to
resume the guest
- to its original running state, contiuining execution where it left off.
Suspend does not
- save a persistent image of the guest's memory. For this,
<literal>save</literal> is used.
- </para>
- <para>
- The <literal>save</literal> and
<literal>restore</literal> methods refer to the process of taking a running
guest
- and saving its memory state to a file. At some time later, it
- is possible to restore the guest to its original running state,
- continuing execution where it left off.
- </para>
+ <para>
+ The <literal>suspend</literal> and <literal>resume</literal> methods refer to the process of taking a running guest and
+ temporarily saving its memory state. At a later time, it is possible to resume the guest
+ to its original running state, continuing execution where it left off. Suspend does not
+ save a persistent image of the guest's memory. For this, <literal>save</literal> is used.
+ </para>
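Since suspend keeps the memory state only inside the hypervisor, a typical use is to pause a guest briefly around some host-side operation. A minimal sketch, where the helper name and the `work` callable are hypothetical:

```python
# Sketch of suspend/resume: the guest's CPUs stop while its memory stays
# in the hypervisor; execution continues where it left off on resume().
def while_paused(dom, work):
    """Suspend the guest, run work(), and always resume afterwards."""
    dom.suspend()
    try:
        return work()
    finally:
        dom.resume()   # resume even if work() raised
```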
+ <para>
+ The <literal>save</literal> and
<literal>restore</literal> methods refer to the process of taking a running
guest
+ and saving its memory state to a file. At some time later, it
+ is possible to restore the guest to its original running state,
+ continuing execution where it left off.
+ </para>
- <para>
- It is important to note that the save/restore methods only save the
- memory state, no storage state is preserved. Thus when the guest
- is restored, the underlying guest storage must be in exactly the
- same state as it was when the guest was initially saved. For
- basic usage this implies that a guest can only be restored once
- from any given saved state image. To allow a guest to be restored
- from the same saved state multiple times, the application must
- also have taken a snapshot of the guest storage at time of saving,
- and explicitly revert to this storage snapshot when restoring.
- A future enhancement in libvirt will allow for an automated
- snapshot capability which saves memory and storage state in
- one operation.
- </para>
+ <para>
+ It is important to note that the save/restore methods only save the
+ memory state; no storage state is preserved. Thus when the guest
+ is restored, the underlying guest storage must be in exactly the
+ same state as it was when the guest was initially saved. For
+ basic usage this implies that a guest can only be restored once
+ from any given saved state image. To allow a guest to be restored
+ from the same saved state multiple times, the application must
+ also have taken a snapshot of the guest storage at time of saving,
+ and explicitly revert to this storage snapshot when restoring.
+ A future enhancement in libvirt will allow for an automated
+ snapshot capability which saves memory and storage state in
+ one operation.
+ </para>
- <para>
- The save operation requires the fully qualified path to a file
- in which the guest memory state will be saved. This filename
- is in the hypervisor's file system, not the libvirt client
- application's. There's no difference between the two if managing
- a local hypervisor, but it is critically important if connecting
- remotely to a hypervisor across the network. The example that
- follows demonstrates saving a guest called 'demo-guest' to a
- file. It checks to verify that the guest is running before
- saving, though this is technically redundant since the
- hypervisor driver will do such a check itself.
- </para>
+ <para>
+ The save operation requires the fully qualified path to a file
+ in which the guest memory state will be saved. This filename
+ is in the hypervisor's file system, not the libvirt client
+ application's. There's no difference between the two if managing
+ a local hypervisor, but it is critically important if connecting
+ remotely to a hypervisor across the network. The example that
+ follows demonstrates saving a guest called 'demo-guest' to a
+ file. It checks to verify that the guest is running before
+ saving, though this is technically redundant since the
+ hypervisor driver will do such a check itself.
+ </para>
- <example>
- <title>Saving a Guest Domain</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-20.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
+ <example>
+ <title>Saving a Guest Domain</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-20.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
- <para>
- Some period of time later, the saved state file can then be
- used to restart the guest where it left of, using the
- virDomainRestore API. The hypervisor driver will return an
- error if the guest is already running, however, it won't
- prevent attempts to restore from the same state file multiple
- times. As noted earlier, it is the applications' responsibility
- to ensure the guest storage is in exactly the same state as it
- was when the save image was created
- </para>
+ <para>
+ Some period of time later, the saved state file can then be
+ used to restart the guest where it left off, using the
+ virDomainRestore API. The hypervisor driver will return an
+ error if the guest is already running; however, it won't
+ prevent attempts to restore from the same state file multiple
+ times. As noted earlier, it is the application's responsibility
+ to ensure the guest storage is in exactly the same state as it
+ was when the save image was created.
+ </para>
- <example>
- <title>Restoring a Guest Domain</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-21.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
+ <example>
+ <title>Restoring a Guest Domain</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-21.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
- </section>
+ </section>
- <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Migration">
- <title>Migration</title>
+ <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Migration">
+ <title>Migration</title>
- <para>
- Migration is the process of taking the image of a guest domain and moving it
somewhere,
- typically from a hypervisor on one node to a hypervisor on another node.
There are two
- methods for migration. The <literal>migrate</literal> method
takes an established
- hypervisor connection, and instructs the domain to migrate to this
connection. The
- <literal>migrateToUri</literal> method takes a URI specifying a
hypervisor connection,
- opens the connection, then instructions the domain to migrate to this
connection. Both
- these methods can be passed a parameter to specify live migration. For
migration to
- complete successfully, storage needs to be shared between the source and
target hypervisors.
- </para>
- <para>
- The first parameter of the <literal>migrate</literal> method
specifies the connectiion to be used to the
- target of the migration. This parameter is required.
- </para>
- <para>
- The second parameter of the <literal>migrate</literal> method
specifies a set of flags that control how the
- migration takes place over the connection. If npo flags are needed then the
parameter should
- be set to zero.
- </para>
- <para>
- The third parameter of the <literal>migrate</literal> method
specifies a new name for the domain on the target
- of the migration. Not all hypervisors support this operation. If no rename of
the domain is
- required then the parameter shoule be set to
<literal>None</literal>.
- </para>
- <para>
- The third parameter of the <literal>migrate</literal> method
specifies a new name for the domain on the target
- of the migration. Not all hypervisors support this operation. If no rename of
the domain is
- required then the parameter can be set to
<literal>None</literal>.
- </para>
- <para>
- The forth parameter of the <literal>migrate</literal> method
specifies the URI to be used as the
- target of the migration. A URI is only required when the target system
supports multiple
- hypervisors. If ther is only a single hypervisor on the target system then
- the parameter can be set to <literal>None</literal>.
- </para>
- <para>
- The fifth and last parameter of the <literal>migrate</literal>
method specifies the bandwidth in
- MiB/s to be used. If this maximum is not needed then set the parameter to
zero.
- </para>
- <para>
- To migrate a guest domain to a connection that is already open use the
<literal>migrate</literal>
- method. An example follows:
- </para>
- <example>
- <title>Migrate a Domain to an Open Connection</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-22.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
- <para>
- The <literal>migrateToURI</literal> method similar except that
the destination URI is the first
- parameter instead of an existing connection.
- </para>
- <para>
- To migrate a guest domain to a URI use the
<literal>migrateToURI</literal>
- method. An example follows:
- </para>
- <example>
- <title>Migrate a Domain to an Open Connection</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-23.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
- <para>
- To migrate a live guest domain to a URI use the
<literal>migrate</literal> or the
- <literal>migrateToURI</literal> with the
<literal>VIR_MIGRATE_LIVE</literal> flag set. An example follows:
- </para>
- <example>
- <title>Migrate a Domain to an Open Connection</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-24.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
- </section>
+ <para>
+ Migration is the process of taking the image of a guest domain and moving
it somewhere,
+ typically from a hypervisor on one node to a hypervisor on another node.
There are two
+ methods for migration. The <literal>migrate</literal> method
takes an established
+ hypervisor connection, and instructs the domain to migrate to this
connection. The
+ <literal>migrateToURI</literal> method takes a URI specifying
a hypervisor connection,
+ opens the connection, then instructs the domain to migrate to this
connection. Both
+ these methods can be passed a parameter to specify live migration. For
migration to
+ complete successfully, storage needs to be shared between the source and
target hypervisors.
+ </para>
+ <para>
+ The first parameter of the <literal>migrate</literal> method
specifies the connection to be used to the
+ target of the migration. This parameter is required.
+ </para>
+ <para>
+ The second parameter of the <literal>migrate</literal> method
specifies a set of flags that control how the
+ migration takes place over the connection. If no flags are needed then
the parameter should
+ be set to zero.
+ </para>
+ <para>
+ The third parameter of the <literal>migrate</literal> method
specifies a new name for the domain on the target
+ of the migration. Not all hypervisors support this operation. If no
rename of the domain is
+ required then the parameter should be set to
<literal>None</literal>.
+ </para>
+ <para>
+ The fourth parameter of the <literal>migrate</literal> method
specifies the URI to be used as the
+ target of the migration. A URI is only required when the target system
supports multiple
+ hypervisors. If there is only a single hypervisor on the target system
then
+ the parameter can be set to <literal>None</literal>.
+ </para>
+ <para>
+ The fifth and last parameter of the
<literal>migrate</literal> method specifies the bandwidth in
+ MiB/s to be used. If no bandwidth limit is needed then set the parameter to
zero.
+ </para>
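Taken together, the five positional parameters described above map onto a single call. The following is a minimal sketch, not part of the guide's numbered examples; the `migrate_guest` helper name and the domain and destination connection objects are hypothetical:

```python
def migrate_guest(dom, dest_conn, live=False):
    """Sketch: migrate 'dom' to the already-open connection 'dest_conn'."""
    import libvirt  # imported here so the sketch reads without libvirt installed
    flags = libvirt.VIR_MIGRATE_LIVE if live else 0
    # Positional parameters of migrate(), in order:
    #   dest_conn - required target connection
    #   flags     - zero when no flags are needed
    #   None      - keep the domain name unchanged on the target
    #   None      - no target URI needed for a single-hypervisor target
    #   0         - no bandwidth cap in MiB/s
    return dom.migrate(dest_conn, flags, None, None, 0)
```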
+ <para>
+ To migrate a guest domain to a connection that is already open use the
<literal>migrate</literal>
+ method. An example follows:
+ </para>
+ <example>
+ <title>Migrate a Domain to an Open Connection</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-22.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
+ <para>
+ The <literal>migrateToURI</literal> method is similar except
that the destination URI is the first
+ parameter instead of an existing connection.
+ </para>
+ <para>
+ To migrate a guest domain to a URI use the
<literal>migrateToURI</literal>
+ method. An example follows:
+ </para>
+ <example>
+ <title>Migrate a Domain to a URI</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-23.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
+ <para>
+ To live migrate a guest domain use the
<literal>migrate</literal> or the
+ <literal>migrateToURI</literal> method with the
<literal>VIR_MIGRATE_LIVE</literal> flag set. An example follows:
+ </para>
+ <example>
+ <title>Live Migration of a Domain</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-24.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
+ </section>
- <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Autostart">
- <title>Autostart</title>
+ <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Lifecycle-Autostart">
+ <title>Autostart</title>
- <para>
- A guest domain can be configured to autostart on a particular hypervisor,
either by the
- hypervisor itself or libvirt. In combination with managed save, this allows
the operating
- system on a guest domain to withstand host reboots without ever considering
itself to have
- rebooted. When libvirt restarts, the guest domain will be automatically
restored. This is
- handled by an API separate to regular save and restore, because paths must be
known to
- libvirt without user input.
- </para>
- <example>
- <title>Set Autostart for a Domain</title>
- <programlisting language="Python"><xi:include
href="extras/Domains-Example-25.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
- </example>
- </section>
+ <para>
+ A guest domain can be configured to autostart on a particular hypervisor,
either by the
+ hypervisor itself or libvirt. In combination with managed save, this
allows the operating
+ system on a guest domain to withstand host reboots without ever
considering itself to have
+ rebooted. When libvirt restarts, the guest domain will be automatically
restored. This is
+ handled by an API separate from regular save and restore, because paths
must be known to
+ libvirt without user input.
+ </para>
+ <example>
+ <title>Set Autostart for a Domain</title>
+ <programlisting language="Python"><xi:include
href="extras/Domains-Example-25.py" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
+ </section>
</section>
<section
id="libvirt_application_development_guide_using_python-Guest_Domains-Domain_Config">
- <title>Domain configuration</title>
-
- <para>
- Domains are defined in libvirt using XML. Everything related only to the domain,
such as memory and CPU, is defined in the domain XML. The domain XML format is specified
at <ulink
url="http://libvirt.org/formatdomain.html">http://libvirt.or...;.
This can be accessed locally in
<filename>/usr/share/doc/libvirt-devel-version/</filename> if your system has
the <package>libvirt-devel</package> package installed.
- </para>
-
- <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Domain_Config-Boot">
- <title>Boot modes</title>
+ <title>Domain Configuration</title>
<para>
- TBD
-
+ Domains are defined in libvirt using XML. Everything related only to the
domain, such as
+ memory and CPU, is defined in the domain XML. The domain XML format is
specified at
+ <ulink
url="http://libvirt.org/formatdomain.html">http://libvirt.or...;.
+ This can be accessed locally in
<filename>/usr/share/doc/libvirt-devel-version/</filename>
+ if your system has the <package>libvirt-devel</package> package
installed.
</para>
- </section>
+ <section
id="libvirt_application_development_guide_using_python-Guest_Domains-Domain_Config-Boot">
+ <title>Boot Modes</title>
+
+ <para>
+ Booting via the BIOS is available for hypervisors supporting full
virtualization. In this
+ case the BIOS has a boot order priority (floppy, hard disk, CD-ROM,
network) determining where
+ to find the boot image.
+ </para>
+ <example>
+ <title>Setting the Boot Mode</title>
+ <programlisting language="XML"><xi:include
href="extras/Domains-Example-26.xml" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" /></programlisting>
+ </example>
+
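The boot order held in the <literal>os</literal> element can also be read back out of a domain's XML with the Python standard library. A sketch follows, using a literal fragment that mirrors the example above; with a real domain the document would instead come from the <literal>XMLDesc</literal> method, and the `boot_order` helper name is hypothetical:

```python
import xml.etree.ElementTree as ET

# A cut-down <os> element mirroring the boot-mode example; in practice
# this fragment would be located inside the XML from dom.XMLDesc().
OS_XML = """
<os>
  <type>hvm</type>
  <boot dev='hd'/>
  <boot dev='cdrom'/>
  <bootmenu enable='yes' timeout='3000'/>
</os>
"""

def boot_order(os_xml):
    """Return the BIOS boot device priority as a list of dev names."""
    root = ET.fromstring(os_xml)
    # Each <boot dev='...'/> child contributes one entry, in document order.
    return [boot.get('dev') for boot in root.findall('boot')]

print(boot_order(OS_XML))  # ['hd', 'cdrom']
```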
+ </section>
<section
id="libvirt_application_development_guide_using_python-Guest_Domains-Domain_Config-Memory_CPU">
<title>Memory / CPU resources</title>
diff --git a/en-US/extras/Domains-Example-26.xml b/en-US/extras/Domains-Example-26.xml
new file mode 100644
index 0000000..75b916b
--- /dev/null
+++ b/en-US/extras/Domains-Example-26.xml
@@ -0,0 +1,12 @@
+ ...
+ <os>
+ <type>hvm</type>
+ <loader readonly='yes'
type='rom'>/usr/lib/xen/boot/hvmloader</loader>
+ <nvram
template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/nvram/guest_VARS.fd</nvram>
+ <boot dev='hd'/>
+ <boot dev='cdrom'/>
+ <bootmenu enable='yes' timeout='3000'/>
+ <smbios mode='sysinfo'/>
+ <bios useserial='yes' rebootTimeout='0'/>
+ </os>
+ ...