[virtualization-guide] iscsi section update

Chris Curran tsagadai at fedoraproject.org
Fri Jul 2 04:36:34 UTC 2010


commit 2e1b88100d0a8554bde0ce234f1220d4fb6c4a97
Author: Chris Curran <ccurran at redhat.com>
Date:   Fri Jul 2 14:36:24 2010 +1000

    iscsi section update

 en-US/Colophon.xml               |    2 +-
 en-US/KVM_Clock.xml              |    6 +-
 en-US/KVM_Win_PV.xml             |    2 +-
 en-US/Misc_Administration.xml    |   13 ++--
 en-US/PCI.xml                    |    6 +-
 en-US/Reference_virsh.xml        |    8 +-
 en-US/Reference_virt-manager.xml |    4 +-
 en-US/Storage_Pools_iSCSI.xml    |  152 +++++++++++++++++++++++++++++++++++---
 en-US/Troubleshooting.xml        |    6 +-
 9 files changed, 167 insertions(+), 32 deletions(-)
---
diff --git a/en-US/Colophon.xml b/en-US/Colophon.xml
index 843ea79..6e0f369 100644
--- a/en-US/Colophon.xml
+++ b/en-US/Colophon.xml
@@ -8,7 +8,7 @@
 		This manual was written in the DocBook XML v4.3 format.
 	</para>
 	<para>
-		This book is based on the work of Jan Mark Holzer and Chris Curran.
+		This book is based on the original work of Jan Mark Holzer, Justin Clift and Chris Curran.
 	</para>
 	<para>
 		Other writing credits go to:
diff --git a/en-US/KVM_Clock.xml b/en-US/KVM_Clock.xml
index a6addc0..dec299e 100644
--- a/en-US/KVM_Clock.xml
+++ b/en-US/KVM_Clock.xml
@@ -77,10 +77,10 @@ http://cleo.tlv.redhat.com/qumrawiki/KVM/TimeKeeping#head-a03434101d51798c6d005d
 	<para>
 		If the CPU lacks the <computeroutput>constant_tsc</computeroutput> bit, disable all power management features (<ulink url="https://bugzilla.redhat.com/show_bug.cgi?id=513138">BZ#513138</ulink>). Each system has several timers it uses to keep time. The TSC can be unstable on the host, sometimes caused by <command>cpufreq</command> changes, deep C states, or migration to a host with a faster TSC. Deep C sleep states can stop the TSC. To prevent the kernel from using deep C states, append <command>processor.max_cstate=1</command> to the kernel boot options in the <filename>grub.conf</filename> file on the host:
 	</para>
-<!--FIXME-->
-	<screen>term Fedora (2.6.18-159.el5)
+
+	<screen>title Red Hat Enterprise Linux (2.6.32-36.x86-64)
         root (hd0,0)
-	kernel /vmlinuz-2.6.18-159.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet <emphasis>processor.max_cstate=1</emphasis>
+	kernel /vmlinuz-2.6.32-36.x86-64 ro root=/dev/VolGroup00/LogVol00 rhgb quiet <emphasis>processor.max_cstate=1</emphasis>
 	</screen>
 	<para>
 		Disable <command>cpufreq</command> (only necessary on hosts without the <command>constant_tsc</command> bit) by editing the <filename>/etc/sysconfig/cpuspeed</filename> configuration file and changing the <command>MIN_SPEED</command> and <command>MAX_SPEED</command> variables to the highest frequency available. Valid limits can be found in the <filename>/sys/devices/system/cpu/cpu*/cpufreq/scaling_available_frequencies</filename> files.
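+	<para>
+		For example, if the highest frequency listed in the <filename>scaling_available_frequencies</filename> files is <computeroutput>2400000</computeroutput> (this value varies by CPU), set both variables to that frequency:
+	</para>
+	<screen>MIN_SPEED=2400000
+MAX_SPEED=2400000</screen>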
diff --git a/en-US/KVM_Win_PV.xml b/en-US/KVM_Win_PV.xml
index f804e3c..47c3ad5 100644
--- a/en-US/KVM_Win_PV.xml
+++ b/en-US/KVM_Win_PV.xml
@@ -11,7 +11,7 @@
 	</para>
 	<para>
 		The KVM para-virtualized drivers are automatically loaded and installed on the following:</para>
-		<itemizedlist><listitem><para>Any 2.6.27 or newer kernel. </para></listitem>
+		<itemizedlist><listitem><para>Any kernel version 2.6.27 or newer. </para></listitem>
 		<listitem><para>Newer Ubuntu, Fedora, CentOS, Red Hat Enterprise Linux.</para></listitem></itemizedlist>
 		<para>
 		Those versions of Linux detect and install the drivers so additional installation steps are not required.
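+		<para>
+		For example, whether the para-virtualized drivers are loaded on a Linux guest can be verified with the <command>lsmod</command> command (the module names vary with the kernel configuration):
+		</para>
+		<screen># lsmod | grep virtio</screen>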
diff --git a/en-US/Misc_Administration.xml b/en-US/Misc_Administration.xml
index 5157211..6dd94a7 100644
--- a/en-US/Misc_Administration.xml
+++ b/en-US/Misc_Administration.xml
@@ -506,17 +506,18 @@ exec  gnome-session
 	</section>
 <section>
 		<title>Gracefully shutting down guests</title>
-		<para>Virtualized Red Hat Enterprise Linux 6 guests installed with the <option>Minimal installation</option> does not install the <package>acpid</package> package.</para>
-		<para>Without the <package>acpid</package> package the Red Hat Enterprise Linux 6 will not shut down when the <command>virsh shutdown</command> command is used to gracefully shut down the virtualized guest.</para>
-		<para>Using <command>virsh shutdown</command> is easier for system administration. Without graceful shut down with the <command>virsh shutdown</command> command a system administrator must log into a virtualized guest manually or send a <keycap>Ctrl</keycap>-<keycap>Alt</keycap>-<keycap>Del</keycap> key combination to each guest.</para>
+		<para>Installing virtualized Red Hat Enterprise Linux 6 guests with the <option>Minimal installation</option> option does not install the <package>acpid</package> package.</para>
+		<para>Without the <package>acpid</package> package, the Red Hat Enterprise Linux 6 guest does not shut down when the <command>virsh shutdown</command> command is executed. The <command>virsh shutdown</command> command is designed to gracefully shut down virtualized guests.</para>
+		<para>Using <command>virsh shutdown</command> is easier and safer for system administration. Without graceful shutdown with the <command>virsh shutdown</command> command, a system administrator must log into each virtualized guest manually or send the <keycap>Ctrl</keycap>-<keycap>Alt</keycap>-<keycap>Del</keycap> key combination to each guest.</para>
 		<note>
 			<title>Other virtualized operating systems</title>
-			<para>Other virtualized guests may also be affected by this issue. The <command>virsh shutdown</command> command requires that the guest operating system is configured to handle ACPI shut down requests. Configure the guest to accept ACPI shut down requests for <command>virsh shutdown</command> to work effectively and properly.</para>
+			<para>Other virtualized operating systems may be affected by this issue. The <command>virsh shutdown</command> command requires that the guest operating system is configured to handle ACPI shut down requests. Many operating systems require additional configuration on the guest operating system to accept ACPI shut down requests.</para>
 		</note>
 		<procedure><title>Workaround for Red Hat Enterprise Linux 6</title>
 			<step>
 				<title>Install the acpid package</title>
-				<para>To work around this issue, install the <package>acpid</package> package on the guest:</para>
+				<para>The <command>acpid</command> service listens for and processes ACPI requests.</para>
+				<para>Log into the guest and install the <package>acpid</package> package on the guest:</para>
 				<screen># yum install acpid</screen>
 			</step>
 			<step>
@@ -526,7 +527,7 @@ exec  gnome-session
 # service acpid start</screen>
 			</step>
 		</procedure>
-		<para>The guest is now configured to shut down when the host uses the <command>virsh shutdown</command> command.</para>
+		<para>The guest is now configured to shut down when the <command>virsh shutdown</command> command is used.</para>
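+		<para>For example, to gracefully shut down a guest named <replaceable>rhel6</replaceable> from the host (the guest name here is an example):</para>
+		<screen># virsh shutdown <replaceable>rhel6</replaceable>
+Domain rhel6 is being shutdown</screen>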
 	</section>
 	
 	<!--<section id="sect-Virtualization-Tips_and_tricks-Cloning_guest_configuration_files">
diff --git a/en-US/PCI.xml b/en-US/PCI.xml
index cfef288..bb8d8c9 100644
--- a/en-US/PCI.xml
+++ b/en-US/PCI.xml
@@ -31,10 +31,10 @@
 timeout=5
 splashimage=(hd0,0)/grub/splash.xpm.gz
 hiddenmenu
-title Fedora Server (2.6.18-190.el5)
+title Red Hat Enterprise Linux Server (2.6.32-36.x86-64)
         root (hd0,0)
-        kernel /vmlinuz-2.6.18-190.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet <emphasis>intel_iommu=on</emphasis>
-        initrd /initrd-2.6.18-190.el5.img</screen>
+        kernel /vmlinuz-2.6.32-36.x86-64 ro root=/dev/VolGroup00/LogVol00 rhgb quiet <emphasis>intel_iommu=on</emphasis>
+        initrd /initrd-2.6.32-36.x86-64.img</screen>
 		
 		</step>
 		<step>
diff --git a/en-US/Reference_virsh.xml b/en-US/Reference_virsh.xml
index 7c724e5..e9b3c51 100644
--- a/en-US/Reference_virsh.xml
+++ b/en-US/Reference_virsh.xml
@@ -2,10 +2,12 @@
 <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
 ]>
 <chapter id="chap-Virtualization-Managing_guests_with_virsh">
-	<title>Managing guests with virsh</title>
-	
+	<title>virsh reference guide</title>
+	<para>
+		This chapter provides a comprehensive reference for the <command>virsh</command> command provided by the <package>libvirt</package> package, covering its various functions and parameters, with examples of common tasks.<!--FIXME RHEL-->
+	</para>
 	<para>
-		<command>virsh</command> is a command line interface tool for managing guests and the hypervisor.
+		<command>virsh</command> is a command line interface tool for managing virtualized guests and the hypervisor.
 	</para>
 	<para>
 		The <command>virsh</command> tool is built on the <command>libvirt</command> management API and operates as an alternative to the <command>xm</command> command and the graphical guest manager (<command>virt-manager</command>). <command>virsh</command> can be used in read-only mode by unprivileged users. You can use <command>virsh</command> to execute scripts for the guest machines.
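+	<para>
+		For example, an unprivileged user can list the guests on a host by running <command>virsh</command> in read-only mode:
+	</para>
+	<screen>$ virsh --readonly list --all</screen>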
diff --git a/en-US/Reference_virt-manager.xml b/en-US/Reference_virt-manager.xml
index 993163a..7ed946a 100644
--- a/en-US/Reference_virt-manager.xml
+++ b/en-US/Reference_virt-manager.xml
@@ -2,9 +2,9 @@
 <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
 ]>
 <chapter id="chap-Virtualization-Managing_guests_with_the_Virtual_Machine_Manager_virt_manager">
-	<title>Managing guests with the Virtual Machine Manager (virt-manager)</title>
+	<title>virt-manager reference guide</title>
 	<para>
-		This section describes the Virtual Machine Manager (<command>virt-manager</command>) windows, dialog boxes, and various GUI controls.
+		This chapter provides a comprehensive reference for the Virtual Machine Manager (<command>virt-manager</command>) package, covering the dialog boxes, windows, and various graphical controls for libvirt on Fedora.<!--FIXME RHEL-->
 	</para>
 	<para>
 		<command>virt-manager</command> provides a graphical view of hypervisors and guests on your system and on remote machines. You can use <command>virt-manager</command> to define virtualized guests. <command>virt-manager</command> can perform virtualization management tasks, including:
diff --git a/en-US/Storage_Pools_iSCSI.xml b/en-US/Storage_Pools_iSCSI.xml
index 2b5e765..5191a21 100644
--- a/en-US/Storage_Pools_iSCSI.xml
+++ b/en-US/Storage_Pools_iSCSI.xml
@@ -12,7 +12,7 @@
 		<para>Fedora provides a tool for creating software-backed iSCSI targets, the <package>scsi-target-utils</package> package.</para>
 		<procedure>
 			<title>Creating an iSCSI target</title>
-							
+			
 			
 			<step>
 				<title>Install the required packages</title>
@@ -43,7 +43,9 @@
 					<step>
 						<title>Create a LVM logical volume</title>
 						<para>Create a logical volume named <replaceable>virtimage1</replaceable> on the <replaceable>virtstore</replaceable> volume group with a size of 20GB using the <command>lvcreate</command> command.</para>
-						<screen># lvcreate --size 20G -n <replaceable>virtimage1</replaceable> <replaceable>virtstore</replaceable></screen>
+						<screen># lvcreate --size 20G -n <replaceable>virtimage1</replaceable> <replaceable>virtstore</replaceable></screen>
 						<para>The new logical volume, <replaceable>virtimage1</replaceable>, is ready to use for iSCSI.</para>
 					</step>
 				</substeps>
@@ -178,7 +180,7 @@ scsiadm: Max file limits 1024 1024
 
 Logging in to [iface: default, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.0.1,3260]
 Login to [iface: default, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.0.1,3260] successful.</screen>
-<para>Detach the device.</para>
+				<para>Detach the device.</para>
 				<screen># iscsiadm -d2 -m node --logout
 scsiadm: Max file limits 1024 1024
 
@@ -192,9 +194,10 @@ Logout of [sid: 2, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.
 	<section>
 		<title>Adding an iSCSI target to virt-manager</title>
 		<para>This procedure covers creating a storage pool with an iSCSI target in <command>virt-manager</command>.</para>
-		<!--reusable--><procedure>
+		<!--reusable-->
+		<procedure>
 			<title>Adding an iSCSI device to virt-manager</title>
-			<step> 
+			<step>
 				<title>Open the host storage tab</title>
 				<para>Open the <guilabel>Storage</guilabel> tab in the <guilabel>Host Details</guilabel> window.</para>
 				<substeps>
@@ -222,7 +225,7 @@ Logout of [sid: 2, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.
 					</step>
 				</substeps>
 			</step>
-
+			
 			<step>
 				<title>Add a new pool (part 1)</title>
 				<para>
@@ -237,7 +240,7 @@ Logout of [sid: 2, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.
 					Choose a name for the storage pool, change the Type to iscsi, and press <guibutton>Forward</guibutton> to continue.
 				</para>
 			</step>
-
+			
 			<step>
 				<title>Add a new pool (part 2)</title>
 				<para>
@@ -260,10 +263,139 @@ Logout of [sid: 2, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.
 			</step>
 		</procedure>
 	</section>
-
+	
 	<section>
-		<title>Adding an iSCSI target with virsh</title>
-		<para>Placeholder</para>
+		<title>Creating an iSCSI-based storage pool with virsh</title>
+		<procedure>
+			<step>
+				<title>
+					Create the storage pool definition
+				</title>
+				<para>The example below is an XML definition file for an iSCSI-based storage pool.</para>
+				<variablelist>
+					<varlistentry>
+						<term>&lt;name&gt;trial1&lt;/name&gt;</term>
+						<listitem>
+							<para>The <command>name</command> element sets the name for the storage pool. The name is required and must be unique.</para>
+						</listitem>
+					</varlistentry>
+					<varlistentry>
+						<term>&lt;uuid&gt;afcc5367-6770-e151-bcb3-847bc36c5e28&lt;/uuid&gt;</term>
+						<listitem>
+							<para>The optional <command>uuid</command> element provides a globally unique identifier for the storage pool. The <command>uuid</command> element can contain any valid UUID or an existing UUID for the storage device. If a UUID is not provided, <command>virsh</command> generates a UUID for the storage pool.</para>
+						</listitem>
+					</varlistentry>
+					<varlistentry>
+						<term>&lt;host name='server1.example.com'/&gt;</term>
+						<listitem>
+							<para>The <command>host</command> element with the <parameter>name</parameter> attribute specifies the hostname of the iSCSI server. The <command>host</command> element can contain a <parameter>port</parameter> attribute for a non-standard iSCSI protocol port number.</para>
+						</listitem>
+					</varlistentry>
+					<varlistentry>
+						<term>&lt;device path='<replaceable>iqn.2010-05.com.example.server1:trial1</replaceable>'/&gt;</term>
+						<listitem>
+							<para>The <command>device</command> element <parameter>path</parameter> attribute must contain the IQN of the iSCSI target.</para>
+						</listitem>
+					</varlistentry>
+				</variablelist>
+				<para>With a text editor, create an XML file for the iSCSI storage pool. This example uses an XML definition file named <filename>trial1.xml</filename>.</para>
+				<screen>&lt;pool type='iscsi'&gt;
+  &lt;name&gt;trial1&lt;/name&gt;
+  &lt;uuid&gt;afcc5367-6770-e151-bcb3-847bc36c5e28&lt;/uuid&gt;
+  &lt;source&gt;
+    &lt;host name='server1.example.com'/&gt;
+    &lt;device path='iqn.2010-05.com.example.server1:trial1'/&gt;
+  &lt;/source&gt;
+  &lt;target&gt;
+    &lt;path&gt;/dev/disk/by-path&lt;/path&gt;
+   &lt;/target&gt;
+&lt;/pool&gt;</screen>
+				
+				<para>Use the <command>pool-define</command> command to define the storage pool without starting it.</para>
+<screen># virsh pool-define trial1.xml
+Pool trial1 defined
+</screen>
+			</step>
+			<step>
+				<title><emphasis role="bold">Alternative step:</emphasis> Use pool-define-as from the command line</title>
+				<para>
+					Use the <command>virsh pool-define-as</command> command to define a new storage pool from the command line. An iSCSI-based storage pool requires the host name of the iSCSI server, the device path (the target IQN) and the target path:
+				</para>
+				<screen># virsh pool-define-as <replaceable>trial1</replaceable> iscsi <replaceable>server1.example.com</replaceable> - <replaceable>iqn.2010-05.com.example.server1:trial1</replaceable> - "<replaceable>/dev/disk/by-path</replaceable>"
+Pool trial1 defined</screen>
+			</step>
+			<step>
+				<title>Verify the storage pool is listed</title>
+				<para>
+					Verify that the storage pool object is created correctly and that the state reports as <computeroutput>inactive</computeroutput>.
+				</para>
+				<screen># virsh pool-list --all
+Name                 State      Autostart 
+-----------------------------------------
+default              active     yes       
+<replaceable>trial1</replaceable>               inactive   no   </screen>
+			</step>
+			
+			<step>
+				<title>Start the storage pool</title>
+				<para>
+					Start the storage pool with the <command>pool-start</command> command. The <command>pool-start</command> command enables the storage pool, allowing it to be used for volumes and guests.
+				</para>
+				<screen># virsh pool-start <replaceable>trial1</replaceable>
+Pool trial1 started
+# virsh pool-list --all
+Name                 State      Autostart 
+-----------------------------------------
+default              active     yes       
+<replaceable>trial1</replaceable>               active     no        
+</screen>
+			</step>
+			
+			<step>
+				<title>Turn on autostart</title>
+				<para>
+					Turn on <parameter>autostart</parameter> for the storage pool. Autostart configures the <systemitem class="daemon">libvirtd</systemitem> service to start the storage pool when the service starts.
+				</para>
+				<screen># virsh pool-autostart <replaceable>trial1</replaceable>
+Pool trial1 marked as autostarted</screen>
+				<para>Verify that the <replaceable>trial1</replaceable> pool has autostart set:</para>
+				<screen># virsh pool-list --all
+Name                 State      Autostart 
+-----------------------------------------
+default              active     yes       
+<replaceable>trial1</replaceable>               active     yes       
+</screen>
+			</step>
+			
+			<step>
+				<title>Verify the storage pool configuration</title>
+				<para>
+					Verify that the storage pool was created correctly, that the sizes are reported correctly, and that the state reports as <computeroutput>running</computeroutput>.
+				</para>
+				<screen># virsh pool-info <replaceable>trial1</replaceable>
+Name:           <replaceable>trial1</replaceable>
+UUID:           afcc5367-6770-e151-bcb3-847bc36c5e28
+State:          running
+Persistent:     unknown
+Autostart:      yes
+Capacity:       100.31 GB
+Allocation:     0.00
+Available:      100.31 GB
+</screen>
+			</step>
+		</procedure>
+		<para>An iSCSI-based storage pool is now available.</para>
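+		<para>The volumes (LUNs) presented by the iSCSI target can be listed with the <command>vol-list</command> command; the names and paths reported depend on the target configuration:</para>
+		<screen># virsh vol-list <replaceable>trial1</replaceable></screen>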
 	</section>
 </section>
 
diff --git a/en-US/Troubleshooting.xml b/en-US/Troubleshooting.xml
index b85f553..58c75e3 100644
--- a/en-US/Troubleshooting.xml
+++ b/en-US/Troubleshooting.xml
@@ -214,11 +214,11 @@ topology-change-timer  0.00                gc-timer              0.02
 				To output kernel information from a fully virtualized Linux guest into the domain, modify the <filename>/boot/grub/grub.conf</filename> file by inserting the line <parameter>console=tty0 console=ttyS0,115200</parameter>.
 			</para>
 			
-			<screen>title Fedora Server (2.6.18-92.el5)
+			<screen>title Red Hat Enterprise Linux Server (2.6.32-36.x86-64)
 	root (hd0,0)
-	kernel /vmlinuz-2.6.18-92.el5 ro root=/dev/volgroup00/logvol00
+	kernel /vmlinuz-2.6.32-36.x86-64 ro root=/dev/volgroup00/logvol00
 	<parameter>console=tty0 console=ttyS0,115200</parameter>
-	initrd /initrd-2.6.18-92.el5.img
+	initrd /initrd-2.6.32-36.x86-64.img
 </screen>
 			<para>
 				Reboot the guest.

