[virtualization-guide] editing LVM content

Chris Curran tsagadai at fedoraproject.org
Wed Jun 23 06:56:12 UTC 2010


commit f5ce72b1f46717e4a6833ffe0d90b5ebac51e100
Author: Chris Curran <ccurran at redhat.com>
Date:   Wed Jun 23 16:48:09 2010 +1000

    editing LVM content

 en-US/KVM_Migration.xml            |    4 +-
 en-US/Misc_Administration.xml      |    2 +-
 en-US/PCI.xml                      |    2 +-
 en-US/Storage_Pools_Directory.xml  |    6 +-
 en-US/Storage_Pools_LVM.xml        |  168 ++++++++++++++++++++++++++++--------
 en-US/Storage_Pools_Partitions.xml |   48 ++---------
 en-US/libvirt_Affinity.xml         |    2 +-
 7 files changed, 146 insertions(+), 86 deletions(-)
---
diff --git a/en-US/KVM_Migration.xml b/en-US/KVM_Migration.xml
index 566fb48..72c253b 100644
--- a/en-US/KVM_Migration.xml
+++ b/en-US/KVM_Migration.xml
@@ -355,7 +355,7 @@ Id Name                 State
 			</step>
 			<step>
 				<para>
-					Add a new storage pool. In the lower left corner of the window, click the <guibutton>+</guibutton> button. The Add a New Storage Pool window appears.
+					Add a new storage pool. In the lower left corner of the window, click the <guibutton>+</guibutton> button. The <guilabel>Add a New Storage Pool</guilabel> window appears.
 				</para>
 				<para>
 					Enter the following details:
@@ -436,7 +436,7 @@ Id Name                 State
 					</imageobject>
 				</mediaobject>
 				<para>
-					The Virtual Machine window appears.
+					The <guilabel>Virtual Machine</guilabel> window appears.
 				</para>
 				<mediaobject>
 					<imageobject>
diff --git a/en-US/Misc_Administration.xml b/en-US/Misc_Administration.xml
index bd47c0d..5157211 100644
--- a/en-US/Misc_Administration.xml
+++ b/en-US/Misc_Administration.xml
@@ -456,7 +456,7 @@ Starting vsftpd for vsftpd:                  [  OK  ]
 	<section id="sect-Virtualization-Tips_and_tricks-Disable_SMART_disk_monitoring_for_guests">
 		<title>Disable SMART disk monitoring for guests</title>
 		<para>
-			SMART disk monitoring can be disabled as we are running on virtual disks and the physical storage is managed by the host. 
+			SMART disk monitoring can be safely disabled for guests: guests use virtual disks, and the physical storage devices are managed by the host. 
 		</para>
 <screen># service smartd stop
 # chkconfig --del smartd
diff --git a/en-US/PCI.xml b/en-US/PCI.xml
index 7929a16..cfef288 100644
--- a/en-US/PCI.xml
+++ b/en-US/PCI.xml
@@ -137,7 +137,7 @@ function='0x7'</screen>
 			
 			
 			<step>
-				<para>Once the guest system is configured to use the PCI address, we need to tell the host system to stop using it. The <computeroutput>ehci</computeroutput> driver is loaded by default for the USB PCI controller.</para>
+				<para>Once the guest system is configured to use the PCI address, the host system must be configured to stop using the device. The <computeroutput>ehci</computeroutput> driver is loaded by default for the USB PCI controller.</para>
 				
 				<screen>$ readlink /sys/bus/pci/devices/0000\:00\:1d.7/driver
 ../../../bus/pci/drivers/ehci_hcd</screen>
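+				<para>As an illustrative sketch only (the following steps of this guide may use a different method), a driver can be unbound from a PCI device through <filename>sysfs</filename>:</para>
+				<screen># echo -n 0000:00:1d.7 > /sys/bus/pci/drivers/ehci_hcd/unbind</screen>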
diff --git a/en-US/Storage_Pools_Directory.xml b/en-US/Storage_Pools_Directory.xml
index 686acea..642f3e5 100644
--- a/en-US/Storage_Pools_Directory.xml
+++ b/en-US/Storage_Pools_Directory.xml
@@ -148,15 +148,15 @@ dr-xr-xr-x. 26 root root 4096 May 28 13:57 ..
 				<itemizedlist>
 					<listitem>
 						<para>
-							The <command>name</command> of the Storage Pool. All further virsh command we use for this storage pool will use this name to refer to it.
+							The <command>name</command> of the storage pool. 
 						</para>
 						<para>
-							This example uses the name <replaceable>guest_images_dir</replaceable>.
+							This example uses the name <replaceable>guest_images_dir</replaceable>. All further <command>virsh</command> commands in this example use this name.
 						</para>
 					</listitem>
 					<listitem>
 						<para>
-							The <command>path</command> to a file system directory for storage virtual guest image files . This directory does not need to already exist, and it's generally better to allow <command>virsh</command> to create it in the next step.
+							The <command>path</command> to a file system directory for storing virtualized guest image files. If this directory does not exist, <command>virsh</command> will create it.
 						</para>
 						<para>
 							This example uses the <replaceable>/guest_images</replaceable> directory.
diff --git a/en-US/Storage_Pools_LVM.xml b/en-US/Storage_Pools_LVM.xml
index 51c2265..2ec6d88 100644
--- a/en-US/Storage_Pools_LVM.xml
+++ b/en-US/Storage_Pools_LVM.xml
@@ -4,20 +4,116 @@
 <section id="sect-Virtualization-Storage_Pools-Creating-LVM">
 	<title>LVM Volume Groups</title>
 	<para>
-		This section covers using LVM Volume Groups to store virtualized guests.
+		This section covers using LVM volume groups as storage pools.
 	</para>
+	<para>LVM-based storage pools provide the flexibility of LVM volume management for guest storage.</para>
+	<warning><title>Warning</title>
+			<para>
+				LVM-based storage pools require a full disk partition. This partition will be formatted and all data presently stored on the disk device will be erased. Back up the storage device before commencing the procedure.
+			</para>
+			
+		</warning>
 
-	<warning>
-		<para>
-			The device used for this procedure must not have data on it you want to keep.
-		</para>
-		<para>
-			This procedure WILL destroy the existing data on the device!
-		</para>
-	</warning>
-
+	<section><title>Creating an LVM-based storage pool with virt-manager</title>
+	<para>LVM-based storage pools can use existing LVM volume groups or create new LVM volume groups on a blank partition.</para>
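+	<para>To see which LVM volume groups already exist on the host, use the <command>vgs</command> command. The output below is illustrative; volume group names and sizes will vary between systems:</para>
+	<screen># vgs
+VG       #PV #LV #SN Attr   VSize   VFree
+vg_host    1   3   0 wz--n- 465.27g      0
+</screen>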
+	
 	<procedure>
-		<title>Creating an LVM storage pool using virt-manager</title>
+		
+		<step><title>Optional: Create new partition for LVM volumes</title>
+		<para>These steps describe how to create a new partition and LVM volume group on a new hard disk drive. </para><warning><title>Warning</title><para>This procedure will remove all data from the selected storage device.</para></warning>
+		
+		<substeps>
+		<step><title>Create a new partition</title>
+		
+		<para>Use the <command>fdisk</command> command to create a new disk partition from the command line. The following example creates a new partition that uses the entire disk on the storage device <computeroutput>/dev/sdb</computeroutput>.</para>			
+					<screen># fdisk /dev/sdb
+Command (m for help):
+</screen>
+						<para>
+							Press <parameter>n</parameter> for a new partition.
+						</para>
+						
+						
+					</step>
+					<step>
+						<para>
+							Press <parameter>p</parameter> for a primary partition.
+						</para>
+						
+						<screen>Command action
+   e   extended
+   p   primary partition (1-4)
+</screen>
+					</step>
+					<step>
+						<para>
+							Choose an available partition number. In this example the first partition is chosen by entering <parameter>1</parameter>.
+						</para>
+						
+						<screen>Partition number (1-4): <parameter>1</parameter>
+</screen>
+					</step>
+					<step>
+						<para>
+							Accept the default first cylinder by pressing <parameter>Enter</parameter>.
+						</para>
+						
+						<screen>First cylinder (1-400, default 1):
+</screen>
+					</step>
+					<step>
+						<para>
+							Select the size of the partition. In this example the entire disk is allocated by pressing <parameter>Enter</parameter>.
+						</para>
+						
+						<screen>Last cylinder or +size or +sizeM or +sizeK (2-400, default 400):
+</screen>
+					</step>
+					<step>
+						<para>
+							Set the type of partition by pressing <parameter>t</parameter>.
+						</para>
+						
+						<screen>Command (m for help): <parameter>t</parameter>
+</screen>
+					</step>
+					<step>
+						<para>
+							Choose the partition you created in the previous steps. In this example, the partition number is <parameter>1</parameter>.
+						</para>
+						
+						<screen>Partition number (1-4): <parameter>1</parameter>
+</screen>
+					</step>
+					<step>
+						<para>
+							Enter <parameter>8e</parameter> for a Linux LVM partition.
+						</para>
+						
+						<screen>Hex code (type L to list codes): <parameter>8e</parameter>
+</screen>
+					</step>
+					<step>
+						<para>
+							Write the changes to disk and quit.
+						</para>
+						
+						<screen>Command (m for help): <parameter>w</parameter> 
+Command (m for help): <parameter>q</parameter>
+</screen>
+					</step>
+					<step><title>Create a new LVM volume group</title>
+						<para>
+							Create a new LVM volume group with the <command>vgcreate</command> command. This example creates a volume group named <replaceable>guest_images_lvm</replaceable>.
+						</para>
+						
+						<screen># vgcreate <replaceable>guest_images_lvm</replaceable> /dev/sdb1
+  Physical volume "/dev/sdb1" successfully created
+  Volume group "<replaceable>guest_images_lvm</replaceable>" successfully created
+</screen>
+					</step>
+				</substeps>
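+				<para>Optionally, verify the new volume group with the <command>vgs</command> command. The size values shown are illustrative:</para>
+				<screen># vgs guest_images_lvm
+VG               #PV #LV #SN Attr   VSize   VFree
+guest_images_lvm   1   0   0 wz--n- 465.76g 465.76g
+</screen>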
+				<para>The new LVM volume group, <replaceable>guest_images_lvm</replaceable>, can now be used for an LVM-based storage pool.</para></step>
 		<step>
 			<title>Open the storage pool settings</title>
 			<substeps>
@@ -47,6 +143,7 @@
 				</step>
 			</substeps>
 		</step>
+		
 		<step>
 			<title>Create the new storage pool</title>
 			<substeps>
@@ -67,56 +164,56 @@
 				</step>
 
 				<step>
-					<title>Complete the Wizard</title>
-					<para>
-						Now fill in the <guibutton>Target Path</guibutton> and <guibutton>Source Path</guibutton> fields, then tick the <guibutton>Build Pool</guibutton> check box.
+					<title>Add a new pool (part 2)</title>
+						<para>
+							Fill in the <guibutton>Target Path</guibutton> and <guibutton>Source Path</guibutton> fields, then set the <guibutton>Build Pool</guibutton> check box as described below.
 					</para>
 					<itemizedlist>
 						<listitem>
 							<para>
-								Leave the <guibutton>Target Path</guibutton> field with the value suggested by <command>virt-manager</command>.  It follows the format of /dev/<replaceable>storage_pool_name</replaceable>, and must not be changed.  This is unlike virsh, which has more flexibility in this area.
+								Use the <guibutton>Target Path</guibutton> field to <emphasis>either</emphasis> select an existing LVM volume group or enter the name of a new volume group. The default format is <computeroutput>/dev/</computeroutput><replaceable>storage_pool_name</replaceable>. 
 							</para>
 							<para>
-								Our example below uses <replaceable>/dev/guest_images_lvm</replaceable>.
+								This example uses a new volume group named <replaceable>/dev/guest_images_lvm</replaceable>.
 							</para>
 						</listitem>
 						<listitem>
 							<para>
-								The <command>Source Path</command> is the device this storage pool will use.
+								The <guibutton>Source Path</guibutton> field is optional if an existing LVM volume group is specified in the <guibutton>Target Path</guibutton> field.
 							</para>
 							<para>
-								We use <replaceable>/dev/sdc</replaceable> in the example below.
+								For new LVM volume groups, enter the location of a blank storage device in the <guibutton>Source Path</guibutton> field. This example uses the device <replaceable>/dev/sdc</replaceable>; see the check after this list.
 							</para>
 						</listitem>
 						<listitem>
 							<para>
-								Ensure the <guibutton>Build Pool</guibutton> check box is ticked.
+								The <guibutton>Build Pool</guibutton> checkbox instructs <command>virt-manager</command> to create a new LVM volume group. If you are using an existing volume group you should not select the <guibutton>Build Pool</guibutton> checkbox.
 							</para>
+							<para>This example is using a blank partition to create a new volume group so the <guibutton>Build Pool</guibutton> checkbox must be selected. </para>
 						</listitem>
 					</itemizedlist>
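+					<para>As a hypothetical check (adjust the device name for your system), confirm that the source device is blank by listing its partition table with <command>fdisk</command> and verifying that no partitions are reported:</para>
+					<screen># fdisk -l /dev/sdc</screen>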
-					<para>
-						Click the <guibutton>Finish</guibutton> button to complete the creation of the storage pool.
-					</para>
 					<mediaobject>
 						<imageobject>
 							<imagedata fileref="images/virt-manager_storage_pools_add_lvm_step_2a_paths_and_pool.png" format="PNG" scalefit="1" />
 						</imageobject>
-					</mediaobject>
-				</step>
-
+					</mediaobject><para>Verify the details and click the <guibutton>Finish</guibutton> button to format the LVM volume group and create the storage pool.</para>
+					</step>
 				<step>
-					<title>Allow the device to be formatted</title>
+					<title>Confirm the device to be formatted</title>
 					<para>
-						A warning message will appear, as proceeding will destroy any existing data on the device.
-					</para>
-					<para>
-						Click the <guilabel>Yes</guilabel> button if you want to proceed. 
+						A warning message appears.
 					</para>
+					
 					<mediaobject>
 						<imageobject>
 							<imagedata fileref="images/virt-manager_storage_pools_add_lvm_step_2b_format_warning.png" format="PNG" scalefit="1" />
 						</imageobject>
 					</mediaobject>
+					<para>
+						Click the <guilabel>Yes</guilabel> button to erase all data on the storage device and create the storage pool.
+					</para>
 				</step>
 			</substeps>
 		</step>
@@ -140,19 +237,16 @@
 		</step>
 
 	</procedure>		
-	
-	<procedure>
-		<title>Creating an LVM storage pool using virsh</title>
+	</section>
+	<section><title>Creating an LVM-based storage pool with virsh</title><procedure>
+		
 		<step>
 			<screen># virsh pool-define-as <replaceable>guest_images_lvm</replaceable> logical - - <replaceable>/dev/sdc</replaceable> <replaceable>libvirt_lvm</replaceable> /dev/<replaceable>libvirt_lvm</replaceable>
 Pool guest_images_lvm defined
-
 # virsh pool-build <replaceable>guest_images_lvm</replaceable>
 Pool guest_images_lvm built
-
 # virsh pool-start <replaceable>guest_images_lvm</replaceable>
 Pool guest_images_lvm started
-
 # vgs
 VG          #PV #LV #SN Attr   VSize   VFree  
 libvirt_lvm   1   0   0 wz--n- 465.76g 465.76g
@@ -197,6 +291,6 @@ vg_host2      1   3   0 wz--n- 465.27g      0
 #
 </screen>
 		</step>
-	</procedure>
+	</procedure></section>
 </section>
 
diff --git a/en-US/Storage_Pools_Partitions.xml b/en-US/Storage_Pools_Partitions.xml
index e7c38dc..c78f868 100644
--- a/en-US/Storage_Pools_Partitions.xml
+++ b/en-US/Storage_Pools_Partitions.xml
@@ -150,41 +150,16 @@
 							The <parameter>name</parameter> parameter determines the name of the storage pool. This example uses the name <replaceable>guest_images_fs</replaceable> in the example below.
 						</para>
 					</listitem></varlistentry>
-					<!--<varlistentry><term> &lt;device path='<replaceable>/dev/sdb</replaceable>'/&gt;</term>
+					<varlistentry><term>device</term>
 					<listitem>
 						<para>
 							The <parameter>device</parameter> parameter with the <parameter>path</parameter> attribute
-							specifies the device path of the storage device. This example uses the device <replaceable>/dev/sdb</replaceable> .
-						</para>
-					</listitem>
-					<listitem>
-						<para>
-						The <replaceable>/path/to/source</replaceable> to the formatted file system. If the directory does not exist, the <command>virsh</command> command can create the directory.
-					</para>
-						<para>
-						The directory <replaceable>/guest_images</replaceable> is used in this example.
-					</para>
-					</listitem>
-					</varlistentry>
-					<varlistentry><term>&lt;target&gt; &lt;path&gt;<replaceable>/dev</replaceable>&lt;/path&gt;</term>
-					<listitem>
-						<para>
-							The file system <parameter>target</parameter> parameter with the <parameter>path</parameter> sub-parameter determines the location on the host file system to attach volumes created with this this storage pool.
-						</para>
-						<para>
-							For example, sdb1, sdb2, sdb3. Using <replaceable>/dev/</replaceable>, as in the example below, means volumes created from this storage pool can be accessed as <replaceable>/dev</replaceable>/sdb1, <replaceable>/dev</replaceable>/sdb2, <replaceable>/dev</replaceable>/sdb3.
+							specifies the device path of the storage device. This example uses the partition <replaceable>/dev/sdc1</replaceable>.
 						</para>
 					</listitem>
+					
 					</varlistentry>
-					<varlistentry><term>&lt;format type='<replaceable>gpt</replaceable>'/&gt;</term>
-					<listitem>
-						<para>
-							The <parameter>format</parameter> parameter specifies the partition table type. his example uses the  <replaceable>gpt</replaceable> in the example below, to match the GPT disk label type created in the previous step.
-						</para>
-					</listitem></varlistentry>-->
-				</variablelist>	
-			
-			<!--
+					<varlistentry><term>mountpoint</term>
 					
 					<listitem>
 						<para>
@@ -194,7 +169,9 @@
 						The directory <replaceable>/guest_images</replaceable> is used in this example.
 					</para>
 					</listitem>
-				</itemizedlist>-->
+					</varlistentry>
+		</variablelist>	
+			
 				<screen># virsh pool-define-as <replaceable>guest_images_fs</replaceable> fs - - <replaceable>/dev/sdc1</replaceable> - "<replaceable>/guest_images</replaceable>"
 Pool guest_images_fs defined
 </screen>
@@ -209,8 +186,6 @@ Name                 State      Autostart
 -----------------------------------------
 default              active     yes
 <replaceable>guest_images_fs</replaceable>      inactive   no
-
-#
 </screen>
 			</step>
 			
@@ -224,7 +199,6 @@ default              active     yes
 			</para>
 				<screen># virsh pool-build <replaceable>guest_images_fs</replaceable>
 Pool guest_images_fs built
-
 # ls -la /<replaceable>guest_images</replaceable>
 total 8
 drwx------.  2 root root 4096 May 31 19:38 .
@@ -235,7 +209,6 @@ Name                 State      Autostart
 default              active     yes
 guest_images_fs      inactive   no
 
-#
 </screen>
 			</step>
 				
@@ -248,14 +221,11 @@ guest_images_fs      inactive   no
 			</para>
 				<screen># virsh pool-start <replaceable>guest_images_fs</replaceable>
 Pool guest_images_fs started
-
 # virsh pool-list --all
 Name                 State      Autostart
 -----------------------------------------
 default              active     yes
 guest_images_fs      active     no
-
-#
 </screen>
 			</step>
 				
@@ -274,8 +244,6 @@ Name                 State      Autostart
 -----------------------------------------
 default              active     yes
 guest_images_fs      active     yes
-
-#
 </screen>
 			</step>
 				
@@ -293,7 +261,6 @@ State:          running
 Capacity:       458.39 GB
 Allocation:     197.91 MB
 Available:      458.20 GB
-
 # mount | grep /guest_images
 /dev/sdc1 on /guest_images type ext4 (rw)
 # ls -al /guest_images
@@ -301,7 +268,6 @@ total 24
 drwxr-xr-x.  3 root root  4096 May 31 19:47 .
 dr-xr-xr-x. 25 root root  4096 May 31 19:38 ..
 drwx------.  2 root root 16384 May 31 14:18 lost+found
-#
 </screen>
 			</step>
 		</procedure>
diff --git a/en-US/libvirt_Affinity.xml b/en-US/libvirt_Affinity.xml
index 587dfe3..aaa7357 100644
--- a/en-US/libvirt_Affinity.xml
+++ b/en-US/libvirt_Affinity.xml
@@ -63,7 +63,7 @@ Memory size:         8179176 kB</screen>
  <emphasis>[ Additional XML removed ]</emphasis>
 
 &lt;/capabilities&gt;</programlisting>
-	<para>The output shows two NUMA nodes (also know as NUMA cells), each containing four logical CPUs (four processing cores). This system has two sockets, therefore we can infer that each socket is a separate NUMA node. For a guest with four virtual CPUs, it would be optimal to lock the guest to physical CPUs 0 to 3, or 4 to 7 to avoid accessing non-local memory, which are significantly slower than accessing local memory. </para>
+	<para>The output shows two NUMA nodes (also known as NUMA cells), each containing four logical CPUs (four processing cores). This system has two sockets, therefore it can be inferred that each socket is a separate NUMA node. For a guest with four virtual CPUs, it would be optimal to lock the guest to physical CPUs 0 to 3, or 4 to 7, to avoid accessing non-local memory, which is significantly slower than accessing local memory. </para>
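+	<para>A minimal sketch of such pinning, assuming a guest named <replaceable>guest1</replaceable> with four virtual CPUs: the <command>virsh vcpupin</command> command locks each virtual CPU to a physical CPU in the first NUMA node.</para>
+	<screen># virsh vcpupin <replaceable>guest1</replaceable> 0 0
+# virsh vcpupin <replaceable>guest1</replaceable> 1 1
+# virsh vcpupin <replaceable>guest1</replaceable> 2 2
+# virsh vcpupin <replaceable>guest1</replaceable> 3 3</screen>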
 	<para>
 	If a guest requires eight virtual CPUs, as each NUMA node only has four physical CPUs, a better utilization may be obtained by running a pair of four virtual CPU guests and splitting the work between them, rather than using a single 8 CPU guest. Running across multiple NUMA nodes significantly degrades performance for physical and virtualized tasks.</para>
 	<formalpara>

