[install-guide: 12/18] tag cleanup

Pete Travis immanetize at fedoraproject.org
Fri Oct 26 05:51:54 UTC 2012


commit a75ec7098cdc166d49b1d7d9db32e8d9f0a7ad61
Author: Pete Travis <immanetize at fedoraproject.org>
Date:   Sun Sep 16 15:47:28 2012 -0600

    tag cleanup

 en-US/Boot_Init_Shutdown.xml |  771 ++++++++++++++----------------------------
 1 files changed, 247 insertions(+), 524 deletions(-)
---
diff --git a/en-US/Boot_Init_Shutdown.xml b/en-US/Boot_Init_Shutdown.xml
index ad4c01f..871657e 100644
--- a/en-US/Boot_Init_Shutdown.xml
+++ b/en-US/Boot_Init_Shutdown.xml
@@ -108,307 +108,269 @@
 		</para>
 	
 		<section id="sect-firmware_interface">
-			<title>The firmware interface</title>
-			<section id="s2-boot-init-shutdown-bios">
-				<title>BIOS-based x86 systems</title>
-				<indexterm significance="normal">
-					<primary>Basic Input/Output System</primary>
-					<see>BIOS</see>
-				</indexterm>
-				<indexterm significance="normal">
-					<primary>BIOS</primary>
-					<secondary>definition of</secondary>
-					<seealso>boot process</seealso>
-				</indexterm>
-				<indexterm significance="normal">
-					<primary>Master Boot Record</primary>
-					<see>MBR</see>
-				</indexterm>
-				<indexterm significance="normal">
-					<primary>MBR</primary>
-					<secondary>definition of</secondary>
-					<seealso>boot loaders</seealso>
-				</indexterm>
-				<indexterm>
-				  <primary>GUID Partition Table</primary>
-				  <see>GPT</see>
-				<indexterm significance="normal">
-				  <primary>GPT</primary>
-				  <secondary>definition of</secondary>
-				</indexterm>
-				<para>
-					The <firstterm>Basic Input/Output System</firstterm> (BIOS) is a firmware interface that controls not only the first step of the boot process, but also provides the lowest level interface to peripheral devices. On x86 systems equipped with BIOS, the program is written into read-only, permanent memory and is always available for use. When the system boots, the processor looks at the end of system memory for the BIOS program, and runs it.
-				</para>
-				<para>
-					Once loaded, the BIOS tests the system, looks for and checks peripherals, and then locates a valid device with which to boot the system. Usually, it checks any optical drives or USB storage devices present for bootable media, then, failing that, looks to the system's hard drives. In most cases, the order of the drives searched while booting is controlled with a setting in the BIOS, and it looks for bootable media in the specified order.
-				</para>
-				<para>
-A disk may either have a <firstterm>Master Boot Record</firstterm> (MBR) or a <firstterm>GUID Partition Table</firstterm> (GPT). The <systemitem>MBR</systemitem> is only 512 bytes in size and contains machine code instructions for booting the machine, called a boot loader, along with the partition table. The newer <systemitem>GPT</systemitem> serves the same role and allows for more and larger partitions, but is generally used on newer <systemitem>UEFI</systemitem> systems. Once the BIOS finds and loads the boot loader program into memory, it yields control of the boot process to it.
-				</para>
-
-				<para>
-					This first-stage boot loader is a small machine code binary on the MBR. Its sole job is to locate the second stage boot loader (<application>GRUB</application>) and load the first part of it into memory.
-				</para>
-			</section>
-			<section id="s2-boot-init-shutdown-uefi">
-				<title>UEFI-based x86 systems</title>
-				<indexterm significance="normal">
-					<primary>Extensible Firmware Interface shell</primary>
-					<see>EFI shell</see>
-				</indexterm>
-				<indexterm significance="normal">
-					<primary>EFI shell</primary>
-					<seealso>boot process</seealso>
-				</indexterm>
-				<indexterm significance="normal">
-					<primary>boot process</primary>
-					<secondary>stages of</secondary>
-					<tertiary>EFI shell</tertiary>
-				</indexterm>
-				<para>
-					The <firstterm>Unified Extensible Firmware Interface</firstterm> (UEFI) is designed, like BIOS, to control the boot process (through <firstterm>boot services</firstterm>) and to provide an interface between system firmware and an operating system (through <firstterm>runtime services</firstterm>). Unlike BIOS, it features its own architecture, independent of the CPU, and its own device drivers. UEFI can mount partitions and read certain file systems. 
-				</para>
-				<para>
-					When an x86 computer equipped with UEFI boots, the interface searches the system storage for a partition labeled with a specific <firstterm>globally unique identifier</firstterm> (GUID) that marks it as the <firstterm>EFI System Partition</firstterm> (ESP). This partition contains applications compiled for the EFI architecture, which might include bootloaders for operating systems and utility software. UEFI systems include an <firstterm>EFI boot manager</firstterm> that can boot the system from a default configuration, or prompt a user to choose an operating system to boot. When a bootloader is selected, manually or automatically, UEFI reads it into memory and yields control of the boot process to it.
-				</para>
-			</section>
-		</section>
-		
-		<section id="s2-boot-init-shutdown-loader">
-			<title>The Boot Loader</title>
-			<section id="s2-boot-init-shutdown-loader-bios">
-				<title>The GRUB boot loader for x86 systems</title>
-				<indexterm significance="normal">
-					<primary>boot process</primary>
-					<secondary>stages of</secondary>
-					<tertiary>boot loader</tertiary>
-				</indexterm>
-				<indexterm significance="normal">
-					<primary>GRUB</primary>
-					<secondary>role in boot process</secondary>
-				</indexterm>
-				<indexterm significance="normal">
-					<primary>GRUB</primary>
-					<seealso>boot loaders</seealso>
-				</indexterm>
-		
-				<para>
-					The system loads GRUB into memory, as directed by either a first-stage bootloader in the case of systems equipped with BIOS, or read directly from an EFI System Partition in the case of systems equipped with UEFI.
-				</para>
-				<para>
-					<application>GRUB</application> version 2 has the advantage of being able to read a variety of open filesystems, as well as virtual devices such as <application>mdadm</application> RAID arrays and <application>LVM</application> .
-				</para>
-			
-<para>
- GRUB mounts a designated partition and load its configuration file &mdash; <filename>/boot/grub2/grub.cfg</filename> (for BIOS) or <filename>/boot/efi/EFI/redhat/grub.cfg</filename> (for UEFI) &mdash; at boot time. Refer to <xref linkend="s1-grub-configfile" /> for information on how to edit this file.
-				</para>
-				<para>
-					Once the second stage boot loader is in memory, it presents the user with a graphical screen showing the different operating systems or kernels it has been configured to boot (when you update the kernel, the boot loader configuration file is updated automatically). On this screen a user can use the arrow keys to choose which operating system or kernel they wish to boot and press <keycap>Enter</keycap>. Typically, if no key is pressed, the boot loader loads the default selection after a configurable period of time has passed.
-				</para>
-				<para>
-					Once the second stage boot loader has determined which kernel to boot, it locates the corresponding kernel binary in the <filename>/boot/</filename> directory. The kernel binary is named using the following format &mdash; <filename>/boot/vmlinuz-<replaceable>&lt;kernel-version&gt;</replaceable></filename> file (where <filename><replaceable>&lt;kernel-version&gt;</replaceable></filename> corresponds to the kernel version specified in the boot loader's settings).
-				</para>
-				<para>
-					The bootloader is also used to pass arguments to the kernel it loads.  This allows the system to operate with a specified root filesystem, enable or disable kernel modules and system features, or configure booting to a specific runlevel. For instructions on using the boot loader to supply command line arguments to the kernel, refer to <xref linkend="ch-grub" />. Specific kernel parameters are described in <filename>/usr/share/doc/kernel-doc-*/Documentation/kernel-parameters.txt</filename>, which is provided by the <package>kernel-doc</package> package. For information on changing the runlevel at the boot loader prompt, refer <xref linkend="s1-grub-runlevels" />.
-				</para>
-				<para>
-					The boot loader then places one or more appropriate <filename>initramfs</filename> images into memory. The <filename>initramfs</filename> is used by the kernel to load drivers and modules necessary to boot the system. This is particularly important if SCSI hard drives are present or if the systems use the ext3 or ext4 file system.
-				</para>
-				<para>
-					Once the kernel and the <filename>initramfs</filename> image(s) are loaded into memory, the boot loader hands control of the boot process to the kernel.
-				</para>
-				<para>
-					For a more detailed overview of the GRUB boot loader, refer to <xref linkend="ch-grub" />.
-				</para>
-			
-			</section>
-			
-			<section id="s3-boot-init-shutdown-other-architectures">
-				<title>Boot Loaders for Other Architectures</title>
-				<para>
-					Once the kernel loads and hands off the boot process to the <command>init</command> command, the same sequence of events occurs on every architecture. So the main difference between each architecture's boot process is in the application used to find and load the kernel.
-				</para>
-				<para>
-					For example, the IBM eServer pSeries architecture uses <application>yaboot</application>, and the IBM System&nbsp;z systems use the z/IPL boot loader. Configuration of alternative bootloaders is outside the scope of this document.
-				</para>
-			</section>
-		</section>
-		
-		 <section id="s2-boot-init-shutdown-kernel">
-			<title>The Kernel</title>
-
-			 <indexterm significance="normal">
-				<primary>boot process</primary>
-				 <secondary>stages of</secondary>
-				 <tertiary>kernel</tertiary>
-			</indexterm>
-
-			 <indexterm significance="normal">
-				<primary>kernel</primary>
-				 <secondary>role in boot process</secondary>
-
-			</indexterm>
-			 <para>
-				When the kernel is loaded, it immediately initializes and configures the computer's memory and configures the various hardware attached to the system, including all processors, I/O subsystems, and storage devices. It then looks for the compressed <filename>initramfs</filename> image(s) in a predetermined location in memory, decompresses it directly to <filename>/sysroot/</filename> via <command>cpio</command>, and loads all necessary drivers. Next, it initializes virtual devices related to the file system, such as LVM or software RAID, before completing the <filename>initramfs</filename> processes and freeing up all the memory the disk image once occupied.
-			</para>
-			 <para>
-				The kernel then creates a root device, mounts the root partition read-only, and frees any unused memory.
-			</para>
-			 <para>
-				At this point, the kernel is loaded into memory and operational. However, since there are no user applications that allow meaningful input to the system, not much can be done with the system.
-			</para>
-			 <para>
-				To set up the user environment, the kernel executes the system daemon, <application>systemd</application>.
-			</para>
-
+		  <title>The firmware interface</title>
+		  <section id="s2-boot-init-shutdown-bios">
+		    <title>BIOS-based x86 systems</title>
+		    <indexterm significance="normal">
+		      <primary>Basic Input/Output System</primary>
+		      <see>BIOS</see>
+		    </indexterm>
+		    <indexterm significance="normal">
+		      <primary>BIOS</primary>
+		      <secondary>definition of</secondary>
+		      <seealso>boot process</seealso>
+		    </indexterm>
+		    <indexterm significance="normal">
+		      <primary>Master Boot Record</primary>
+		      <see>MBR</see>
+		    </indexterm>
+		    <indexterm significance="normal">
+		      <primary>MBR</primary>
+		      <secondary>definition of</secondary>
+		      <seealso>boot loaders</seealso>
+		    </indexterm>
+		    <indexterm>
+		      <primary>GUID Partition Table</primary>
+		      <see>GPT</see>
+		    </indexterm>
+		    <indexterm significance="normal">
+		      <primary>GPT</primary>
+		      <secondary>definition of</secondary>
+		    </indexterm>
+		    <para>
+		      The <firstterm>Basic Input/Output System</firstterm> (BIOS) is a firmware interface that controls not only the first step of the boot process, but also provides the lowest level interface to peripheral devices. On x86 systems equipped with BIOS, the program is written into read-only, permanent memory and is always available for use. When the system boots, the processor looks at the end of system memory for the BIOS program, and runs it.
+		    </para>
+		    <para>
+			Once loaded, the BIOS tests the system, looks for and checks peripherals, and then locates a valid device with which to boot the system. Usually, it checks any optical drives or USB storage devices present for bootable media, then, failing that, looks to the system's hard drives. In most cases, the order of the drives searched while booting is controlled with a setting in the BIOS, and it looks for bootable media in the specified order.
+		    </para>
+		    <para>
+			A disk may have either a <firstterm>Master Boot Record</firstterm> (MBR) or a <firstterm>GUID Partition Table</firstterm> (GPT). The <systemitem>MBR</systemitem> is only 512 bytes in size and contains machine code instructions for booting the machine, called a boot loader, along with the partition table. The newer <systemitem>GPT</systemitem> serves the same role and allows for more and larger partitions, but is generally used on systems with <systemitem>UEFI</systemitem> firmware. Once the BIOS finds and loads the boot loader program into memory, it yields control of the boot process to it.
+		    </para>
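+		    <para>
+		      To check which of the two schemes a particular disk uses, you can, for example, run <command>parted</command> as root and look at the <computeroutput>Partition Table</computeroutput> field of its output, which reads <computeroutput>msdos</computeroutput> for an MBR disk or <computeroutput>gpt</computeroutput> for a GPT disk. The device name below is only an example:
+		    </para>
+<screen>
+<command>parted /dev/sda print</command></screen>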
+		    <para>
+			This first-stage boot loader is a small machine code binary on the MBR. Its sole job is to locate the second stage boot loader (<application>GRUB</application>) and load the first part of it into memory.
+		    </para>
+		  </section>
 		</section>
-		<section id="s2-boot-init-shutdown-systemd">
-		  <title>Booting with <application>systemd</application></title>
-
-		  <indexterm significance="preferred">
-		    <primary><application>systemd</application></primary>
-		    <secondary>role in boot process</secondary>
+		<section id="s2-boot-init-shutdown-uefi">
+		  <title>UEFI-based x86 systems</title>
+		  <indexterm significance="normal">
+		    <primary>Extensible Firmware Interface shell</primary>
+		    <see>EFI shell</see>
+		  </indexterm>
+		  <indexterm significance="normal">
+		    <primary>EFI shell</primary>
+		    <seealso>boot process</seealso>
 		  </indexterm>
-
 		  <indexterm significance="normal">
 		    <primary>boot process</primary>
 		    <secondary>stages of</secondary>
+		    <tertiary>EFI shell</tertiary>
 		  </indexterm>
-
 		  <para>
-		    <application>systemd</application> is the first process started by the kernel. It replaces the venerable <application>SysVinit</application> program (also called <application>init</application>) and the newer <application>Upstart</application> init system. <application>systemd</application> coordinates the rest of the boot process and configures the environment for the user.
+		    The <firstterm>Unified Extensible Firmware Interface</firstterm> (UEFI) is designed, like BIOS, to control the boot process (through <firstterm>boot services</firstterm>) and to provide an interface between system firmware and an operating system (through <firstterm>runtime services</firstterm>). Unlike BIOS, it features its own architecture, independent of the CPU, and its own device drivers. UEFI can mount partitions and read certain file systems. 
 		  </para>
-
 		  <para>
-		    <application>systemd</application> improves on other init systems by offering increased parallelization. It starts the process of loading all programs it launches immediately, and manages information between interdependent programs as they load. By dissociating programs and their means of communication, each program is able to load without waiting for unrelated or even dependent programs to load first.
+		    When an x86 computer equipped with UEFI boots, the interface searches the system storage for a partition labeled with a specific <firstterm>globally unique identifier</firstterm> (GUID) that marks it as the <firstterm>EFI System Partition</firstterm> (ESP). This partition contains applications compiled for the EFI architecture, which might include bootloaders for operating systems and utility software. UEFI systems include an <firstterm>EFI boot manager</firstterm> that can boot the system from a default configuration, or prompt a user to choose an operating system to boot. When a bootloader is selected, manually or automatically, UEFI reads it into memory and yields control of the boot process to it.
 		  </para>
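+		  <para>
+		    On a running UEFI system, the firmware's boot entries, including the entry created for Fedora during installation, can be listed with the <command>efibootmgr</command> utility from the <package>efibootmgr</package> package:
+		  </para>
+<screen>
+<command>efibootmgr -v</command></screen>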
-
-
+		</section>
+	      </section>
+	      <section id="s2-boot-init-shutdown-loader">
+		<title>The Boot Loader</title>
+		<section id="s2-boot-init-shutdown-loader-bios">
+		  <title>The GRUB boot loader for x86 systems</title>
 		  <indexterm significance="normal">
-		       <primary>cgroups</primary>
-			<secondary>use by <application>systemd</application></secondary>
-		     </indexterm>
-		     
-
-		     <itemizedlist>
-		       <title> The Boot Process </title>
-		       <listitem><para>
+		    <primary>boot process</primary>
+		    <secondary>stages of</secondary>
+		    <tertiary>boot loader</tertiary>
+		  </indexterm>
+		  <indexterm significance="normal">
+		    <primary>GRUB</primary>
+		    <secondary>role in boot process</secondary>
+		  </indexterm>
+		  <indexterm significance="normal">
+		    <primary>GRUB</primary>
+		    <seealso>boot loaders</seealso>
+		  </indexterm>
+		  <para>
+		    The system loads GRUB into memory, either as directed by a first-stage boot loader on systems equipped with BIOS, or directly from the EFI System Partition on systems equipped with UEFI.
+		  </para>
+		  <para>
+		    <application>GRUB</application> version 2 has the advantage of being able to read a variety of open filesystems, as well as virtual devices such as <application>mdadm</application> RAID arrays and <application>LVM</application> logical volumes.
+		  </para>
+		  <para>
+		    GRUB mounts a designated partition and loads its configuration file &mdash; <filename>/boot/grub2/grub.cfg</filename> (for BIOS) or <filename>/boot/efi/EFI/fedora/grub.cfg</filename> (for UEFI) &mdash; at boot time. Refer to <xref linkend="s1-grub-configfile" /> for information on how to edit this file.
+		  </para>
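+		  <para>
+		    On Fedora this file is normally generated rather than edited by hand. After changing the settings in <filename>/etc/default/grub</filename>, it can be regenerated with a command such as the following (shown here for a BIOS system; on a UEFI system, the configuration file on the EFI System Partition is regenerated instead):
+		  </para>
+<screen>
+<command>grub2-mkconfig -o /boot/grub2/grub.cfg</command></screen>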
+		  <para>
+		    Once the second stage boot loader is in memory, it presents the user with a graphical screen showing the different operating systems or kernels it has been configured to boot (when you update the kernel, the boot loader configuration file is updated automatically). On this screen a user can use the arrow keys to choose which operating system or kernel they wish to boot and press <keycap>Enter</keycap>. Typically, if no key is pressed, the boot loader loads the default selection after a configurable period of time has passed.
+		  </para>
+		  <para>
+		    Once the second stage boot loader has determined which kernel to boot, it locates the corresponding kernel binary in the <filename>/boot/</filename> directory. The kernel binary is named using the following format &mdash; <filename>/boot/vmlinuz-<replaceable>&lt;kernel-version&gt;</replaceable></filename> (where <filename><replaceable>&lt;kernel-version&gt;</replaceable></filename> corresponds to the kernel version specified in the boot loader's settings).
+		  </para>
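+		  <para>
+		    For example, listing the installed kernels might produce output similar to the following; the version shown is illustrative:
+		  </para>
+<screen>
+<command>ls /boot/vmlinuz-*</command>
+<computeroutput>/boot/vmlinuz-3.6.2-4.fc17.x86_64</computeroutput></screen>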
+		  <para>
+		    The boot loader is also used to pass arguments to the kernel it loads. This allows the system to operate with a specified root filesystem, to enable or disable kernel modules and system features, or to boot to a specific runlevel. For instructions on using the boot loader to supply command line arguments to the kernel, refer to <xref linkend="ch-grub" />. Specific kernel parameters are described in <filename>/usr/share/doc/kernel-doc-*/Documentation/kernel-parameters.txt</filename>, which is provided by the <package>kernel-doc</package> package. For information on changing the runlevel at the boot loader prompt, refer to <xref linkend="s1-grub-runlevels" />.
+		  </para>
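+		  <para>
+		    For example, to boot a systemd-based system into a non-graphical, multi-user state for a single session, you can highlight an entry in the GRUB menu, press <keycap>e</keycap>, append a parameter such as the following to the line that loads the kernel, and then press <keycombo><keycap>Ctrl</keycap><keycap>x</keycap></keycombo> to boot. The parameter shown is one common example, not the only accepted value:
+		  </para>
+<screen>systemd.unit=multi-user.target</screen>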
+		  <para>
+		    The boot loader then places one or more appropriate <filename>initramfs</filename> images into memory. The <filename>initramfs</filename> is used by the kernel to load drivers and modules necessary to boot the system. This is particularly important if SCSI hard drives are present or if the system uses the ext3 or ext4 file system.
+		  </para>
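+		  <para>
+		    If you want to see what an <filename>initramfs</filename> image contains, the <command>lsinitrd</command> tool from the <package>dracut</package> package will list its contents; for example, for the currently running kernel:
+		  </para>
+<screen>
+<command>lsinitrd /boot/initramfs-$(uname -r).img | less</command></screen>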
+		  <para>
+		    Once the kernel and the <filename>initramfs</filename> image(s) are loaded into memory, the boot loader hands control of the boot process to the kernel.
+		  </para>
+		  <para>
+		    For a more detailed overview of the GRUB boot loader, refer to <xref linkend="ch-grub" />.
+		  </para>
+		</section>
+		<section id="s3-boot-init-shutdown-other-architectures">
+		  <title>Boot Loaders for Other Architectures</title>
+		  <para>
+		    Once the kernel loads and hands off the boot process to the <command>init</command> command, the same sequence of events occurs on every architecture. So the main difference between each architecture's boot process is in the application used to find and load the kernel.
+		  </para>
+		  <para>
+		    For example, the IBM eServer pSeries architecture uses <application>yaboot</application>, and the IBM System&nbsp;z systems use the z/IPL boot loader. Configuration of alternative bootloaders is outside the scope of this document.
+		  </para>
+		</section>
+	      </section>
+	      <section id="s2-boot-init-shutdown-kernel">
+		<title>The Kernel</title>
+		<indexterm significance="normal">
+		  <primary>boot process</primary>
+		  <secondary>stages of</secondary>
+		  <tertiary>kernel</tertiary>
+		</indexterm>
+		<indexterm significance="normal">
+		  <primary>kernel</primary>
+		  <secondary>role in boot process</secondary>
+		</indexterm>
+		<para>
+		  When the kernel is loaded, it immediately initializes and configures the computer's memory, then configures the various hardware attached to the system, including all processors, I/O subsystems, and storage devices. It then looks for the compressed <filename>initramfs</filename> image(s) in a predetermined location in memory, decompresses it directly to <filename>/sysroot/</filename> via <command>cpio</command>, and loads all necessary drivers. Next, it initializes virtual devices related to the file system, such as LVM or software RAID, before completing the <filename>initramfs</filename> processes and freeing up all the memory the disk image once occupied.
+		</para>
+		<para>
+		  The kernel then creates a root device, mounts the root partition read-only, and frees any unused memory.
+		</para>
+		<para>
+		  At this point, the kernel is loaded into memory and operational. However, since there are no user applications that allow meaningful input to the system, not much can be done with it.
+		</para>
+		<para>
+		  To set up the user environment, the kernel executes the system daemon, <application>systemd</application>.
+		</para>
+	      </section>
+	      <section id="s2-boot-init-shutdown-systemd">
+		<title>Booting with <application>systemd</application></title>
+		<indexterm significance="preferred">
+		  <primary><application>systemd</application></primary>
+		  <secondary>role in boot process</secondary>
+		</indexterm>
+		<indexterm significance="normal">
+		  <primary>boot process</primary>
+		  <secondary>stages of</secondary>
+		</indexterm>
+		<para>
+		    <application>systemd</application> is the first process started by the kernel. It replaces the venerable <application>SysVinit</application> program (also called <application>init</application>) and the newer <application>Upstart</application> init system. <application>systemd</application> coordinates the rest of the boot process and configures the environment for the user.
+		</para>
+		<para>
+		  <application>systemd</application> improves on other init systems by offering increased parallelization. It begins loading every program it launches immediately, and manages communication between interdependent programs as they load. By dissociating programs from their means of communication, each program is able to load without waiting for unrelated or even dependent programs to load first.
+		</para>
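+		<para>
+		  The effect of this parallelization can be examined on a running system with <command>systemd-analyze</command>, which reports how long the kernel, <filename>initramfs</filename>, and userspace portions of the boot took, and with <command>systemd-analyze blame</command>, which lists the start-up time of each unit:
+		</para>
+<screen>
+<command>systemd-analyze</command>
+<command>systemd-analyze blame</command></screen>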
+		
+		<indexterm significance="normal">
+		  <primary>cgroups</primary>
+		  <secondary>use by <application>systemd</application></secondary>
+		</indexterm>
+		
+		<itemizedlist>
+		  <title> The Boot Process </title>
+		  <listitem><para>
 			A socket is created for each daemon that will be launched. The sockets allow daemons to communicate with each other and userspace programs. Because the sockets are abstracted from the processes that use them, interdependent services do not have to wait for each other to come up before sending messages to the socket.
-		       </para></listitem>		       
-		   
-		       <listitem><para>
+		  </para></listitem>		       
+		  
+		  <listitem><para>
			New processes started by <application>systemd</application> are assigned to <function>Control Groups</function>, or <function>cgroups</function>. Processes in a <function>cgroup</function> are restricted to the resources allotted by the kernel, and these restrictions are inherited by newly spawned processes. Communication with outside processes is handled by the kernel through sockets. The resulting hierarchy can be inspected as shown after this list.
-		      </para></listitem>
-		    </itemizedlist>
+		  </para></listitem>
+		</itemizedlist>
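+		<para>
+		  The <function>cgroup</function> hierarchy that results from this arrangement can be inspected on a running system with the <command>systemd-cgls</command> command, which prints a tree of control groups and the processes they contain:
+		</para>
+<screen>
+<command>systemd-cgls</command></screen>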
 		    
-		  </section>
-
-		  <section id="s2-boot-init-shutdown-systemd-units">
-		    <title><application>systemd</application> <function>units</function></title>     
-		      <indexterm significance="normal">
-			<primary><application>systemd</application></primary>
-			<secondary>units</secondary>
-		      </indexterm>
-			
-			<para>
+	      </section>
+	      
+	      <section id="s2-boot-init-shutdown-systemd-units">
+		<title><application>systemd</application> <function>units</function></title>     
+		<indexterm significance="normal">
+		  <primary><application>systemd</application></primary>
+		  <secondary>units</secondary>
+		</indexterm>
+		
+		<para>
			  Functions administered by <application>systemd</application> are referred to as <function>units</function>. Each <function>unit</function> has a name and a type, and is described in a file that follows the convention <replaceable>unit-name</replaceable>.<replaceable>type</replaceable>. The configuration file defines the relationship between a <function>unit</function> and its dependencies. The different types of units are described below:
-			</para>
+		</para>
 		    
-			<segmentedlist>
-			  <segtitle><function>unit</function> type</segtitle>
-			  <segtitle>Role</segtitle>
-			  
-			  <seglistitem>
-			    <seg><function>socket</function></seg>
-			    <seg>
-			      These provide an endpoint for interprocesses communication. Messages can be transported through files, or network or unix sockets. Each <function>socket</function> has a corresponding <function>service</function>.
-			    </seg>
-			  </seglistitem>
-
-
-			  <seglistitem>
-			    <seg><function>service</function></seg>
-			    <seg>
+		<segmentedlist>
+		  <segtitle><function>unit</function> type</segtitle>
+		  <segtitle>Role</segtitle>
+		  <seglistitem>
+		    <seg><function>socket</function></seg>
+		    <seg>
+		      These provide an endpoint for interprocess communication. Messages can be transported through files, network sockets, or Unix domain sockets. Each <function>socket</function> has a corresponding <function>service</function>.
+		    </seg>
+		  </seglistitem>
+		  
+		  <seglistitem>
+		    <seg><function>service</function></seg>
+		    <seg>
			      These are traditional daemons. <function>Service</function> <function>units</function> are described in simple configuration files that define the type, execution, and environment of the program, as well as information regarding how <application>systemd</application> should monitor it.
-			    </seg>
-			  </seglistitem>
+		    </seg>
+		  </seglistitem>
 			    
-			  <seglistitem>
-			    <seg><function>device</function></seg>
-			    <seg>
-			      These are automatically created for all devices discovered by the kernel. These <function>units</function> are provided for services that are dependent on devices, or for virtual devices that are dependent on services, as with a network block device.
-			    </seg>
-			  </seglistitem>
-			  
-			  <seglistitem>
-			    <seg><function>mount</function></seg>
-			    <seg>
-			      These <function>units</function> allow <application>systemd</application>to monitor the mounting and unmounting of filesystems, and allow <function>units</function> to declare relationships with the filesystems they use.
-			    </seg>
-			  </seglistitem>
-
-			  <seglistitem>
-			    <seg><function>automount</function></seg>
-			    <seg>
+		  <seglistitem>
+		    <seg><function>device</function></seg>
+		    <seg>
+		      These are automatically created for all devices discovered by the kernel. These <function>units</function> are provided for services that are dependent on devices, or for virtual devices that are dependent on services, as with a network block device.
+		    </seg>
+		  </seglistitem>
+		  
+		  <seglistitem>
+		    <seg><function>mount</function></seg>
+		    <seg>
+		      These <function>units</function> allow <application>systemd</application> to monitor the mounting and unmounting of filesystems, and allow <function>units</function> to declare relationships with the filesystems they use.
+		    </seg>
+		  </seglistitem>
+
+		  <seglistitem>
+		    <seg><function>automount</function></seg>
+		    <seg>
 			      These <function>units</function> facilitate dynamic mounting of filesystems when their mountpoint is accessed. They are always paired with a <function>mount</function> <function>unit</function>.
-			    </seg>
-			  </seglistitem>
+		    </seg>
+		  </seglistitem>
 
-			  <seglistitem>
-			    <seg><function>target</function></seg>
-			    <seg>
+		  <seglistitem>
+		    <seg><function>target</function></seg>
+		    <seg>
			      These are logical groupings of <function>units</function> that are required for userspace functionality. Some are broad, such as <filename>graphical.target</filename>, which defines a full graphical user environment, while others are more topical, such as <filename>bluetooth.target</filename>, which provides the services a user expects to be available when using bluetooth devices.
-			    </seg>
-			  </seglistitem>
+		    </seg>
+		  </seglistitem>
 
-			  <seglistitem>
-			    <seg><function>snapshot</function></seg>
-			    <seg><function>snapshots</function> allow the user to save the state of all <function>units</function> with the command <command>systemctl snapshot</command> and return to that state with <command>systemctl isolate</command>. This is useful for temporary adjustments that don't merit reconfiguration of a target.
-			    </seg>
-			  </seglistitem>
-			</segmentedlist>
+		  <seglistitem>
+		    <seg><function>snapshot</function></seg>
+		    <seg><function>Snapshots</function> allow the user to save the state of all <function>units</function> with the command <command>systemctl snapshot</command> and return to that state with <command>systemctl isolate</command>. This is useful for temporary adjustments that don't merit reconfiguration of a target.
+		    </seg>
+		  </seglistitem>
+		</segmentedlist>
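+		<para>
+		  As a minimal sketch of what such a file looks like, the following hypothetical <function>service</function> <function>unit</function> could be saved as <filename>/etc/systemd/system/example.service</filename>; the daemon name and paths are placeholders, not a real Fedora service:
+		</para>
+<screen># example.service - a hypothetical service unit
+# The program and file names below are placeholders.
+
+[Unit]
+Description=Example daemon
+After=network.target
+
+[Service]
+ExecStart=/usr/sbin/exampled
+
+[Install]
+WantedBy=multi-user.target
+</screen>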
 
 		     
-			<para>
-			  Although <application>systemd</application> <function>units</function> will ultimately be available for all services, it retains support for legacy init scripts. <function>units</function> are dynamically created for these services, with dependencies inferred from LSB headers in the script. There are drawbacks to this method, so it is best to have a native <application>systemd</application> <function>unit</function> file.
-			</para>
-			 <para>
-			   The function and usage of legacy init systems and their configuration files is outside of the scope of this document.
-			</para>
+		<para>
+		  Although <application>systemd</application> <function>units</function> will ultimately be available for all services, <application>systemd</application> retains support for legacy init scripts. <function>Units</function> are dynamically created for these services, with dependencies inferred from LSB headers in the script. There are drawbacks to this method, so it is best to have a native <application>systemd</application> <function>unit</function> file.
+		</para>
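+		<para>
+		  Whether a <function>unit</function> is native or generated from an init script, it is managed with the same <command>systemctl</command> commands. For example, to enable the Apache HTTP Server at boot, start it immediately, and check its status:
+		</para>
+<screen>
+<command>systemctl enable httpd.service</command>
+<command>systemctl start httpd.service</command>
+<command>systemctl status httpd.service</command></screen>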
+		<para>
+		  The function and usage of legacy init systems and their configuration files are outside the scope of this document.
+		</para>
 			 
-			 <para>
-				As illustrated in this listing, none of the scripts that actually start and stop the services are located in the <filename>/etc/rc.d/rc5.d/</filename> directory. Rather, all of the files in <filename>/etc/rc.d/rc5.d/</filename> are <firstterm>symbolic links</firstterm> pointing to scripts located in the <filename>/etc/rc.d/init.d/</filename> directory. Symbolic links are used in each of the <filename>rc</filename> directories so that the runlevels can be reconfigured by creating, modifying, and deleting the symbolic links without affecting the actual scripts they reference.
-			</para>
-			 <para>
-				The name of each symbolic link begins with either a <computeroutput>K</computeroutput> or an <computeroutput>S</computeroutput>. The <computeroutput>K</computeroutput> links are processes that are killed on that runlevel, while those beginning with an <computeroutput>S</computeroutput> are started.
-			</para>
-			 <para>
-				The <command>init</command> command first stops all of the <computeroutput>K</computeroutput> symbolic links in the directory by issuing the <command>/etc/rc.d/init.d/<replaceable>&lt;command&gt;</replaceable> stop</command> command, where <replaceable>&lt;command&gt;</replaceable> is the process to be killed. It then starts all of the <computeroutput>S</computeroutput> symbolic links by issuing <command>/etc/rc.d/init.d/<replaceable>&lt;command&gt;</replaceable> start</command>.
-			</para>
-			 <note>
-				<title>Note</title>
-				 <para>
-					After the system is finished booting, it is possible to log in as root and execute these same scripts to start and stop services. For instance, the command <command>/etc/rc.d/init.d/httpd stop</command> stops the Apache HTTP Server.
-				</para>
-
-			</note>
-			 <para>
-				Each of the symbolic links are numbered to dictate start order. The order in which the services are started or stopped can be altered by changing this number. The lower the number, the earlier it is started. Symbolic links with the same number are started alphabetically.
-			</para>
-			 <note>
-				<title>Note</title>
-				 <para>
-					One of the last things the <command>init</command> program executes is the <filename>/etc/rc.d/rc.local</filename> file. This file is useful for system customization. Refer to <xref linkend="s1-boot-init-shutdown-run-boot" /> for more information about using the <filename>rc.local</filename> file.
-				</para>
-
-			</note>
-			 <para>
+			<!-- section covering symlinks for targets. -->			 
+			<!--note about managing services-->
+			<!--IMPORTANT: add a comment that rc.local still works!-->
+			<!--paragraphs describing various runlevels: 
+			<para>
 				After the <command>init</command> command has progressed through the appropriate <filename>rc</filename> directory for the runlevel, <application>Upstart</application> forks an <command>/sbin/mingetty</command> process for each virtual console (login prompt) allocated to the runlevel by the job definition in the <filename>/etc/event.d</filename> directory. Runlevels 2 through 5 have all six virtual consoles, while runlevel 1 (single user mode) has one, and runlevels 0 and 6 have none. The <command>/sbin/mingetty</command> process opens communication pathways to <firstterm>tty</firstterm> devices<footnote> <para>
 					Refer to the Fedora Deployment Guide for more information about <filename>tty</filename> devices.
 				</para>
 				 </footnote>, sets their modes, prints the login prompt, accepts the user's username and password, and initiates the login process.
 			</para>
+
 			 <para>
 				In runlevel 5, <application>Upstart</application> runs a script called <filename>/etc/X11/prefdm</filename>. The <filename>prefdm</filename> script executes the preferred X display manager<footnote> <para>
 					Refer to the Fedora Deployment Guide for more information about display managers.
@@ -417,42 +379,8 @@ Let's look at the different types of units:
 			</para>
 			 <para>
 				Once finished, the system operates on runlevel 5 and displays a login screen.
-			</para>
-
-		      </section>
-		
-		 <section id="s2-boot-init-shutdown-jobs">
-			<title>Job definitions</title>
-			 <para>
-				Previously, the <package>sysvinit</package> package provided the <application>init</application> daemon for the default configuration. When the system started, this <application>init</application> daemon ran the <filename>/etc/inittab</filename> script to start system processes defined for each runlevel. The default configuration now uses an event-driven <application>init</application> daemon provided by the <package>Upstart</package> package. Whenever particular <firstterm>events</firstterm> occur, the <application>init</application> daemon processes <firstterm>jobs</firstterm> stored in the <filename>/etc/event.d</filename> directory. The <application>init</application> daemon recognizes the start of the system as such an event.
-			</para>
-			 <para>
-				Each job typically specifies a program, and the events that trigger <application>init</application> to run or to stop the program. Some jobs are constructed as <firstterm>tasks</firstterm>, which perform actions and then terminate until another event triggers the job again. Other jobs are constructed as <firstterm>services</firstterm>, which <application>init</application> keeps running until another event (or the user) stops it.
-			</para>
-			 <para>
-				For example, the <filename>/etc/events.d/tty2</filename> job is a service to maintain a virtual terminal on <application>tty2</application> from the time that the system starts until the system shuts down, or another event (such as a change in runlevel) stops the job. The job is constructed so that <application>init</application> will restart the virtual terminal if it stops unexpectedly during that time:
-			</para>
-			
-<screen># tty2 - getty
-#
-# This service maintains a getty on tty2 from the point the system is
-# started until it is shut down again.
-
-start on stopped rc2
-start on stopped rc3
-start on stopped rc4
-start on started prefdm
-
-stop on runlevel 0
-stop on runlevel 1
-stop on runlevel 6
-
-respawn
-exec /sbin/mingetty tty2
-</screen>
-
-		</section>
-		
+			</para>-->
+<!--watchdogs!-->
 
 	</section>
 	
@@ -508,7 +436,8 @@ exec /sbin/mingetty tty2
 		   <tertiary><filename>/etc/inittab</filename> </tertiary>
 		 </indexterm>
 		 <para>
-		   <application>systemd</application> replaces traditional <application>SysVinit</application> <function>runlevels</function> with predefined groups of <function>units</function> called <function>targets</function>. <function>Targets</function> are usually defined according to the intended use of the system, and ensure that required dependencies for that use are met. The following table shows some standard preconfigured targets, the <application>sysVinit</application> <function>runlevels</function> they resemble, the use case they address, and examples of targets they might require.
+<!-- There's possibly too much in here for a table; present it another way? -->
+		   <application>systemd</application> replaces traditional <application>SysVinit</application> <function>runlevels</function> with predefined groups of <function>units</function> called <function>targets</function>. <function>Targets</function> are usually defined according to the intended use of the system, and ensure that required dependencies for that use are met. The following table shows some standard preconfigured targets, the <application>SysVinit</application> <function>runlevels</function> they resemble, and the use case they address.
 		 </para>
 		 <table>
 		   <title>Predefined <application>systemd</application> targets</title>
@@ -516,7 +445,7 @@ exec /sbin/mingetty tty2
 		   <colspec colname="runlevel" colnum="1" />
 		   <colspec colname="target" colnum="2" />
 		   <colspec colname="usage" colnum="3" />
-		   <colspec colname="member_units" colnum="4"
+		   <colspec colname="member_units" colnum="4" />
 		   <thead>
 		     <row>
 		       <entry>Runlevel</entry>
@@ -527,118 +456,12 @@ exec /sbin/mingetty tty2
 		   </thead>
 		   <tbody>
 		     <row>
-		       <entry>run</entry>
-		       <entry></entry>
-							 <entry>
-								graphical display
-							</entry>
-
-						</row>
-		 <para>
-			The configuration files for SysV init are located in the <filename>/etc/rc.d/</filename> directory. Within this directory, are the <filename>rc</filename>, <filename>rc.local</filename>, <filename>rc.sysinit</filename>, and, optionally, the <filename>rc.serial</filename> scripts as well as the following directories:
-		</para>
-		
-<screen>
-<computeroutput>init.d/ rc0.d/ rc1.d/ rc2.d/ rc3.d/ rc4.d/ rc5.d/ rc6.d/</computeroutput></screen>
-		 <para>
-			The <filename>init.d/</filename> directory contains the scripts used by the <command>/sbin/init</command> command when controlling services. Each of the numbered directories represent the six runlevels configured by default under Fedora.
-		</para>
-		
-
- <section id="s2-init-boot-shutdown-rl">
-			<title>Runlevels</title>
-			 <indexterm significance="normal">
-				<primary>runlevels</primary>
-				 <see><command>init</command> command</see>
-
-			</indexterm>
-			 <indexterm significance="normal">
-				<primary><command>init</command> command</primary>
-				 <secondary>runlevels accessed by</secondary>
-
-			</indexterm>
-			 <para>
-				The idea behind SysV init runlevels revolves around the idea that different systems can be used in different ways. For example, a server runs more efficiently without the drag on system resources created by the X Window System. Or there may be times when a system administrator may need to operate the system at a lower runlevel to perform diagnostic tasks, like fixing disk corruption in runlevel 1.
-			</para>
-			 <para>
-				The characteristics of a given runlevel determine which services are halted and started by <command>init</command>. For instance, runlevel 1 (single user mode) halts any network services, while runlevel 3 starts these services. By assigning specific services to be halted or started on a given runlevel, <command>init</command> can quickly change the mode of the machine without the user manually stopping and starting services.
-			</para>
-			 <para>
-				The following runlevels are defined by default under Fedora:
-			</para>
-			 <blockquote>
-				<itemizedlist>
-					<listitem>
-						<para>
-							<command>0</command> &mdash; Halt
-						</para>
-
-					</listitem>
-					 <listitem>
-						<para>
-							<command>1</command> &mdash; Single-user text mode
-						</para>
-
-					</listitem>
-					 <listitem>
-						<para>
-							<command>2</command> &mdash; Not used (user-definable)
-						</para>
-
-					</listitem>
-					 <listitem>
-						<para>
-							<command>3</command> &mdash; Full multi-user text mode
-						</para>
-
-					</listitem>
-					 <listitem>
-						<para>
-							<command>4</command> &mdash; Not used (user-definable)
-						</para>
-
-					</listitem>
-					 <listitem>
-						<para>
-							<command>5</command> &mdash; Full multi-user graphical mode (with an X-based login screen)
-						</para>
-
-					</listitem>
-					 <listitem>
-						<para>
-							<command>6</command> &mdash; Reboot
-						</para>
-
-					</listitem>
-
-				</itemizedlist>
-
-			</blockquote>
-			 <para>
-				In general, users operate Fedora at runlevel 3 or runlevel 5 &mdash; both full multi-user modes. Users sometimes customize runlevels 2 and 4 to meet specific needs, since they are not used.
-			</para>
-			 <para>
-				The default runlevel for the system is listed in <filename>/etc/inittab</filename>. To find out the default runlevel for a system, look for the line similar to the following near the bottom of <filename>/etc/inittab</filename>:
-			</para>
-			
-<screen>
-<computeroutput>id:5:initdefault:</computeroutput></screen>
-			 <para>
-				The default runlevel listed in this example is five, as the number after the first colon indicates. To change it, edit <filename>/etc/inittab</filename> as root.
-			</para>
-			 <warning>
-				<title>Warning</title>
-				 <para>
-					Be very careful when editing <filename>/etc/inittab</filename>. Simple typos can cause the system to become unbootable. If this happens, either use a boot diskette, enter single-user mode, or enter rescue mode to boot the computer and repair the file.
-				</para>
-				 <para>
-					For more information on single-user and rescue mode, refer to the chapter titled <citetitle>Basic System Recovery</citetitle> in the <citetitle>Fedora Deployment Guide</citetitle>.
-				</para>
-
-			</warning>
-			 <para>
-				It is possible to change the default runlevel at boot time by modifying the arguments passed by the boot loader to the kernel. For information on changing the runlevel at boot time, refer to <xref linkend="s1-grub-runlevels" />.
-			</para>
+		       <entry>1,single</entry>
+		       <entry>rescue.target</entry>
+		       <entry>single user mode, for recovery of critical system components or configuration</entry>
+         	     </row>
+		   </tbody>
+		 </table>
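+		 <para>
+		   To switch a running system to one of these targets, use <command>systemctl isolate</command>; to make a target the default at boot, point the <filename>default.target</filename> symbolic link at it. The target below is only an example, and the library path can vary between releases:
+		 </para>
+<screen>
+<command>systemctl isolate multi-user.target</command>
+<command>ln -sf /usr/lib/systemd/system/multi-user.target /etc/systemd/system/default.target</command></screen>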
 
 		</section>
 		
@@ -648,112 +471,12 @@ exec /sbin/mingetty tty2
 				<primary>runlevels</primary>
 				 <secondary>configuration of</secondary>
 				 <seealso>services</seealso>
+			 </indexterm>
 
-			</indexterm>
-			 <indexterm significance="normal">
-				<primary><application>Services Configuration Tool</application> </primary>
-				 <seealso>services</seealso>
-
-			</indexterm>
-			 <indexterm significance="normal">
-				<primary>services</primary>
-				 <secondary>configuring with <application>Services Configuration Tool</application> </secondary>
-
-			</indexterm>
-			 <indexterm significance="normal">
-				<primary><application>ntsysv</application> </primary>
-				 <seealso>services</seealso>
-
-			</indexterm>
-			 <indexterm significance="normal">
-				<primary>services</primary>
-				 <secondary>configuring with <application>ntsysv</application> </secondary>
-
-			</indexterm>
-			 <indexterm significance="normal">
-				<primary>services</primary>
-				 <secondary>configuring with <command>chkconfig</command> </secondary>
-
-			</indexterm>
-			 <indexterm significance="normal">
-				<primary><command>chkconfig</command> </primary>
-				 <seealso>services</seealso>
-
-			</indexterm>
-			 <para>
-				One of the best ways to configure runlevels is to use an <firstterm>initscript utility</firstterm>. These tools are designed to simplify the task of maintaining files in the SysV init directory hierarchy and relieves system administrators from having to directly manipulate the numerous symbolic links in the subdirectories of <filename>/etc/rc.d/</filename>.
-			</para>
-			 <para>
-				Fedora provides three such utilities:
-			</para>
-			 <itemizedlist>
-				<listitem>
-					<para>
-						<command>/sbin/chkconfig</command> &mdash; The <command>/sbin/chkconfig</command> utility is a simple command line tool for maintaining the <filename>/etc/rc.d/init.d/</filename> directory hierarchy.
-					</para>
-
-				</listitem>
-				 <listitem>
-					<para>
-						<application>/usr/sbin/ntsysv</application> &mdash; The ncurses-based <application>/sbin/ntsysv</application> utility provides an interactive text-based interface, which some find easier to use than <command>chkconfig</command>.
-					</para>
-
-				</listitem>
-				 <listitem>
-					<para>
-						<application>Services Configuration Tool</application> &mdash; The graphical <application>Services Configuration Tool</application> (<command>system-config-services</command>) program is a flexible utility for configuring runlevels.
-					</para>
-
-				</listitem>
-
-			</itemizedlist>
-			 <para>
-				Refer to the chapter titled <citetitle>Controlling Access to Services</citetitle> in the <citetitle>Fedora Deployment Guide</citetitle> for more information regarding these tools.
-			</para>
-
-		</section>
-		
-
-	</section>
-	
-	 <section id="s1-boot-init-shutdown-shutdown">
-		<title>Shutting Down</title>
-		 <indexterm significance="normal">
-			<primary>shutdown</primary>
-			 <seealso>halt</seealso>
-
-		</indexterm>
-		 <indexterm significance="normal">
-			<primary>halt</primary>
-			 <seealso>shutdown</seealso>
-
-		</indexterm>
-		 <para>
-			To shut down Fedora, the root user may issue the <command>/sbin/shutdown</command> command. The <command>shutdown</command> man page has a complete list of options, but the two most common uses are:
-		</para>
-		
-<screen>
-<command>/sbin/shutdown -h now</command></screen>
-		 <para>
-			and
-		</para>
-		
-<screen>
-<command>/sbin/shutdown -r now</command></screen>
-		 <para>
-			After shutting everything down, the <command>-h</command> option halts the machine, and the <command>-r</command> option reboots.
-		</para>
-		 <para>
-			PAM console users can use the <command>reboot</command> and <command>halt</command> commands to shut down the system while in runlevels 1 through 5. For more information about PAM console users, refer to the Fedora Deployment Guide.
-		</para>
-		 <para>
-			If the computer does not power itself down, be careful not to turn off the computer until a message appears indicating that the system is halted.
-		</para>
-		 <para>
-			Failure to wait for this message can mean that not all the hard drive partitions are unmounted, which can lead to file system corruption.
-		</para>
+		 </section>
+		<!-- basically, systemd to sysvinit cheatsheet here. systemctl enable, et al-->
 
-	</section>
+<!--	</section>-->
 	
 
 </appendix>

