[musicians-guide] added many chapters

crantila crantila at fedoraproject.org
Wed Jul 28 06:27:37 UTC 2010


commit 4447fcd58d2f37b6acd03e1936df9194e3a8a5dd
Author: Christopher Antila <crantila at fedoraproject.org>
Date:   Wed Jul 28 02:27:12 2010 -0400

    added many chapters

 en-US/Book_Info.xml                  |    6 +-
 en-US/Digital_Audio_Workstations.xml |  244 ++++++++++++++++++++++++++++++++++
 en-US/Musicians_Guide.xml            |    9 +-
 en-US/Planet_CCRMA_at_Home.xml       |  139 +++++++++++++++++++
 en-US/Real_Time_and_Low_Latency.xml  |   77 +++++++++++
 en-US/Sound_Cards.xml                |  129 ++++++++++++++++++
 en-US/Sound_Servers.xml              |  141 ++++++++++++++++++++
 7 files changed, 741 insertions(+), 4 deletions(-)
---
diff --git a/en-US/Book_Info.xml b/en-US/Book_Info.xml
index a4670a9..2738af5 100644
--- a/en-US/Book_Info.xml
+++ b/en-US/Book_Info.xml
@@ -5,14 +5,14 @@
 ]>
 <bookinfo id="book-Musicians_Guide-Musicians_Guide">
 	<title>Musicians' Guide</title>
-	<subtitle>short description</subtitle>
+	<subtitle>A guide to Fedora Linux's audio creation and music capabilities.</subtitle>
+<!-- Haydn: <subtitle>A guide to Fedora Linux's audio creation and music capabilities, written in a new and special way.</subtitle> -->
 	<productname>Fedora Draft Documentation</productname>
 	<productnumber></productnumber>
 	<edition>14.0.1</edition>
 	<pubsnumber>0</pubsnumber>
 	<abstract>
-		<para>
-			A short overview and summary of the book&#39;s subject and purpose, traditionally no more than one paragraph long. Note: the abstract will appear in the front matter of your book and will also be placed in the description field of the book&#39;s RPM spec file.
+		<para>This document explores some of the audio-creation and music activities possible with Fedora Linux.  Computer audio concepts are explained, and a selection of programs is demonstrated with tutorials showing typical usage.
 		</para>
 	</abstract>
 	<corpauthor>
diff --git a/en-US/Digital_Audio_Workstations.xml b/en-US/Digital_Audio_Workstations.xml
new file mode 100644
index 0000000..3b626f8
--- /dev/null
+++ b/en-US/Digital_Audio_Workstations.xml
@@ -0,0 +1,244 @@
+<?xml version='1.0' encoding='utf-8' ?>
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
+<!ENTITY % BOOK_ENTITIES SYSTEM "Musicians_Guide.ent">
+%BOOK_ENTITIES;
+]>
+
+<chapter id="chap-Musicians_Guide-Digital_Audio_Workstations">
+	<title>Digital Audio Workstations</title>
+	<para>
+		The term <firstterm>Digital Audio Workstation</firstterm> (henceforth <firstterm>DAW</firstterm>) refers to the entire hardware and software setup used for professional (or professional-quality) audio recording, manipulation, synthesis, and production.  It originally referred to devices purpose-built for the task, but as personal computers have become more powerful and widespread, certain specially-designed personal computers can also be thought of as DAWs.  The software running on these computers, especially software capable of multi-track recording, playback, and synthesis, is simply called "DAW software," which is often shortened to "DAW."  So the term "DAW" is moderately ambiguous, but it generally refers to one of these meanings.
+   </para>
+	
+	<section id="sect-Musicians_Guide-Knowing_Which_DAW_to_Use">
+		<title>Knowing Which DAW to Use</title>
+		<para>
+			The Musicians' Guide covers three widely-used DAWs: Ardour, Qtractor, and Rosegarden.  All three use JACK extensively, are highly configurable, share a similar user interface, and allow users to work with both audio and MIDI signals.  Many other DAWs exist, including a wide selection of commercially-available solutions.  Here is a brief description of the programs documented in this Guide:
+			<itemizedlist>
+			<listitem><para>Ardour: the open-source standard for audio manipulation.  Flexible and extensible.</para></listitem>
+			<listitem><para>Qtractor: a relative newcomer, but easy to use; a "lean and mean," MIDI-focused DAW.  Available from Planet CCRMA at Home or RPM Fusion.</para></listitem>
+			<listitem><para>Rosegarden: a well-tested, feature-packed workhorse of Linux audio, especially MIDI.  Includes a visual score editor for creating MIDI tracks.</para></listitem>
+			</itemizedlist>
+      </para>
+      <para>
+			If you are unsure of where to start, then you may not need a DAW at all:
+			<itemizedlist>
+			<listitem><para>If you are looking for a high-quality recording application, or a tool for manipulating one audio file at a time, then you would probably be better off with Audacity.  This will be the choice of most computer users, especially those new to computer audio, or looking for a quick solution requiring little specialized knowledge.  Audacity is also a good way to get your first computer audio experiences, specifically because it is easier to use than most other audio software.</para></listitem>
+			<listitem><para>To take full advantage of the features offered by Ardour, Qtractor, and Rosegarden, your computer should be equipped with professional-quality audio equipment, including an after-market audio interface and input devices like microphones.  If you do not have access to such equipment, then Audacity may be a better choice for you.</para></listitem>
+			<listitem><para>If you are simply hoping to create a "MIDI recording" of some sheet music, you are probably better off using LilyPond.  This program is designed primarily to create printable sheet music, but it will also produce a MIDI-format version of a score if you include the following command in the "score" section of your LilyPond source file: <code>\midi { }</code>.  A selection of options can be put in the "midi" section; refer to the LilyPond help files for a listing.</para></listitem>
+			</itemizedlist>
+		</para>
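For instance, a minimal LilyPond file along these lines produces both a printed score and a MIDI file.  This is only a sketch: the version number and the note content are illustrative, not taken from any tutorial in this Guide.

```lilypond
\version "2.12.0"   % use the version your LilyPond installation reports

\score {
  \relative c' { c4 d e f | g1 }
  \layout { }   % produces the printed score
  \midi { }     % additionally produces a MIDI file
}
```

Running <code>lilypond</code> on such a file produces both a PDF and a MIDI file with the same base name.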
+	</section>
+	
+   <section id="sect-Musicians_Guide-Stages_of_Recording">
+		<title>Stages of Recording</title>
+		<para>
+			There are three main stages in the process of recording something and preparing it for listeners: recording, mixing, and mastering.  Each stage has distinct characteristics, though in practice they sometimes overlap.
+      </para>
+      	<section id="sect-Musicians_Guide-Recording">
+      		<title>Recording</title>
+   	   	<para>
+					Recording is the process of capturing audio regions (also called "clips" or "segments") into the DAW software, for later processing.  Recording is a complex process, involving a microphone that captures sound energy, translates it into electrical energy, and transmits it to an audio interface.  The audio interface converts the electrical energy into a digital signal, and sends it through the operating system to the DAW software.  The DAW stores regions in memory and on the hard drive as required.  Each pass in which the musicians perform some (or all) of the material while the DAW is recording is called a <firstterm>take</firstterm>.  A successful recording usually requires several takes, due to the inconsistencies of musical performance and of the related technological aspects.
+      		</para>
+      	</section>
+      	<section id="sect-Musicians_Guide-Mixing">
+      		<title>Mixing</title>
+   	   	<para>
+					Mixing is the process through which recorded audio regions (also called "clips") are coordinated to produce an aesthetically-appealing musical output.  This usually takes place after recording, but sometimes additional takes will be needed.  Mixing often involves reducing audio from multiple tracks into two channels, for stereo audio - a process known as "down-mixing," because it decreases the amount of audio data.
+            </para>
+      		<para>
+					Mixing includes the following procedures, among others:
+					<itemizedlist>
+					<listitem><para>automating effects,</para></listitem>
+					<listitem><para>adjusting levels,</para></listitem>
+					<listitem><para>time-shifting,</para></listitem>
+					<listitem><para>filtering,</para></listitem>
+					<listitem><para>panning,</para></listitem>
+					<listitem><para>adding special effects.</para></listitem>
+					</itemizedlist>
+            </para>
+            <para>
+					When the person performing the mixing decides that they have finished, their finalized production is called the <firstterm>final mix</firstterm>.
+            </para>
+      	</section>
+         <section id="sect-Musicians_Guide-Mastering">
+      		<title>Mastering</title>
+   	   	<para>
+					Mastering is the process through which a version of the final mix is prepared for distribution and listening.  Mastering can be performed for many target formats, including CD, tape, SuperAudio CD, or hard drive.  Mastering often involves a reduction in the information available in an audio file: recordings are commonly made with 20- or 24-bit samples, for example, and reduced to the audio CD standard of 16-bit samples during mastering.  While most physical formats (like CDs) also specify the audio signal's format, audio recordings mastered to hard drive can take on many formats, including OGG, FLAC, AIFF, MP3, and many others.  This allows the person doing the mastering some flexibility in choosing the quality and file size of the resulting audio.
+      		</para>
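The bit-depth reduction mentioned above can be sketched in a few lines of Python.  This is an illustration of the information loss, not the method any real mastering tool uses (real tools also apply dither, which is omitted here):

```python
# Sketch: reducing 24-bit samples to the CD standard of 16 bits
# discards the 8 least-significant bits of each sample.
# (Real mastering tools also apply dither; omitted here.)

def reduce_to_16_bit(samples_24bit):
    """Truncate 24-bit signed samples to 16-bit signed samples."""
    return [s >> 8 for s in samples_24bit]

# A full-scale 24-bit sample becomes a full-scale 16-bit sample.
print(reduce_to_16_bit([8388607, -8388608, 0]))  # → [32767, -32768, 0]
```

Because 8 of 24 bits are discarded, 256 distinct 24-bit values collapse onto each 16-bit value; this is the "reduction in information" the text describes.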
+      		<para>
+					Even though they are both distinct activities, mixing and mastering sometimes use the same techniques.  For example, a mastering technician might apply a specific equalization filter to optimize the audio for a particular physical medium.
+      		</para>
+      	</section>
+      	<section id="sect-Musicians_Guide-Record_Mix_Master_More_Info">
+      		<title>More Information</title>
+   	   	<para>
+					It takes experience and practice to gain the skills involved in successful recording, mixing, and mastering.  Further information about these procedures is available from many places, including these web pages:
+					<itemizedlist>
+					<listitem><para><ulink url="http://www.64studio.com/howto-mastering">"Mastering your final mix"</ulink></para></listitem>
+					<listitem><para><ulink url="http://en.wikipedia.org/wiki/Audio_mixing_%28recorded_music%29">"Audio mixing (recorded music)"</ulink></para></listitem>
+					<listitem><para><ulink url="http://en.wikipedia.org/wiki/Multitrack_recording">"Multitrack recording"</ulink></para></listitem>
+					</itemizedlist>
+      		</para>
+      	</section>
+	</section>
+	
+	<section id="sect-Musicians_Guide-DAW_Audio_Vocabulary_Transclusion">
+		<title>Audio Vocabulary</title>
+		<para>
+			This section will be transcluded from the "Audio Vocabulary" page.
+		</para>
+	</section>
+	
+	<section id="sect-Musicians_Guide-DAW_Interface_Vocabulary">
+		<title>Interface Vocabulary</title>
+		<para>
+			Understanding these concepts is essential to understanding how to use the DAW software's interface.
+		</para>
+	
+	   <section id="sect-Musicians_Guide-Session">
+		   <title>Session</title>
+		   <para>
+				A <firstterm>session</firstterm> is all of the tracks, regions, automation settings, and everything else that goes along with one "file" saved by the DAW software.  Some DAW applications store the entire session within one file, but others create a new directory to hold the regions and other data.
+         </para>
+         <para>
+				Typically, one session is used to hold an entire recording session; it is broken up into individual songs or movements after recording.  Sometimes, as in the tutorial examples with the Musicians' Guide, one session holds only one song or movement.  There is no strict rule as to how much music should be held within one session, so your personal preference can determine what you do here.
+		   </para>
+	   </section>
+	
+	   <section id="sect-Musicians_Guide-Track_and_Multitrack">
+		   <title>Track and Multitrack</title>
+		   <para>
+				A <firstterm>track</firstterm> represents one channel, or a predetermined collection of simultaneous, inseparable channels (as is often the case with stereo audio).  In the DAW's main window, tracks are usually represented as rows, whereas time is represented by columns.  A track may hold multiple regions, but usually only one of those regions can be heard at a time.  The <firstterm>multitrack</firstterm> capability of modern software-based DAWs is one of the reasons for their success.  Although each individual track can play only one region at a time, the use of multiple tracks allows the DAW's audio output to contain a virtually unlimited number of simultaneous regions.  The most powerful aspect of this is that audio does not have to be recorded simultaneously in order to be played back simultaneously; you could sing a duet with yourself, for example.
+		   </para>
+	   </section>
+	
+	   <section id="sect-Musicians_Guide-Region_Clip_Segment">
+		   <title>Region, Clip, or Segment</title>
+		   <para>
+				Region, clip, and segment are synonyms: different software uses a different word to refer to the same thing.  A <firstterm>region</firstterm> (or <firstterm>clip</firstterm> or <firstterm>segment</firstterm>) is the portion of audio recorded into one track during one take.  Regions are represented in the main DAW interface window as rectangles, usually coloured, and always contained in only one track.  Regions containing audio signal data usually display a spectrographic representation of that data.  Regions containing MIDI signal data are usually displayed as a matrix-based representation of that data.
+         </para>
+         <para>
+				For the three DAW applications in the Musicians' Guide:
+				<itemizedlist>
+				<listitem><para>Ardour calls them "regions,"</para></listitem>
+				<listitem><para>Qtractor calls them "clips," and,</para></listitem>
+				<listitem><para>Rosegarden calls them "segments."</para></listitem>
+				</itemizedlist>
+		   </para>
+	   </section>
+	
+	   <section id="sect-Musicians_Guide-Session_Track_Region">
+		   <title>Relationship of Session, Track, and Region</title>
+		   <para>
+				<!-- [[File:Ardour-session_track_region.xcf]] -->
+				[[File:Ardour-session_track_region.png|200px|left|Session, Track, and Region in Ardour.]]
+		   </para>
+	   </section>
+	
+	   <section id="sect-Musicians_Guide-Transport_and_Playhead">
+		   <title>Transport and Playhead</title>
+		   <para>
+				The <firstterm>transport</firstterm> is responsible for managing the current time in a session, and with it the playhead.  The <firstterm>playhead</firstterm> marks the point on the timeline from where audio would be played, or to where audio would be recorded.  The transport controls the playhead, and whether it is set for recording or only playback.  The transport can move the playhead forward or backward, in slow motion, fast motion, or real time.  In most computer-based DAWs, the playhead can also be moved with the cursor.  The playhead is represented on the DAW interface as a vertical line through all tracks.  The transport's buttons and displays are usually located in a toolbar at the top of the DAW window, but some people prefer to have the transport controls detached from the main interface, and this is how they appear by default in Rosegarden.
+		   </para>
+	   </section>
+	
+	   <section id="sect-Musicians_Guide-Automation">
+		   <title>Automation</title>
+		   <para>
+				Automation of the DAW sounds like it might be an advanced topic, or something used to replace decisions made by a human.  This is absolutely not the case: <firstterm>automation</firstterm> allows the user to automatically make the same adjustments every time a session is played.  This is superior to manual-only control because it allows very precise, gradual, and consistent adjustments, because it relieves you of having to remember the adjustments, and because it allows many more adjustments to be made simultaneously than you could make manually.  In effect, automation allows super-human control of a session.  Most settings can be adjusted by means of automation; the most common are the fader and the panner.
+         </para>
+         <para>
+				The most common method of automating a setting is with a two-dimensional graph called an <firstterm>envelope</firstterm>, which is drawn on top of an audio track, or underneath it in an <firstterm>automation track</firstterm>.  The user adds adjustment points by adding and moving points on the graph.  This method allows for complex, gradual changes of the setting, as well as simple, one-time changes.  Automation is often controlled by means of MIDI signals, for both audio and MIDI tracks.  This allows external devices to adjust settings in the DAW, and vice versa: you can actually automate your own hardware from within a software-based DAW!  Of course, not all hardware supports this, so refer to your device's user manual.
+		   </para>
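The envelope idea can be sketched in a few lines of Python.  This is an illustration of how a DAW might evaluate a piecewise-linear envelope, not code from any real DAW; the function and parameter names are invented for this example:

```python
# Sketch of how a DAW evaluates an envelope: automation points are
# (time, value) pairs, and the setting between points is linearly
# interpolated.  Names here are illustrative, not from any real DAW.

def envelope_value(points, t):
    """Return the automated value at time t, for points sorted by time."""
    if t <= points[0][0]:
        return points[0][1]
    if t >= points[-1][0]:
        return points[-1][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# A fade-out envelope: full volume at 0 s, silence at 10 s.
print(envelope_value([(0, 1.0), (10, 0.0)], 5))  # → 0.5
```

Two adjustment points are enough for a gradual fade; adding more points between them produces the "complex, gradual changes" described above.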
+	   </section>
+	</section>
+   
+	<section id="sect-Musicians_Guide-DAW_User_Interface">
+		<title>User Interface</title>
+		<para>
+		<!--
+File:Qtractor-interface.xcf
+File:Qtractor-interface-clocks.png
+File:Qtractor-interface-messages.png
+File:Qtractor-interface-track.png
+File:Qtractor-interface-track_info.png
+File:Qtractor-interface-transport.png
+-->
+This section describes various components of software-based DAW interfaces.  Although the Qtractor application is visible in the images, both Ardour and Rosegarden (along with most other DAW software) have an interface that differs only in details, such as which buttons are located where.
+		</para>
+	
+	   <section id="sect-Musicians_Guide-Messages_Pane">
+		   <title>"Messages" Pane</title>
+		   <para>
+				[[File:Qtractor-interface-messages.png|300px|"Messages" Pane]]
+
+				The "messages" pane, shown in the above diagram, contains messages produced by the DAW, and sometimes messages produced by software used by the DAW, such as JACK.  If an error occurs, or if the DAW does not perform as expected, you should check the "messages" pane for information that may help you to get the desired results.  The "messages" pane can also be used to determine whether JACK and the DAW were started successfully, with the options you prefer.
+		   </para>
+	   </section>
+	
+	   <section id="sect-Musicians_Guide-DAW_Clock">
+		   <title>Clock</title>
+		   <para>
+				[[File:Qtractor-interface-clocks.png|300px|Clock]]
+
+				The clock shows the current place in the file, as indicated by the transport.  In the image, you can see that the transport is at the beginning of the session, so the clock indicates "0".  This clock is configured to show time in minutes and seconds, so it is a "time clock."  Other possible settings for clocks are to show BBT (bars, beats, and ticks - a "MIDI clock"), samples (a "sample clock"), or an SMPTE timecode (used for high-precision synchronization, usually with video - a "timecode clock").  Some DAWs allow the use of multiple clocks simultaneously.
+         </para>
+         <para>
+				Note that this particular time clock in "Qtractor" also offers information about the MIDI tempo and metre (120.0 beats per minute, and 4/4 metre), along with a quantization setting for MIDI recording.
+		   </para>
+	   </section>
+	
+	   <section id="sect-Musicians_Guide-Track_Info_Pane">
+		   <title>"Track Info" Pane</title>
+		   <para>
+				[[File:Qtractor-interface-track_info.png|300px|"Track Info" Pane]]
+
+				The "track info" pane contains information and settings for each track and bus in the session.  Here, you can usually adjust settings like a track's or bus's input and output routing, the instrument, bank, program, and channel of MIDI tracks, and the three buttons shown on this image: "R" for "arm to record," "M" for "mute/silence track's output," and "S" for "solo mode," where only the selected tracks and busses are heard.
+         </para>
+         <para>
+				The information provided, and the layout of buttons, can change dramatically between DAWs, but they all offer the same basic functionality.  Often, right-clicking on a track info box will give access to extended configuration options.  Left-clicking on a portion of the track info box that is not a button allows you to select a track without selecting a particular moment in the "track" pane.
+		   </para>
+		   <para>
+			   The "track info" pane does not scroll out of view as the "track" pane is adjusted, but is independent.
+		   </para>
+	   </section>
+	
+	   <section id="sect-Musicians_Guide-Track_Pane">
+		   <title>"Track" Pane</title>
+		   <para>
+				[[File:Qtractor-interface-track.png|300px|"Track" Pane]]
+
+				The "track" pane is the main workspace in a DAW.  It shows regions (also called "clips") with a rough overview of the audio waveform or MIDI notes, allows you to adjust the starting time and length of regions, and also allows you to assign or re-assign a region to a track.  The "track" pane shows the playhead as a vertical line; in this image it is the left-most red line in the "track" pane.
+         </para>
+         <para>
+				Scrolling the "track" pane horizontally allows you to view the regions throughout the session.  The left-most point is the start of the session; the right-most point is after the end of the session.  Most DAWs allow you to scroll well beyond the end of the session.  Scrolling vertically in the "track" pane allows you to view the regions and tracks in a particular time range.
+		   </para>
+	   </section>
+	
+	   <section id="sect-Musicians_Guide-DAW_Transport_Controls">
+		   <title>Transport Controls</title>
+		   <para>
+			   [[File:Qtractor-interface-transport.png|300px|Transport Controls]]
+			   
+			   The transport controls allow you to manipulate the transport in various ways.  The shape of the buttons is somewhat standardized; a similar-looking button will usually perform the same function in all DAWs, as well as in consumer electronic devices like CD players and DVD players.
+         </para>
+         <para>
+				The single, left-pointing arrow with a vertical line will move the transport to the start of the session, without playing or recording any material.  In "Qtractor," if there is a blue place-marker between the transport and the start of the session, the transport will skip to the blue place-marker.  You can press the button again if you wish to skip to the next blue place-marker or the beginning of the session.
+         </para>
+         <para>
+				The double left-pointing arrows move the transport in fast motion, towards the start of the session.  The double right-pointing arrows move the transport in fast motion, towards the end of the session.
+         </para>
+         <para>
+				The single, right-pointing arrow with a vertical line will move the transport to the end of the last region currently in a session.  In "Qtractor," if there is a blue place-marker between the transport and the end of the last region in the session, the transport will skip to the blue place-marker.  You can press the button again if you wish to skip to the next blue place-marker or the end of the last region in the session.
+         </para>
+         <para>
+				The single, right-pointing arrow is commonly called "play," but it actually moves the transport forward in real time.  When it does this, if the transport is armed for recording, any armed tracks will record.  Whether or not the transport is armed, pressing the "play" button causes all unarmed tracks to play all existing regions.
+         </para>
+         <para>
+				The circular button arms the transport for recording.  It is conventionally red in colour.  In "Qtractor," the transport can only be armed <emphasis>after</emphasis> at least one track has been armed; to show this, the transport's "arm" button only turns red if a track is armed.
+		   </para>
+	   </section>
+	</section>
+</chapter>
+
diff --git a/en-US/Musicians_Guide.xml b/en-US/Musicians_Guide.xml
index ac3a328..60ab481 100644
--- a/en-US/Musicians_Guide.xml
+++ b/en-US/Musicians_Guide.xml
@@ -6,7 +6,14 @@
 <book status="draft">
 	<xi:include href="Book_Info.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
 	<xi:include href="Preface.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
-	<xi:include href="Chapter.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
+	<!-- Start Chapters -->
+	<xi:include href="Sound_Cards.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
+	<xi:include href="Sound_Servers.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
+	<xi:include href="Planet_CCRMA_at_Home.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
+	<xi:include href="Real_Time_and_Low_Latency.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
+	
+	<xi:include href="Digital_Audio_Workstations.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
+	<!-- End Chapters -->
 	<xi:include href="Revision_History.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
 	<index />
 </book>
diff --git a/en-US/Planet_CCRMA_at_Home.xml b/en-US/Planet_CCRMA_at_Home.xml
new file mode 100644
index 0000000..18ce3e3
--- /dev/null
+++ b/en-US/Planet_CCRMA_at_Home.xml
@@ -0,0 +1,139 @@
+<?xml version='1.0' encoding='utf-8' ?>
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
+<!ENTITY % BOOK_ENTITIES SYSTEM "Musicians_Guide.ent">
+%BOOK_ENTITIES;
+]>
+
+<chapter id="chap-Musicians_Guide-Planet_CCRMA_at_Home">
+	<title>Planet CCRMA at Home</title>
+	
+	<section id="sect-Musicians_Guide-What_Is_Planet_CCRMA">
+		<title>What Planet CCRMA at Home Is</title>
+		<para>
+			As stated on the project's home page, it is the goal of Planet CCRMA at Home to provide packages which will transform a Fedora Linux-based computer into an audio workstation.  While the Fedora Project does an excellent job of providing a general-purpose operating system, a general-purpose operating system is insufficient for audio work of the highest quality.  The contributors to Planet CCRMA at Home provide software packages which can tune your system specifically for audio work.
+		</para>
+		<para>
+			Users of GNU Solfege and LilyPond need not concern themselves with Planet CCRMA at Home, unless they also use other audio software.  Neither Solfege nor LilyPond would benefit from a computer optimized for audio production.
+		</para>
+		<section id="sect-Musicians_Guide-CCRMA">
+			<title>CCRMA</title>
+			<para>
+				CCRMA stands for "Center for Computer Research in Music and Acoustics," which is the name of an academic research initiative and music computing facility at Stanford University, located in Stanford, California.  Its initiatives help scholars to understand the effects and possibilities of computers and technology in various musical contexts.  They offer academic courses, hold workshops and concerts, and try to incorporate the work of many highly-specialized fields.
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-CCRMA_Software">
+			<title>The Software</title>
+			<para>
+				The Planet CCRMA at Home website suggests that they provide most of the software used on the computers in CCRMA's computing facilities.  Much of this software is highly advanced and complex, and not intended for everyday use.  More adventurous users are encouraged to explore Planet CCRMA's website, and investigate the software for themselves.
+			</para>
+		</section>
+	</section>
+	
+	<section id="sect-Musicians_Guide-Knowing_Whether_to_Use_Planet_CCRMA">
+		<title>Knowing Whether You Should Use Planet CCRMA at Home</title>
+		<section id="sect-Musicians_Guide-CCRMA_Need_Exclusive_Software">
+			<title>Do You Need Exclusive Software?</title>
+			<para>
+				The only useful reason to install an additional repository is if you intend to install and use its software.  The only software application covered in this guide that is available exclusively from the Planet CCRMA at Home repository is "SuperCollider".  The Planet CCRMA repository also offers many other audio-related software applications, many of which are available from the default Fedora Project repositories.
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-CCRMA_Updated_Versions">
+			<title>Do You Need Updated Versions?</title>
+			<para>
+				Most of the audio software currently available in the default Fedora repositories was initially available in Fedora from the Planet CCRMA at Home repository.  Sometimes, an updated version of an application is available from the Planet CCRMA repository before it is available from the Fedora Updates repository.  If you need the newer software version, then you should install the Planet CCRMA repository.
+			</para>
+			<para>
+				This is also a potential security weakness for users who install the Planet CCRMA repository but do not install any of its software.  When "yum" finds a newer version of an installed application, it will install it, regardless of which repository provides it.  This may happen without you noticing, so that you begin using Planet CCRMA software without knowing it.
+			</para>
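One way to limit this risk is yum's <code>includepkgs</code> option, which restricts a repository to the packages you list.  The following is only a sketch: the file name, section name, and package pattern are assumptions, so check the actual file installed in your /etc/yum.repos.d/ directory:

```ini
# /etc/yum.repos.d/planetccrma.repo  (file and section names are assumptions)
[planetccrma]
name=Planet CCRMA at Home
enabled=1
gpgcheck=1
# Only these packages may come from this repository; everything else
# continues to come from the default Fedora repositories.
includepkgs=supercollider*
```

With this in place, a routine "yum update" will not silently replace Fedora packages with their Planet CCRMA versions.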
+		</section>
+		<section id="sect-Musicians_Guide-CCRMA_Security_and_Stability">
+			<title>Security and Stability with Third-Party Repositories</title>
+			<para>
+				The biggest reason that you should avoid installing the Planet CCRMA at Home repository unless you <emphasis>need</emphasis> its software is security.  There are two main security issues with using the Planet CCRMA repositories:
+				<orderedlist>
+				<listitem><para>Planet CCRMA is intended for specialized audio workstations.  The software is packaged in a way that creates potential (and unknown) security threats, caused by the optimizations necessary to prepare a computer system for use in audio work.  Furthermore, these optimizations may expose software bugs present in non-Planet CCRMA software, and allow them to do more damage than on a non-optimized system.  Finally, a computer system's "stability" (its ability to run without trouble) may be compromised by audio optimizations.  Regular desktop applications may perform less well on audio-optimized systems, if the audio optimizations unintentionally degrade the performance of other processes.</para></listitem>
+				<listitem><para>CCRMA is not a large, Linux-focussed organization.  It is an academic organization, and its primary intention with the Planet CCRMA at Home repository is to allow anybody with a computer to do the same kind of work that they do.  The Fedora Project is a relatively large organization, backed by one of the world's largest commercial Linux providers, which is focussed on creating a stable and secure operating system for daily use.  Furthermore, thousands of people around the world are working for the Fedora Project or its corporate sponsor, and it is their responsibility to proactively solve problems.  CCRMA has the same responsibility, but it does not have the dedicated resources of the Fedora Project, so it would be unrealistic to expect it to provide the same level of support.</para></listitem>
+				</orderedlist>
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-CCRMA_Best_Practices">
+			<title>A "Best Practices" Solution</title>
+			<para>
+				All Fedora Linux users should be grateful to the people working at CCRMA, who help to provide the Planet CCRMA at Home repository.  Their work has been instrumental in allowing Fedora to provide the amount of high-quality audio software that it does.  Furthermore, the availability of many of CCRMA's highly-specialized software applications through the Planet CCRMA at Home repository is an invaluable resource to audio and music enthusiasts.
+			</para>
+			<para>
+				On the other hand, Fedora users cannot expect that Planet CCRMA software is going to meet the same standards as Fedora software.  While the Fedora Project's primary goal is to provide Linux software, CCRMA's main goal is to advance the state of knowledge of computer-based music and audio research and art.
+			</para>
+			<para>
+				Where do these two goals meet?
+			</para>
+			<para>
+				If you want to use your computer for both day-to-day desktop tasks and high-quality audio production, one good solution is to "dual-boot" your computer.  This involves installing Fedora Linux twice on the same physical computer, but it will allow you to keep an entirely separate operating system environment for the Planet CCRMA at Home software.  Not only will this allow you to safely and securely run Planet CCRMA applications in their most-optimized state, but you can help to further optimize your system by turning off and even removing some system services that you do not need for audio work.  For example, a GNOME or KDE user might choose to install only "Openbox" for their audio-optimized installation.
+			</para>
+			<para>
+				Alternatively, there is the possibility of going half-way: installing only some Planet CCRMA applications, but not the fully-optimized kernel and system components.  This would be more suitable for a computer used most often for typical day-to-day operations (email, word processing, web browsing, and so on).  If you wanted to use SuperCollider, but did not require other audio software, for example, then this might be the best solution for you.
+			</para>
+			<para>
+				Ultimately, it is your responsibility to ensure that your computer and its data are kept safe and secure.  You will need to find the best solution for your own work patterns and desires.
+			</para>
+		</section>
+	</section>
+	
+	<section id="sect-Musicians_Guide-Using_Planet_CCRMA_Software">
+		<title>Using Software from Planet CCRMA at Home</title>
+		<para>
+			The Planet CCRMA at Home software is hosted (stored) on a server at Stanford University.  It is separate from the Fedora Linux servers, so yum (the command-line utility used by PackageKit and KPackageKit) must be made aware that you wish to use it.  After installing the repository, Planet CCRMA at Home software can be installed through yum, PackageKit, or KPackageKit just as easily as any other software.
+		</para>
+		<section id="sect-Musicians_Guide-CCRMA_Installing_Repository">
+			<title>Installing the Planet CCRMA at Home Repositories</title>
+			<para>
+				The following steps will install the Planet CCRMA at Home repository, intended only for Fedora Linux-based computers.
+				<orderedlist>
+				<listitem><para>Update your computer with PackageKit, KPackageKit, or by running <code>su -c 'yum update'</code> and approving the installation.</para></listitem>
+				<listitem><para>You will have to use a terminal window for the next portion.</para></listitem>
+				<listitem><para>Run the following command: <code>su -c 'rpm -Uvh http://ccrma.stanford.edu/planetccrma/mirror/fedora/linux/planetccrma/12/i386/planetccrma-repo-1.1-2.fc12.ccrma.noarch.rpm'</code>  Although the URL names Fedora 12, this repository package works for Fedora 12, 13, and 14, on both 32-bit and 64-bit systems.</para></listitem>
+				<listitem><para>Update your computer again.</para></listitem>
+				<listitem><para>You may receive a warning that the RPM database was altered outside of "yum".  This is normal.</para></listitem>
+				<listitem><para>Your repository definition will automatically be updated.</para></listitem>
+				<listitem><para>Some packages are available from Fedora repositories in addition to other repositories (like Planet CCRMA at Home).  If the Planet CCRMA repository has a newer version of something than the other repositories that you have installed, then the Planet CCRMA version will be installed at this point.</para></listitem>
+				</orderedlist>
+			</para>
+			<para>
+				Although it is necessary to use the "rpm" program directly to install the repository, all other Planet CCRMA software can be installed through "yum," like all other applications.  Here is an explanation of the command-line options used above:
+				<itemizedlist>
+				<listitem><para>-U means "upgrade," which will install the specified package, and remove any previously-installed version</para></listitem>
+				<listitem><para>-v means "verbose," which will print additional information messages</para></listitem>
+				<listitem><para>-h means "hash," which will display hash marks (these: #) showing the progress of installation.</para></listitem>
+				</itemizedlist>
+			</para>
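As a quick sanity check after running the rpm command, you can list yum's repository configuration directory.  This is an illustrative sketch only; the exact file name installed by the repository package is an assumption, so adapt the pattern if nothing matches:

```shell
# Look for the Planet CCRMA repository definition that the rpm command
# installs into yum's configuration directory.  The "ccrma" pattern is
# an assumption about the file name.
ls /etc/yum.repos.d/ 2>/dev/null | grep -i ccrma || echo "Planet CCRMA repository not found"
```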
+		</section>
+		<section id="sect-Musicians_Guide-CCRMA_Repository_Priorities">
+			<title>Setting Repository Priorities</title>
+			<para>
+				This is optional, and recommended only for advanced users.  Normally, "yum" will install the latest version of a package, regardless of which repository provides it.  Using this plugin changes that behaviour, so that yum chooses package versions primarily based on which repository provides them.  If a newer version is available from a repository with lower priority, yum will not upgrade the package.  If you simply wish to prevent a particular package from being updated, the instructions in "Preventing a Package from Being Updated" are better-suited to your needs.
+				<orderedlist>
+				<listitem><para>Install the "yum-plugin-priorities" package.</para></listitem>
+				<listitem><para>Use a text editor or the "cat" or "less" command to verify that <code>/etc/yum/pluginconf.d/priorities.conf</code> exists, and contains the following text:
+				<screen>[main]
+enabled = 1</screen>If you want to stop using the plugin, you can edit this file so that <code>enabled = 0</code>.  This allows you to keep the priorities as set in the repository configuration files.</para></listitem>
+				<listitem><para>You can set priorities for some or all repositories.  To add a priority to a repository, edit its respective file in the <code>/etc/yum.repos.d/</code> directory, adding a line like <code>priority = N</code>, where N is a number from 1 to 99, inclusive.  A priority of 1 is the highest setting, and 99 is the lowest.  You will need to set priorities on at least two repositories before this becomes useful.</para></listitem>
+				</orderedlist>
+			</para>
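The last step can be sketched safely on a throw-away copy of a repository file, rather than a real file in /etc/yum.repos.d/; the repository name below is invented for illustration:

```shell
# Demonstrate adding a "priority" line to a yum repository definition.
# A temporary file with an invented repository name is used, so this is
# safe to run; apply the same edit to real files in /etc/yum.repos.d/.
repo=$(mktemp)
printf '[planetccrma]\nname=Planet CCRMA (example)\nenabled=1\n' > "$repo"
# Add the highest priority (1) only if no priority line is present yet.
grep -q '^priority' "$repo" || echo 'priority = 1' >> "$repo"
cat "$repo"
rm -f "$repo"
```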
+		</section>
+		<section id="sect-Musicians_Guide-CCRMA_Preventing_Package_Updates">
+			<title>Preventing a Package from Being Updated</title>
+			<para>
+				This is optional, and recommended only for advanced users.  Normally, "yum" will install the latest version of a package.  Using this plugin will allow you to prevent certain packages from being updated.  If you wish to prevent packages from a particular repository from being used, then the instructions in "Setting Repository Priorities" are better-suited to your needs.
+				<orderedlist>
+				<listitem><para>Install the "yum-plugin-versionlock" package.</para></listitem>
+				<listitem><para>Use a text editor or the "cat" or "less" command to verify that <code>/etc/yum/pluginconf.d/versionlock.conf</code> exists, and contains the following text:
+				<code>enabled = 1</code></para></listitem>
+				<listitem><para>Add the list of packages which you do not want to be updated to <code>/etc/yum/pluginconf.d/versionlock.list</code>.  Each package should go on its own line.  For example:
+				<screen>jack-audio-connection-kit-1.9.4
+qjackctl-0.3.6</screen></para></listitem>
+				</orderedlist>
+			</para>
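The list format from the last step can be tried out safely with a temporary file; the package names and versions below are just the examples from the list above:

```shell
# Demonstrate the versionlock list format: one package, with its version,
# per line.  A temporary file stands in for the real file,
# /etc/yum/pluginconf.d/versionlock.list.
locklist=$(mktemp)
printf 'jack-audio-connection-kit-1.9.4\nqjackctl-0.3.6\n' > "$locklist"
cat "$locklist"
rm -f "$locklist"
```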
+		</section>
+	</section>
+	
+</chapter>
\ No newline at end of file
diff --git a/en-US/Real_Time_and_Low_Latency.xml b/en-US/Real_Time_and_Low_Latency.xml
new file mode 100644
index 0000000..c5ced01
--- /dev/null
+++ b/en-US/Real_Time_and_Low_Latency.xml
@@ -0,0 +1,77 @@
+<?xml version='1.0' encoding='utf-8' ?>
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
+<!ENTITY % BOOK_ENTITIES SYSTEM "Musicians_Guide.ent">
+%BOOK_ENTITIES;
+]>
+
+<chapter id="chap-Musicians_Guide-Real_Time_and_Low_Latency">
+	<title>Real-Time and Low Latency</title>
+	<para>
+		It is perhaps a common perception that computers can compute things instantaneously.  Anybody who has ever waited for a web page to load has first-hand experience that this is not the case: computers take time to do things, even if the amount of time is often imperceptible to human observers.  Moreover, a computer doing one thing can seem like it's acting nearly instantaneously, but a computer doing fifteen things will have a more difficult time keeping up appearances.
+	</para>
+	
+	<section id="sect-Musicians_Guide-Low_Latency">
+		<title>Why Low Latency Is Desirable</title>
+		<para>
+			When computer audio specialists talk about a computer acting in <firstterm>real-time</firstterm>, they mean that it is acting with only an imperceptible delay.  A computer cannot act on something instantaneously, and the amount of waiting time between an input and its output is called <firstterm>latency</firstterm>.  In order for the delay between input and output to be perceived as non-existent (in other words, for the computer to "react in real-time"), the latency must be low.
+		</para>
+		<para>
+			For periodic tasks, like processing audio (which has a consistently recurring amount of data per second), low latency is desirable, but <emphasis>consistent</emphasis> latency is usually more important.  Think of it like this: years ago in North America, milk was delivered to homes by a dedicated delivery person.  Imagine if the milk delivery person had a medium-latency, but consistent, schedule, returning every seven days.  You would be able to plan how much milk to buy, and to limit your intake so that you don't run out too soon.  Now imagine if the milk delivery person had a low-latency, but inconsistent, schedule, returning every one to four days.  You would never be sure how much milk to buy, and you wouldn't know how to limit yourself.  Sometimes there would be too much milk, and sometimes you would run out.  Audio-processing and synthesis software behaves in a similar way: if it has a consistent amount of latency, it can plan accordingly.  If it has an inconsistent amount of latency - whether large or small - there will sometimes be too much data, and sometimes not enough.  If your application runs out of audio data, there will be noise or silence in the audio signal - both bad things.
+		</para>
+		<para>
+			Relatively low latency is still important, so that your computer reacts imperceptibly quickly to what's going on.  The point is that the difference between an 8&nbsp;ms target latency and a 16&nbsp;ms target latency is almost certainly imperceptible to humans, but the higher latency may help your computer to be more consistent - and that's more important.
+		</para>
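As a rough illustration of the numbers involved, latency is simply the audio buffer length divided by the sample rate.  The 256-frame period size and 48 kHz rate below are assumptions typical of JACK setups, not values from this guide:

```shell
# latency (ms) = frames per period / sample rate * 1000
frames=256
rate=48000
echo "$frames $rate" | awk '{ printf "%.1f ms\n", $1 / $2 * 1000 }'
# prints "5.3 ms"
```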
+	</section>
+	
+	<section id="sect-Musicians_Guide-Processor_Scheduling">
+		<title>Processor Scheduling</title>
+		<para>
+			If you've ever opened the "System Monitor" application, you will probably have noticed that there are a lot of "processes" running all the time.  Some of these processes need the processor, and some of them are just waiting around for something to happen.  To help increase the number of processes that can run at the same time, many modern CPUs have more than one "core," which allows more processes to be evaluated at the same time.  Even with these improvements, there are usually more processes than available cores: my computer right now has 196 processes and only three cores.  There has to be a way of deciding which process gets to run and when, and this task is left to the operating system.
+		</para>
+		<para>
+			In GNU/Linux systems like Fedora Linux, the core of the operating system (called the <firstterm>kernel</firstterm>) is responsible for deciding which process gets to execute at what time.  This responsibility is called "scheduling."  Scheduling access to the processor is called <firstterm>processor scheduling</firstterm>.  The kernel also manages scheduling for a number of other things, like memory access, video hardware access, audio hardware access, hard drive access, and so on.  The algorithm (procedure) used for each of these scheduling tasks is different for each, and can be changed depending on the user's needs and the specific hardware being used.  For a hard drive, for example, it makes sense to consider the physical location of data on the disk before deciding which process gets to read first.  For a processor this is irrelevant, but there are many other things to consider.
+		</para>
+		<para>
+			There are a number of scheduling algorithms available with the standard Linux kernel, and for most uses, a "fair queueing" system is appropriate.  This helps to ensure that all processes get an equal amount of time with the processor, but that is unacceptable for audio work.  If you're recording a live concert, and the "PackageKit" update manager starts, you don't care whether PackageKit gets a fair share of processing time - it's more important that the audio is recorded as accurately as possible.  For that matter, if you're recording a live concert, and your computer isn't fast enough to update the monitor, keyboard, and mouse position while providing uninterrupted, high-quality audio, you want the audio instead of the monitor, keyboard, and mouse.  After all, once you've missed even the smallest portion of audio, it's gone for good!
+		</para>
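You can inspect the scheduling policy of a running process with the chrt utility (part of the util-linux package; shown here as an illustration, not something required by this guide):

```shell
# Show the scheduling policy and priority of the current shell.  The
# default "fair" policy appears as SCHED_OTHER; real-time processes use
# SCHED_FIFO or SCHED_RR instead.
chrt -p $$
```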
+	</section>
+	
+	<section id="sect-Musicians_Guide-Real_Time_Linux_Kernel">
+		<title>The Real-Time Linux Kernel</title>
+		<para>
+			There is a "real-time patch" for the Linux kernel which enables the processor to unfairly schedule certain processes that ask for higher priority.  Although the term "patch" may make it seem like this is just a partial solution, it really refers to the fact that the programming code used to enable this kind of unfair scheduling is not included in standard kernels; the standard kernel code must have this code "patched" into it.
+		</para>
+		<para>
+			The default behaviour of a real-time kernel is still to use the "fair queueing" system.  This is good, because most processes don't need consistently low latencies.  Only specific processes are designed to request high-priority scheduling.  Each process is given (or asks for) a priority number, and the real-time kernel will always give processing time to the process with the highest priority number, even if that process uses up <emphasis>all</emphasis> of the available processing time.  This puts regular applications at a disadvantage: when a high-priority process is running, the rest of the system may be unable to function properly.  In extreme (and very rare!) cases, a real-time process can encounter an error, use up all the processing time, and prevent any other process from running - effectively locking you out of your computer.  Security measures have been taken to help ensure this doesn't happen, but as with anything, there is no guarantee.  If you use a real-time kernel, you are exposing yourself to a slightly higher risk of system crashes.
+		</para>
+		<para>
+			A real-time kernel should not be used on a computer that acts as a server, for these reasons.
+		</para>
+	</section>
+	
+	<section id="sect-Musicians_Guide-Hard_and_Soft_Real_Time">
+		<title>Hard and Soft Real-Time</title>
+		<para>
+			Finally, there are two different kinds of real-time scheduling.  The Linux kernel, even at its most extreme, uses only <firstterm>soft real-time</firstterm> scheduling.  This means that, while processor and other scheduling algorithms may be optimized to give preference to higher-priority processes, no absolute guarantee of performance can be made.  A real-time kernel helps to greatly reduce the chance of an audio process running out of data, but sometimes it can still happen.
+		</para>
+		<para>
+			A <firstterm>hard real-time</firstterm> computer is designed for specialized purposes, where even the smallest amount of latency can make the difference between life and death.  These systems are implemented in hardware as well as software.  Example uses include triggering airbag deployment in automobile crashes, and monitoring the heart rate of a patient during an operation.  These computers are not particularly multi-functional, which is part of how they accomplish a guaranteed low latency.
+		</para>
+	</section>
+	
+	<section id="sect-Musicians_Guide-Getting_Real_Time_Kernel_in_Fedora">
+		<title>Getting a Real-Time Kernel in Fedora Linux</title>
+		<para>
+			In Fedora Linux, the real-time kernel is provided by the Planet CCRMA at Home software repositories.  Along with the warnings in the Planet CCRMA at Home chapter, here is one more to consider: the real-time kernel is used by fewer people than the standard kernel, so it is less well-tested.  The chances of something going wrong are relatively low, but be aware that using a real-time kernel increases the level of risk.  Always leave a non-real-time option available, in case the real-time kernel stops working.
+		</para>
+		<para>
+			You can install the real-time kernel, along with other system optimizations, by following these instructions:
+			<orderedlist>
+			<listitem><para>Install the Planet CCRMA at Home repositories by following the instructions in <xref linkend="sect-Musicians_Guide-CCRMA_Installing_Repository" />.</para></listitem>
+			<listitem><para>Run the following command in a terminal: <code>su -c 'yum install planetccrma-core'</code>  Note that this is a meta-package, which does not install anything by itself, but causes a number of other packages to be installed, which will themselves perform the desired installation and optimization.</para></listitem>
+			<listitem><para>Shut down and reboot your computer, to test the new kernel.  If you decided to modify your GRUB configuration, be sure that you leave a non-real-time kernel available for use.</para></listitem>
+			</orderedlist>
+		</para>
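After rebooting, you can check whether the real-time kernel is actually the one running.  The "rt" substring test below is an assumption based on common real-time kernel naming; your kernel's version string may differ:

```shell
# Print the running kernel version and report whether it looks like a
# real-time ("rt") build.
kernel=$(uname -r)
if echo "$kernel" | grep -q rt; then
    echo "$kernel: real-time kernel"
else
    echo "$kernel: standard kernel"
fi
```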
+	</section>
+	
+</chapter>
diff --git a/en-US/Sound_Cards.xml b/en-US/Sound_Cards.xml
new file mode 100644
index 0000000..e8fe2c0
--- /dev/null
+++ b/en-US/Sound_Cards.xml
@@ -0,0 +1,129 @@
+<?xml version='1.0' encoding='utf-8' ?>
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
+<!ENTITY % BOOK_ENTITIES SYSTEM "Musicians_Guide.ent">
+%BOOK_ENTITIES;
+]>
+
+<chapter id="chap-Musicians_Guide-Sound_Cards">
+	<title>Sound Cards and Digital Audio</title>
+	<para>
+		Everybody has a vague idea of what sound cards are, how they work, and what they do.  Sometimes, especially when doing professional-quality audio work, a vague understanding is no longer sufficient.  This chapter introduces the technical vocabulary used when discussing computer audio hardware.
+	</para>
+	
+	<section id="sect-Musicians_Guide-What_Sound_Cards_Are">
+		<title>Defining Sound Cards</title>
+		<para>
+			Broadly defined, a sound card is any computer-connected device which allows the computer to process audio in some way.  There are two general categories into which most sound cards fit, described below.
+		</para>
+		<section id="sect-Musicians_Guide-Audio_Interfaces">
+			<title>Audio Interfaces</title>
+			<para>
+				This is a hardware device that allows audio equipment to be connected to your computer, including microphones and speakers.  Typically audio entering or leaving an audio interface from/to an external device requires conversion between digital and analogue formats.  However, with the rise of external digital audio equipment, there are an increasing number of devices that connect digitally to an audio interface.
+			</para>
+			<para>
+				The conversion between analogue and digital signals is a prerequisite for computers to be able to process audio signals, so it is the primary function of audio interfaces.  The real world creates sound with a limitless range of possibilities for pitch, volume, and duration.  The digital nature of computers requires these limitless possibilities to be reduced to finite limits.  The best digital/analogue converters are capable of using these limits in such a way that humans don't notice anything missing - much like the best computer monitors and graphics adapters are able to disguise the fact that only about half of the colours our eyes can see are displayable on computers.  This problem is discussed further in the "Bit Rates and Sample Rates" section.
+			</para>
+			<para>
+				Audio interfaces also amplify signals for directly-connected analogue devices (like headphones).  Some offer power for microphones, too (pre-amplification and/or phantom power).
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-MIDI_Interfaces">
+			<title>MIDI Interfaces</title>
+			<para>
+				MIDI stands for "Musical Instrument Digital Interface," and is commonly associated with low-quality imitations of acoustic instruments.  This association is unfortunate, since high-quality audio is indeed possible with MIDI, and MIDI-driven devices have played a part in many mainstream and non-mainstream audio environments.  Whereas audio signals specify the sounds themselves, MIDI signals contain instructions on how to make the sounds.  It is a synthesizer's responsibility to follow these instructions, turning them into sounds.  Going even further, the MIDI specification allows for the control of many audio-related devices, like mixers, sequencers, and Digital Audio Workstations.  Although the signals used to control these devices (or software applications) do not directly cause the generation of music, they still follow  the definition of "MIDI signals": instructions on how to make sounds.
+			</para>
+			<para>
+				Whereas audio interfaces allow the input and output of audio signals ("normal sound") from a computer, MIDI interfaces allow the input and output of MIDI signals.  Some audio interfaces have MIDI capabilities built-in, and some MIDI interfaces also transform MIDI signals into audio signals.  The latter kind of device performs "MIDI synthesis," a task for which there exist many software-only solutions.  "FluidSynth," covered in the FluidSynth section of this guide, is one such software solution.
+			</para>
+			<para>
+				Having a hardware-based MIDI interface is not a requirement for working with MIDI signals and applications.  The costly nature of most MIDI hardware makes it impractical for occasional or beginning MIDI users and computer music enthusiasts.  Much of the software in this Guide is capable of working with MIDI signals, and supports but does not require MIDI-capable hardware.
+			</para>
+		</section>
+	</section>
+	
+	<section id="sect-Musicians_Guide-Sound_Card_Connections">
+		<title>Methods of Connection</title>
+		<para>
+			The following connection methods can be used by either audio or MIDI interfaces, so they are collectively referred to as "sound cards," in this section.
+		</para>
+		
+		<section id="sect-Musicians_Guide-Motherboard_Integrated">
+			<title>Integrated into the Motherboard</title>
+			<para>
+				These sound cards are built into the computer's motherboard.  In recent years, the quality of audio produced by these sound cards has greatly increased, but the best integrated solutions are still not as good as the best non-integrated solutions.  Good integrated sound cards should be good enough for most audio work; if you want a professional-sounding sound card, or especially if you want to connect high-quality input devices, then an additional sound card is recommended.
+			</para>
+			<para>
+				Hardware MIDI interfaces are rarely, if ever, integrated into the motherboard.
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-PCI_Sound_Cards">
+			<title>Internal PCI Connection</title>
+			<para>
+				Sound cards connected to the motherboard by PCI (or PCI-Express, etc.) will probably offer higher performance, and lower latencies, than USB- or FireWire-connected devices.  Professional-quality sound cards often have insufficient space for connectors on the card itself, so they often include a proprietary, external component specifically for adding connectors.  The biggest disadvantages of PCI-connected sound cards are that they cannot be used with notebooks or netbooks, and that they are only as portable as the computer in which they're installed.
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-USB_Sound_Cards">
+			<title>External USB Connection</title>
+			<para>
+				USB-connected sound cards are becoming more popular, especially with the increasing bandwidth possibilities of USB connections.  The quality can be as good as internally-connected sound cards, although the USB connection may add additional latency, which may or may not be a concern.  The biggest advantages of USB-connected sound cards are that they can be used with notebooks and netbooks, and that they are usually easier to transport than an entire desktop computer.
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-FireWire_Sound_Cards">
+			<title>External FireWire Connection</title>
+			<para>
+				FireWire-connected sound cards are not as popular as USB sound cards, but they tend to be of higher quality.  In addition, unlike USB-connected sound cards, FireWire-connected sound cards are able to take advantage of FireWire's "guaranteed bandwidth" and "bus-mastering" capabilities.  Having guaranteed bandwidth ensures that the sound card will be able to send data when it chooses; the sound card will not have to compete with other devices connected with the same connection type.  Using bus-mastering enables the FireWire-connected device to read and write directly to and from the computer's main memory, without first going through the CPU.  High-speed FireWire connections are also available on older computers where a USB 2.0 connection is not available.
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-Choose_Sound_Card_Connection">
+			<title>Choosing a Connection Type</title>
+			<para>
+				The method of connection should not by itself determine which sound card is appropriate for you.  Which connection type is right for you will depend on a wide range of factors, but the actual sound quality is significantly more important than the theoretical advantages or disadvantages of the connection type.  If possible, you should try out potential devices with your computer before you buy one.
+			</para>
+		</section>
+	</section>
+
+	<section id="sect-Musicians_Guide-Bit_Rates_and_Sample_Rates">
+		<title>Bit Rates and Sample Rates</title>
+		<para>
+			As mentioned in the "Audio Interfaces" section, the primary job of audio interfaces is to carry out the transformation of audio signals between digital and analogue forms.  A diagram from Wikipedia illustrates the "digital problem" of audio: <ulink url="http://en.wikipedia.org/wiki/File:Pcm.svg">http://en.wikipedia.org/wiki/File:Pcm.svg</ulink>.  Although the wave-shape of the analogue signal, which is what is produced by most acoustic instruments and by the human voice, is shown in red, computers cannot store that information.  Instead, they usually store some approximation, which is represented in that diagram by the gray, shaded area.  Note that the diagram is simply an example, and not meant to depict a particular real-world recording.
+		</para>
+		<para>
+			It is the conversion between digital and analogue signals that distinguishes low- and high-quality audio interfaces.  High-quality convertors will be able to record and reproduce a signal that is nearly identical to the original.  Bit and sample rates are tied to the closeness of approximation that an audio interface can make, and they are explained below.  There are other factors involved in overall sound quality.
+		</para>
+		<section id="sect-Musicians_Guide-Bit_Rate">
+			<title>Bit Rate (Sample Format)</title>
+			<para>
+				This is the number of bits used to describe the audio in a length of time.  The higher the number of bits, the greater the detail that will be stored.  For most uses, the bit rate is measured in "bits per second," as in the often-used 128&nbsp;kb/s bit rate for MP3 audio.  Professional audio is more often described in "bits per sample," which is usually simply called "bits."  CDs have a 16&nbsp;bit/sample bit rate, professional audio is usually recorded at a 24&nbsp;bit/sample bit rate, and a 32&nbsp;bit/sample bit rate is supported by some hardware and software, but not widely used.  Due to technical limitations, 20-bit audio is also widely used.
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-Sample_Rate">
+			<title>Sample Rate</title>
+			<para>
+				A sample is a collection of a number of bits, representing a sound at an instantaneous point in time.  The number of bits contained in a sample is determined by the bit-rate (usually 16 or 24 bits per sample).  The sample rate is a measure of how many samples occupy one second - that is, how many "instants" of sound are catalogued for each second.  Theoretically, a higher sample rate results in a higher-quality audio signal.  The sample rate is measured in Hertz, which means "samples per second."  CDs have a 44&nbsp;100&nbsp;Hz sample rate, but audio is often recorded at 48&nbsp;000&nbsp;Hz, 96&nbsp;000&nbsp;Hz, or even 192&nbsp;000&nbsp;Hz.  These are often indicated as 44.1&nbsp;kHz, 48&nbsp;kHz, 96&nbsp;kHz, and 192&nbsp;kHz, respectively.
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-Bit_and_Sample_Rate_Conclusions">
+			<title>Conclusions</title>
+			<para>
+				Both of these factors have an impact on potential sound quality.  Depending on the limitations and capabilities of your equipment, you may be more inclined to use particular settings than others.  Here are some comparisons:
+				<itemizedlist>
+				<listitem><para>16-bit bit rate, and 44.1&nbsp;kHz sample rate (CD audio; good for wide distribution and maximum compatibility; 705.6&nbsp;kb/s)</para></listitem>
+				<listitem><para>24-bit bit rate, and 96&nbsp;kHz sample rate (CDs are usually recorded at these rates, then "down-mixed" later; 2304&nbsp;kb/s)</para></listitem>
+				<listitem><para>24-bit bit rate, and 192&nbsp;kHz sample rate (DVD Audio; not widely compatible; 4608&nbsp;kb/s)</para></listitem>
+				<listitem><para>1-bit bit rate, and 2822.4&nbsp;kHz sample rate (Super Audio CD; not widely compatible; 2822.4&nbsp;kb/s)</para></listitem>
+				</itemizedlist>
+			</para>
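The per-channel data rates in the list above follow directly from multiplying bits per sample by samples per second, which you can verify yourself:

```shell
# bits per sample * samples per second = bits per second (one channel)
echo $(( 16 * 44100 ))     # CD audio: 705600 b/s = 705.6 kb/s
echo $(( 24 * 96000 ))     # 2304000 b/s = 2304 kb/s
echo $(( 24 * 192000 ))    # DVD Audio: 4608000 b/s = 4608 kb/s
echo $(( 1 * 2822400 ))    # Super Audio CD: 2822400 b/s = 2822.4 kb/s
```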
+			<para>
+				In the end, bit rate and sample rate are only part of what determines overall sound quality.  Moreover, sound quality is subjective, and you will need to experiment to find the equipment and rates that work best for what you do.
+			</para>
+		</section>
+	</section>
+
+	<section id="sect-Musicians_Guide-Sound_Cards_Audio_Vocabular">
+		<title>Audio Vocabulary</title>
+		<para>
+			This part will transclude the "audio vocabulary" file.
+		</para>
+	</section>
+
+</chapter>
\ No newline at end of file
diff --git a/en-US/Sound_Servers.xml b/en-US/Sound_Servers.xml
new file mode 100644
index 0000000..d8b6b74
--- /dev/null
+++ b/en-US/Sound_Servers.xml
@@ -0,0 +1,141 @@
+<?xml version='1.0' encoding='utf-8' ?>
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
+<!ENTITY % BOOK_ENTITIES SYSTEM "Musicians_Guide.ent">
+%BOOK_ENTITIES;
+]>
+
+<chapter id="chap-Musicians_Guide-How_Computers_Deal_with_Hardware">
+	<title>How Computers Deal with Audio and MIDI Hardware</title>
+	<para>
+		One of the techniques consistently used in computer science is abstraction.  Abstraction is the process of creating a generic model for something (or some things) that are actually unique.  The "driver" for a hardware device in a computer is one form of dealing with abstraction: the computer's software interacts with all sound cards in a similar way, and it is the driver which translates the universal instructions given by the software into specific instructions for operating that hardware device.  Consider this real-world comparison: you know how to operate doors because of abstracted instructions.  You don't know how to open and close every door that exists, but from the ones that you do know how to operate, your brain automatically creates abstracted instructions, like "turn the handle" and "push the door," which apply to all or most doors.  When you see a new door, you have certain expectations about how it works, based on the abstract behaviour of doors, and you quickly figure out how to operate that specific door with a simple visual inspection.  The principle is the same with computer hardware drivers: since the computer already knows how to operate "sound cards," it just needs a few simple instructions (the driver) in order to know how to operate any particular sound card.
+	</para>
+	
+	<section id="sect-Musicians_Guide-Sound_Servers-ALSA">
+		<title>How Linux Deals with Audio: ALSA</title>
+		<para>
+			In Linux, the core of the operating system provides hardware drivers for most audio hardware.  The hardware drivers, and the instructions that other software can use to connect to those drivers, are collectively called "ALSA," which stands for "Advanced Linux Sound Architecture."  ALSA is the most direct way that software applications can interact with audio and MIDI hardware, and it used to be the most common way.  However, in order to include all of the features that a software application might want to use, ALSA is quite complex, and can be error-prone.  For this and many other reasons, another level of abstraction is normally used, and this makes it easier for software applications to take advantage of the features they need.
+		</para>
+	</section>
+	
+	<section id="sect-Musicians_Guide-Sound_Servers_Section">
+		<title>Sound Servers</title>
+		<para>
+			Sound servers are software applications that run "in the background," meaning they are rarely seen by users.  They are used to provide another level of abstraction - essentially to automatically take care of certain aspects of using ALSA, thereby making it easier for software applications to use the audio hardware.  The three sound servers discussed in this guide have distinctly different goals, provide distinctly different features and capabilities, and should not be viewed as though one is universally better than the others.
+		</para>
+		<section id="sect-Musicians_Guide-Sound_Servers-PulseAudio">
+			<title>PulseAudio</title>
+			<para>
+				PulseAudio is an advanced sound server, intended to make audio programming in GNU/Linux operating systems as easy as possible.  The idea behind its design is that an audio application needs only to output audio to PulseAudio, and PulseAudio takes care of the rest: choosing and controlling a particular device, adjusting the volume, working with other applications, and so on.  PulseAudio even supports "networked sound," which allows two computers using PulseAudio to behave as though they were one: either computer can capture audio from, or send audio to, the other computer's audio hardware just as easily as its own.  This is all handled within PulseAudio, so no further complication is added to the software.
+			</para>
+			<para>
+				The Fedora Project's integration of PulseAudio as a vital part of the operating system has helped to ensure that audio applications can "just work" for most people under most circumstances.  This has made it much easier for users to carry out basic audio tasks.
+			</para>
+			<!--
+			Fernando Lopez-Lezcano: I don't think that at this point applications can be written for the pulse audio API. Lennart I think discourages it. AFAIK applications still use the ALSA API when talking to pulse audio, it is just that pulse is in the middle, and can manage connections from many applications that basically share the audio hardware (which is only managed by PA). So what is made easier is not "audio programming" but perhaps "shared access to the sound card". You would not want to program for PA (it is very complex, perhaps even more than ALSA as it is all asynchronous and callback based).
+			
+			Christopher Antila: This is useful and interesting information, but I don't think it belongs here.  I''m keeping it as a comment, in case somebody decides to use it in the future.
+			-->
+		</section>
+		<section id="sect-Musicians_Guide-Sound_Servers-JACK">
+			<title>JACK Audio Connection Kit</title>
+			<para>
+				The JACK sound server offers fewer features than other sound servers, but they are tailor-made for the needs of audio creation applications.  JACK also makes it easier for users to configure the options that matter most in those situations.  The server supports only one sample rate and format at a time, and allows applications and hardware to easily connect and multiplex in ways that other sound servers do not.  It is also optimized to run with consistently low latencies.  Although using JACK requires a better understanding of the underlying hardware, the "QjackCtl" application provides a graphical user interface to ease the process.
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-Sound_Servers-Phonon">
+			<title>Phonon</title>
+			<para>
+				Phonon is a sound server built into the KDE Software Compilation, and is one of KDE's core components.  By default on Fedora Linux, Phonon feeds its output to PulseAudio, but on other platforms (like Mac OS X, Windows, other versions of Linux, FreeBSD, and any other system that supports KDE), Phonon can be configured to feed its output anywhere.  This is its greatest strength: KDE applications like Amarok and Dragon Player need only be programmed to use Phonon, and they can rely on Phonon to take care of everything else.  As KDE applications increasingly find their place on Windows and especially Mac OS X, this cross-platform capability is turning out to be very useful.
+			</para>
+		</section>
+	</section>
+		
+	<section id="sect-Musicians_Guide-Using_JACK">
+		<title>Using the JACK Audio Connection Kit</title>
+		<para>
+			<!-- !! What to say here depends on whether jack2 will be available with Fedora 14.  If it is, no need for CCRMA solution.  If it isn't, need for CCRMA solution. !! -->
+		</para>
+		<section id="sect-Musicians_Guide-Install_and_Configure_JACK">
+			<title>Installing and Configuring JACK</title>
+			<para>
+				<orderedlist>
+				<listitem><para>Ensure that you have installed the Planet CCRMA at Home repositories.  For instructions, refer to the "Planet CCRMA at Home" chapter of this guide.</para></listitem>
+				<listitem><para>Use PackageKit or KPackageKit to install the "jack-audio-connection-kit" and "qjackctl" packages, or run the following command in a terminal: <command>su -c 'yum install jack-audio-connection-kit qjackctl'</command></para></listitem>
+				<listitem><para>Review and approve the installation, making sure that it completes correctly.</para></listitem>
+				<listitem><para>Run QjackCtl from the KMenu or the Applications menu.</para></listitem>
+				<listitem><para>To start the JACK server, press the 'Start' button; to stop it, press the 'Stop' button.</para></listitem>
+				<listitem><para>Use the 'Messages' button to see messages, which are usually errors or warnings.</para></listitem>
+				<listitem><para>Use the 'Status' button to see various statistics about the currently-running server.</para></listitem>
+				<listitem><para>Use the 'Connections' button to see and adjust the connections between applications and audio hardware.</para></listitem>
+				</orderedlist>
+			</para>
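Behind the "Start" button, QjackCtl launches the JACK daemon with a command line built from your Setup choices.  As a sketch (the option values here are illustrative assumptions, not recommendations), starting JACK with the ALSA driver and the first sound card looks something like this:

```shell
# A sketch of the command QjackCtl runs when you press "Start", assuming
# the ALSA driver and the first sound card (hw:0).  -r is the sample rate,
# -p the frames per period, and -n the number of periods per buffer.
jack_cmd="jackd -d alsa -d hw:0 -r 44100 -p 1024 -n 2"
echo "$jack_cmd"
```

QjackCtl saves the equivalent command in the ~/.jackdrc file, so you can inspect that file to see the exact options your settings produce.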
+			<para>
+				JACK will operate without the following steps, but users are strongly encouraged to follow them, for security reasons.  They allow the JACK sound server to perform optimally, while greatly reducing the risk that an application or user will accidentally or maliciously take advantage of the real-time capability.
+				<orderedlist>
+				<listitem><para>Add all of the users who will use JACK to the "audio" group.  For help with this, see the !!Fedora 14 Deployment Guide, Chapter 22!!</para></listitem>
+				<listitem><para>The default installation allows real-time priority to be requested by any user or process.  This is undesirable, so edit the configuration so that only members of the "audio" group receive it.
+					<orderedlist>
+					<listitem><para>Open a terminal, and run the following command: <command>su -c 'gedit /etc/security/limits.conf'</command></para></listitem>
+					<listitem><para>Be careful!  You are editing this important system file as the root user!</para></listitem>
+					<listitem><para>Edit the last lines of the file so that they read:<screen>@audio - rtprio 99
+@audio - memlock 4194304
+@audio - nice -10</screen></para></listitem>
+				   </orderedlist></para></listitem>
+			   </orderedlist>
+			</para>
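Each of the lines above follows the four-field "domain type item value" format that limits.conf expects.  This hypothetical snippet checks a copy of the three lines against that shape, as a quick way to catch typos:

```shell
# Hypothetical sanity check for the three limits.conf lines: each must
# have exactly four fields and apply to the "@audio" group.
conf='@audio - rtprio 99
@audio - memlock 4194304
@audio - nice -10'
well_formed=$(echo "$conf" | awk 'NF == 4 && $1 == "@audio" { n++ } END { print n }')
echo "$well_formed"   # 3 when all three lines are well-formed
```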
+			<para>
+				With its default configuration, QjackCtl chooses the "default" device, which routes audio through an additional software layer rather than directly to the hardware.  We can avoid this by having JACK use the ALSA drivers directly, which helps JACK maintain accurately low latencies.  The following procedure configures JACK to connect to the ALSA driver directly.
+				<orderedlist>
+				<listitem><para>In a terminal, run the following command: <command>cat /proc/asound/cards</command></para></listitem>
+				<listitem><para>The command will output a list of sound cards in your system, similar to this:
+				<screen>0 [SB             ]: HDA-Intel - HDA ATI SB
+                  HDA ATI SB at 0xf7ff4000 irq 16
+1 [MobilePre      ]: USB-Audio - MobilePre
+                  M Audio MobilePre at usb-0000:00:13.0-2</screen>
+				The left-most number is the sound card's number.  The portion in square brackets is its name (these cards are named "SB" and "MobilePre").</para></listitem>
+				<listitem><para>Identify your desired sound card in the list.  If it's not in the list, then it's not currently detected and configured by your system.</para></listitem>
+				<listitem><para>Open the "QjackCtl" application, and press the 'Setup' button.</para></listitem>
+				<listitem><para>In the "Interface" field, enter the name of your preferred sound card with "hw:" in front.  For example, for the "MobilePre" card listed above, you would enter <code>hw:MobilePre</code>.</para></listitem>
+				<listitem><para>To save your settings, close "QjackCtl".</para></listitem>
+				</orderedlist>
+			</para>
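If you prefer the terminal, the card names can be pulled out of /proc/asound/cards with a short pipeline.  This sketch runs on a saved sample of the output shown above; on a real system you would read the file itself:

```shell
# Extract ALSA card names (the bracketed field) from /proc/asound/cards
# output; each result can be prefixed with "hw:" for QjackCtl's
# "Interface" field.  A sample of the output is used here.
sample='0 [SB             ]: HDA-Intel - HDA ATI SB
                  HDA ATI SB at 0xf7ff4000 irq 16
1 [MobilePre      ]: USB-Audio - MobilePre
                  M Audio MobilePre at usb-0000:00:13.0-2'
# Keep only lines that begin with a card number, print the bracketed
# name, then trim the padding spaces.
names=$(echo "$sample" | sed -n 's/^ *[0-9]* \[\([^]]*\)\].*/\1/p' | sed 's/ *$//')
echo "$names"
```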
+		</section>
+		<section id="sect-Musicians_Guide-Using_QjackCtl">
+			<title>Using QjackCtl</title>
+			<para>
+				The QjackCtl application offers many more features and configuration options.  One notable feature is the patch bay, which lets users save a configuration of the "Connections" window and restore it later, avoiding the lengthy set-up that complicated routing and multiplexing situations can require.
+			</para>
+			<para>
+				For more information on QjackCtl, refer to the <ulink url="http://www.64studio.com/manual/audio/jack">64 Studio manual</ulink>.
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-Integrating_PulseAudio_with_JACK">
+			<title>Integrating PulseAudio with JACK</title>
+			<para>
+				The default configuration of PulseAudio yields control of the audio equipment to JACK when the JACK server starts.  PulseAudio will then be unable to receive input or send output on the audio interface that JACK is using.  This is fine for occasional users of JACK, but many users will want to use JACK and PulseAudio simultaneously, or switch between the two frequently.  The following instructions configure PulseAudio so that its input and output are routed through JACK.
+				<orderedlist>
+				<listitem><para>Use PackageKit or KPackageKit to install the "pulseaudio-module-jack" package, or run the following command in a terminal: <command>su -c 'yum install pulseaudio-module-jack'</command></para></listitem>
+				<listitem><para>Approve the installation and ensure that it is carried out properly.</para></listitem>
+				<listitem><para>You'll need to edit the PulseAudio configuration file to use the JACK module.
+					<orderedlist>
+					<listitem><para>Be careful! You will be editing an important system file as the root user!</para></listitem>
+					<listitem><para>Run the following command in a terminal: <command>su -c 'gedit /etc/pulse/default.pa'</command></para></listitem>
+					<listitem><para>Add the following lines, underneath the line that says <code>#load-module module-alsa-sink</code>:
+					<screen>load-module module-jack-sink
+load-module module-jack-source</screen></para></listitem>
+					</orderedlist></para></listitem>
+				<listitem><para>Restart PulseAudio by running the following command in a terminal: <command>killall pulseaudio</command>  PulseAudio will start again automatically.</para></listitem>
+				<listitem><para>Confirm that this has worked by opening QjackCtl.  The display should confirm that JACK is "Active".</para></listitem>
+				<listitem><para>In the "Connect" window, on the "Audio" tab, there should be PulseAudio devices on each side, and they should be connected to "system" devices on the opposite sides.</para></listitem>
+				<listitem><para>Open QjackCtl's "Setup" window, then click the "Options" tab.  Uncheck "Execute script after Shutdown: killall jackd".  Otherwise, QjackCtl would stop the JACK server every time the program quits; since PulseAudio now expects JACK to keep running, the server should stay up after QjackCtl closes.</para></listitem>
+				<listitem><para>When PulseAudio starts JACK, it uses the command found in the <code>~/.jackdrc</code> file.  QjackCtl automatically updates this file when you change settings, but you may have to restart both PulseAudio and JACK in order to get the new changes to take effect.  If they refuse to take effect, you can edit that file yourself.</para></listitem>
+				<listitem><para>Be careful about using a very high sample rate with PulseAudio, since it will tend to use a lot of CPU power.</para></listitem>
+				</orderedlist>
+			</para>
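After editing default.pa, a quick count confirms that both JACK modules are enabled (that is, not commented out).  This sketch runs against a sample fragment of the file; the real file path is shown in the comment:

```shell
# Confirm that both JACK modules are enabled (uncommented) in default.pa.
# A sample fragment is used here; on a real system, run:
#   grep -c '^load-module module-jack' /etc/pulse/default.pa
pa='#load-module module-alsa-sink
load-module module-jack-sink
load-module module-jack-source'
enabled=$(echo "$pa" | grep -c '^load-module module-jack')
echo "$enabled"   # 2 when both the sink and source modules are active
```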
+		</section>
+	</section>
+	
+</chapter>

