[musicians-guide] Revised sound cards and added forgotten audio terms file

crantila crantila at fedoraproject.org
Mon Aug 2 06:29:39 UTC 2010


commit ef8cffe2351576f4f3fff6b28dceaa38df1c418e
Author: Christopher Antila <crantila at fedoraproject.org>
Date:   Mon Aug 2 02:29:07 2010 -0400

    Revised sound cards and added forgotten audio terms file

 en-US/Audio_Vocabulary.xml           |  140 ++++++++++++++++++++++++++++++++++
 en-US/Digital_Audio_Workstations.xml |   15 ++--
 en-US/Sound_Cards.xml                |  104 +++++++++++++++----------
 3 files changed, 210 insertions(+), 49 deletions(-)
---
diff --git a/en-US/Audio_Vocabulary.xml b/en-US/Audio_Vocabulary.xml
new file mode 100644
index 0000000..051be36
--- /dev/null
+++ b/en-US/Audio_Vocabulary.xml
@@ -0,0 +1,140 @@
+<?xml version='1.0' encoding='utf-8' ?>
+<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
+<!ENTITY % BOOK_ENTITIES SYSTEM "Musicians_Guide.ent">
+%BOOK_ENTITIES;
+]>
+
+<section id="chap-Musicians_Guide-Vocabulary">
+	<title>Digital Audio Concepts</title>
+	<para>
+		These terms are used in many different audio contexts.  Understanding them is important to knowing how to operate audio equipment in general, whether computer-based or not.
+	</para>
+	
+	<section id="sect-Musicians_Guide-Vocabulary-MIDI_Sequencer">
+		<title>MIDI Sequencer</title>
+		<para>
+			A '''sequencer''' is a device or software program that produces the signals a synthesizer turns into sound.  You can also use a sequencer to arrange MIDI signals into music.  The Musicians' Guide covers two digital audio workstations (DAWs) that are primarily MIDI sequencers: Qtractor and Rosegarden.  All three DAWs in this guide use MIDI signals to control other devices or effects.
+		</para>
+	</section>
+	
+	<section id="sect-Musicians_Guide-Vocabulary-Bus">
+		<title>Busses, Master Bus, and Sub-Master Bus</title>
+		<para>
+			<!-- [[File:FMG-bus.xcf]] -->
+			<!-- [[File:FMG-master_sub_bus.xcf]] -->
+			[[File:FMG-bus.png|200px|How audio busses work.]]
+			[[File:FMG-master_sub_bus.png|200px|The relationship between the master bus and sub-master busses.]]
+		</para>
+		<para>
+			An '''audio bus''' sends audio signals from one place to another.  Many different signals can be sent to a bus simultaneously, and many different devices or applications can read from a bus simultaneously.  Signals sent to a bus are mixed together, and cannot be separated after entering the bus.  All devices or applications reading from a bus receive the same signal.
+		</para>
+		<para>
+			All audio routed out of a program passes through the master bus.  The '''master bus''' combines all audio tracks, allowing for final level adjustments and simpler mastering.  The primary purpose of the master bus is to mix all of the tracks into two channels.
+		</para>
+		<para>
+			A '''sub-master bus''' combines audio signals before they reach the master bus.  Using a sub-master bus is optional.  They allow you to adjust more than one track in the same way, without affecting all the tracks.
+		</para>
+		<para>
+			Audio busses are also used to send audio into effects processors.
+		</para>
+	</section>
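The mixing behaviour described above can be sketched in a few lines of Python.  This is an illustrative sketch, not code from the Musicians' Guide, and the function name is hypothetical.

```python
# Illustrative sketch (not from the Musicians' Guide): a bus sums all
# of its input signals sample-by-sample, and every reader of the bus
# receives the same mixed signal.  The name "mix_bus" is hypothetical.

def mix_bus(*inputs):
    """Mix any number of equal-length signals into one signal."""
    return [round(sum(samples), 6) for samples in zip(*inputs)]

track_a = [0.1, 0.2, 0.3]
track_b = [0.4, 0.1, -0.2]

mixed = mix_bus(track_a, track_b)
print(mixed)  # [0.5, 0.3, 0.1] -- every reader sees this same result
```

Once mixed, the original tracks cannot be recovered from the summed signal, which is why signals cannot be separated after entering a bus.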
+	
+	<section id="sect-Musicians_Guide-Vocabulary-Level">
+		<title>Level (Volume/Loudness)</title>
+		<para>
+			The perceived '''volume''' or '''loudness''' of sound is a complex phenomenon, not entirely understood by experts.  One widely-agreed method of assessing loudness is by measuring the sound pressure level (SPL), which is measured in decibels (dB) or bels (B, equal to ten decibels).  In audio production communities, this is called "level."  The '''level''' of an audio signal is one way of measuring the signal's perceived loudness.  The level is part of the information stored in an audio file.
+		</para>
+		<para>
+			There are many different ways to monitor and adjust the level of an audio signal, and there is no widely-agreed practice.  One reason for this situation is the technical limitations of recorded audio.  Most level meters are designed so that the average level is -6&nbsp;dB on the meter, and the maximum level is 0&nbsp;dB.  This practice was developed for analog audio.  We recommend using an external meter and the "K-system," described in a link below.  The K-system for level metering was developed for digital audio.
+		</para>
+		<para>
+			In the Musicians' Guide, this term is called "volume level," to avoid confusion with other levels, or with perceived volume or loudness.
+		</para>
+		<para>
+			<itemizedlist>
+			<listitem><para>[http://www.digido.com/level-practices-part-2-includes-the-k-system.html "Level Practices"] (the type of meter described here is available in the "jkmeter" package from Planet CCRMA at Home).</para></listitem>
+			<listitem><para>[http://en.wikipedia.org/wiki/K-system "K-system"]</para></listitem>
+			<listitem><para>[http://en.wikipedia.org/wiki/Headroom_%28audio_signal_processing%29 "Headroom"]</para></listitem>
+			<listitem><para>[http://en.wikipedia.org/wiki/Equal-loudness_contour "Equal-loudness contour"]</para></listitem>
+			<listitem><para>[http://en.wikipedia.org/wiki/Sound_level_meter "Sound level meter"]</para></listitem>
+			<listitem><para>[http://en.wikipedia.org/wiki/Listener_fatigue "Listener fatigue"]</para></listitem>
+			<listitem><para>[http://en.wikipedia.org/wiki/Dynamic_range_compression "Dynamic range compression"]</para></listitem>
+			<listitem><para>[http://en.wikipedia.org/wiki/Alignment_level "Alignment level"]</para></listitem>
+			</itemizedlist>
+		</para>
+	</section>
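The decibel scale mentioned above is logarithmic: a level in decibels relates a linear amplitude to a reference amplitude by 20&nbsp;times the base-10 logarithm of their ratio.  A minimal sketch of this standard relation (the function name is hypothetical):

```python
import math

# Sketch of the standard amplitude-to-decibel relation.  The function
# name "amplitude_to_db" is hypothetical.

def amplitude_to_db(amplitude, reference=1.0):
    """Convert a linear amplitude to a level in decibels (dB)."""
    return 20 * math.log10(amplitude / reference)

print(amplitude_to_db(1.0))  # 0.0 dB: the reference level itself
print(amplitude_to_db(0.5))  # about -6.02 dB: halving the amplitude
                             # loses roughly 6 dB, which is why meters
                             # often place the average level at -6 dB
```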
+	
+	<section id="sect-Musicians_Guide-Vocabulary-Panning_and_Balance">
+		<title>Panning and Balance</title>
+		<para>
+			[[File:FMG-Balance_and_Panning.png|200px|left|The difference between adjusting panning and adjusting balance.]]
+			<!-- [[File:FMG-Balance_and_Panning.xcf]] -->
+		</para>
+		<para>
+			'''Panning''' adjusts the portion of a channel's signal that is sent to each output channel.  In a stereophonic (two-channel) setup, the two channels represent the "left" and the "right" speakers.  Two channels of recorded audio are available in the DAW, and the default setup sends all of the "left" recorded channel to the "left" output channel, and all of the "right" recorded channel to the "right" output channel.  Panning sends some of the left recorded channel's level to the right output channel, or some of the right recorded channel's level to the left output channel.  Each recorded channel has a constant total output level, which is divided between the two output channels.
+		</para>
+		<para>
+			The default setup for a left recorded channel is for "full left" panning, meaning that 100% of the output level is output to the left output channel.  An audio engineer might adjust this so that 80% of the recorded channel's level is output to the left output channel, and 20% of the level is output to the right output channel.  An audio engineer might make the left recorded channel sound like it is in front of the listener by setting the panner to "center," meaning that 50% of the output level is output to both the left and right output channels.
+		</para>
+		<para>
+			Balance is sometimes confused with panning, even on commercially-available audio equipment.  Adjusting the '''balance''' changes the volume level of the output channels, without redirecting the recorded signal.  The default setting for balance is "center," meaning 0% change to the volume level.  As you adjust the dial from "center" toward the "full left" setting, the volume level of the right output channel is decreased, and the volume level of the left output channel remains constant.  As you adjust the dial from "center" toward the "full right" setting, the volume level of the left output channel is decreased, and the volume level of the right output channel remains constant.  If you set the dial to "20% left," the audio equipment reduces the volume level of the right output channel by 20%, which makes the left output channel seem louder by comparison.
+		</para>
+		<para>
+			You should adjust the balance so that you perceive both speakers as equally loud.  Balance compensates for poorly set up listening environments, where the speakers are not equal distances from the listener.  If the left speaker is closer to you than the right speaker, you can adjust the balance to the right, which decreases the volume level of the left speaker.  This is not an ideal solution, but sometimes it is impossible or impractical to set up your speakers correctly.  You should adjust the balance only at final playback.
+		</para>
+	</section>
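The distinction drawn above can be sketched numerically.  This is an illustrative sketch with hypothetical function names, not code from the guide: panning divides one channel's constant total level between the outputs, while balance only attenuates one output.

```python
# Illustrative sketch of the panning and balance behaviour described
# above.  Both function names are hypothetical.

def pan(level, position):
    """Divide one recorded channel's constant total level between the
    two output channels.  position: 0.0 = full left, 0.5 = center,
    1.0 = full right."""
    right = level * position
    left = level - right          # the two outputs always sum to level
    return (left, right)

def balance(left_level, right_level, setting):
    """Attenuate one output channel without redirecting any signal.
    setting: -1.0 = full left, 0.0 = center, 1.0 = full right."""
    if setting < 0:
        right_level *= 1 + setting   # turning left quiets the right
    elif setting > 0:
        left_level *= 1 - setting    # turning right quiets the left
    return (left_level, right_level)

print(pan(1.0, 0.5))           # (0.5, 0.5): "center" sends half to each side
print(balance(1.0, 1.0, -0.2)) # (1.0, 0.8): "20% left" cuts the right by 20%
```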
+	
+	<section id="sect-Musicians_Guide-Vocabulary-Time">
+		<title>Time, Timeline, and Time-Shifting</title>
+		<para>
+			There are many ways to measure musical time.  The four most popular time scales for digital audio are:
+			<itemizedlist>
+			<listitem><para>Bars and Beats: Usually used for MIDI work, and called "BBT," meaning "Bars, Beats, and Ticks."  A tick is a partial beat.</para></listitem>
+			<listitem><para>Minutes and Seconds: Usually used for audio work.</para></listitem>
+			<listitem><para>SMPTE Timecode: Invented for high-precision coordination of audio and video, but can be used with audio alone.</para></listitem>
+			<listitem><para>Samples: Relating directly to the format of the underlying audio file, a sample is the shortest possible length of time in an audio file.  See [[User:Crantila/FSC/Sound_Cards#Sample_Rate|this section]] for more information on samples.</para></listitem>
+			</itemizedlist>
+		</para>
+		<para>
+			Most audio software, particularly digital audio workstations (DAWs), allows you to choose the time scale you prefer.  DAWs use a '''timeline''' to display the progression of time in a session, allowing you to do '''time-shifting'''; that is, to adjust the point on the timeline at which a region starts to play.
+		</para>
+		<para>
+			Time is represented horizontally, where the leftmost point is the beginning of the session (zero, regardless of the unit of measurement), and the rightmost point is some distance after the end of the session.
+		</para>
+	</section>
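The time scales listed above are interconvertible when the sample rate and tempo are known.  A sketch; the function names, the 44100&nbsp;Hz sample rate, and the tempo are all assumptions for illustration:

```python
# Sketch of converting between the time scales listed above.  The
# function names, the 44100 Hz sample rate, and the tempo are all
# assumptions for illustration.

SAMPLE_RATE = 44100   # samples per second

def seconds_to_samples(seconds):
    """Minutes-and-seconds time to the underlying sample count."""
    return round(seconds * SAMPLE_RATE)

def seconds_to_beats(seconds, beats_per_minute):
    """Minutes-and-seconds time to a bars-and-beats (BBT) beat count."""
    return seconds * beats_per_minute / 60

print(seconds_to_samples(2.5))       # 110250
print(seconds_to_beats(30, 120))     # 60.0 beats at 120 BPM
```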
+	
+	<section id="sect-Musicians_Guide-Vocabulary-Synchronization">
+		<title>Synchronization</title>
+		<para>
+			'''Synchronization''' is the coordination of the operation of multiple tools, most often the movement of the transport.  Synchronization also controls automation across applications and devices.  MIDI signals are usually used for synchronization.
+		</para>
+	</section>
+	
+	<section id="sect-Musicians_Guide-Vocabulary-Routing_and_Multiplexing">
+		<title>Routing and Multiplexing</title>
+		<para>
+			[[File:FMG-routing_and_multiplexing.png|200px|left|Illustration of routing and multiplexing in the "Connections" window of the QjackCtl interface.]]
+			<!-- [[FMG-routing_and_multiplexing.xcf]] -->
+		</para>
+		<para>
+			'''Routing''' is the transmission of an audio signal from one place to another: between applications, between parts of an application, or between devices.  On Linux systems, the JACK Audio Connection Kit is used for audio routing.  JACK-aware applications (and PulseAudio ones, if so configured) provide inputs and outputs to the JACK server, depending on their configuration.  The QjackCtl application can adjust the default connections.  For example, you can use QjackCtl to reroute the output of a program like FluidSynth so that it can be recorded by Ardour.
+		</para>
+		<para>
+			'''Multiplexing''' allows you to connect multiple devices and applications to a single input or output.  QjackCtl allows you to easily perform multiplexing.  This may not seem important, but remember that only one connection is possible with a physical device like an audio interface.  Before computers were used for music production, multiplexing required physical devices to split or combine the signals.
+		</para>
+	</section>
+	
+	<section id="sect-Musicians_Guide-Multichannel_Audio">
+		<title>Multichannel Audio</title>
+		<para>
+			An '''audio channel''' is a single path of audio data.  '''Multichannel audio''' is any audio which uses more than one channel simultaneously, allowing the transmission of more audio data than single-channel audio.
+		</para>
+		<para>
+			Audio was originally recorded with only one channel, producing "monophonic," or "mono" recordings.  Beginning in the 1950s, stereophonic recordings, with two independent channels, began replacing monophonic recordings.  Since humans have two independent ears, it makes sense to record and reproduce audio with two independent channels, involving two speakers.  Most sound recordings available today are stereophonic, and people have found this mostly satisfying.
+		</para>
+		<para>
+			There is a growing trend toward five- and seven-channel audio, driven primarily by "surround-sound" movies, but it is not widely available for music.  Two "surround-sound" formats exist for music: DVD Audio (DVD-A) and Super Audio CD (SACD).  The development of these formats, and of the devices to use them, is held back by the proliferation of headphones with personal MP3 players, a general lack of desire for improvement in audio quality amongst consumers, and the copy-protection measures put in place by record labels.  The result is that, while some consumers are willing to pay higher prices for DVD-A or SACD recordings, only a small number of recordings are available.  Even if you buy a DVD-A or SACD-capable player, you would need to replace all of your audio equipment with models that support proprietary copy-protection software.  Without this equipment, the player is often forbidden from outputting audio with a higher sample rate or sample format than a conventional audio CD.  None of these factors, unfortunately, seems likely to change in the near future.
+		</para>
+	</section>
+	
+</section>
diff --git a/en-US/Digital_Audio_Workstations.xml b/en-US/Digital_Audio_Workstations.xml
index 3b626f8..637917b 100644
--- a/en-US/Digital_Audio_Workstations.xml
+++ b/en-US/Digital_Audio_Workstations.xml
@@ -9,6 +9,9 @@
 	<para>
 		The term '''Digital Audio Workstation''' (henceforth '''DAW''') refers to the entire hardware and software setup used for professional (or professional-quality) audio recording, manipulation, synthesis, and production.  It originally referred to devices purpose-built for the task, but as personal computers have become more powerful and wide-spread, certain specially-designed personal computers can also be thought of as DAWs.  The software running on these computers, especially software capable of multi-track recording, playback, and synthesis, is simply called "DAW software," which is often shortened to "DAW."  So, the term "DAW" and its usage are moderately ambiguous, but generally refer to one of the things mentioned.
    </para>
+   <para>
+	   The !!L!! "Sound Cards and Digital Audio" Section !!L!! defines other terms that are important to know.
+   </para>
 	
 	<section id="sect-Musicians_Guide-Knowing_Which_DAW_to_Use">
 		<title>Knowing Which DAW to Use</title>
@@ -82,14 +85,10 @@
       		</para>
       	</section>
 	</section>
-	
-	<section id="sect-Musicians_Guide-DAW_Audio_Vocabulary_Transclusion">
-		<title>Audio Vocabulary</title>
-		<para>
-			This part is going to be transcluded form the "Audio Vocabulary" page.
-		</para>
-	</section>
-	
+<!--
+	Transclusion of "Audio Vocabulary"
+	<xi:include href="Audio_Vocabulary.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
+-->
 	<section id="sect-Musicians_Guide-DAW_Interface_Vocabulary">
 		<title>Interface Vocabulary</title>
 		<para>
diff --git a/en-US/Sound_Cards.xml b/en-US/Sound_Cards.xml
index e8fe2c0..2605d1c 100644
--- a/en-US/Sound_Cards.xml
+++ b/en-US/Sound_Cards.xml
@@ -7,36 +7,36 @@
 <chapter id="chap-Musicians_Guide-Sound_Cards">
 	<title>Sound Cards and Digital Audio</title>
 	<para>
-		Everybody has a vague idea of what sound cards are, how they work, and what they do.  Sometimes, especially when doing professional-quality audio work, a vague understanding is no longer sufficient.  This chapter introduces the technical vocabulary used when discussing computer audio hardware.
+		This chapter introduces the technical vocabulary used for computer audio hardware.
 	</para>
 	
 	<section id="sect-Musicians_Guide-What_Sound_Cards_Are">
 		<title>Defining Sound Cards</title>
 		<para>
-			Broadly defined, a sound card is any computer-connected device which allows the computer to process audio in some way.  There are two general categories into which most sound cards fit, described below.
+			A sound card is a hardware device which allows a computer to process sound.  Most sound cards are either audio interfaces or MIDI interfaces.  These two kinds of interfaces are described below.
 		</para>
 		<section id="sect-Musicians_Guide-Audio_Interfaces">
 			<title>Audio Interfaces</title>
 			<para>
-				This is a hardware device that allows audio equipment to be connected to your computer, including microphones and speakers.  Typically audio entering or leaving an audio interface from/to an external device requires conversion between digital and analogue formats.  However, with the rise of external digital audio equipment, there are an increasing number of devices that connect digitally to an audio interface.
+				An audio interface is a hardware device that provides a connection between your computer and audio equipment, including microphones and speakers.  Audio interfaces usually convert audio signals between analog and digital formats: signals entering the computer are passed through an analog-to-digital convertor, and signals leaving the computer are passed through a digital-to-analog convertor.  Some audio interfaces have digital input and output ports, which means that other devices perform the conversion between analog and digital signal formats.
 			</para>
 			<para>
-				The conversion between analogue and digital signals is a prerequisite for computers to be able to process audio signals, so it is the primary function of audio interfaces.  The real world creates sound with a limitless range of possibilities for pitch, volume, and duration.  The digital nature of computers requires these limitless possibilities to be reduced to finite limits.  The best digital/analogue converters are capable of using these limits in such a way that humans don't notice anything missing - much like the best computer monitors and graphics adapters are able to disguise the fact that only about half of the colours our eyes can see are display-able on computers.  This problem is discussed further in the "Bit Rates and Sample Rates" section.
+				The conversion between analog and digital audio signal formats is the primary function of audio interfaces.  Real sound has an infinite range of pitch, volume, and durational possibilities.  Computers cannot process infinite information, and require sound to be converted to a digital format.  Digital sound signals have a limited range of pitch, volume, and durational possibilities.  High-quality analog-to-digital and digital-to-analog convertors change the signal format while preserving the original, analog signal as closely as possible.  The quality of the convertors is very important in determining the quality of an audio interface.
 			</para>
 			<para>
-				Audio interfaces also amplify signals for directly-connected analogue devices (like headphones).  Some offer power for microphones, too (pre-amplification and/or phantom power).
+				Audio interfaces also provide connectors for external audio equipment, like microphones, speakers, headphones, and electric instruments like electric guitars.
 			</para>
 		</section>
 		<section id="sect-Musicians_Guide-MIDI_Interfaces">
 			<title>MIDI Interfaces</title>
 			<para>
-				MIDI stands for "Musical Instrument Digital Interface," and is commonly associated with low-quality imitations of acoustic instruments.  This association is unfortunate, since high-quality audio is indeed possible with MIDI, and MIDI-driven devices have played a part in many mainstream and non-mainstream audio environments.  Whereas audio signals specify the sounds themselves, MIDI signals contain instructions on how to make the sounds.  It is a synthesizer's responsibility to follow these instructions, turning them into sounds.  Going even further, the MIDI specification allows for the control of many audio-related devices, like mixers, sequencers, and Digital Audio Workstations.  Although the signals used to control these devices (or software applications) do not directly cause the generation of music, they still follow  the definition of "MIDI signals": instructions on how to make sounds.
+				Musical Instrument Digital Interface (MIDI) is a standard used to control digital musical devices.  Many people associate the term with low-quality imitations of acoustic instruments.  This is unfortunate, because MIDI signals themselves do not have a sound.  MIDI signals are instructions to control devices: they tell a synthesizer when to start and stop a note, how long the note should be, and what pitch it should have.  The synthesizer follows these instructions and creates an audio signal.  Many MIDI-controlled synthesizers are low-quality imitations of acoustic instruments, but others are high-quality imitations.  MIDI-powered devices are used in many mainstream and non-mainstream musical situations, and can be nearly indistinguishable from actual acoustic instruments.  MIDI interfaces only transmit MIDI signals, not audio signals.  Some audio interfaces have built-in MIDI interfaces, allowing both kinds of interface to share the same physical device.
 			</para>
 			<para>
-				Whereas audio interfaces allow the input and output of audio signals ("normal sound") from a computer, MIDI interfaces allow the input and output of MIDI signals.  Some audio interfaces have MIDI capabilities built-in, and some MIDI interfaces also transform MIDI signals into audio signals.  The latter kind of device is performing "MIDI synthesis," a task for which there exist many software-only solutions.  "FluidSynth," covered in [[User:Crantila/FSC/Synthesizers/FluidSynth|this section]] of the Musicians' Guide, is one such software solution.
+				In order to create sound from MIDI signals, you need a "MIDI synthesizer."  Some MIDI synthesizers have dedicated hardware, and some use only software.  A software-only MIDI synthesizer, based on SoundFont technology, is discussed in the !!L!! FluidSynth Section !!L!! of the Musicians' Guide.
 			</para>
 			<para>
-				Having a hardware-based MIDI interface is not a requirement for working with MIDI signals and applications.  The costly nature of most MIDI hardware makes it impractical for occasional or beginning MIDI users and computer music enthusiasts.  Much of the software in this Guide is capable of working with MIDI signals, and supports but does not require MIDI-capable hardware.
+				You can use MIDI signals, synthesizers, and applications without a hardware-based MIDI interface.  All of the MIDI-capable applications in the Musicians' Guide work well with software-based MIDI solutions, and are also compatible with hardware-based MIDI devices.
 			</para>
 		</section>
 	</section>
@@ -44,86 +44,108 @@
 	<section id="sect-Musicians_Guide-Sound_Card_Connections">
 		<title>Methods of Connection</title>
 		<para>
-			The following connection methods can be used by either audio or MIDI interfaces, so they are collectively referred to as "sound cards," in this section.
+			Audio interfaces and MIDI interfaces can both use the following connection methods.  In this section, "sound card" means "audio interface or MIDI interface."
 		</para>
 		
 		<section id="sect-Musicians_Guide-Motherboard_Integrated">
 			<title>Integrated into the Motherboard</title>
 			<para>
-				These sound cards are built into the computer's motherboard.  In recent years, the quality of audio produced by these sound cards has greatly increased, but the best integrated solutions are still not as good as the best non-integrated solutions.  Good integrated sound cards should be good enough for most audio work; if you want a professional-sounding sound card, or especially if you want to connect high-quality input devices, then an additional sound card is recommended.
+				Integrated sound cards are built into a computer's motherboard.  The quality of audio produced by these sound cards has been increasing, and they are sufficient for most non-professional computer audio work.  If you want a professional-sounding audio interface, or if you want to connect high-quality devices, then we recommend an additional audio interface.
 			</para>
 			<para>
-				Hardware MIDI interfaces are rarely, if ever, integrated into the motherboard.
+				MIDI interfaces are rarely integrated into a motherboard.
 			</para>
 		</section>
 		<section id="sect-Musicians_Guide-PCI_Sound_Cards">
 			<title>Internal PCI Connection</title>
 			<para>
-				Sound cards connected to the motherboard by PCI (or PCI-Express, etc.) will probably offer higher performance, and lower latencies, than USB- or FireWire-connected devices.  Professional-quality sound cards often have insufficient space for connectors on the card itself, so they often include a proprietary, external component specifically for adding connectors.  The biggest disadvantage of PCI-connected sound cards is that they cannot be used with notebooks or netbooks, and that they are only as portable as the computer in which they're installed.
+				Sound cards connected to a motherboard by PCI or PCI-Express offer better performance and lower latency than USB or FireWire-connected sound cards.  Professional-quality sound cards often include an external device, connected to the sound card, to which the audio equipment is connected.  You cannot use these sound cards with a notebook or netbook computer.
 			</para>
 		</section>
-		<section id="sect-Musicians_Guide-USB_Sound_Cards">
-			<title>External USB Connection</title>
+		<section id="sect-Musicians_Guide-FireWire_Sound_Cards">
+			<title>External FireWire Connection</title>
 			<para>
-				USB-connected sound cards are becoming more popular, especially with the increasing bandwidth possibilities of USB connections.  The quality can be as good as internally-connected sound cards, although the USB connection may add additional latency, which may or may not be a concern.  The biggest advantages of USB-connected sound cards is that they can be used with notebooks and netbooks, and that they are usually easier to transport than an entire desktop computer.
+				FireWire-connected sound cards are not as popular as USB-connected sound cards, but they are generally higher quality.  This is partly because FireWire-connected sound cards use FireWire's "guaranteed bandwidth" and "bus-mastering" capabilities, which both reduce latency.  High-speed FireWire connections are also available on older computers without a high-speed USB connection.
+			</para>
+			<para>
+				FireWire devices are sometimes incompatible with the standard Fedora Linux kernel.  If you have a FireWire-connected sound card, you should use the kernel from Planet CCRMA at Home.  Installation instructions for this kernel are available !!L!! here !!L!! .
 			</para>
 		</section>
-		<section id="sect-Musicians_Guide-FireWire_Sound_Cards">
-			<title>External FireWire Connection</title>
+		<section id="sect-Musicians_Guide-USB_Sound_Cards">
+			<title>External USB Connection</title>
 			<para>
-				FireWire-connected sound cards are not as popular as USB sound cards, but they tend to be of higher quality.  In addition, unlike USB-connected sound cards, FireWire-connected sound cards are able to take advantage of FireWire's "guaranteed bandwidth" and "bus-mastering" capabilities.  Having guaranteed bandwidth ensures that the sound card will be able to send data when it chooses; the sound card will not have to compete with other devices connected with the same connection type.  Using bus-mastering enables the FireWire-connected device to read and write directly to and from the computer's main memory, without first going through the CPU.  High-speed FireWire connections are also available on older computers where a USB 2.0 connection is not available.
+				Sound cards connected by USB are becoming more popular, especially because notebook and netbook computers are becoming more popular.  The quality can be as good as an internally-connected sound card, but the USB connection may add latency.  USB-connected sound cards are generally the most affordable option for amateur musicians who want a high-quality sound card.
 			</para>
 		</section>
 		<section id="sect-Musicians_Guide-Choose_Sound_Card_Connection">
 			<title>Choosing a Connection Type</title>
 			<para>
-				The method of connection should not by itself determine which sound card is appropriate for you.  Which connection type is right for you will depend on a wide range of factors, but the actual sound quality is significantly more important than the theoretical advantages or disadvantages of the connection type.  If possible, you should try out potential devices with your computer before you buy one.
+				The connection type is only one of the considerations when choosing a sound card.  If you have a desktop computer, and you will not be using a notebook or netbook computer for audio, you should consider an internal PCI or PCI-Express connection.  If you want an external sound card, you should consider a FireWire connection.  If FireWire-connected sound cards are too expensive, you should consider a USB connection.
+				
+				The connection type is not the most important consideration when choosing a sound card.  The subjective quality of the analog-to-digital and digital-to-analog convertors is the most important consideration.
 			</para>
 		</section>
 	</section>
-
-	<section id="sect-Musicians_Guide-Bit_Rates_and_Sample_Rates">
-		<title>Bit Rates and Sample Rates</title>
+	
+	<section id="sect-Musicians_Guide-Sample_Rate_and_Sample_Format">
+		<title>Sample, Sample Rate, Sample Format, and Bit Rate</title>
 		<para>
-			As mentioned in the !!Audio Interface section!!, the primary job of audio interfaces is to carry out the transformation of audio signals between digital and analogue forms.  This diagram from Wikipedia illustrates the "digital problem," when it comes to audio: [http://en.wikipedia.org/wiki/File:Pcm.svg here].  Although the wave-shape of the analogue signal, which is what is produced by most acoustic instruments and by the human voice, is shown in red, computers cannot store that information.  Instead, they usually store some approximation, which is represented in that diagram by the gray, shaded area.  Note that the diagram is simply an example, and not meant to depict a particular real-world recording.
+			The primary function of audio interfaces is to convert signals between analog and digital formats.  As mentioned earlier, real sound has an infinite possibility of pitches, volumes, and durations.  Computers cannot process infinite information, so the audio signal must be converted before a computer can use it.  This diagram from Wikipedia illustrates the situation: [http://en.wikipedia.org/wiki/File:Pcm.svg here].  The red wave shape represents a sound wave that could be produced by a singer or an acoustic instrument.  The gradual change of the red wave cannot be processed by a computer, which must use an approximation, represented by the gray, shaded area of the diagram.  This diagram is an exaggerated example, and it does not represent a real recording.
 		</para>
 		<para>
-			It is the conversion between digital and analogue signals that distinguishes low- and high-quality audio interfaces.  High-quality convertors will be able to record and reproduce a signal that is nearly identical to the original.  Bit and sample rates are tied to the closeness of approximation that an audio interface can make, and they are explained below.  There are other factors involved in overall sound quality.
+			The conversion between analog and digital signals is what distinguishes low-quality from high-quality audio interfaces.  The sample rate and sample format control the amount of audio information that is stored by the computer.  The greater the amount of information stored, the better the audio interface can approximate the original signal from the microphone.  The possible sample rates and sample formats only partially determine the quality of the sound captured or produced by an audio interface.  For example, an audio interface integrated into a motherboard may be capable of a 24-bit sample format and 192&nbsp;kHz sample rate, but a professional-level, FireWire-connected audio interface capable of only a 16-bit sample format and 44.1&nbsp;kHz sample rate may still sound better.
 		</para>
-		<section id="sect-Musicians_Guide-Bit_Rate">
-			<title>Bit Rate (Sample Format)</title>
+		<section id="sect-Musicians_Guide-Sample">
+			<title>Sample</title>
+			<para>
+				A sample is a unit of audio data.  Computers store video data as a series of still images (each called a "frame"), and display them one after the other, changing at a pre-determined rate (called the "frame rate").  Similarly, computers store audio data as a series of instantaneous snapshots of sound (each called a "sample"), and play them one after the other, changing at a pre-determined rate (called the "sample rate").
+			</para>
 			<para>
-				This is the number of bits used to describe the audio in a length of time.  The higher the number of bits, the greater the detail that will be stored.  For most uses, the bit-rate is usually measured in "bits per second," as in the often-used 128&nbsp;kb/s bit-rate for MP3 audio.  Professional audio is more often referred to as "bits per sample," which is usually simply called "bits."  CDs have a 16&nbsp;bit/sample bit-rate, professional audio is usually recorded at a 24&nbsp;bit/sample bit-rate, and a 32&nbsp;bit/sample bit-rate is supported by some hardware and software, but not widely used.  Due to technical limitations, 20-bit audio is also widely used.  See Wikipedia for more information (get a link??)
+				The frame format and frame rate used to store video data do not vary much.  The sample format and sample rate used to store audio data vary widely.
+			</para>
+		</section>
+		<section id="sect-Musicians_Guide-Sample_Format">
+			<title>Sample Format</title>
+			<para>
+				The sample format is the number of bits used to describe each sample.  The greater the number of bits, the more precisely each sample describes the signal.  Common sample formats are 16&nbsp;bits and 24&nbsp;bits.  8-bit samples are low-quality, and not used often.  20-bit samples are not commonly used on computers.  32-bit samples are possible, but not supported by most audio interfaces.
 			</para>
 		</section>
 		<section id="sect-Musicians_Guide-Sample_Rate">
 			<title>Sample Rate</title>
 			<para>
-				A sample is a collection of a number of bits, representing a sound at an instantaneous point in time.  The number of bits contained in a sample is determined by the bit-rate (usually 16 or 24 bits per sample).  The sample rate is a measure of how many samples occupy one second - that is, how many "instants" of sound are catalogued for each second.  Theoretically, a higher sample rate results in a higher-quality audio signal.  The sample rate is measured in Hertz, which means "samples per second."  CDs have a 44&nbsp;100&nbsp;Hz sample rate, but audio is often recorded at 48&nbsp;000&nbsp;Hz, 96&nbsp;000&nbsp;Hz, or even 192&nbsp;000&nbsp;Hz.  These are often indicated as 44.1&nbsp;kHz, 48&nbsp;kHz, 96&nbsp;kHz, and 192&nbsp;kHz, respectively.
+				The sample rate is the number of samples played in each second.  Sample rates are measured in "Hertz" (abbreviated "Hz"), which means "per second," or in "kilohertz" (abbreviated "kHz"), which means "per second, times one thousand."  The sample rate used on audio CDs can be written as 44&nbsp;100&nbsp;Hz, or 44.1&nbsp;kHz, which both have the same meaning.  Common sample rates are 44.1&nbsp;kHz, 48&nbsp;kHz, and 96&nbsp;kHz.  Other possible sample rates include 22&nbsp;kHz, 88.2&nbsp;kHz, and 192&nbsp;kHz.
+			</para>
+		</section>
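Because the sample rate is a count of samples per second, the number of samples in a recording is simply its duration multiplied by the rate.  A quick sketch (the helper name is ours, for illustration only):

```python
def samples_for(duration_seconds, sample_rate_hz):
    """Number of samples in one channel of a recording of the given length."""
    return int(duration_seconds * sample_rate_hz)

# One second of CD-rate audio contains 44,100 samples per channel.
print(samples_for(1.0, 44_100))    # 44100
# Three minutes recorded at 48 kHz:
print(samples_for(180.0, 48_000))  # 8640000
```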
+		<section id="sect-Musicians_Guide-Bit_Rate">
+			<title>Bit Rate</title>
+			<para>
+				The bit rate is the number of bits of audio data in a given period of time.  Bit rate is usually measured in kilobits per second (abbreviated "kbps" or "kb/s").  This measurement generally refers to the amount of information stored in a lossy, compressed audio format.
+			</para>
+			<para>
+				In order to calculate the bit rate, multiply the sample rate and the sample format.  For example, the bit rate of one channel of audio-CD sound (705.6&nbsp;kb/s) is the sample rate (44.1&nbsp;kHz) multiplied by the sample format (16&nbsp;bits).  MP3-format files are commonly encoded with a 128&nbsp;kb/s bit rate.
 			</para>
 		</section>
 		<section id="sect-Musicians_Guide-Bit_and_Sample_Rate_Conclusions">
 			<title>Conclusions</title>
 			<para>
-				Both of these factors have an impact on potential sound quality.  Depending on the limitations and capabilities of your equipment, you may be more inclined to use particular settings than others.  Here are some comparisons:
+				Both sample rate and sample format have an impact on potential sound quality.  The capabilities of your audio equipment and your intended use of the audio signal will determine the settings you should use.
+			</para>
+			<para>
+				Here are some widely-used sample rates and sample formats.  You can use these to help you decide which sample rate and sample format to use.
 				<itemizedlist>
-				<listitem><para>16-bit bit rate, and 44.1&nbsp;kHz sample rate (CD audio; good for wide distribution and maximum compatibility; 705.6&nbsp;kb/s)</para></listitem>
-				<listitem><para>24-bit bit rate, and 96&nbsp;kHz sample rate (CDs are usually recorded at these rates, then "down-mixed" later; 2304&nbsp;kb/s)</para></listitem>
-				<listitem><para>24-bit bit rate, and 192&nbsp;kHz sample rate (DVD Audio; not widely compatible; 4608&nbsp;kb/s)</para></listitem>
-				<listitem><para>1-bit bit rate, and 2822.4&nbsp;kHz sample rate (Super Audio CD; not widely compatible; 2822.4&nbsp;kb/s)</para></listitem>
+				<listitem><para>16-bit samples, 44.1&nbsp;kHz sample rate.  Used for audio CDs.  Widely compatible.  Bit rate of 705.6&nbsp;kb/s.</para></listitem>
+				<listitem><para>24-bit samples, 96&nbsp;kHz sample rate.  Often used when recording audio that will later be "down-mixed" for CDs.  Bit rate of 2304&nbsp;kb/s.</para></listitem>
+				<listitem><para>24-bit samples, 192&nbsp;kHz sample rate.  Maximum settings for DVD Audio, but not widely compatible.  Bit rate of 4608&nbsp;kb/s.</para></listitem>
+				<listitem><para>1-bit samples, 2822.4&nbsp;kHz sample rate.  Used for Super Audio CDs.  Very rare elsewhere.  Bit rate of 2822.4&nbsp;kb/s.</para></listitem>
 				</itemizedlist>
 			</para>
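The bit rates quoted in the list above all follow from the same rate-times-format arithmetic.  A quick check, assuming one channel (the setting names are ours, for illustration):

```python
# (name, sample rate in Hz, sample format in bits) for common settings.
settings = [
    ("Audio CD",            44_100,    16),
    ("24-bit/96 kHz",       96_000,    24),
    ("DVD Audio maximum",   192_000,   24),
    ("Super Audio CD",      2_822_400,  1),
]
for name, rate_hz, bits in settings:
    print(f"{name}: {rate_hz * bits / 1000} kb/s")
```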
 			<para>
-				In the end, bit rate and sample rate are only part of what determines overall sound quality.  Moreover, sound quality is subjective, and you will need to experiment to find the equipment and rates that work best for what you do.
+				Sample rate and sample format are only part of what determines overall sound quality.  Sound quality is subjective, so you must experiment to find the audio interface and settings that work best for what you do.
 			</para>
 		</section>
 	</section>
-
-	<section id="sect-Musicians_Guide-Sound_Cards_Audio_Vocabular">
-		<title>Audio Vocabulary</title>
-		<para>
-			This part will transclude the "audio vocabulary" file.
-		</para>
-	</section>
+	
+	<!-- Transclusion of "Audio Vocabulary" -->
+	<xi:include href="Audio_Vocabulary.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
 
 </chapter>
\ No newline at end of file

