Sample frequency
Overview
To digitize analog audio, most contemporary systems use a process referred to as "sampling": the voltage of the analog audio waveform is measured repeatedly at a regular time interval. Each voltage measurement results in a binary number of a given word length. The series of binary "words" is typically stored consecutively in a file for later reconstruction of the analog voltage waveform by a digital to analog converter. The sample frequency is the rate at which the samples are generated and is measured in Hertz (in this context, samples per second). The term sample rate is used interchangeably with sample frequency.
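As a rough illustration of this process (a minimal sketch, not taken from any particular converter design; the tone frequency, sample frequency, and 16-bit word length are chosen arbitrarily), the measurement and quantization steps can be written in a few lines of Python:

 import math

 def sample_tone(tone_hz, sample_frequency_hz, duration_s, bits=16):
     """Sample a pure tone and quantize each measurement to a signed integer word."""
     full_scale = 2 ** (bits - 1) - 1                   # largest positive value for the word length
     samples = []
     for n in range(int(duration_s * sample_frequency_hz)):
         t = n / sample_frequency_hz                    # samples are taken at a fixed interval of 1/SF
         voltage = math.sin(2 * math.pi * tone_hz * t)  # idealized analog voltage, -1..+1
         samples.append(round(voltage * full_scale))    # one binary "word" per sample
     return samples

 # One millisecond of a 1 kHz tone sampled at 44.1 kHz yields 44 consecutive words.
 words = sample_tone(1000.0, 44100.0, 0.001)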
Basics
Virtually all contemporary analog audio equipment operates on the principle of an analog voltage waveform being analogous to the original sound's air pressure "waveform." Typically, the original sound is translated from pressure variations to electrical variations by a microphone (a type of transducer). The resulting voltage waveform can be carried on wires to an amplifier and a power amplifier, then translated back into sound pressure variations by a speaker.
One important consideration is how this analog waveform can be stored for later reproduction or transmission. All analog storage and transmission schemes are prone to loss of signal quality, with storage being particularly problematic. As the technology became available, digital audio systems were developed to address these issues. To generate the digital information used for these purposes, the analog voltage waveform is sampled repeatedly at a fixed time interval; the rate at which this occurs is the sample frequency. Please refer to analog to digital conversion for more details.
By sampling the analog audio signal at a fixed time interval, the need to record when each sample was taken along with the digitized voltage information is eliminated. As long as the sample frequency (SF) of a recording is known and the playback system operates at virtually the same SF, the timing will be accurate when the analog waveform is reconstructed during digital to analog conversion. This also makes it critical that the "clock" signal used as the timing reference during both analog to digital conversion and digital to analog conversion be extremely accurate, in order to keep distortion to an acceptable level.
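To illustrate why the recording and playback rates must match (a sketch with assumed figures; a real clock error would be far smaller than the 0.1 kHz offset used here), the pitch error caused by a mismatched playback clock can be computed directly:

 # A 1 kHz tone recorded at 44.1 kHz but played back by a clock running at 44.2 kHz
 # is reproduced slightly sharp; every recorded frequency scales by the same ratio.
 recording_sf_hz = 44100.0
 playback_sf_hz = 44200.0
 recorded_tone_hz = 1000.0

 reproduced_tone_hz = recorded_tone_hz * (playback_sf_hz / recording_sf_hz)
 error_percent = (reproduced_tone_hz / recorded_tone_hz - 1.0) * 100.0
 print(f"{reproduced_tone_hz:.2f} Hz ({error_percent:+.3f}% pitch error)")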
In order to record and reproduce audio properly, the SF must be high enough to allow recording a pure tone at the highest frequency the human ear can hear. The waveform of a pure tone is the sine wave, which contains only one frequency, unlike the more complex waveforms that most acoustic sources produce. Because each full cycle of a sine wave has two distinct "half-cycles," at least two samples per cycle are required to represent it in a manner that allows the original waveform to be reconstructed. Because it is commonly accepted that the highest frequency the human ear can hear is 20,000 Hertz (20 kHz), the sample frequency must be at least twice that, i.e. higher than 40 kHz.
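A quick check of this requirement, using the 44.1 kHz CD rate discussed below purely as an example:

 # How many samples fall within each cycle of the highest audible tone?
 highest_audible_hz = 20000.0
 sample_frequency_hz = 44100.0        # CD rate, used here only as an example

 samples_per_cycle = sample_frequency_hz / highest_audible_hz
 print(samples_per_cycle)             # 2.205 -> just above the two samples required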
When a signal containing frequencies higher than one-half the SF is digitized, additional information is generated in the form of sum and difference frequencies (an effect known as aliasing). The "sum" frequencies would be supersonic with a 44.1 kHz SF, but the difference frequencies would be lower than 20 kHz and would appear as gross distortion of a non-harmonic nature (not pleasing to hear in the way some forms of harmonic distortion can be). This means that some form of audio filter is required to prevent frequencies higher than one-half the SF from reaching the input of the AD converter. Because it is difficult to make a filter with a very steep cut-off, and because filters with steep cut-offs tend to cause audible degradation such as phase response errors, the SF is typically somewhat higher than exactly twice the highest frequency to be recorded; this leaves a transition band in which the filter is taking effect but has not yet completely removed the problematic signals. This is the primary reason why the CD standard is 44.1 kHz instead of 44 kHz.
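A minimal sketch of this folding effect, assuming an input tone of 25 kHz (chosen arbitrarily) and the 44.1 kHz CD rate: the samples of the out-of-band tone are indistinguishable from those of a difference-frequency tone at 44.1 kHz - 25 kHz = 19.1 kHz, apart from an inverted phase.

 import math

 sample_frequency_hz = 44100.0
 input_tone_hz = 25000.0                            # above one-half the SF (22.05 kHz)
 alias_hz = sample_frequency_hz - input_tone_hz     # difference frequency: 19.1 kHz

 # The sampled values of the 25 kHz tone match those of a 19.1 kHz tone
 # sample for sample (equal magnitude, opposite sign), so the converter
 # cannot tell the two apart once the signal has been digitized.
 for n in range(5):
     t = n / sample_frequency_hz
     s_input = math.sin(2 * math.pi * input_tone_hz * t)
     s_alias = math.sin(2 * math.pi * alias_hz * t)
     print(round(s_input, 6), round(s_alias, 6))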