Difference between revisions of "Analog to digital converter"

From LavryEngineering
Revision as of 21:09, 23 January 2012

Overview

The term "analog to digital converter" is used to describe a device that accepts an analog audio input and outputs a digital code that represents the original analog input. This code is typically in linear PCM format, but may also be in other formats such as DSD or I2S (the latter typically used internally in analog to digital converter units). Once encoded, the information can be stored, transmitted, or copied in a lossless manner. In most instances, further processing is used to generate other formats that employ data compression of either the lossless or lossy variety.

The term can be used to describe the actual analog to digital converter IC or circuit, or an entire unit that incorporates all of the necessary support circuitry to accept line level analog input signals and output the encoded digital audio signal in one or more formats.

For brevity, the term "AD converter" or "ADC" will be used interchangeably with "analog to digital converter" in the following discussion.

History

Prior to the development of practical digital audio recording systems, AD converters were used in applications such as medical testing and monitoring equipment and instrumentation (industrial measurement and monitoring). These early converters were limited, either by the converter technology of the time or by the amount of data the associated system could handle, to much lower resolution than is typically used to encode audio. Their resolution in both the amplitude domain (typically the voltage of the input waveform) and the time domain was often quite limited when compared to contemporary digital audio standards.

Before storage of the huge amount of information generated by CD-quality AD converters became practical, the earliest application in music recording was in "outboard" equipment such as digital delay and effects processors. Largely because the output of these early units was mixed in with the original (unprocessed) source at a low level as an ambient effect, the less-than-high-fidelity quality of the converters was acceptable. Even with the noise and distortion present in analog recordings, the perceived quality of analog tape recordings was far better than that of the signal processed through these early converters. One of the more popular early digital delay units employed a novel form of digital encoding, "sigma-delta": in contrast to the "linear PCM" format, where each "sample" of the analog input waveform is represented by a digital word made up of a number of bits, sigma-delta encodes only one bit at a relatively high sample frequency. Compared to the relatively inaccurate PCM-based units, most recording engineers felt that the sigma-delta digital delay unit sounded closer to the source.

With the introduction of Compact Disc technology by Sony/Philips in the early 1980s came the standard of recording audio in 16 bit linear PCM format. AD converter technology was still evolving at the time, and even though many AD converters were nominally "16 bit," they were not truly accurate to 16 bit resolution. Contemporary AD converters are typically "24 bit" and are accurate to approximately 22-23 bits. The sample frequency capability of AD converters has also increased since the original CD format of 44.1 kHz was introduced, with contemporary AD converters supporting output sample frequencies as high as 384 kHz. Although there are a number of advantages to AD conversion at sample frequencies higher than 44.1 kHz, these advantages are realized at sample frequencies of 88.2 or 96 kHz. Increasing the sample frequency beyond 96 kHz will degrade the conversion accuracy in the audio frequency range, while the only advantage is the ability to record ultrasonic frequencies beyond the range even dogs can hear.

Basics

In order to make a useful digital audio system, the method used to encode and decode the analog audio signal must:

1.) Be reciprocal for encoding (recording) and decoding (playback).
2.) Be able to "re-construct" the original analog information to a minimum level of accuracy.
3.) Ideally, incorporate a "standard" that facilitates interchange between systems made by different manufacturers.

A typical AD converter is actually a system made up of a number of stages:

a.) The line input stage
b.) The level-shifting stage
c.) The sample-and-hold circuitry
d.) The AD converter
e.) The digital signal processor
f.) The clock circuitry
g.) The digital output circuitry

To achieve (1), contemporary digital audio systems use a method referred to as "sampling," which, in a manner analogous to film or video cameras, takes a contiguous series of "snapshots" of the audio waveform at a specific frequency (the sample frequency). Analog audio derives its name from the manner in which the acoustic pressure variation of the original sound is represented by a voltage waveform with the same variations: the voltage variation is "analogous" to the pressure variation at every point in time. Although at specific points in an audio system the signal may be represented by current variations rather than voltage variations, the analog signal is typically a voltage waveform when it is transmitted from one piece of audio equipment to another.
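The "snapshots" idea above can be sketched in a few lines of code. This is a hypothetical illustration (the tone frequency, sample rate, and function names are chosen for the example, not taken from any real converter): a 1 kHz sine wave sampled at 48 kHz.

```python
import math

SAMPLE_RATE = 48000  # samples per second (a common professional rate)
SIGNAL_FREQ = 1000   # a 1 kHz test tone

def sample_waveform(freq, sample_rate, n_samples):
    """Take n_samples 'snapshots' of a sine wave, one per sample-clock period."""
    period = 1.0 / sample_rate
    return [math.sin(2 * math.pi * freq * n * period) for n in range(n_samples)]

samples = sample_waveform(SIGNAL_FREQ, SAMPLE_RATE, 48)
# 48 samples at 48 kHz span exactly one millisecond: one full cycle of the tone.
```

Each entry in `samples` stands for the voltage "snapshot" a real sample-and-hold stage would capture at that instant.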

This is where (c) comes into the picture: a voltage that is constantly changing over time must somehow be measured while it is changing! The only practical way to do this is to "sample" the input voltage by charging a small capacitor to the same voltage as the waveform and then disconnecting it from the input. The capacitor then "holds" the voltage while a very high impedance amplifier makes a "copy" of the voltage to feed to the AD converter input while preventing the capacitor from discharging (which would cause the voltage to drop steadily from the level at which it was sampled). The AD converter then makes an extremely accurate measurement of the voltage and outputs a digital "word" that represents it.
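The measurement step can be modeled as mapping the held voltage onto a finite set of integer codes. The sketch below is a simplified software stand-in, not any particular converter's circuitry; the function name, full-scale value, and clipping behavior are assumptions made for the example.

```python
def quantize(voltage, full_scale=1.0, bits=16):
    """Map a held sample voltage to a signed integer code 'word'.

    A simplified model of the measurement step only; a real converter IC
    performs this comparison with analog circuitry.
    """
    levels = 2 ** (bits - 1)                    # 32768 codes per polarity at 16 bits
    code = round(voltage / full_scale * (levels - 1))
    return max(-levels, min(levels - 1, code))  # clip at full scale

half_scale = quantize(0.5)   # roughly half of the positive code range
clipped = quantize(1.2)      # beyond full scale: clipped to the maximum code
```

Note that any input beyond full scale simply pins to the largest code, which is the digital equivalent of hard clipping.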

The digital "words" are recorded in sequence as a file and can be stored or transmitted without change to the information. In order for the playback DA converter to accurately reconstruct the voltage waveform, it must output each sample at exactly the same voltage level and at exactly the same relative time. This means the playback sample frequency must be very close to the recording sample frequency and, more importantly, the sample clock must have extremely even time periods for each sample. This is where the discussion of "jitter" comes in: jitter is the term used to describe short-term variations in the clock cycle period caused by real-world issues common to the transmission of very high frequency signals over signal conductors (cables, or even signal "traces" on printed circuit boards). Although voltage (amplitude domain) accuracy has increased dramatically since the early days of digital audio, the performance of even extremely accurate converters can be compromised by inaccurate clocking of the conversion, whether during AD conversion, during DA conversion, or both.
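A back-of-envelope estimate helps show why timing errors matter: sampling a moving waveform a little too early or too late produces a voltage error roughly equal to the timing error times the waveform's slew rate. For a sine of amplitude A and frequency f, the maximum slew rate is 2·pi·f·A. This is a standard rough bound, not a figure from any specific converter; the numbers below are illustrative.

```python
import math

def worst_case_jitter_error(freq_hz, amplitude, jitter_s):
    """Worst-case sample-voltage error from sampling jitter_s seconds off.

    Uses the maximum slew rate of a sine, 2*pi*f*A, which occurs at the
    zero crossings where the waveform changes fastest.
    """
    return 2 * math.pi * freq_hz * amplitude * jitter_s

# 1 ns of jitter on a full-scale (A = 1.0) 10 kHz tone:
error = worst_case_jitter_error(10_000, 1.0, 1e-9)
# Compare against one 16-bit step (1/32768 of full scale) to judge audibility.
```

Even a single nanosecond of jitter on a full-scale 10 kHz tone produces an error larger than one 16-bit quantization step, which is why clock quality can limit an otherwise accurate converter.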

Once the digital information is generated by the AD converter, it must be transmitted to the next device for storage or processing. Internal to the AD converter system, the I2S format is common and typically consists of three signals:

I.) The "bit clock," which has one cycle for each "bit" in the serial data output of the AD converter.
II.) The "word clock," which runs at the sample frequency; each half cycle defines whether the serial data carries the left channel or the right channel data (most contemporary converters are "stereo" two-channel units).
III.) The "serial data," which is the digital code containing each sample's voltage level information.
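The relationship between the word clock and the serial data can be sketched by serializing one stereo sample pair. This is a deliberately simplified, hypothetical model: it assumes 16-bit words and MSB-first data, and it omits details of the real I2S specification such as the one-bit-clock data delay after each word-clock transition and the 24- or 32-bit slots common in actual parts.

```python
BITS = 16  # assumed word length for this sketch

def i2s_frame(left, right):
    """Return (word_clock, data_bit) pairs, one per bit-clock cycle.

    Word clock low selects the left channel, high selects the right,
    so one stereo frame spans 2 * BITS bit-clock cycles.
    """
    frame = []
    for word_clock, sample in ((0, left), (1, right)):
        code = sample & (2**BITS - 1)        # two's-complement word as raw bits
        for i in range(BITS - 1, -1, -1):    # most significant bit first
            frame.append((word_clock, (code >> i) & 1))
    return frame

frame = i2s_frame(0x1234, -1)  # 32 bit-clock cycles: 16 per channel
```

Reading the data bits while watching the word-clock level recovers each channel's word, which is exactly what the receiving IC does on every frame.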

This format has advantages for transmission of digital audio information between ICs located in close proximity to each other on the same PC board, but it is subject to the same quality issues as any other high frequency signal traveling down a conductor. It was not intended for transmission between pieces of equipment. The AES (Audio Engineering Society) began the process of standardizing the format of transmission for digital audio, both the digital coding and the physical/electrical connections, in the 1980s. Most contemporary digital audio devices incorporate the AES3 standard and the corresponding IEC consumer standard, which is nearly identical in coding. The primary difference is that the professional AES3 standard employs either "balanced" XLR connections carrying differential "TTL" 5 volt signals or BNC coaxial single-ended ("unbalanced") TTL level signals. The consumer formats are either RCA coaxial 0.5 V unbalanced signals or optical signals typically employing "Toslink" connectors. In some cases BNC connectors are substituted for RCA connectors, or other physical forms of optical connectors are used in place of Toslink.

Unlike the I2S format, the AES3 and IEC consumer formats (the latter formerly known as S/PDIF) are designed specifically to transmit digital audio between pieces of equipment. The AES3 standard is capable of transmitting digital audio over 100 meters of cable when properly implemented.

As the use of personal computers (both Windows and Apple OS) for digital audio became practical, a USB Audio standard emerged as a method of connecting digital audio equipment to the computer. Because USB is a general purpose computer interface, it is subject to length restrictions and to "sharing of resources" with other devices on the same USB bus, which can affect audio performance. As the speed and processing power of personal computers has increased, USB audio performance has become more reliable. There are a number of systems used for USB audio connection, which currently include the synchronous, adaptive, and asynchronous modes.