Analog to digital converter
Overview
The term "analog to digital converter" is used to describe a device that accepts analog audio inputs and outputs a digital code that represents the original analog input. This code is typically linear PCM format; but may also be other formats such as DSD or I2S (typically used internally in analog to digital converter units). Once encoded, the information can be stored, transmitted, or copied in a lossless manner. In most instances; further processing is used to generate other formats that employ data compression of both the lossless and lossy variety. Analog to digital converters can also be used for non-audio applications; but that is beyond the scope of this discussion.
The term can be used to describe the actual analog to digital converter IC or circuit, or an entire unit that incorporates all of the necessary support circuitry to accept line level analog input signals and output the encoded digital audio signal in one or more formats.
For brevity, the term "AD converter" or "ADC" will be used interchangeably with "analog to digital converter" in the following discussion.
History
Prior to the development of practical digital audio recording systems, AD converters were used in applications such as medical testing and monitoring equipment and in instrumentation (industrial measurement and monitoring). These early converters were limited, either by the converter technology of the time or by the amount of data the associated system could handle, to much lower resolution than is typically used to encode audio. Their resolution in both the amplitude domain (typically the voltage of the input waveform) and the time domain was often quite limited when compared to contemporary digital audio standards.
Before storage of the huge amount of information generated by CD-quality AD converters became practical, the earliest application in music recording was in "outboard" equipment such as digital delay or effects processors. Largely because the output of these early units was mixed in with the original (unprocessed) source at a low level as an ambient effect, the less-than-high-fidelity quality of the converters was acceptable. Even with the noise and distortion present in analog tape recordings, their perceived quality was far better than that of the signal processed through these early converters. One of the more popular early digital delay units employed a novel form of digital encoding, "sigma-delta": in contrast to the "linear PCM" format, where each "sample" of the analog input waveform is represented by a digital word made up of a number of bits, sigma-delta encoded only one bit at a relatively high sample frequency. Compared to the relatively inaccurate PCM-based units, most recording engineers felt that the sigma-delta digital delay unit sounded closer to the source.
With the introduction of Compact Disc technology by Sony/Philips in the early 1980s came the standard of recording audio in 16-bit linear PCM format. AD converter technology was still evolving at the time, and even though many AD converters were nominally "16 bit," they were not truly accurate to 16-bit resolution. Contemporary AD converters are typically "24 bit" and are accurate to approximately 22-23 bits. The sample frequency capability of AD converters has also increased since the original CD format of 44.1 kHz was introduced, with contemporary AD converters supporting output sample frequencies as high as 384 kHz. Although there are a number of advantages to AD conversion at sample frequencies higher than 44.1 kHz, these advantages are largely realized at sample frequencies of 88.2 or 96 kHz. Increasing the sample frequency beyond 96 kHz will degrade the conversion accuracy in the audio frequency range, while the only advantage is the ability to record ultrasonic frequencies beyond the range even dogs can hear.
Basics
In order to make a useful digital audio system, the method used to encode and decode the analog audio signal must:
1. Be reciprocal for encoding (recording) and decoding (playback).
2. Be able to "re-construct" the original analog information to a minimum level of accuracy.
3. Incorporate "standard" input and output connection formats that facilitate interconnection in systems made up of equipment from more than one manufacturer.
To achieve (1), contemporary digital audio systems use a method referred to as "sampling" which, in a manner analogous to film or video cameras, takes a contiguous series of "snapshots" of the audio waveform at a specific frequency (the sample frequency). Analog audio derives its name from the manner in which the acoustic pressure variation of the original sound is represented by a voltage waveform with the same variations: the voltage variation is "analogous" to the pressure variation at every point in time. Although at specific points in an audio system the signal may be represented by current variations rather than voltage variations, the analog signal is typically a voltage waveform when it is transmitted from one piece of audio equipment to another.
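As a concrete illustration of sampling, the following is a minimal Python sketch. The 1 kHz sine used as a stand-in for the analog waveform, the 48 kHz sample frequency, and the function names are assumptions chosen for the example, not part of any particular converter.

```python
import math

# Minimal sketch of sampling. The "analog" source is modeled as a function of
# continuous time; a real converter samples a voltage instead (assumed example).
def analog_signal(t):
    """Stand-in for an analog waveform: a 1 kHz sine (assumption)."""
    return math.sin(2 * math.pi * 1000.0 * t)

SAMPLE_FREQUENCY = 48_000.0             # samples per second (assumed rate)
SAMPLE_PERIOD = 1.0 / SAMPLE_FREQUENCY  # seconds between "snapshots"

# Take a contiguous series of "snapshots" of the waveform, one per sample period.
samples = [analog_signal(n * SAMPLE_PERIOD) for n in range(48)]  # one millisecond
```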
Linear PCM encoding incorporates a digital "word" of a specific wordlength which is typically either 16 or 24 bits. One word represents a high-precision "sample" of the input voltage waveform taken once every sample period.
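The sketch below shows one way linear PCM word formation could be expressed, assuming samples already normalized to the range -1.0 to +1.0 and a 16-bit wordlength; the function names are hypothetical and chosen only for illustration.

```python
# Minimal sketch of linear PCM quantization (16-bit wordlength assumed).
WORDLENGTH = 16
FULL_SCALE = 2 ** (WORDLENGTH - 1) - 1   # 32767 for a 16-bit word

def to_pcm_word(sample: float) -> int:
    """Quantize one normalized sample (-1.0 .. +1.0) to a signed 16-bit word."""
    clipped = max(-1.0, min(1.0, sample))
    return int(round(clipped * FULL_SCALE))

def from_pcm_word(word: int) -> float:
    """Playback side: scale the word back to a voltage-like value."""
    return word / FULL_SCALE

half_scale_word = to_pcm_word(0.5)   # 16384, i.e. roughly half of full scale
```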
The digital "words" are recorded in sequence as a file, and can be stored or transmitted without change to the information. In order for the playback DA to accurately reconstruct the voltage waveform; it must output the voltage of each sample at exactly the same voltage level and exactly the same relative time. This means the DA conversion sample frequency must be very close to the same frequency used in AD conversion, and more importantly, the sample clock must have extremely even time periods for each sample. This where the discussion of "jitter" comes in- jitter is the term that is used to describe short-term variations in the clock cycle period caused by real-world issues common to the transmission of very high frequency signals over signal conductors (cables or even signal "traces" on printed circuit boards). Although voltage (amplitude domain) accuracy has increased dramatically since the early days of digital audio; the performance of even extremely accurate converters can be compromised by inaccurate clocking of the conversion either during AD conversion, during DA conversion, or both. Once the digital information is generated by the AD converter; it must be transmitted to the next device for storage or processing. Internal to the AD converter system; the I2S format is common and typically consists of three signals:
Once the digital information is generated by the AD converter, it must be transmitted to the next device for storage or processing. Internal to the AD converter system, the I2S format is common and typically consists of three signals (their rate relationships are sketched after the list below):

- The Bit Clock, which has one cycle for each bit in the serial data output of the AD converter.
- The Word Clock, which runs at the sample frequency; each half cycle defines whether the serial data belongs to the left channel or the right channel (most contemporary converters are "stereo" two-channel units).
- The Serial Data, which is the digital code containing each sample's voltage level information.
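The following Python sketch shows how these three signals relate in rate for a stereo stream. The 48 kHz sample frequency, 32-bit channel slots, and 24-bit words are common values assumed for the example rather than requirements of the format, and the helper function is hypothetical.

```python
# Minimal sketch of I2S clock relationships for a stereo stream.
SAMPLE_FREQUENCY = 48_000       # word clock frequency in Hz (assumption)
SLOT_BITS = 32                  # bit clock cycles per channel slot (assumption)
CHANNELS = 2                    # left and right

word_clock_hz = SAMPLE_FREQUENCY                          # one full cycle per sample period
bit_clock_hz = SAMPLE_FREQUENCY * SLOT_BITS * CHANNELS    # 3,072,000 Hz in this example

def serialize_frame(left_word: int, right_word: int, wordlength: int = 24):
    """Serial data for one frame: most significant bit first, left slot then right slot."""
    def bits(word):
        word &= (1 << wordlength) - 1   # mask to a two's-complement bit field
        return [(word >> (wordlength - 1 - i)) & 1 for i in range(wordlength)]
    pad = [0] * (SLOT_BITS - wordlength)   # unused bits fill out each channel slot
    return bits(left_word) + pad + bits(right_word) + pad
```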
This format has advantages for transmission of digital audio information between ICs located in close proximity to each other on the same PC board, but it is subject to the same quality issues as any other high frequency signal traveling down a conductor. It was not intended for transmission between pieces of equipment. The AES (Audio Engineering Society) began the process of standardizing the transmission format for digital audio, both the digital coding and the physical/electrical connections, in the 1980s. Most contemporary digital audio devices incorporate the AES3 standard and the corresponding IEC consumer standard, which is nearly identical in coding. The primary difference is that the professional AES3 standard employs either "balanced" XLR connections carrying differential "TTL" 5 volt signals or BNC coaxial single-ended ("unbalanced") TTL level signals. The consumer formats are either RCA coaxial 0.5 V unbalanced signals or optical signals typically employing "Toslink" connectors. In some cases BNC connectors are substituted for RCA connectors, or other physical forms of optical connectors are used in place of Toslink.
Unlike the I2S format, the AES3 and IEC consumer formats (the latter commonly known as S-PDIF) are designed specifically to transmit digital audio between pieces of equipment. The AES3 standard is capable of transmitting digital audio over 100 meters of cable when properly implemented.
As the use of personal computers (both Windows and Apple OS) for digital audio became practical, a USB Audio standard emerged as a method of connecting digital audio equipment to the computer. Because USB is a general purpose computer interface, it is subject to length restrictions and to "sharing of resources" with other devices on the same USB bus, both of which can affect audio performance. As the speed and processing power of personal computers have increased, USB audio performance has become more reliable. There are a number of transfer modes used for USB audio connections, which currently include synchronous, adaptive, and asynchronous.
For more information, please see the Wikipedia article on analog-to-digital converters: http://en.wikipedia.org/wiki/Analog-to-digital_converter
Lavry Products
- LavryGold AD122-96 MkIII: http://www.lavryengineering.com/products/pro-audio/ad122-96-mkiii.html
- LavryBlue MAD-824: http://www.lavryengineering.com/products/pro-audio/lavryblue-m-ad-824.html
- LavryBlack AD10: http://www.lavryengineering.com/products/pro-audio/ad10.html
- LavryBlack AD11: http://www.lavryengineering.com/products/pro-audio/ad11.html