Wordlength
Revision as of 16:41, 24 February 2012
Overview
The term wordlength is used to describe the format of a digital audio signal, and it relates directly to the resolution or quality of the encoding.
History
In binary computer technology, information is contained in digital "words," each of which is simply a pre-defined collection of individual "bits." A single bit can only define two "states," represented by a "1" or a "0." Although this is useful in limited situations where all-or-nothing, yes-or-no information is required, broader applications require more states, or increments. By combining a number of bits in a pre-defined manner, a digital word can represent a useful amount of information.
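The relationship between the number of bits and the number of representable states is simply 2 to the power of the wordlength. A short Python sketch (illustrative only, not part of the original article) makes this concrete:

```python
# Each added bit doubles the number of distinct states a word can hold:
# an n-bit word represents 2**n states.
for bits in (1, 8, 16, 24, 32):
    states = 2 ** bits
    print(f"{bits:2d}-bit word: {states:,} states")
```

This is why 8-bit words yield 256 increments, while 16-bit words yield 65,536, as discussed below.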
Early binary computers used 8-bit words, which can represent up to 256 increments; this made them useful for basic calculation and industrial control, but not for audio. As technology advanced, 16-bit computers became the norm, and digital audio became a possibility. At that time, however, converter technology limited early 16-bit digital audio systems to much less than their potential quality.
Computer technology continued to advance, and 32-bit computers became available. Due in part to this evolution, it was not unusual for 32-bit computers to move information between the CPU and hard drive in pairs of 16-bit words. Practically speaking, digital audio converters with an accuracy of 24 bits can encode analog audio to the level of accuracy obtainable with very high quality analog circuitry, so even though a 24-bit wordlength is not typical of computer technology, it has become a standard in digital audio. In most cases, computer software simply uses two 16-bit words (or a "32-bit word") to represent a 24-bit digital audio signal, which simplifies operations.
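One common convention for carrying a 24-bit sample in a 32-bit word is to left-justify it, leaving the low 8 bits zero. The sketch below illustrates the idea; the helper names are hypothetical and not taken from any particular audio API:

```python
# Illustrative: pack a signed 24-bit audio sample into a 32-bit word
# by left-justifying it (the low 8 bits stay zero). Function names
# are hypothetical, for illustration only.
def pack_24_in_32(sample_24bit: int) -> int:
    assert -(2 ** 23) <= sample_24bit < 2 ** 23, "out of 24-bit range"
    return sample_24bit << 8  # sample occupies bits 8..31 of the word

def unpack_32_to_24(word_32bit: int) -> int:
    return word_32bit >> 8  # arithmetic shift restores the sample

sample = -123456
assert unpack_32_to_24(pack_24_in_32(sample)) == sample
```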
Although excellent quality audio can be encoded as 24-bit digital audio, virtually any time a process is applied to the original 24-bit signal, more than 24 bits are needed to accurately represent the processed signal. For this reason, most processing is performed with 32-bit or 64-bit precision, and the output is reduced to a 24-bit wordlength only after dither and noise shaping are applied to retain quality.
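As a rough sketch of this workflow, the example below applies a gain change in high precision (Python floats are 64-bit) and then requantizes to the 24-bit range with simple TPDF dither. This is a textbook-style illustration under assumed conventions, not a production implementation, and it omits the noise-shaping step the article mentions:

```python
import random

# Signed 24-bit samples span -2**23 .. 2**23 - 1.
FULL_SCALE = 2 ** 23

def process_and_requantize(sample_24bit: int, gain: float) -> int:
    # Processing happens at 64-bit float precision: the product of a
    # 24-bit sample and a gain generally needs more than 24 bits.
    high_precision = sample_24bit * gain
    # TPDF dither: sum of two uniform variables, spanning +/- 1 LSB.
    dither = random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    quantized = round(high_precision + dither)
    # Clamp back into the signed 24-bit range.
    return max(-FULL_SCALE, min(FULL_SCALE - 1, quantized))

out = process_and_requantize(1_000_000, 0.7071)
assert -FULL_SCALE <= out < FULL_SCALE
```

The dither randomizes the rounding error so it behaves as benign noise rather than correlated distortion, which is why the reduction to 24 bits is deferred until this final step.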