Wordlength

From LavryEngineering
Revision as of 18:19, 1 March 2019 by Brad Johnson

Overview

The term wordlength describes the format of a digital audio signal and relates directly to the resolution, or quality, of the encoding.

History

In binary computer technology, information is contained in a digital "word," which is simply a pre-defined collection of individual "bits." A single bit can only define two "states," represented by a "1" or a "0." Although this is useful in limited situations where all-or-nothing, yes-or-no information is required, broader applications need more "states" or increments. By associating a number of bits in a pre-defined manner, a digital word can be used to represent a useful amount of information.
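The relationship between the number of bits in a word and the number of increments it can represent can be sketched in a few lines (the loop and labels here are purely illustrative):

```python
# A minimal sketch: an n-bit word can represent 2**n distinct states.
for bits in (1, 8, 16, 24):
    states = 2 ** bits
    print(f"{bits:2d}-bit word: {states:,} distinct states")
```

A single bit gives 2 states, an 8-bit word gives 256, a 16-bit word gives 65,536, and a 24-bit word gives 16,777,216.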

Similar to the "base ten" system used for day-to-day arithmetic, the columns on the left in a binary word have a higher value ("more significant") and the columns on the right have a lower value ("less significant"). In a binary word, the left-most bit is referred to as the "most significant bit" (msb) and the right-most bit as the "least significant bit" (lsb). This is because a "zero" in the lsb column represents a value of zero and a "one" represents a value of only one. By contrast, a "zero" in the msb column of a 16 bit word represents a value of zero and a "one" represents a value of 32,768 (one-half of 2 raised to the sixteenth power). One exception is in the encoding of digital audio using the "two's complement" format; in this system the msb is used as the "sign bit," indicating whether the numerical value represented by the other bits in the word refers to a positive or negative portion of the waveform. There are numerous advantages to using two's complement encoding for digital audio, which are beyond the scope of this discussion.
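Two's complement encoding for a 16 bit word can be sketched as follows; the function names here are hypothetical, chosen only for illustration:

```python
def encode_twos_complement(value, bits=16):
    """Map a signed integer to its unsigned two's-complement bit pattern."""
    assert -(1 << (bits - 1)) <= value < (1 << (bits - 1))
    return value & ((1 << bits) - 1)

def decode_twos_complement(pattern, bits=16):
    """Recover the signed value; the msb acts as the sign bit."""
    if pattern & (1 << (bits - 1)):   # msb set -> negative portion of waveform
        return pattern - (1 << bits)
    return pattern

print(f"{encode_twos_complement(-1):016b}")       # 1111111111111111
print(f"{encode_twos_complement(-32768):016b}")   # 1000000000000000
print(decode_twos_complement(0b0111111111111111)) # 32767
```

Note how the msb set to "1" signals a negative value, so a 16 bit two's complement word spans -32,768 to +32,767, centered on zero as an audio waveform is.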

Early binary computers used 8-bit words, which can represent 256 increments; this made them useful for basic calculations and industrial control, but not for audio. The term "byte" was coined to describe a data word; although "byte" commonly referred to an 8-bit word, it could also refer to words with a different number of bits. As technology advanced, 16-bit computers became the norm, and high fidelity digital audio became a possibility. At that time, converter technology limited early 16 bit digital audio systems to much less than their true potential quality.

Computer technology continued to advance, and 32 bit computers became available. Due in part to this evolution, it was not unusual for 32 bit computers to move information between the CPU and hard drive in pairs of 16 bit words. Practically speaking, digital audio converters with an accuracy of 24 bits are capable of encoding analog audio to the level of accuracy obtainable with very high quality analog circuitry; so even though 24 bit wordlength is not typical of computer technology, it has become a standard in digital audio. In most cases, computer software simply uses two 16 bit words (a "32 bit word") to represent a 24 bit digital audio word and thus simplify operations. The first 16 bit word contains the 16 most-significant audio bits, and the second 16 bit word contains the 8 least-significant audio bits, with "zeros" in the remaining (unused) 8 bits.
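Carrying a 24 bit sample in a 32 bit word can be sketched as a simple shift; this is an illustration of the general idea, not the layout of any specific file format, and the helper names are hypothetical:

```python
def pack_24_in_32(sample_24):
    """Left-justify a 24 bit sample in a 32 bit word;
    the 8 least-significant bits are left as zeros."""
    assert 0 <= sample_24 < (1 << 24)
    return sample_24 << 8

def unpack_32_to_24(word_32):
    """Discard the 8 zero-padding bits to recover the 24 bit sample."""
    return word_32 >> 8

s = 0xABCDEF                     # an arbitrary 24 bit bit pattern
w = pack_24_in_32(s)
print(hex(w))                    # 0xabcdef00
print(hex(unpack_32_to_24(w)))   # 0xabcdef
```

Because the padding bits are zeros in the least-significant positions, the packed word represents the same audio value at the same relative level, just carried in a wider container.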

Although excellent quality audio can be encoded in 24 bits, virtually any time a process is applied to the original 24 bit signal, more than 24 bits are needed to accurately represent the processed signal. For this reason, most processing is performed with 32 bit or 64 bit precision, and to retain quality the output is only reduced to 24 bit wordlength after dither and noise shaping have been applied.
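The final reduction step can be sketched as follows, using triangular (TPDF) dither before truncation; this is a simplified illustration with a hypothetical helper name, not a production dither or noise-shaping implementation:

```python
import random

def reduce_wordlength(sample, drop_bits=8, rng=random.random):
    """Add triangular-pdf dither scaled to one lsb of the shorter word,
    then truncate away the low-order bits."""
    lsb = 1 << drop_bits                 # weight of one lsb of the output word
    dither = (rng() - rng()) * lsb       # triangular pdf spanning (-lsb, +lsb)
    return int(sample + dither) >> drop_bits

# Reducing a 32 bit intermediate result to 24 bits drops 8 bits:
sample_32 = 1000 << 8
print(reduce_wordlength(sample_32))      # 999 or 1000, varying at random
```

The randomness decorrelates the rounding error from the signal, turning what would otherwise be truncation distortion into a low level of benign noise.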

For more information on "two's complement," click here.