Analog-to-digital conversion is a fundamental process in modern applications, from capacitive sensing and digital calipers to industrial sensors and professional and consumer audio/video. However, it is not without its errors.
Nonlinearity is a major contributor to converter error. To reduce it, you can use a better converter or mitigate the error digitally, for example with dithering. This is particularly useful when digitizing analog sources such as old tapes.
Mistakes in the Input
When electrical signals that vary continuously in time and amplitude are converted to digital form, they must be captured at discrete instants and represented with discrete values; these two steps are known as sampling and quantization. The number of bits used to represent each discrete value is the bit depth. A higher bit depth provides more discrete levels, which results in a more accurate conversion of the analog signal to digital.
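As a rough sketch of this effect (in Python, with an illustrative mid-tread quantizer over a hypothetical ±1 V range), more bits means finer levels and smaller rounding error:

```python
def quantize(x, bits, full_scale=1.0):
    """Mid-tread uniform quantizer: map x in [-full_scale, +full_scale)
    to the nearest of 2**bits discrete levels."""
    levels = 2 ** bits
    step = 2 * full_scale / levels          # width of one LSB
    # Round to the nearest level, then clamp to the representable range
    code = round(x / step)
    code = max(-(levels // 2), min(levels // 2 - 1, code))
    return code * step

# A 3-bit quantizer has only 8 levels; a 16-bit one has 65536,
# so its worst-case rounding error is thousands of times smaller.
x = 0.3
coarse = quantize(x, 3)    # lands on 0.25
fine = quantize(x, 16)     # lands within ~15 microvolts of 0.3
assert abs(fine - x) < abs(coarse - x)
```

Each added bit halves the step width, and with it the worst-case rounding error.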
In an ideal converter, the error from both systematic and random nonlinearities would be zero; in practice it never is. Systematic error is inherent to the converter's transfer function and is typically expressed as differential nonlinearity (DNL): the deviation of the actual width of each conversion step from the ideal width of one least significant bit (LSB).
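To make the DNL definition concrete, here is a small sketch. The transition voltages below are hypothetical, standing in for values that would normally come from a histogram or servo-loop test of a real converter:

```python
# Hypothetical measured code-transition voltages for a tiny 3-bit ADC
# with a 1.0 V full scale (real values would come from a linearity test).
transitions = [0.0, 0.13, 0.24, 0.38, 0.50, 0.61, 0.76, 0.875]  # volts
ideal_lsb = 0.125  # 1.0 V full scale / 8 codes

# DNL of step k = (actual width of step k) / (ideal LSB) - 1, in LSBs.
# An ideal converter has DNL = 0 everywhere; DNL <= -1 means a missing code.
dnl = [(transitions[k + 1] - transitions[k]) / ideal_lsb - 1
       for k in range(len(transitions) - 1)]

# Here the worst step is 0.15 V wide: 20% wider than ideal, i.e. +0.2 LSB.
assert max(abs(d) for d in dnl) < 1   # no missing codes in this example
```

Datasheets usually quote the worst-case magnitude of this list as the converter's DNL specification.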
Another common problem is gain error, which occurs when the slope of the converter's actual transfer function differs from the ideal slope. A related fault, offset error, manifests as a constant positive or negative shift, as though a small voltage were added at the converter's input.
One of the most effective ways to tame quantization artifacts is dithering. Rather than replacing the data, dithering adds a small amount of random noise to the signal before quantization. This decorrelates the quantization error from the signal, converting signal-dependent harmonic distortion into a benign, noise-like floor. Dithering cannot remove quantization error, but it makes that error behave like broadband noise instead of distortion, which the ear tolerates far better. In audio, a properly dithered converter preserves low-level detail even below the level of one LSB, so the listener hears something much closer to the original analogue signal.
Mistakes in the Output
ADCs sample and translate analog input signals into digital representations, or binary codes. These are then translated back into an analog output signal (current or voltage) by DACs. In this process, the continuous analog signal is approximated by discrete digital values, so some error is inevitable. This error is called quantization distortion, or simply quantization noise, and it can cause the digital output to deviate from its actual analog value.
To reduce the impact of quantization noise, many ADCs use oversampling: sampling well above the Nyquist rate so that the fixed quantization-noise power is spread over a wider bandwidth, after which digital filtering and decimation remove the out-of-band portion and increase the effective number of bits. However, other errors still affect the signal quality of the output. For example, it is common for the transfer function to exhibit code-edge noise, which appears at each transition of the transfer function and causes the LSBs to flicker between adjacent codes.
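A simple sketch of the averaging side of oversampling, with illustrative numbers (the Gaussian input noise here stands in for circuit noise, which also acts as natural dither): averaging N uncorrelated samples reduces the noise RMS by roughly the square root of N.

```python
import math
import random

random.seed(1)

def quantize(x, step):
    return round(x / step) * step

step = 0.01     # one LSB of a hypothetical converter
true = 0.5037   # the value we are trying to measure

def sample():
    # Signal plus a little wideband circuit noise, which also dithers the ADC.
    return quantize(true + random.gauss(0, step), step)

trials = 200

# RMS error of a single conversion vs. the mean of 16 oversampled conversions.
single_rms = math.sqrt(sum((sample() - true) ** 2
                           for _ in range(trials)) / trials)
avg_rms = math.sqrt(sum(
    (sum(sample() for _ in range(16)) / 16 - true) ** 2
    for _ in range(trials)
) / trials)

# Averaging 16 uncorrelated samples cuts the noise RMS by about 4x (~2 bits).
assert avg_rms < single_rms
```

Note this only works when the noise is uncorrelated between samples; a noiseless, undithered input would return the same code 16 times and gain nothing.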
Other types of distortion may also occur during the conversion process, such as sign-magnitude distortion and asymmetrical gain error. These distortions usually originate in the ADC's comparator circuit and can be reduced by careful design.
Sign-magnitude distortion occurs when the differential analog input sits very close to the comparator's decision threshold, so the comparator has difficulty making a decision. If it takes too long to settle, it can produce a metastable output, and the first-stage DAC will generate an output that does not represent the original analog signal. Asymmetrical gain error is a subtler fault that occurs when the ADC does not correctly establish the ratio between the analog input and the reference; the result is a distortion in the output that can be difficult to detect without additional measurement equipment.
Mistakes in the Sampling Rate
When converting analog to digital, the continuous analog signal is sampled and then translated into discrete values. Each sample is encoded as a binary code that can later be converted back to an analog output (current or voltage) by a DAC. The number of samples taken per second is the sampling rate, and it determines whether the original analog signal can be accurately reconstructed.
This process is governed by the Nyquist sampling theorem: the signal must be sampled at a rate at least twice the highest frequency present in the analog input. When the sampling rate falls below this minimum, frequency components above half the sampling rate fold back into the band of interest, and the sampled signal bears little resemblance to the original analog signal. This problem is known as aliasing.
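A small numerical sketch of aliasing: a 900 Hz tone sampled at 1 kHz (Nyquist limit 500 Hz) produces exactly the same sample values as a 100 Hz tone, so after sampling the two are indistinguishable.

```python
import math

fs = 1000.0      # sample rate in Hz, so the Nyquist limit is 500 Hz
n = range(64)

# A 900 Hz tone violates the Nyquist criterion at fs = 1 kHz and
# aliases down to |1000 - 900| = 100 Hz.
tone_900 = [math.cos(2 * math.pi * 900 * k / fs) for k in n]
tone_100 = [math.cos(2 * math.pi * 100 * k / fs) for k in n]

# The two sample sequences are numerically identical.
assert all(abs(a - b) < 1e-9 for a, b in zip(tone_900, tone_100))
```

This is why an analog anti-aliasing filter must remove content above fs/2 before the ADC: once the samples are taken, nothing downstream can tell the two tones apart.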
If the comparator in the ADC takes too long to settle, the actual digital output can differ from the expected output. Counting these discrepancies yields the error rate, a measure of how often the ADC makes a conversion error. A high error rate can make it difficult to operate the device at the desired speed.
Another source of error in the ADC is quantization noise, also known as quantization uncertainty. It arises because a digital word with a finite number of bits can represent only a finite set of levels, so each sample must be rounded to the nearest level, leaving an error of up to half an LSB. This error is typically reduced by using a larger number of bits to represent the signal.
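For a full-scale sine input, the ideal signal-to-quantization-noise ratio of an N-bit converter follows the standard 6.02N + 1.76 dB rule, roughly 6 dB per bit:

```python
def ideal_sqnr_db(bits):
    """Theoretical SQNR of an ideal N-bit quantizer driven by a
    full-scale sine wave: 6.02 * N + 1.76 dB."""
    return 6.02 * bits + 1.76

# Each extra bit buys about 6 dB of signal-to-quantization-noise ratio:
# an ideal 16-bit converter reaches ~98 dB, an 8-bit one only ~50 dB.
assert round(ideal_sqnr_db(16), 2) == 98.08
assert round(ideal_sqnr_db(8), 2) == 49.92
```

Real converters fall short of this figure because thermal noise, jitter, and nonlinearity add to the quantization noise; datasheets capture the shortfall as ENOB (effective number of bits).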
It’s important to carefully consider the sample rate and resolution required for a particular application before selecting an ADC. This requires careful consideration of the type of analog signal to be digitized and an examination of the digital resources available to process the data.
Mistakes in the Gain
During analog-to-digital conversion, an analog signal is sampled at a rate that satisfies the Nyquist criterion. The sampled signal is then converted into a stream of digital values, and the original analog signal can later be reproduced from these values using a reconstruction filter. This ensures that the digital representation is faithful to the original analog waveform.
During the conversion process, each sample in the digital representation is assigned a particular value based on the amplitude of the analog input. This process is called quantization, and it introduces a small amount of error: the maximum error is half of one quantization step, and the step shrinks by half with every additional bit, so more bits mean less error. The error is often referred to as quantization distortion.
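The relationship is easy to quantify: the step size is the full-scale range divided by 2^N, and the worst-case rounding error is half a step. A one-line sketch, using an illustrative 5 V range:

```python
def lsb_size(full_scale_range, bits):
    """Width of one quantization step (1 LSB) for an N-bit converter."""
    return full_scale_range / 2 ** bits

# Over a 5 V range, a 12-bit converter resolves steps of ~1.22 mV,
# and each added bit halves the step, and with it the worst-case
# rounding error of +/- half an LSB.
assert lsb_size(5.0, 12) == 5.0 / 4096
assert lsb_size(5.0, 13) == lsb_size(5.0, 12) / 2
```

This is also why resolution claims are meaningless without the reference range: a 16-bit converter over 10 V has the same step size as a 15-bit converter over 5 V.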
To minimize the error, it is often useful to place a low-pass filter after the ADC. This removes out-of-band noise produced by quantization (and, in oversampled converters, pushed to high frequencies by noise shaping) and improves the SQNR of the converter. However, an imperfect filter can itself introduce distortion in the output signal.
A good way to reduce the quantization error is to increase the number of bits used for digitization, which yields finer amplitude resolution over the same range. However, this increases the overall complexity of the ADC and may also increase its power consumption.
Analog-to-digital converters are found in a wide variety of electronic devices, including laptop computers and smartphones. They are also common in audio applications such as recording and music production. A high-quality analog-to-digital converter provides an accurate representation of an analog signal that can then be copied and transmitted over and over again without losing quality.
Mistakes in the Resolution
As the demand for analog-to-digital converters (ADCs) continues to increase, it is important for makers to understand how an ADC works and what makes a good ADC. Resolution and sample rate are two important features that need to be carefully examined and considered when choosing an ADC for a particular application.
ADCs take a snapshot of an analogue voltage at one point in time and then convert it into a sequence of digital output codes that represent the analogue signal’s amplitude. The number of binary digits that can be used to represent the analogue voltage is called the ADC’s resolution. A lower resolution will produce more quantization distortion while a higher resolution will yield a smaller quantization error.
Quantization distortion arises because a finite number of binary digits can represent only a limited number of amplitude levels. The rounding this forces produces an error of up to half of one least significant bit (LSB), and because that error is correlated with the signal, it appears as distortion rather than benign noise. A small amount of dither applied at the ADC input decorrelates the error and reduces this distortion.
Another source of error when using an ADC is code flicker, which is caused by noise right at the transitions between adjacent output codes. This is often not specified in an ADC's datasheet and can be significant even in high-resolution converters (16+ bits). It can be mitigated by driving the ADC from a clean, low-impedance op amp buffer and by averaging repeated conversions, which reduces the flicker and improves the effective resolution. Dithering, meanwhile, has uses beyond audio: it is also applied to photographic images when converting them to a lower number of bits per pixel. The result may be noisier, but the image looks far more realistic to the human eye than it would without dithering.