Differential pulse code modulation (DPCM) is described. First, quantized predictive encoding is motivated but then shown to suffer from amplification of quantization error at the decoder. This problem is avoided by DPCM, which places the quantizer in the prediction loop.
  • Many information signals, including audio, exhibit significant redundancy between successive samples. In these situations, it is advantageous to transmit only the difference between predicted and true versions of the signal: with a "good" prediction, the quantizer input will have a variance smaller than that of the original signal, allowing a quantizer with smaller decision regions and hence higher SNR. (See [link] for an example of such a structure.)
  • Linear Prediction: There are various methods of prediction, but we focus on forward linear prediction of order N, illustrated by [link] and described by the following equation, where $\hat{x}(n)$ is a linear estimate of $x(n)$ based on the N previous samples of $x(n)$:
    $$\hat{x}(n) = \sum_{i=1}^{N} h_i \, x(n-i).$$
    It will be convenient to collect the prediction coefficients into the vector $h = (h_1, h_2, \ldots, h_N)^t$.
    [Figure: a tapped delay line — x(n) feeds a chain of N unit-delay blocks z^{-1}; each delayed sample is weighted by a coefficient h1, ..., hN and the products are summed to form xhat(n).]
    Linear Prediction
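The order-N forward predictor above can be sketched directly in code. This is a minimal illustration, not part of the original text: the function name `predict` and the example signal are assumptions, and the first N outputs use zero history.

```python
import numpy as np

def predict(x, h):
    """Forward linear prediction: xhat(n) = sum_{i=1}^{N} h[i-1] * x(n-i).

    x : 1-D signal array
    h : prediction coefficients (h_1, ..., h_N)
    Returns xhat, same length as x (samples before n=N use zero history).
    """
    N = len(h)
    xhat = np.zeros(len(x))
    for n in range(len(x)):
        for i in range(1, N + 1):
            if n - i >= 0:  # treat unavailable past samples as zero
                xhat[n] += h[i - 1] * x[n - i]
    return xhat

# First-order predictor h = (0.95): xhat(n) = 0.95 * x(n-1)
x = np.array([1.0, 1.1, 1.2, 1.15])
xhat = predict(x, [0.95])
```

For a slowly varying signal such as this one, the prediction error x(n) - xhat(n) is much smaller than x(n) itself, which is exactly what makes fine quantization of the difference attractive.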
  • Lossless Predictive Encoding: Consider first the system in [link].
    [Figure: two flowcharts. Encoder — x(n) minus the predictor output xhat(n) forms e(n), with the predictor h driven by x(n). Decoder — e(n) plus the predictor output yhat(n) forms y(n), with the predictor h driven by y(n).]
    Lossless Predictive Data Transmission System
    The system equations are
    $$\begin{aligned} e(n) &= x(n) - \hat{x}(n) \\ y(n) &= e(n) + \hat{y}(n) \end{aligned}$$
    In the $z$-domain (i.e., $X(z) = \sum_n x(n) z^{-n}$ and $H(z) = \sum_i h_i z^{-i}$),
    $$\begin{aligned} E(z) &= X(z) - \hat{X}(z) = X(z)\,[1 - H(z)] \\ Y(z) &= E(z) + \hat{Y}(z) = E(z) + H(z)\,Y(z) \end{aligned}$$
    We call this transmission system lossless because, from above,
    $$Y(z) = \frac{E(z)}{1 - H(z)} = X(z).$$
    Without quantization, however, the prediction error e(n) takes on a continuous range of values, and so this scheme is not applicable to digital transmission.
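The lossless system equations above translate into a short simulation. This is an illustrative sketch (the helper names `predictor`, `lossless_encode`, `lossless_decode`, and the test signal are assumptions, not from the original); the key point is that the decoder's predictor sees exactly the same samples as the encoder's, so y(n) = x(n).

```python
def predictor(past, h):
    # xhat(n) = sum_i h_i * past[i-1], where past[0] is the most recent sample
    return sum(hi * xi for hi, xi in zip(h, past))

def lossless_encode(x, h):
    past = [0.0] * len(h)
    e = []
    for xn in x:
        e.append(xn - predictor(past, h))  # e(n) = x(n) - xhat(n)
        past = [xn] + past[:-1]            # predictor driven by x(n)
    return e

def lossless_decode(e, h):
    past = [0.0] * len(h)
    y = []
    for en in e:
        yn = en + predictor(past, h)       # y(n) = e(n) + yhat(n)
        y.append(yn)
        past = [yn] + past[:-1]            # predictor driven by y(n)
    return y

x = [1.0, 1.2, 1.3, 1.25, 1.1]
h = [0.9, 0.05]                            # order-2 predictor (illustrative values)
y = lossless_decode(lossless_encode(x, h), h)
# y reproduces x exactly (up to floating-point rounding)
```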
  • Quantized Predictive Encoding: Quantizing the prediction error in [link], we get the system of [link].
    [Figure: identical to the previous system, except that the prediction error now passes through a quantizer before transmission, producing e-tilde(n).]
    Quantized Predictive Coding System
    Here the equations are
    $$\begin{aligned} q(n) &= \tilde{e}(n) - e(n) \\ e(n) &= x(n) - \hat{x}(n) \\ y(n) &= \tilde{e}(n) + \hat{y}(n) \end{aligned}$$
    In the z -domain we find that
    $$\begin{aligned} \tilde{E}(z) &= X(z)\,[1 - H(z)] + Q(z) \\ Y(z) &= \frac{\tilde{E}(z)}{1 - H(z)} = X(z) + \frac{Q(z)}{1 - H(z)}. \end{aligned}$$
    Thus the reconstructed output is corrupted by a filtered version of the quantization error, where the filter $[1 - H(z)]^{-1}$ is expected to amplify the quantization error: recall that $Y(z) = E(z)\,[1 - H(z)]^{-1}$, and the goal of prediction was to make $\sigma_e^2 < \sigma_y^2$. This problem results from the fact that the quantization noise appears at the decoder's predictor input but not at the encoder's predictor input. But we can avoid this...
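The error amplification can be seen numerically. In this sketch (helper names, the midtread rounding quantizer, and the ramp signal are all illustrative assumptions), the encoder predicts from the clean input x while the decoder can only predict from its own noisy output, so per-sample quantization errors of at most step/2 accumulate at the decoder:

```python
def predictor(past, h):
    return sum(hi * xi for hi, xi in zip(h, past))

def quantize(v, step):
    # uniform midtread quantizer: nearest multiple of `step`
    return step * round(v / step)

def qpe_encode(x, h, step):
    # encoder predicts from the *unquantized* input x
    past = [0.0] * len(h)
    etil = []
    for xn in x:
        etil.append(quantize(xn - predictor(past, h), step))
        past = [xn] + past[:-1]
    return etil

def qpe_decode(etil, h):
    # decoder can only predict from its own (noisy) output y
    past = [0.0] * len(h)
    y = []
    for en in etil:
        yn = en + predictor(past, h)
        y.append(yn)
        past = [yn] + past[:-1]
    return y

# Ramp with increments of 0.4 and step = 1.0: every e(n) = 0.4 quantizes
# to 0, so the decoder output never moves and its error grows to 2.0,
# far beyond the step/2 = 0.5 bound on any single quantization error.
x = [0.4, 0.8, 1.2, 1.6, 2.0]
etil = qpe_encode(x, [1.0], 1.0)
y = qpe_decode(etil, [1.0])
```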
  • DPCM: Including quantization in the encoder's prediction loop, we obtain the system in [link], known as differential pulse code modulation.
    [Figure: two flowcharts. Encoder — x(n) minus the prediction x-tilde-hat(n) gives e(n), which is quantized to e-tilde(n); e-tilde(n) plus x-tilde-hat(n) forms the local reconstruction x-tilde(n), which drives the encoder's predictor h. Decoder — e-tilde(n) plus y-hat(n) forms y(n), which drives the decoder's predictor h.]
    A Typical Differential PCM System
    System equations are
    $$\begin{aligned} q(n) &= \tilde{e}(n) - e(n) \\ e(n) &= x(n) - \hat{\tilde{x}}(n) \\ \tilde{x}(n) &= \tilde{e}(n) + \hat{\tilde{x}}(n) \\ y(n) &= \tilde{e}(n) + \hat{y}(n). \end{aligned}$$
    In the z -domain we find that
    $$\begin{aligned} \tilde{E}(z) &= X(z) - H(z)\,\tilde{X}(z) + Q(z) \\ \tilde{X}(z) &= Q(z) + E(z) + \bigl(X(z) - E(z)\bigr) = X(z) + Q(z) \\ Y(z) &= \frac{\tilde{E}(z)}{1 - H(z)} \end{aligned}$$
    so that
    $$Y(z) = \frac{X(z) - H(z)\bigl[X(z) + Q(z)\bigr] + Q(z)}{1 - H(z)} = X(z) + Q(z) = \tilde{X}(z).$$
    Thus, the reconstructed output is corrupted only by the quantization error. Another significant advantage of placing the quantizer inside the prediction loop is realized if the predictor is made self-adaptive (in the same spirit as the adaptive quantizers we studied). As illustrated in [link], adaptation of the prediction coefficients can take place simultaneously at the encoder and decoder with no transmission of side information (e.g., h(n))! This is a consequence of the fact that both algorithms have access to identical signals.
    [Figure: the DPCM system of the previous figure, with an "adapt" block at both encoder and decoder that updates each predictor h from locally available signals (shown by dashed lines); no labels or arrows otherwise change.]
    Adaptive DPCM System
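The DPCM loop can be sketched in a few lines. This is an illustrative implementation under the same assumptions as the earlier sketches (helper names, midtread rounding quantizer, and the test signal are hypothetical): the encoder forms its own local reconstruction x-tilde(n) and predicts from it, so encoder and decoder predictors see identical signals and the output error never exceeds a single quantization error.

```python
def predictor(past, h):
    return sum(hi * xi for hi, xi in zip(h, past))

def quantize(v, step):
    # uniform midtread quantizer: nearest multiple of `step`
    return step * round(v / step)

def dpcm_encode(x, h, step):
    past = [0.0] * len(h)
    etil = []
    for xn in x:
        xtilhat = predictor(past, h)       # prediction from local reconstruction
        en = quantize(xn - xtilhat, step)  # e-tilde(n)
        etil.append(en)
        xtil = en + xtilhat                # x-tilde(n): mirrors the decoder
        past = [xtil] + past[:-1]
    return etil

def dpcm_decode(etil, h):
    past = [0.0] * len(h)
    y = []
    for en in etil:
        yn = en + predictor(past, h)       # y(n) = e-tilde(n) + y-hat(n)
        y.append(yn)
        past = [yn] + past[:-1]
    return y

# Same ramp that defeated quantized predictive encoding: here the
# reconstruction error stays within step/2 = 0.5 at every sample.
x = [0.4, 0.8, 1.2, 1.6, 2.0]
etil = dpcm_encode(x, [1.0], 1.0)
y = dpcm_decode(etil, [1.0])
```

Because the decoder loop is an exact copy of the loop inside the encoder, y(n) = x-tilde(n) sample for sample, which is the time-domain counterpart of Y(z) = X(z) + Q(z) derived above.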

Source:  OpenStax, An introduction to source-coding: quantization, dpcm, transform coding, and sub-band coding. OpenStax CNX. Sep 25, 2009 Download for free at http://cnx.org/content/col11121/1.2