Motivated by the practical problem of non-stationary sources, this section discusses adaptation of the uniform quantizer's stepsize. In particular, adaptive quantization based on forward estimation (AQF) and on backward estimation (AQB) is presented, in both block-based and recursive forms.
  • Previously we have considered the case of stationary source processes, though in reality the source signal may be highly non-stationary. For example, the variance, pdf, and/or mean may vary significantly with time.
  • Here we concentrate on the problem of adapting the uniform quantizer stepsize $\Delta$ to a signal with unknown variance. This is accomplished by estimating the input variance $\hat{\sigma}_x^2(n)$ and setting the quantizer stepsize accordingly:
    $$\Delta(n) = \frac{2\,\phi_x\,\hat{\sigma}_x(n)}{L}.$$
    Here $\phi_x$ is a constant that depends on the distribution of the input signal $x$; its role is to prevent input values larger than $\hat{\sigma}_x(n)$ (up to $\phi_x\,\hat{\sigma}_x(n)$) from being clipped by the quantizer (see [link]). Comparing with the non-adaptive stepsize relation $\Delta = 2\,x_{\max}/L$, we see that $\phi_x\,\hat{\sigma}_x(n)$ plays the role of $x_{\max}$. A short numerical sketch of this stepsize rule follows the figure below.
    [Figure: a uniform quantizer characteristic $Q(x)$ with eight reconstruction levels spaced by 1 and stepsize $\Delta = 2\phi_x\hat{\sigma}_x/L$ along the $x$-axis, aligned above a Gaussian-shaped pdf $p_x(x)$ with the points $\hat{\sigma}_x$ and $\phi_x\hat{\sigma}_x$ marked on its horizontal axis.]
    Adaptive quantization stepsize $\Delta(n) = 2\,\phi_x\,\hat{\sigma}_x/L$
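    To make the stepsize rule concrete, here is a minimal Python sketch (not part of the original module). The function name and the loading-factor value `phi_x = 4` are illustrative assumptions, chosen so that a roughly Gaussian input rarely exceeds $\phi_x\,\hat{\sigma}_x$.

```python
def adaptive_stepsize(sigma_hat, L, phi_x=4.0):
    """Delta(n) = 2 * phi_x * sigma_hat_x(n) / L.

    phi_x is the loading factor: it depends on the input distribution and is
    chosen so that samples beyond phi_x * sigma_hat (which would be clipped)
    are rare.  phi_x = 4 is an assumed, Gaussian-style default, not a value
    prescribed by the text.
    """
    return 2.0 * phi_x * sigma_hat / L

# Example: an L = 8 level quantizer tracking a changing signal scale.
for sigma_hat in (0.5, 1.0, 2.0):
    print(f"sigma_hat = {sigma_hat:4.1f}  ->  Delta = {adaptive_stepsize(sigma_hat, L=8):.4f}")
```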
  • As long as the reconstruction levels $\{y_k\}$ are the same at the encoder and decoder, the actual values chosen for the quantizer design are arbitrary. Assuming integer values as in [link], the quantization rule becomes (a short code sketch follows the equation below)
    $$y(n) = \begin{cases} \left\lceil \dfrac{x(n)}{\Delta(n)} \right\rceil - \dfrac{1}{2}, & \text{midrise}, \\[1ex] \left\lceil \dfrac{x(n)}{\Delta(n)} - \dfrac{1}{2} \right\rceil, & \text{midtread}. \end{cases}$$
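    The rule can be realized directly. The following is an illustrative sketch, assuming the ceiling-based convention in the equation as reconstructed above; it maps a sample to its half-integer (midrise) or integer (midtread) code level and reconstructs $\tilde{x}(n) = y(n)\,\Delta(n)$.

```python
import numpy as np

def quantize(x, delta, midtread=False):
    """Return the code level y for sample x with stepsize delta.

    Midrise:  y = ceil(x/delta) - 1/2   (levels ..., -3/2, -1/2, 1/2, 3/2, ...)
    Midtread: y = ceil(x/delta - 1/2)   (levels ..., -1, 0, 1, ...)
    The decoder reconstructs x_tilde = y * delta.
    """
    r = np.asarray(x) / delta
    return np.ceil(r - 0.5) if midtread else np.ceil(r) - 0.5

x, delta = 0.37, 0.25
for midtread in (False, True):
    y = quantize(x, delta, midtread=midtread)
    print("midtread" if midtread else "midrise ", y, y * delta)
```

    In a complete $L$-level quantizer, $y$ would additionally be saturated to the $L$ allowed levels; that clipping step is omitted here for brevity.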
  • AQF and AQB: [link] shows two structures for stepsize adaptation: (a) adaptive quantization with forward estimation (AQF) and (b) adaptive quantization with backward estimation (AQB). The advantage of AQF is that variance estimation may be accomplished more accurately, since it operates directly on the source rather than on a quantized (noisy) version of the source. The advantage of AQB is that the variance estimates need not be transmitted as side information for decoding. In fact, practical AQF encoders transmit variance estimates only occasionally, e.g., once per block. A minimal sketch of the AQB idea appears after the figure below.
    [Figure: block diagrams of the two structures. (a) AQF: the input $x(n)$ feeds both the quantizer Q and a variance estimator; the estimate controls Q and is sent over the channel as side information, and the decoder $Q^{-1}$ uses it to form $\tilde{x}(n)$ from the received $y(n)$. (b) AQB: the variance estimator at the encoder operates on the quantizer output $y(n)$, and an identical estimator at the decoder operates on the received $y(n)$, so no side information is transmitted.]
    (a) AQF and (b) AQB
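    The essential point of AQB, namely that the decoder can regenerate the encoder's variance estimate from the reconstructions $y(n)\,\Delta(n)$ alone, can be seen in a few lines. This is only an illustrative sketch (the update shown is the recursive AQB rule introduced later, and all names and values are assumptions):

```python
def aqb_update(sigma2_hat, y, delta, alpha=0.9):
    """One recursive AQB variance update driven by the reconstruction y*delta."""
    return alpha * sigma2_hat + (1.0 - alpha) * (y * delta) ** 2

# Encoder and decoder start from the same initial estimate and see the same
# y(n), from which both compute the same Delta(n); their variance estimates
# therefore stay in lockstep and no side information is needed (assuming an
# error-free channel).
enc_var = dec_var = 1.0
for y, delta in [(1.5, 0.25), (-0.5, 0.25), (2.5, 0.30)]:
    enc_var = aqb_update(enc_var, y, delta)
    dec_var = aqb_update(dec_var, y, delta)
assert enc_var == dec_var
print(enc_var)
```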
  • Block Variance Estimation: When operating on finite blocks of data, the structures in [link] perform variance estimation as follows:
    Block AQF: $\hat{\sigma}_x^2(n) = \dfrac{1}{N}\displaystyle\sum_{i=1}^{N} x^2(n-i)$
    Block AQB: $\hat{\sigma}_x^2(n) = \dfrac{1}{N}\displaystyle\sum_{i=1}^{N} \bigl(y(n-i)\,\Delta(n-i)\bigr)^2$
    $N$ is termed the learning period, and its choice may significantly impact quantizer SNR performance: choosing $N$ too large prevents the quantizer from adapting to the local statistics of the input, while choosing $N$ too small results in overly noisy AQB variance estimates and excessive AQF side information. [link] demonstrates these two schemes for two choices of $N$; a code sketch of both block estimators follows the figure.
    [Figure: four panels comparing block AQF and AQB estimates of $\sigma_x(n)$ superimposed on $|x(n)|$ for $N = 128$ and $N = 32$. For $N = 128$ the AQF and AQB estimates are nearly identical and smooth; for $N = 32$ the estimates track the signal more closely, but the AQB estimate becomes noticeably more volatile.]
    Block AQF and AQB estimates of $\sigma_x(n)$ superimposed on $|x(n)|$ for $N = 128, 32$. SNR achieved: (a) 22.6 dB, (b) 28.8 dB, (c) 21.2 dB, and (d) 28.8 dB.
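    As an illustration of the two block estimators (a sketch under assumed parameter values, not code from the module), the following compares the AQF estimate, formed from the clean samples, with the AQB estimate, formed from the reconstructions $y(n-i)\,\Delta(n-i)$:

```python
import numpy as np

def block_aqf_variance(x_hist):
    """Block AQF: mean of the last N squared input samples x(n-1), ..., x(n-N)."""
    return np.mean(np.asarray(x_hist) ** 2)

def block_aqb_variance(y_hist, delta_hist):
    """Block AQB: same average, but over the reconstructions y(n-i) * Delta(n-i)."""
    return np.mean((np.asarray(y_hist) * np.asarray(delta_hist)) ** 2)

# Toy block of N = 32 Gaussian samples with sigma = 2 (assumed values),
# quantized with a fixed midrise stepsize of 0.5 for simplicity.
rng = np.random.default_rng(0)
N, sigma = 32, 2.0
x = sigma * rng.standard_normal(N)
delta = np.full(N, 0.5)
y = np.ceil(x / delta) - 0.5          # midrise code levels
print(block_aqf_variance(x), block_aqb_variance(y, delta), sigma ** 2)
```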
  • Recursive Variance Estimation: The recursive method of estimating variance is as follows
    Recursive AQF: $\hat{\sigma}_x^2(n) = \alpha\,\hat{\sigma}_x^2(n-1) + (1-\alpha)\,x^2(n-1)$
    Recursive AQB: $\hat{\sigma}_x^2(n) = \alpha\,\hat{\sigma}_x^2(n-1) + (1-\alpha)\,\bigl(y(n-1)\,\Delta(n-1)\bigr)^2,$
    where $\alpha$ is a forgetting factor in the range $0 < \alpha < 1$, typically close to 1. This leads to an exponential data window, as can be seen below; a code sketch implementing both recursive estimators appears at the end of this section. Plugging the expression for $\hat{\sigma}_x^2(n-1)$ into that for $\hat{\sigma}_x^2(n)$,
    $$\begin{aligned} \hat{\sigma}_x^2(n) &= \alpha\bigl[\alpha\,\hat{\sigma}_x^2(n-2) + (1-\alpha)\,x^2(n-2)\bigr] + (1-\alpha)\,x^2(n-1) \\ &= \alpha^2\,\hat{\sigma}_x^2(n-2) + (1-\alpha)\bigl[x^2(n-1) + \alpha\,x^2(n-2)\bigr]. \end{aligned}$$
    Then plugging $\hat{\sigma}_x^2(n-2)$ into the above,
    $$\begin{aligned} \hat{\sigma}_x^2(n) &= \alpha^2\bigl[\alpha\,\hat{\sigma}_x^2(n-3) + (1-\alpha)\,x^2(n-3)\bigr] + (1-\alpha)\bigl[x^2(n-1) + \alpha\,x^2(n-2)\bigr] \\ &= \alpha^3\,\hat{\sigma}_x^2(n-3) + (1-\alpha)\bigl[x^2(n-1) + \alpha\,x^2(n-2) + \alpha^2\,x^2(n-3)\bigr]. \end{aligned}$$
    Continuing this process N times, we arrive at
    $$\hat{\sigma}_x^2(n) = (1-\alpha)\sum_{i=1}^{N}\alpha^{i-1}\,x^2(n-i) + \alpha^N\,\hat{\sigma}_x^2(n-N).$$
    Taking the limit as $N \to \infty$, the condition $\alpha < 1$ ensures that
    $$\hat{\sigma}_x^2(n) = (1-\alpha)\sum_{i=1}^{\infty}\alpha^{i-1}\,x^2(n-i).$$
    [Figure: four panels comparing recursive (exponential) AQF and AQB estimates of $\sigma_x(n)$ superimposed on $|x(n)|$ for forgetting factors 0.9 and 0.99. With 0.9 both estimates are highly volatile, the AQB estimate more so; with 0.99 both estimates are much smoother.]
    Exponential AQF and AQB estimates of $\sigma_x(n)$ superimposed on $|x(n)|$ for $\alpha = 0.9, 0.99$. SNR achieved: (a) 20.5 dB, (b) 28.0 dB, (c) 22.2 dB, (d) 24.1 dB.
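    Finally, here is a sketch of the recursive estimators together with a numerical check that the recursion does implement the exponential window derived above (all parameter values and names are assumptions for illustration):

```python
import numpy as np

def recursive_aqf(sigma2_prev, x_prev, alpha):
    """sigma_hat^2(n) = alpha * sigma_hat^2(n-1) + (1 - alpha) * x^2(n-1)."""
    return alpha * sigma2_prev + (1.0 - alpha) * x_prev ** 2

def recursive_aqb(sigma2_prev, y_prev, delta_prev, alpha):
    """Same update, driven by the reconstruction y(n-1) * Delta(n-1)."""
    return alpha * sigma2_prev + (1.0 - alpha) * (y_prev * delta_prev) ** 2

# Check the exponential-window form: iterating the AQF recursion from a zero
# initial estimate over x(0), ..., x(N-2) gives exactly
# (1 - alpha) * sum_{i=1}^{N-1} alpha^(i-1) * x^2(n - i)   at n = N - 1.
rng = np.random.default_rng(1)
alpha, N = 0.9, 200
x = rng.standard_normal(N)

sigma2 = 0.0
for n in range(1, N):
    sigma2 = recursive_aqf(sigma2, x[n - 1], alpha)

i = np.arange(1, N)                                   # lags 1, ..., N-1
window = (1.0 - alpha) * np.sum(alpha ** (i - 1) * x[N - 1 - i] ** 2)
assert np.isclose(sigma2, window)
print(sigma2, window)
```

    A larger $\alpha$ corresponds to a longer effective window, mirroring the role of $N$ in the block estimators.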

Source:  OpenStax, An introduction to source-coding: quantization, dpcm, transform coding, and sub-band coding. OpenStax CNX. Sep 25, 2009 Download for free at http://cnx.org/content/col11121/1.2