Motivated by the cascade of memoryless quantization and entropy coding, the entropy-minimizing scalar memoryless quantizer is derived. Using a compander formulation and tools from the calculus of variations, it is shown that the entropy-minimizing quantizer is the simple uniform quantizer. The penalty associated with memoryless quantization is then analyzed in the asymptotic case of many quantization levels.
  • Say that we are designing a system with a memoryless quantizer followed by an entropy coder, and our goal is to minimize the average transmission rate for a given $\sigma_q^2$ (or vice versa). Is it optimal to cascade a $\sigma_q^2$-minimizing (Lloyd-Max) quantizer with a rate-minimizing code? In other words, what is the optimal memoryless quantizer if the quantized outputs are to be entropy coded?
  • A Compander Formulation: To determine the optimal quantizer,
    1. consider a companding system: a memoryless nonlinearity $c(x)$ followed by a uniform quantizer,
    2. find the $c(x)$ that minimizes the entropy $H_y$ for a fixed error variance $\sigma_q^2$.
    Compander curve: nonuniform input regions are mapped to uniform output regions (for subsequent uniform quantization).
  • First we must express $\sigma_q^2$ and $H_y$ in terms of $c(x)$. [link] suggests that, for large $L$, the slope $c'(x) := dc(x)/dx$ obeys
    $$c'(x)\big|_{x \in \mathcal{X}_k} \;=\; \frac{2x_{\max}/L}{\Delta_k},$$
    so that we may write
    $$\Delta_k \;=\; \frac{2x_{\max}}{L\,c'(x)\big|_{x \in \mathcal{X}_k}}.$$
    Assuming large $L$, the $\sigma_q^2$ approximation (equation 9 from MSE-Optimal Memoryless Scalar Quantization, lower equation) can be transformed as follows; the resulting integral expressions for $\sigma_q^2$ and $H_y$ are exercised numerically in a sketch following this list.
    $$\begin{aligned}
    \sigma_q^2 &= \frac{1}{12}\sum_{k=1}^{L} P_k\,\Delta_k^2
      \;=\; \frac{x_{\max}^2}{3L^2}\sum_{k=1}^{L}\frac{P_k}{c'(x)^2}\bigg|_{x\in\mathcal{X}_k}\\
    &= \frac{x_{\max}^2}{3L^2}\sum_{k=1}^{L}\frac{p_x(x)}{c'(x)^2}\,\Delta_k\bigg|_{x\in\mathcal{X}_k}
      \qquad\text{since } P_k = p_x(x)\big|_{x\in\mathcal{X}_k}\,\Delta_k \text{ for large } L\\
    &= \frac{x_{\max}^2}{3L^2}\int_{-x_{\max}}^{x_{\max}}\frac{p_x(x)}{c'(x)^2}\,dx.
    \end{aligned}$$
    Similarly,
    $$\begin{aligned}
    H_y &= -\sum_{k=1}^{L} P_k\log_2 P_k
      \;=\; -\sum_{k=1}^{L} p_x(x)\,\Delta_k\log_2\!\big(p_x(x)\,\Delta_k\big)\Big|_{x\in\mathcal{X}_k}\\
    &= -\sum_{k=1}^{L} p_x(x)\,\Delta_k\log_2 p_x(x)\Big|_{x\in\mathcal{X}_k}
      \;-\;\sum_{k=1}^{L} p_x(x)\,\Delta_k\log_2\Delta_k\Big|_{x\in\mathcal{X}_k}\\
    &= \underbrace{-\int_{-x_{\max}}^{x_{\max}} p_x(x)\log_2 p_x(x)\,dx}_{h_x:\ \text{``differential entropy''}}
      \;-\;\int_{-x_{\max}}^{x_{\max}} p_x(x)\log_2\frac{2x_{\max}}{L\,c'(x)}\,dx\\
    &= h_x \;-\; \log_2\!\Big(\frac{2x_{\max}}{L}\Big)\underbrace{\int_{-x_{\max}}^{x_{\max}} p_x(x)\,dx}_{=\,1}
      \;+\;\int_{-x_{\max}}^{x_{\max}} p_x(x)\log_2 c'(x)\,dx\\
    &= \text{constant} \;+\;\int_{-x_{\max}}^{x_{\max}} p_x(x)\log_2 c'(x)\,dx.
    \end{aligned}$$
  • Entropy-Minimizing Quantizer: Our goal is to choose the $c(x)$ which minimizes the entropy rate $H_y$ subject to a fixed error variance $\sigma_q^2$. We employ a Lagrange technique again, minimizing the cost $\int_{-x_{\max}}^{x_{\max}} p_x(x)\log_2 c'(x)\,dx$ under the constraint that the quantity $\int_{-x_{\max}}^{x_{\max}} p_x(x)\,c'(x)^{-2}\,dx$ equals a constant $C$. This yields the unconstrained cost function
    $$J_u\big(c'(x),\lambda\big) \;=\; \int_{-x_{\max}}^{x_{\max}} \underbrace{\Big[\,p_x(x)\log_2 c'(x) \;+\; \lambda\big(p_x(x)\,c'(x)^{-2} - C\big)\Big]}_{\varphi(c'(x),\,\lambda)}\,dx,$$
    with scalar $\lambda$, and the unconstrained optimization problem becomes
    $$\min_{c'(x),\,\lambda} \; J_u\big(c'(x),\lambda\big).$$
    The following technique is common in variational calculus (see, e.g., Optimal Systems Control by Sage & White). Say $a(x)$ minimizes a (scalar) cost $J\big(a(x)\big)$. Then for any (well-behaved) variation $\eta(x)$ from this optimal $a(x)$, we must have
    $$\frac{\partial}{\partial\epsilon}\, J\big(a(x)+\epsilon\,\eta(x)\big)\Big|_{\epsilon=0} \;=\; 0,$$
    where $\epsilon$ is a scalar. Applying this principle to our optimization problem, we search for the $c'(x)$ such that
    $$\forall\,\eta(x):\quad \frac{\partial}{\partial\epsilon}\, J_u\big(c'(x)+\epsilon\,\eta(x),\,\lambda\big)\Big|_{\epsilon=0} \;=\; 0.$$
    From [link] we find (using $\log_2 a = \log_2 e \cdot \log_e a$; a small symbolic check of this differentiation is sketched after this list)
    $$\begin{aligned}
    \frac{\partial J_u}{\partial\epsilon}\bigg|_{\epsilon=0}
    &= \int_{-x_{\max}}^{x_{\max}} \frac{\partial}{\partial\epsilon}\,\varphi\big(c'(x)+\epsilon\,\eta(x),\,\lambda\big)\Big|_{\epsilon=0}\,dx\\
    &= \int_{-x_{\max}}^{x_{\max}} \frac{\partial}{\partial\epsilon}\Big[\,p_x(x)\log_2(e)\log_e\!\big(c'(x)+\epsilon\,\eta(x)\big) \;+\; \lambda\big(p_x(x)\,(c'(x)+\epsilon\,\eta(x))^{-2} - C\big)\Big]\Big|_{\epsilon=0}\,dx\\
    &= \int_{-x_{\max}}^{x_{\max}} \Big[\log_2(e)\,p_x(x)\,\big(c'(x)+\epsilon\,\eta(x)\big)^{-1}\eta(x) \;-\; 2\lambda\,p_x(x)\,\big(c'(x)+\epsilon\,\eta(x)\big)^{-3}\eta(x)\Big]\Big|_{\epsilon=0}\,dx\\
    &= \int_{-x_{\max}}^{x_{\max}} p_x(x)\,c'(x)^{-1}\Big[\log_2(e) - 2\lambda\,c'(x)^{-2}\Big]\eta(x)\,dx,
    \end{aligned}$$
    and for this to vanish for any $\eta(x)$ we require
    $$\log_2(e) - 2\lambda\,c'(x)^{-2} = 0 \quad\Longrightarrow\quad c'(x) = \sqrt{\frac{2\lambda}{\log_2 e}}\;\;\text{(a constant!)}.$$
    Applying the boundary conditions,
    $$c(x_{\max}) = x_{\max}, \qquad c(-x_{\max}) = -x_{\max} \quad\Longrightarrow\quad c(x) = x.$$
    Thus, for large $L$, the quantizer that minimizes the entropy rate $H_y$ for a given quantization error variance $\sigma_q^2$ is the uniform quantizer. Plugging $c(x) = x$ into [link], the rightmost integral disappears and we have
    $$H_y\big|_{\text{uniform}} \;=\; h_x - \log_2\underbrace{\Big(\frac{2x_{\max}}{L}\Big)}_{\Delta},$$
    and using the large-$L$ uniform-quantizer error-variance approximation (equation 6 from Memoryless Scalar Quantization),
    $$H_y\big|_{\text{uniform}} \;=\; h_x - \tfrac{1}{2}\log_2\!\big(12\,\sigma_q^2\big).$$
    It is interesting to compare this result to the information-theoretic minimal average rate for transmission of a continuous-amplitude memoryless source $x$ of differential entropy $h_x$ at average distortion $\sigma_q^2$ (see Jayant & Noll or Berger):
    $$R_{\min} \;=\; h_x - \tfrac{1}{2}\log_2\!\big(2\pi e\,\sigma_q^2\big).$$
    Comparing the previous two equations, we find that (for a continuous-amplitude memoryless source) uniform quantization prior to entropy coding requires
    $$\tfrac{1}{2}\log_2\!\Big(\frac{\pi e}{6}\Big) \;\approx\; 0.255 \text{ bits/sample}$$
    more than the theoretically optimum transmission scheme, regardless of the distribution of $x$. Thus, 0.255 bits/sample (or about 1.5 dB, using the $6.02R$ dB-per-bit relationship) is the price paid for memoryless quantization. Short numerical checks of the integral formulas, the variational step, and this gap appear in the sketches below.
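
The following Python sketch (an addition, not part of the original module) exercises the two large-$L$ integral formulas derived above. It assumes a unit-variance Gaussian source with $x_{\max}=4$ and compares the uniform compander $c(x)=x$ against an arbitrarily chosen μ-law-style compander at the same error variance; the function names and the μ-law choice are illustrative assumptions.

```python
# Sketch only: numerically evaluate the large-L integral formulas
#   sigma_q^2 = (x_max^2 / (3 L^2)) * Int p(x)/c'(x)^2 dx
#   H_y       = h_x - log2(2 x_max / L) + Int p(x) log2 c'(x) dx
# for two compander slopes, matched to the same sigma_q^2 by solving for L.
# Assumptions: unit-variance Gaussian source, x_max = 4, mu-law compander for contrast.
import numpy as np
from scipy import integrate, stats

x_max, mu = 4.0, 30.0
pdf = stats.norm(0.0, 1.0).pdf

def cprime_uniform(x):
    return 1.0                                   # slope of c(x) = x

def cprime_mulaw(x):
    # slope of c(x) = x_max*sign(x)*ln(1 + mu|x|/x_max)/ln(1 + mu), chosen for illustration
    return (mu / np.log(1.0 + mu)) / (1.0 + mu * abs(x) / x_max)

def asymptotic_entropy(cprime, target_sigma_q2):
    A, _  = integrate.quad(lambda x: pdf(x) / cprime(x) ** 2, -x_max, x_max)
    B, _  = integrate.quad(lambda x: pdf(x) * np.log2(cprime(x)), -x_max, x_max)
    hx, _ = integrate.quad(lambda x: -pdf(x) * np.log2(pdf(x)), -x_max, x_max)
    L = x_max * np.sqrt(A / (3.0 * target_sigma_q2))   # L that yields the target variance
    return hx - np.log2(2.0 * x_max / L) + B

target = 1e-4
print("H_y, uniform compander:", asymptotic_entropy(cprime_uniform, target))
print("H_y, mu-law compander :", asymptotic_entropy(cprime_mulaw, target))
# The uniform compander should report the smaller H_y, consistent with c'(x) = constant.
```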
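As a check on the variational step, this SymPy sketch (also an addition, assuming SymPy is available) differentiates the integrand $\varphi(c'+\epsilon\eta,\lambda)$ with respect to $\epsilon$ at $\epsilon=0$ and then solves $\log_2(e) - 2\lambda\,c'^{-2} = 0$ for the constant slope; the symbol names are arbitrary.

```python
# Sketch only: symbolic check of the calculus-of-variations step.
# phi = p_x*log2(c') + lambda*(p_x*c'^(-2) - C); perturb c' -> c' + eps*eta,
# differentiate with respect to eps, and evaluate at eps = 0.
import sympy as sp

cp, eta, lam, px, C, eps = sp.symbols("cprime eta lamda p_x C epsilon", positive=True)

phi = px * sp.log(cp + eps * eta, 2) + lam * (px * (cp + eps * eta) ** (-2) - C)
dphi_at_0 = sp.simplify(sp.diff(phi, eps).subs(eps, 0))
print(dphi_at_0)
# Expect p_x*eta*(1/(c'*log(2)) - 2*lamda/c'**3), i.e. p_x*eta*c'^(-1)*[log2(e) - 2*lamda*c'^(-2)]

# Setting the bracketed factor to zero forces a constant slope:
bracket = 1 / sp.log(2) - 2 * lam / cp ** 2      # log2(e) = 1/ln(2)
print(sp.solve(sp.Eq(bracket, 0), cp))
# Expect a single positive constant, sqrt(2*lamda*ln(2)) = sqrt(2*lamda/log2(e))
```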
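Finally, a short numerical sketch (again an addition, with an assumed zero-mean, unit-variance Gaussian source and $x_{\max}=4$) checks both $H_y|_{\text{uniform}} \approx h_x - \tfrac{1}{2}\log_2(12\,\sigma_q^2)$ and the $\approx 0.255$ bit/sample gap to $R_{\min}$ for a moderately large number of levels.

```python
# Sketch only: check H_y|uniform ~= h_x - 0.5*log2(12*sigma_q^2) and the ~0.255 bit gap
# to R_min = h_x - 0.5*log2(2*pi*e*sigma_q^2).  Assumed source: zero-mean, unit-variance
# Gaussian, truncated (in effect) to [-x_max, x_max] with x_max = 4; L = 256 levels.
import numpy as np
from scipy import stats

x_max, L = 4.0, 256
delta = 2 * x_max / L                       # uniform cell width
sigma_q2 = delta ** 2 / 12                  # large-L uniform error-variance approximation

# Exact output entropy of the uniformly quantized Gaussian
edges = np.linspace(-x_max, x_max, L + 1)
P = np.diff(stats.norm.cdf(edges))
P = P[P > 0]
H_y = -np.sum(P * np.log2(P))

h_x = 0.5 * np.log2(2 * np.pi * np.e)       # differential entropy of N(0,1), in bits

print("H_y (exact sum over cells)        :", H_y)
print("h_x - 0.5*log2(12*sigma_q^2)      :", h_x - 0.5 * np.log2(12 * sigma_q2))
print("R_min = h_x - 0.5*log2(2*pi*e*s^2):", h_x - 0.5 * np.log2(2 * np.pi * np.e * sigma_q2))
print("predicted gap 0.5*log2(pi*e/6)    :", 0.5 * np.log2(np.pi * np.e / 6))
```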

Source:  OpenStax, An introduction to source-coding: quantization, dpcm, transform coding, and sub-band coding. OpenStax CNX. Sep 25, 2009 Download for free at http://cnx.org/content/col11121/1.2