Entropy-Minimizing Quantizer: Our goal is to choose $c(x)$ which minimizes the entropy rate $H_y$ subject to a fixed error variance $\sigma_q^2$.
We again employ a Lagrange technique, minimizing the cost $\int_{-x_{max}}^{x_{max}} p_x(x)\,\log_2 c'(x)\,dx$ under the constraint that the quantity $\int_{-x_{max}}^{x_{max}} p_x(x)\left(c'(x)\right)^{-2}dx$ equals a constant $C$.
This yields the unconstrained cost function
$$J_u\left(c'(x),\lambda\right) \;=\; \int_{-x_{max}}^{x_{max}} \underbrace{\left[\,p_x(x)\,\log_2 c'(x) \;+\; \lambda\left(p_x(x)\left(c'(x)\right)^{-2} - C\right)\right]}_{\phi\left(c'(x),\,\lambda\right)}\,dx,$$
with scalar $\lambda$, and the unconstrained optimization problem becomes
$$\min_{c'(x),\,\lambda}\; J_u\left(c'(x),\lambda\right).$$
The following technique is common in variational calculus (see, e.g., Sage & White, *Optimum Systems Control*).
Say $a^\star(x)$ minimizes a (scalar) cost $J\left(a(x)\right)$. Then for any (well-behaved) variation $\eta(x)$ from this optimal $a^\star(x)$, we must have
$$\frac{\partial}{\partial\epsilon} J\left(a^\star(x) + \epsilon\,\eta(x)\right)\Big|_{\epsilon=0} = 0,$$
where $\epsilon$ is a scalar.
Applying this principle to our optimization problem, we search for $c'(x)$ such that
$$\forall\,\eta(x),\quad \frac{\partial}{\partial\epsilon}\, J_u\left(c'(x) + \epsilon\,\eta(x),\,\lambda\right)\Big|_{\epsilon=0} = 0.$$
From [link] we find (using $\log_2 a = \log_2 e \cdot \log_e a$)
$$\begin{array}{rcl}
\displaystyle \frac{\partial J_u}{\partial\epsilon}\Big|_{\epsilon=0}
&=& \displaystyle \int_{-x_{max}}^{x_{max}} \frac{\partial}{\partial\epsilon}\,\phi\left(c'(x)+\epsilon\,\eta(x),\,\lambda\right)\Big|_{\epsilon=0}\,dx \\[1ex]
&=& \displaystyle \int_{-x_{max}}^{x_{max}} \frac{\partial}{\partial\epsilon}\left[\, p_x(x)\,\log_2(e)\,\log_e\!\left(c'(x)+\epsilon\,\eta(x)\right) + \lambda\left(p_x(x)\left(c'(x)+\epsilon\,\eta(x)\right)^{-2} - C\right)\right]\Big|_{\epsilon=0}\,dx \\[1ex]
&=& \displaystyle \int_{-x_{max}}^{x_{max}} \left[\log_2(e)\; p_x(x)\left(c'(x)+\epsilon\,\eta(x)\right)^{-1}\eta(x) \;-\; 2\lambda\, p_x(x)\left(c'(x)+\epsilon\,\eta(x)\right)^{-3}\eta(x)\right]\Big|_{\epsilon=0}\,dx \\[1ex]
&=& \displaystyle \int_{-x_{max}}^{x_{max}} p_x(x)\left(c'(x)\right)^{-1}\left[\log_2(e) - 2\lambda\left(c'(x)\right)^{-2}\right]\eta(x)\,dx
\end{array}$$
and for this to vanish for every $\eta(x)$ we require
$$\log_2(e) - 2\lambda\left(c'(x)\right)^{-2} = 0 \quad\iff\quad c'(x) = \underbrace{\sqrt{\frac{2\lambda}{\log_2 e}}}_{\text{a constant!}}.$$
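This constant-slope conclusion can be sanity-checked without variational calculus: by Jensen's inequality, among slopes $c'(x)$ sharing the same constraint value $C$, the constant one minimizes the entropy term $\int p_x(x)\log_2 c'(x)\,dx$. A minimal numerical sketch, where the truncated-Gaussian density and the particular varying slope are illustrative assumptions (not from the text):

```python
import numpy as np

# Sanity check of the variational result: among compander slopes c'(x) with
# the same constraint value C = E[(c'(x))^{-2}], a constant slope yields the
# smallest entropy term E[log2 c'(x)].  The truncated-Gaussian density and
# the particular varying slope below are illustrative choices.
xmax = 4.0
x = np.linspace(-xmax, xmax, 200001)
pdf = np.exp(-x**2 / 2)

def integrate(y):                        # trapezoidal rule on the grid x
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

pdf /= integrate(pdf)                    # normalize on [-xmax, xmax]

def E(f):                                # expectation under pdf
    return integrate(pdf * f)

slope_var = 1.0 + 0.5 * np.cos(x)        # an arbitrary positive, non-constant c'(x)
C = E(slope_var**-2)                     # its constraint value
slope_const = np.full_like(x, C**-0.5)   # constant slope with the same C

print(E(np.log2(slope_const)), "<", E(np.log2(slope_var)))  # constant slope wins
```

Any other positive, non-constant slope with the same $C$ would show the same strict inequality.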
Since a constant $c'(x)$ implies an affine $c(x)$, applying the boundary conditions
$$\left\{\begin{array}{l} c(x_{max}) = x_{max} \\ c(-x_{max}) = -x_{max} \end{array}\right\} \quad\to\quad \boxed{c(x) = x}.$$
Thus, for large $L$, the quantizer that minimizes the entropy rate $H_y$ for a given quantization error variance $\sigma_q^2$ is the uniform quantizer. Plugging $c(x)=x$ into [link], the rightmost integral vanishes and we have
$$H_y\big|_{\text{uniform}} \;=\; h_x - \log_2 \underbrace{\frac{2\,x_{max}}{L}}_{\Delta},$$
and using the large-$L$ uniform quantizer error variance approximation $\sigma_q^2 \approx \Delta^2/12$ (equation 6 from Memoryless Scalar Quantization), so that $\log_2\Delta = \frac{1}{2}\log_2\left(12\,\sigma_q^2\right)$,
$$H_y\big|_{\text{uniform}} \;=\; h_x - \frac{1}{2}\log_2\left(12\,\sigma_q^2\right).$$
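The high-rate formula above is easy to verify numerically. In the sketch below, the unit-Gaussian source, the clipping point $x_{max}=5$, and $L=1024$ levels are my illustrative choices; the discrete output entropy is compared against $h_x - \log_2\Delta$:

```python
import numpy as np
from math import erf, sqrt, pi, e, log2

# Verify H_y ≈ h_x - log2(Δ) for a large-L uniform quantizer.  Source: unit
# Gaussian, effectively confined to [-xmax, xmax]; these are illustrative picks.
xmax, L = 5.0, 1024
delta = 2 * xmax / L
edges = np.linspace(-xmax, xmax, L + 1)

# Cell probabilities from the Gaussian CDF, then the discrete entropy H_y.
cdf = np.array([0.5 * (1 + erf(t / sqrt(2))) for t in edges])
pk = np.diff(cdf)
pk = pk[pk > 0]
H_y = -float(np.sum(pk * np.log2(pk)))

h_x = 0.5 * log2(2 * pi * e)             # differential entropy of N(0,1)
print(H_y, h_x - log2(delta))            # agree to within ~0.01 bit
```

With $\sigma_q^2 \approx \Delta^2/12$, the second printed value also equals $h_x - \frac{1}{2}\log_2(12\,\sigma_q^2)$.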
It is interesting to compare this result to the information-theoretic minimal average rate for transmission of a continuous-amplitude memoryless source $x$ of differential entropy $h_x$ at average distortion $\sigma_q^2$ (see Jayant & Noll or Berger):
$$R_{min} \;=\; h_x - \frac{1}{2}\log_2\left(2\pi e\,\sigma_q^2\right).$$
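As a quick consistency check on $R_{min}$: for a Gaussian source, where $h_x = \frac{1}{2}\log_2(2\pi e\,\sigma_x^2)$, the bound collapses to the familiar Gaussian rate-distortion function $\frac{1}{2}\log_2(\sigma_x^2/\sigma_q^2)$. The variance and distortion values below are illustrative:

```python
from math import log2, pi, e

# For a Gaussian source, h_x = 0.5*log2(2*pi*e*sigma_x^2), so R_min collapses
# to the classic rate-distortion function 0.5*log2(sigma_x^2 / sigma_q^2).
sigma_x2, sigma_q2 = 4.0, 0.25           # illustrative variance and distortion
h_x = 0.5 * log2(2 * pi * e * sigma_x2)
R_min = h_x - 0.5 * log2(2 * pi * e * sigma_q2)
print(round(R_min, 9))                   # 2.0 bits/sample, i.e. 0.5*log2(16)
```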
Comparing the previous two equations, we find that (for a continuous-amplitude memoryless source) uniform quantization prior to entropy coding requires
$$\frac{1}{2}\log_2\left(\frac{\pi e}{6}\right) \;\approx\; \boxed{0.255\ \text{bits/sample}}$$
more than the theoretically optimum transmission scheme, regardless of the distribution of $x$. Thus, 0.255 bits/sample (or $\approx 1.5$ dB via the $6.02R$ relationship) is the price paid for memoryless quantization.
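The gap and its dB equivalent are quick to reproduce as plain arithmetic:

```python
from math import log2, pi, e

# The memoryless-quantization penalty: 0.5*log2(pi*e/6) bits/sample, and its
# SNR equivalent via the 6.02 dB-per-bit rule.
gap_bits = 0.5 * log2(pi * e / 6)
print(round(gap_bits, 3))                # 0.255 bits/sample
print(round(6.02 * gap_bits, 2))         # 1.53 dB
```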