
Chapter 10 describes the implementation of audio transform codes. Image transform codes in block cosine bases and wavelet bases are introduced, together with the JPEG and JPEG-2000 compression standards.

Denoising

Signal-acquisition devices add noise that can be reduced by estimators using prior information on signal properties. Signal processing has long remained mostly Bayesian and linear. Nonlinear smoothing algorithms existed in statistics, but these procedures were often ad hoc and complex. Two statisticians, Donoho and Johnstone (DonohoJ:94), changed the “game” by proving that simple thresholding in sparse representations can yield nearly optimal nonlinear estimators. This was the beginning of a considerable refinement of nonlinear estimation algorithms that is still ongoing.

Let us consider digital measurements that add a random noise $W[n]$ to the original signal $f[n]$:

$$X[n] = f[n] + W[n] \quad \text{for } 0 \le n < N.$$
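To make this measurement model concrete, the following sketch (in Python with NumPy, chosen here for illustration) generates a test signal, a Gaussian white noise realization, and the noisy observations. The signal, its length $N$, and the noise level $\sigma$ are all illustrative assumptions.

```python
# Minimal sketch of the measurement model X[n] = f[n] + W[n].
# The test signal f, its length N, and the noise level sigma are assumptions.
import numpy as np

N = 1024
sigma = 0.1                                   # noise standard deviation (assumed)
t = np.arange(N) / N
f = np.sin(4 * np.pi * t) * (t < 0.5)         # illustrative piecewise-smooth signal
W = sigma * np.random.randn(N)                # Gaussian white noise of variance sigma^2
X = f + W                                     # noisy observations X[n]
```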

The signal $f$ is estimated by transforming the noisy data $X$ with an operator $D$:

$$\tilde F = D X.$$

The risk of the estimator $\tilde F$ of $f$ is the average error, calculated with respect to the probability distribution of the noise $W$:

$$r(D, f) = \mathrm{E}\{\| f - D X \|^2\}.$$
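The risk can be approximated numerically by averaging the squared error over many noise realizations. The sketch below does this for a placeholder estimator $D$, a simple moving average chosen only for illustration; it can be called with the signal and noise level from the previous sketch.

```python
# Monte Carlo approximation of the risk r(D, f) = E{ ||f - D X||^2 },
# using a placeholder moving-average estimator D (illustration only).
import numpy as np

def D(X, width=9):
    kernel = np.ones(width) / width           # simple smoothing kernel
    return np.convolve(X, kernel, mode="same")

def empirical_risk(f, sigma, n_trials=200):
    errors = [np.sum((f - D(f + sigma * np.random.randn(f.size))) ** 2)
              for _ in range(n_trials)]       # squared error for each noise draw
    return np.mean(errors)                    # average over the noise distribution
```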

Bayes versus minimax

To optimize the estimation operator $D$, one must take advantage of prior information available about the signal $f$. In a Bayes framework, $f$ is considered a realization of a random vector $F$, and the Bayes risk is the expected risk calculated with respect to the prior probability distribution $\pi$ of the random signal model $F$:

$$r(D, \pi) = \mathrm{E}_\pi\{ r(D, F) \}.$$

Optimizing $D$ among all possible operators yields the minimum Bayes risk:

$$r_n(\pi) = \inf_{\text{all } D} r(D, \pi).$$

In the 1940s, Wald brought a new perspective to statistics with a decision theory partly imported from the theory of games. This point of view uses deterministic models, where signals are elements of a set $\Theta$, without specifying their probability distribution within this set. To control the risk for any $f \in \Theta$, we compute the maximum risk:

$$r(D, \Theta) = \sup_{f \in \Theta} r(D, f).$$

The minimax risk is the lower bound computed over all operators $D$:

$$r_n(\Theta) = \inf_{\text{all } D} r(D, \Theta).$$

In practice, the goal is to find an operator $D$ that is simple to implement and yields a risk close to the minimax lower bound.

Thresholding estimators

It is tempting to restrict calculations to linear operators $D$ because of their simplicity. Optimal linear Wiener estimators are introduced in Chapter 11. Figure (a) shows an image contaminated by Gaussian white noise. Figure (b) shows an optimized linear filtering estimation $\tilde F = X \star h[n]$, which is therefore diagonal in a Fourier basis $\mathcal{B}$. This convolution operator averages the noise but also blurs the image, and it keeps low-frequency noise by retaining the image's low frequencies.
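As a point of comparison for the thresholding estimators below, here is a minimal sketch of a linear estimator diagonal in the Fourier basis: an ideal low-pass filter applied to the noisy data. The cutoff frequency is an assumed tuning parameter; as noted above, such a filter averages the noise but also blurs sharp transitions.

```python
# Sketch of a linear convolution estimator F~ = X * h, implemented as an
# ideal low-pass filter (diagonal in the Fourier basis). Illustration only;
# the cutoff frequency is an assumed parameter.
import numpy as np

def linear_lowpass_estimator(X, cutoff=40):
    Xhat = np.fft.fft(X)
    freqs = np.fft.fftfreq(X.size, d=1.0 / X.size)   # integer frequency indices
    Xhat[np.abs(freqs) > cutoff] = 0.0               # keep only the low frequencies
    return np.real(np.fft.ifft(Xhat))                # blurred but less noisy estimate
```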

If $f$ has a sparse representation in a dictionary, then projecting $X$ on the vectors of this sparse support can considerably improve linear estimators. The difficulty is identifying the sparse support of $f$ from the noisy data $X$. Donoho and Johnstone (DonohoJ:94) proved that, in an orthonormal basis, a simple thresholding of noisy coefficients does the trick. Noisy signal coefficients in an orthonormal basis $\mathcal{B} = \{g_m\}_{m \in \Gamma}$ are

$$\langle X, g_m \rangle = \langle f, g_m \rangle + \langle W, g_m \rangle \quad \text{for } m \in \Gamma.$$

Thresholding these noisy coefficients yields an orthogonal projection estimator

$$\tilde F = X_{\tilde\Lambda_T} = \sum_{m \in \tilde\Lambda_T} \langle X, g_m \rangle \, g_m \quad \text{with} \quad \tilde\Lambda_T = \{\, m \in \Gamma : |\langle X, g_m \rangle| \ge T \,\}.$$

The set $\tilde\Lambda_T$ is an estimate of an approximation support of $f$. It is hopefully close to the optimal approximation support $\Lambda_T = \{\, m \in \Gamma : |\langle f, g_m \rangle| \ge T \,\}$.
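A minimal sketch of this thresholding estimator in an orthonormal wavelet basis is given below, assuming the PyWavelets package (pywt) is available. Coefficients below the threshold are set to zero; the coarsest approximation coefficients are kept untouched, a common practical choice not spelled out in the formula above.

```python
# Hard thresholding of wavelet coefficients: keep <X, g_m> only when
# |<X, g_m>| >= T. Assumes the PyWavelets package (pywt) is installed.
import numpy as np
import pywt

def wavelet_hard_threshold(X, T, wavelet="db4"):
    coeffs = pywt.wavedec(X, wavelet, mode="periodization")      # noisy coefficients
    kept = [coeffs[0]] + [pywt.threshold(c, T, mode="hard")      # zero small coefficients
                          for c in coeffs[1:]]
    return pywt.waverec(kept, wavelet, mode="periodization")     # reconstruct F~
```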

[link] (b) shows the estimated approximation set $\tilde\Lambda_T$ of noisy wavelet coefficients, $|\langle X, \psi_{j,n} \rangle| \ge T$, which can be compared to the optimal approximation support $\Lambda_T$ shown in [link] (b). The estimation in [link] (d), computed from the wavelet coefficients in $\tilde\Lambda_T$, has considerably reduced the noise in regular regions while keeping the sharpness of edges by preserving large wavelet coefficients. This estimation is improved with a translation-invariant procedure that averages the estimator over several translated wavelet bases. Thresholding wavelet coefficients implements an adaptive smoothing, which averages the data $X$ with a kernel that depends on the estimated regularity of the original signal $f$.
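One way to realize the translation-invariant averaging mentioned above is "cycle spinning": apply the thresholding estimator to circularly shifted copies of the data and average the unshifted results. The sketch below reuses wavelet_hard_threshold from the previous sketch; the number of shifts is an assumed parameter.

```python
# Translation-invariant denoising by cycle spinning: average the thresholding
# estimator over several circular shifts of the data. The number of shifts is
# an illustrative assumption; reuses wavelet_hard_threshold defined above.
import numpy as np

def cycle_spin_denoise(X, T, n_shifts=8):
    acc = np.zeros(X.size)
    for s in range(n_shifts):
        est = wavelet_hard_threshold(np.roll(X, s), T)   # estimate on shifted data
        acc += np.roll(est, -s)                          # shift the estimate back
    return acc / n_shifts                                # average over translations
```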

Donoho and Johnstone proved that for a Gaussian white noise of variance $\sigma^2$, choosing $T = \sigma \sqrt{2 \log_e N}$ yields a risk $\mathrm{E}\{\| f - \tilde F \|^2\}$ of the order of $\| f - f_{\Lambda_T} \|^2$, up to a $\log_e N$ factor. This spectacular result shows that the estimated support $\tilde\Lambda_T$ does nearly as well as the optimal unknown support $\Lambda_T$. The resulting risk is small if the representation is sparse and precise.
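When the noise level is not known, the threshold above can still be computed from the data. The sketch below estimates $\sigma$ from the finest-scale wavelet coefficients with the median absolute deviation (a standard robust estimate, stated here as an assumption) and returns $T = \sigma \sqrt{2 \log_e N}$.

```python
# Compute the threshold T = sigma * sqrt(2 * log_e(N)). The noise level sigma
# is estimated from the finest-scale wavelet coefficients via the median
# absolute deviation (a standard robust estimate; assumption). Uses pywt.
import numpy as np
import pywt

def universal_threshold(X, wavelet="db4"):
    finest = pywt.wavedec(X, wavelet, mode="periodization")[-1]  # finest-scale details
    sigma_hat = np.median(np.abs(finest)) / 0.6745               # robust sigma estimate
    return sigma_hat * np.sqrt(2.0 * np.log(X.size))             # T = sigma sqrt(2 log N)
```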

The set $\tilde\Lambda_T$ in [link] (b) “looks” different from the $\Lambda_T$ in [link] (b) because it has more isolated points. This indicates that some prior information on the geometry of $\Lambda_T$ could be used to improve the estimation. For audio noise reduction, thresholding estimators are applied in sparse representations provided by time-frequency bases. Similarly, isolated time-frequency coefficients produce a highly annoying “musical noise.” Musical noise is removed with a block thresholding that regularizes the geometry of the estimated support $\tilde\Lambda_T$ and avoids leaving isolated points. Block thresholding also improves wavelet estimators.
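As a rough illustration of the block-thresholding idea, the sketch below groups coefficients into small blocks and keeps or kills each block as a whole according to its average energy, so isolated coefficients do not survive on their own. The block size and the keep/kill rule are illustrative assumptions, not the specific block estimators analyzed later in the book.

```python
# Illustrative block thresholding on a 1-D array of coefficients: each block
# is kept entirely or set to zero based on its average energy. The block size
# and decision rule are assumptions for illustration.
import numpy as np

def block_threshold(coeffs, T, block_size=4):
    out = np.zeros_like(coeffs, dtype=float)
    for start in range(0, coeffs.size, block_size):
        block = coeffs[start:start + block_size]
        if np.mean(block ** 2) >= T ** 2:                # block energy test
            out[start:start + block_size] = block        # keep the whole block
    return out
```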

If $W$ is a Gaussian noise and signals in $\Theta$ have a sparse representation in $\mathcal{B}$, then Chapter 11 proves that thresholding estimators can produce a nearly minimax risk. In particular, wavelet thresholding estimators have a nearly minimax risk for large classes of piecewise smooth signals, including bounded variation images.

Source: OpenStax, A wavelet tour of signal processing, the sparse way. OpenStax CNX. Sep 14, 2009. Download for free at http://cnx.org/content/col10711/1.3