
Sparsity with regularity

Sparse representations are obtained in a basis that takes advantage of some form of regularity of the input signals, creating many small-amplitude coefficients. Since wavelets have localized support, functions with isolated singularities produce few large-amplitude wavelet coefficients in the neighborhood of these singularities. Nonlinear wavelet approximation produces a small error over spaces of functions that do not have “too many” sharp transitions and singularities. Chapter 9 shows that functions having a bounded total variation norm are useful models for images with nonfractal (finite-length) edges.
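A minimal numerical sketch of this nonlinear approximation, assuming the PyWavelets package (pywt), a 'db4' wavelet, and an illustrative piecewise-smooth test signal (none of these choices comes from the text): the M largest-magnitude wavelet coefficients are kept and the others are set to zero.

```python
import numpy as np
import pywt

N = 1024
t = np.linspace(0, 1, N)
# Piecewise-smooth test signal: a sinusoid with two isolated jumps.
f = np.sin(4 * np.pi * t) + (t > 0.3) - 0.5 * (t > 0.7)

# Orthogonal wavelet decomposition (periodized, so the transform is orthonormal).
coeffs = pywt.wavedec(f, 'db4', mode='periodization')
flat = np.concatenate(coeffs)

# Nonlinear approximation: keep only the M largest-magnitude coefficients.
M = 64
T = np.sort(np.abs(flat))[-M]                       # threshold selecting M coefficients
kept = [np.where(np.abs(c) >= T, c, 0.0) for c in coeffs]

f_M = pywt.waverec(kept, 'db4', mode='periodization')
print("kept", M, "of", flat.size, "coefficients,",
      "squared error =", float(np.sum((f - f_M) ** 2)))
```

Since the periodized Daubechies transform is orthonormal, the squared error equals the energy of the discarded coefficients; it stays small here because only the coefficients at coarse scales and near the two jumps are large.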

Edges often define regular geometric curves. Wavelets detect the location of edges, but their square support cannot take advantage of this potential geometric regularity. Sparser representations are defined in dictionaries of curvelets or bandlets, which have elongated supports in multiple directions that can be adapted to this geometric regularity. In such dictionaries, the approximation support $\Lambda_T$ is smaller but provides explicit information about the local geometric properties of edges, such as their orientation. In this context, geometry does not apply only to multidimensional signals. Audio signals, such as musical recordings, also have a complex geometric regularity in time-frequency dictionaries.

Compression

Storage limitations and fast transmission through narrow-bandwidth channels require compression of signals while minimizing degradation. Transform codes compress signals by coding a sparse representation. Chapter 10 introduces the information theory needed to understand these codes and to optimize their performance.

In a compression framework, the analog signal has already been discretized into a signal $f[n]$ of size $N$. This discrete signal is decomposed in an orthonormal basis $\mathcal{B} = \{g_m\}_{m \in \Gamma}$ of $\mathbb{C}^N$:

$$f = \sum_{m \in \Gamma} \langle f, g_m \rangle \, g_m.$$

Coefficients $\langle f, g_m \rangle$ are approximated by quantized values $Q(\langle f, g_m \rangle)$. If $Q$ is a uniform quantizer of step $\Delta$, then $|x - Q(x)| \leq \Delta/2$; and if $|x| < \Delta/2$, then $Q(x) = 0$. The signal $\tilde{f}$ restored from quantized coefficients is

$$\tilde{f} = \sum_{m \in \Gamma} Q(\langle f, g_m \rangle) \, g_m.$$
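The following sketch makes these two formulas concrete under stated assumptions: SciPy's orthonormal DCT-II stands in for the orthonormal basis $\mathcal{B}$ (the text's wavelet basis would work identically), the test signal and the step $\Delta = 0.1$ are arbitrary illustrative choices.

```python
import numpy as np
from scipy.fft import dct, idct

def quantize(x, delta):
    """Uniform quantizer of step delta: |x - Q(x)| <= delta/2,
    and Q(x) = 0 whenever |x| < delta/2."""
    return delta * np.round(x / delta)

N = 256
t = np.arange(N) / N
# Smooth test signal: a low-frequency cosine plus a localized bump.
f = np.cos(2 * np.pi * 3 * t) + np.exp(-((t - 0.5) ** 2) / 0.005)

a = dct(f, norm='ortho')              # coefficients <f, g_m> in an orthonormal basis
delta = 0.1
a_q = quantize(a, delta)              # Q(<f, g_m>)
f_tilde = idct(a_q, norm='ortho')     # restored signal f~

print("distortion ||f~ - f||^2 =", float(np.sum((f_tilde - f) ** 2)))
print("coefficients quantized to zero:", int(np.sum(a_q == 0)), "of", N)
```

Because the basis is orthonormal, the distortion equals the squared quantization error summed over the coefficients; for a sparse signal most coefficients lie far inside the zero bin and contribute little.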

An entropy code records these coefficients with $R$ bits. The goal is to minimize the signal distortion rate $d(R, f) = \|\tilde{f} - f\|^2$.

The coefficients not quantized to zero correspond to the set $\Lambda_T = \{ m \in \Gamma : |\langle f, g_m \rangle| \geq T \}$ with $T = \Delta/2$. For sparse signals, Chapter 10 shows that the bit budget $R$ is dominated by the number of bits needed to code $\Lambda_T$ in $\Gamma$, which is nearly proportional to its size $|\Lambda_T|$. This means that the “information” about a sparse representation is mostly geometric. Moreover, the distortion is dominated by the nonlinear approximation error $\|f - f_{\Lambda_T}\|^2$, for $f_{\Lambda_T} = \sum_{m \in \Lambda_T} \langle f, g_m \rangle g_m$. Compression is thus a sparse approximation problem. For a given distortion $d(R, f)$, minimizing $R$ requires reducing $|\Lambda_T|$ and thus optimizing the sparsity.
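A sketch of this relation, continuing the illustrative DCT basis and uniform quantizer used above (again assumptions for illustration, not the book's wavelet coder): it computes the support $\Lambda_T$, the approximation $f_{\Lambda_T}$, and compares the total distortion with the nonlinear approximation error.

```python
import numpy as np
from scipy.fft import dct, idct

def quantize(x, delta):
    return delta * np.round(x / delta)          # uniform quantizer of step delta

N = 256
t = np.arange(N) / N
f = np.cos(2 * np.pi * 3 * t) + np.exp(-((t - 0.5) ** 2) / 0.005)

a = dct(f, norm='ortho')                        # <f, g_m>
delta = 0.1
T = delta / 2

support = np.abs(a) >= T                        # Lambda_T = {m : |<f, g_m>| >= T}
f_lambda = idct(np.where(support, a, 0.0), norm='ortho')    # f_{Lambda_T}
f_tilde = idct(quantize(a, delta), norm='ortho')            # quantized reconstruction

print("|Lambda_T| =", int(support.sum()), "of", N)
print("approximation error ||f - f_Lambda_T||^2 =", float(np.sum((f - f_lambda) ** 2)))
print("total distortion    ||f - f~||^2         =", float(np.sum((f - f_tilde) ** 2)))
```

The total distortion exceeds the approximation error by at most $|\Lambda_T| (\Delta/2)^2$, the quantization error on the retained coefficients, which is why reducing $|\Lambda_T|$ for a given error is what drives the coder.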

The number of bits needed to code $\Lambda_T$ can take advantage of any prior information on the geometry. [link] (b) shows that large wavelet coefficients are not randomly distributed. They tend to be aggregated toward larger scales, and at fine scales they are regrouped along edge curves or in texture regions. Using such prior geometric models is a source of gain in coders such as JPEG-2000.

Source:  OpenStax, A wavelet tour of signal processing, the sparse way. OpenStax CNX. Sep 14, 2009 Download for free at http://cnx.org/content/col10711/1.3