This collection reviews fundamental concepts underlying the use of concise models for signal processing. Topics are presented from a geometric perspective and include low-dimensional linear, sparse, and manifold-based signal models, approximation, compression, dimensionality reduction, and Compressed Sensing.

A new theory known as Compressed Sensing (CS) has recently emerged that can also be categorized as a type of dimensionality reduction. Like manifold learning, CS is strongly model-based (relying on sparsity in particular). However, unlike many of the standard techniques in dimensionality reduction (such as manifold learning or the JL lemma), the goal of CS is to maintain a low-dimensional representation of a signal x from which a faithful approximation to x can be recovered. In a sense, this more closely resembles the traditional problem of data compression (see Compression). In CS, however, the encoder requires no a priori knowledge of the signal structure. Only the decoder uses the model (sparsity) to recover the signal. We justify such an approach again using geometric arguments.

Motivation

Consider a signal x ∈ R^N , and suppose that the basis Ψ provides a K -sparse representation of x

x = Ψ α ,
with ‖ α ‖_0 = K . (In this section, we focus on exactly K -sparse signals, though many of the key ideas translate to compressible signals  [link] , [link] . In addition, we note that the CS concepts are also extendable to tight frames.)
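To make the sparse model concrete, here is a minimal numerical sketch (not part of the original text): it builds an exactly K-sparse coefficient vector α and synthesizes x = Ψ α, where Ψ is taken, purely as an assumption for illustration, to be an orthonormal inverse-DCT basis, and the values of N and K are arbitrary.

    import numpy as np
    from scipy.fft import idct

    # Illustrative sketch: construct an exactly K-sparse signal x = Psi @ alpha,
    # where Psi is assumed to be an orthonormal inverse-DCT basis and
    # ||alpha||_0 = K.
    N, K = 256, 8
    rng = np.random.default_rng(0)

    alpha = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)   # locations of the K nonzeros
    alpha[support] = rng.standard_normal(K)          # values of the K nonzeros

    x = idct(alpha, norm="ortho")                    # x = Psi @ alpha for this choice of Psi

    assert np.count_nonzero(alpha) == K              # alpha is exactly K-sparse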

As we discussed in Compression , the standard procedure for compressing sparse signals, known as transform coding, is to (i) acquire the full N -sample signal x ; (ii) compute the complete set of transform coefficients α ; (iii) locate the K largest, significant coefficients and discard the (many) small coefficients; (iv) encode the values and locations of the largest coefficients.
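As a rough illustration of steps (i)-(iv), the sketch below implements a toy transform coder, again assuming a DCT basis for Ψ; the returned values and locations are what an actual coder would go on to quantize and entropy-code.

    import numpy as np
    from scipy.fft import dct, idct

    def transform_code(x, K):
        """Toy transform coder following steps (i)-(iv): take the full N-sample
        signal, compute all N transform coefficients, keep the K largest in
        magnitude, and return their values and locations."""
        alpha = dct(x, norm="ortho")              # (ii) complete set of coefficients
        locs = np.argsort(np.abs(alpha))[-K:]     # (iii) indices of the K largest
        vals = alpha[locs]                        # (iv) values + locations to encode
        return vals, locs

    def transform_decode(vals, locs, N):
        """Rebuild the K-term approximation from the encoded values/locations."""
        alpha_hat = np.zeros(N)
        alpha_hat[locs] = vals
        return idct(alpha_hat, norm="ortho")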

This procedure has three inherent inefficiencies: First, for a high-dimensional signal, we must start with a large number of samples N . Second, the encoder must compute all N of the transform coefficients α , even though it will discard all but K of them. Third, the encoder must encode the locations of the large coefficients, which requires increasing the coding rate since the locations change with each signal.

Incoherent projections

This raises a simple question: For a given signal, is it possible to directly estimate the set of large α ( n ) 's that will not be discarded? While this seems improbable, Candès, Romberg, and Tao  [link] , [link] and Donoho [link] have shown that a reduced set of projections can contain enough information to reconstruct sparse signals. An offshoot of this work, often referred to as Compressed Sensing (CS) [link] , [link] , [link] , [link] , [link] , [link] , [link] , has emerged that builds on this principle.

In CS, we do not measure or encode the K significant α ( n ) directly. Rather, we measure and encode M < N projections y ( m ) = ⟨ x , φ_m ⟩ of the signal onto a second set of functions { φ_m } , m = 1 , 2 , ... , M . In matrix notation, we measure

y = Φ x ,
where y is an M × 1 column vector and the measurement basis matrix Φ is M × N with each row a basis vector φ_m . Since M < N , recovery of the signal x from the measurements y is ill-posed in general; however, the additional assumption of signal sparsity makes recovery possible and practical.
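The sketch below illustrates this measurement step and one possible decoder. It is an assumption-laden illustration, not the recovery algorithm of the cited references: Φ is drawn as a random Gaussian matrix (a common but here assumed choice), the signal is sparse in the identity basis, and recovery uses orthogonal matching pursuit, a simple greedy method that exploits the sparsity model.

    import numpy as np

    rng = np.random.default_rng(1)
    N, M, K = 256, 64, 8

    # K-sparse signal; Psi is taken as the identity to keep the sketch short
    x = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)
    x[support] = rng.standard_normal(K)

    # Random Gaussian measurement matrix Phi (M x N); each row plays the role of phi_m
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    y = Phi @ x                                       # M < N measurements

    def omp(Phi, y, K):
        """Greedy decoder (orthogonal matching pursuit): repeatedly pick the
        column of Phi most correlated with the residual, then least-squares
        re-fit on the selected support."""
        residual, support = y.copy(), []
        for _ in range(K):
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x_hat = np.zeros(Phi.shape[1])
        x_hat[support] = coef
        return x_hat

    x_hat = omp(Phi, y, K)
    print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # relative error, near zero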






Source:  OpenStax, Concise signal models. OpenStax CNX. Sep 14, 2009 Download for free at http://cnx.org/content/col10635/1.4
