# 0.7 Compressed sensing

This collection reviews fundamental concepts underlying the use of concise models for signal processing. Topics are presented from a geometric perspective and include low-dimensional linear, sparse, and manifold-based signal models, approximation, compression, dimensionality reduction, and Compressed Sensing.

A new theory known as Compressed Sensing (CS) has recently emerged that can also be categorized as a type of dimensionality reduction. Like manifold learning, CS is strongly model-based (relying on sparsity in particular). However, unlike many of the standard techniques in dimensionality reduction (such as manifold learning or the JL lemma), the goal of CS is to maintain a low-dimensional representation of a signal $x$ from which a faithful approximation to $x$ can be recovered. In a sense, this more closely resembles the traditional problem of data compression (see Compression). In CS, however, the encoder requires no a priori knowledge of the signal structure. Only the decoder uses the model (sparsity) to recover the signal. We justify such an approach again using geometric arguments.

## Motivation

Consider a signal $x\in {\mathbb{R}}^{N}$, and suppose that the basis $\Psi$ provides a $K$-sparse representation of $x$

$x=\Psi \alpha ,$
with ${\parallel \alpha \parallel }_{0}=K$. (In this section, we focus on exactly $K$-sparse signals, though many of the key ideas translate to compressible signals [link], [link]. In addition, we note that the CS concepts are also extendable to tight frames.)
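To make the model concrete, here is a minimal NumPy sketch that constructs an exactly $K$-sparse signal. The dimensions $N$ and $K$, the random seed, and the random orthonormal basis standing in for $\Psi$ are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 256, 8                        # ambient dimension and sparsity level (illustrative)
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))  # random orthonormal basis standing in for Psi

# K-sparse coefficient vector alpha: exactly K nonzero entries at random positions
alpha = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
alpha[support] = rng.standard_normal(K)

x = Psi @ alpha                      # the signal x = Psi alpha
assert np.count_nonzero(alpha) == K  # ||alpha||_0 = K
```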

As we discussed in Compression, the standard procedure for compressing sparse signals, known as transform coding, is to (i) acquire the full $N$-sample signal $x$; (ii) compute the complete set of transform coefficients $\alpha$; (iii) locate the $K$ largest, significant coefficients and discard the (many) small coefficients; (iv) encode the values and locations of the largest coefficients.
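Continuing the sketch above (and reusing `Psi`, `x`, `N`, and `K` from it), the four steps might look as follows. The encoder/decoder split shown is a toy illustration of the idea, not a full quantizer or entropy coder.

```python
# Transform coding, following steps (i)-(iv): x is the acquired N-sample signal.
alpha_full = Psi.T @ x                           # (ii) compute all N transform coefficients
locations = np.argsort(np.abs(alpha_full))[-K:]  # (iii) locate the K largest coefficients
values = alpha_full[locations]                   # (iv) encode these values and their locations

# Decoder: rebuild the K-term approximation from (values, locations) alone
alpha_hat = np.zeros(N)
alpha_hat[locations] = values
x_hat = Psi @ alpha_hat
assert np.allclose(x_hat, x)                     # exact, since x is exactly K-sparse
```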

This procedure has three inherent inefficiencies: First, for a high-dimensional signal, we must start with a large number of samples $N$. Second, the encoder must compute all $N$ of the transform coefficients $\alpha$, even though it will discard all but $K$ of them. Third, the encoder must encode the locations of the large coefficients, which requires increasing the coding rate since the locations change with each signal.

## Incoherent projections

This raises a simple question: For a given signal, is it possible to directly estimate the set of large $\alpha \left(n\right)$'s that will not be discarded? While this seems improbable, Candès, Romberg, and Tao [link], [link] and Donoho [link] have shown that a reduced set of projections can contain enough information to reconstruct sparse signals. An offshoot of this work, often referred to as Compressed Sensing (CS) [link], [link], [link], [link], [link], [link], [link], has emerged that builds on this principle.

In CS, we do not measure or encode the $K$ significant $\alpha \left(n\right)$ directly. Rather, we measure and encode $M<N$ projections $y\left(m\right)=\langle x,{\phi }_{m}\rangle$ of the signal onto a second set of functions $\left\{{\phi }_{m}\right\}, m=1,2,...,M$. In matrix notation, we measure

$y=\Phi x,$
where $y$ is an $M×1$ column vector and the measurement basis matrix $\Phi$ is $M×N$ with each row a basis vector ${\phi }_{m}$. Since $M<N$, recovery of the signal $x$ from the measurements $y$ is ill-posed in general; however, the additional assumption of signal sparsity makes recovery possible and practical.
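The sketch below (again reusing `rng`, `Psi`, `x`, `N`, and `K` from above) illustrates both halves of this setup under illustrative assumptions: a random Gaussian $\Phi$ with $M=4K$ rows, and orthogonal matching pursuit as a simple greedy stand-in for the $\ell_1$-based decoders discussed in the cited work.

```python
M = 4 * K                                       # number of measurements, M < N (illustrative scaling)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random measurement matrix; rows play the role of phi_m

y = Phi @ x                                     # M < N linear measurements, y = Phi x

# Toy decoder: orthogonal matching pursuit on A = Phi Psi, so that y = A alpha
A = Phi @ Psi
residual, supp = y.copy(), []
for _ in range(K):
    supp.append(int(np.argmax(np.abs(A.T @ residual))))    # column most correlated with residual
    coef, *_ = np.linalg.lstsq(A[:, supp], y, rcond=None)  # least squares on current support
    residual = y - A[:, supp] @ coef

alpha_rec = np.zeros(N)
alpha_rec[supp] = coef
print(np.allclose(Psi @ alpha_rec, x))          # True with high probability for this (M, K, N)
```

Note that the encoder computes only $y=\Phi x$ and never sees the sparsity model; only the decoder exploits it, mirroring the division of labor described above.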