In this module we provide an overview of some of the most common greedy algorithms and their application to the problem of sparse recovery.

Setup

As opposed to solving a (possibly computationally expensive) convex optimization program, an alternative approach to sparse recovery is to apply methods of sparse approximation. Recall that the goal of sparse recovery is to recover the sparsest vector $x$ that explains the linear measurements $y$. In other words, we aim to solve the (nonconvex) problem:

$$\min_{I}\; |I| \quad \text{subject to} \quad y = \sum_{i \in I} \phi_i x_i,$$

where $I$ denotes a particular subset of the indices $i = 1, \ldots, N$, and $\phi_i$ denotes the $i$th column of $\Phi$. It is well known that searching over the power set formed by the columns of $\Phi$ for the optimal subset $I^*$ with smallest cardinality is NP-hard. Instead, classical sparse approximation methods tackle this problem by greedily selecting columns of $\Phi$ and forming successively better approximations to $y$.
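For contrast, the combinatorial search can be made concrete with a short brute-force sketch. The snippet below is purely illustrative and not from the source; the function name and tolerance are assumptions. It enumerates supports in order of increasing size and returns the first one whose columns explain $y$, so its cost grows combinatorially in $N$, which is exactly what the greedy methods in this module avoid.

```python
# A minimal brute-force sketch (illustrative, not from the source) of the
# combinatorial search implied by the optimization above: enumerate supports
# in order of increasing size and return the first one whose columns explain y.
import itertools
import numpy as np

def exhaustive_sparse_recovery(Phi, y, tol=1e-10):
    """Exhaustive search over column subsets; cost grows combinatorially in N."""
    M, N = Phi.shape
    for k in range(1, N + 1):                        # smallest supports first
        for I in itertools.combinations(range(N), k):
            cols = list(I)
            coeffs, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
            if np.linalg.norm(y - Phi[:, cols] @ coeffs) < tol:
                x = np.zeros(N)
                x[cols] = coeffs                     # sparsest consistent solution found
                return x
    return None                                      # no subset of columns explains y
```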

Matching pursuit

Matching Pursuit (MP), named and introduced to the signal processing community by Mallat and Zhang [link], [link], is an iterative greedy algorithm that decomposes a signal into a linear combination of elements from a dictionary. In sparse recovery, this dictionary is merely the sampling matrix $\Phi \in \mathbb{R}^{M \times N}$; we seek a sparse representation $x$ of our “signal” $y$.

MP is conceptually very simple. A key quantity in MP is the residual $r \in \mathbb{R}^M$; the residual represents the as-yet “unexplained” portion of the measurements. At each iteration of the algorithm, we select a vector from the dictionary that is maximally correlated with the residual:

$$\lambda_k = \arg\max_{\lambda} \frac{\left| \left\langle r_{k-1},\, \phi_{\lambda} \right\rangle \right|}{\left\| \phi_{\lambda} \right\|_2}.$$

Once this column is selected, we possess a “better” representation of the signal, since a new coefficient indexed by $\lambda_k$ has been added to our signal approximation. Thus, we update both the residual and the approximation as follows:

$$r_k = r_{k-1} - \frac{\left\langle r_{k-1},\, \phi_{\lambda_k} \right\rangle}{\left\| \phi_{\lambda_k} \right\|_2^2}\, \phi_{\lambda_k}, \qquad \hat{x}_{\lambda_k} \leftarrow \hat{x}_{\lambda_k} + \left\langle r_{k-1},\, \phi_{\lambda_k} \right\rangle,$$

and repeat the iteration. A suitable stopping criterion is when the norm of the residual $r$ falls below a prescribed threshold. MP is described in pseudocode form below.

Inputs: Measurement matrix $\Phi$, signal measurements $y$
Outputs: Sparse signal estimate $\hat{x}$
Initialize: $\hat{x}_0 = 0$, $r = y$, $i = 0$.
while halting criterion false do
1. $i \leftarrow i + 1$
2. $b \leftarrow \Phi^T r$ {form residual signal estimate}
3. $\hat{x}_i \leftarrow \hat{x}_{i-1} + T(b, 1)$ {update largest magnitude coefficient}
4. $r \leftarrow y - \Phi \hat{x}_i$ {update measurement residual}
end while
return $\hat{x} \leftarrow \hat{x}_i$
Here $T(b, 1)$ denotes hard thresholding of $b$ to its single largest-magnitude entry, with all other entries set to zero.
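The pseudocode translates directly into a few lines of NumPy. The sketch below is illustrative rather than canonical: the function name and stopping parameters are assumptions, and the columns of $\Phi$ are assumed to have unit norm so that the inner product alone identifies the best-matching column.

```python
# A minimal matching pursuit sketch following the pseudocode above.
# Assumes unit-norm columns; the function name and stopping parameters are
# illustrative choices, not taken from the source.
import numpy as np

def matching_pursuit(Phi, y, tol=1e-6, max_iter=1000):
    M, N = Phi.shape
    x_hat = np.zeros(N)
    r = y.copy()                                    # residual starts as the measurements
    for _ in range(max_iter):
        b = Phi.T @ r                               # correlate residual with every column
        lam = np.argmax(np.abs(b))                  # index of the best-matching column
        x_hat[lam] += b[lam]                        # update that coefficient
        r = r - b[lam] * Phi[:, lam]                # remove its contribution from r
        if np.linalg.norm(r) < tol:                 # halting criterion
            break
    return x_hat
```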

Although MP is intuitive and can find an accurate approximation of the signal, it possesses two major drawbacks: (i) it offers no guarantees in terms of recovery error, and it does not exploit the special structure present in the dictionary $\Phi$; (ii) the number of iterations required can be quite large. The computational complexity of MP is $O(MNT)$ [link], where $T$ is the number of MP iterations.

Orthogonal matching pursuit (OMP)

Matching Pursuit (MP) can prove to be computationally infeasible for many problems, since the complexity of MP grows linearly in the number of iterations $T$. By employing a simple modification of MP, the maximum number of iterations can be upper bounded as follows. At any iteration $k$, instead of subtracting only the contribution of the single dictionary element with which the residual $r$ is maximally correlated, we subtract from the measurements their projection onto the linear span of all currently selected dictionary elements. The resulting residual lies in the subspace orthogonal to that span and thus better represents the “unexplained” portion of the measurements; the process is then repeated on this new residual. If $\Phi_\Omega$ is the submatrix formed by the columns of $\Phi$ selected through time step $t$, the following operations are performed.
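In the standard formulation of OMP (stated here in a common form; the subscript $t$ on the estimate is notational convenience rather than taken from the source), the coefficients over the selected columns are obtained by least squares and the residual is recomputed against the original measurements:

$$\hat{x}_t = \arg\min_{\tilde{x}} \left\| y - \Phi_\Omega \tilde{x} \right\|_2, \qquad r_t = y - \Phi_\Omega \hat{x}_t.$$

Because $r_t$ is by construction orthogonal to the span of the selected columns, no column is ever selected twice, so the algorithm terminates after at most $M$ iterations. A compact NumPy sketch of this procedure follows; the function name, the iteration count $k$, and the tolerance are illustrative choices, and unit-norm columns are assumed.

```python
# A minimal OMP sketch (illustrative, not from the source): unit-norm columns
# are assumed, and k is the number of iterations (e.g. the target sparsity).
import numpy as np

def orthogonal_matching_pursuit(Phi, y, k, tol=1e-6):
    M, N = Phi.shape
    support = []                                    # indices selected so far (Omega)
    r = y.copy()
    coeffs = np.zeros(0)
    for _ in range(k):
        lam = int(np.argmax(np.abs(Phi.T @ r)))     # column most correlated with residual
        support.append(lam)                         # residual orthogonality => lam is new
        Phi_omega = Phi[:, support]
        # Least-squares fit over all selected columns (the orthogonalization step)
        coeffs, *_ = np.linalg.lstsq(Phi_omega, y, rcond=None)
        r = y - Phi_omega @ coeffs                  # residual orthogonal to span(Phi_omega)
        if np.linalg.norm(r) < tol:
            break
    x_hat = np.zeros(N)
    x_hat[support] = coeffs
    return x_hat
```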






Source:  OpenStax, Introduction to compressive sensing. OpenStax CNX. Mar 12, 2015 Download for free at http://legacy.cnx.org/content/col11355/1.4
