This module provides an overview of convex optimization approaches to sparse signal recovery.

An important class of sparse recovery algorithms falls under the purview of convex optimization. Algorithms in this category seek to optimize a convex function $f(\cdot)$ of the unknown variable $x$ over a (possibly unbounded) convex subset of $\mathbb{R}^N$.

Setup

Let $J(x)$ be a convex sparsity-promoting cost function (i.e., $J(x)$ is small for sparse $x$). To recover a sparse signal representation $\widehat{x}$ from measurements $y = \Phi x$, $\Phi \in \mathbb{R}^{M \times N}$, we may either solve

$$\min_x \; \{ J(x) \,:\, y = \Phi x \},$$

when there is no noise, or solve

$$\min_x \; \{ J(x) \,:\, H(\Phi x, y) \le \epsilon \}$$

when there is noise in the measurements. Here, $H$ is a cost function that penalizes the distance between the vectors $\Phi x$ and $y$. For an appropriate penalty parameter $\mu$, [link] is equivalent to the unconstrained formulation:

$$\min_x \; J(x) + \mu \, H(\Phi x, y)$$

for some $\mu > 0$. The parameter $\mu$ may be chosen by trial and error, or by statistical techniques such as cross-validation [link].
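One simple way to carry out such a cross-validation, sketched here only as an illustration and not as the module's prescription, is to hold out a fraction of the measurement rows, solve the unconstrained problem for several candidate values of $\mu$, and keep the value whose reconstruction best predicts the held-out measurements. The sketch below assumes the common choices $J(x) = \|x\|_1$ and $H(\Phi x, y) = \tfrac{1}{2}\|\Phi x - y\|_2^2$ discussed in the next paragraph; the synthetic data, the 80/20 split, and the log-spaced grid of $\mu$ values are arbitrary assumptions.

```python
# Illustrative holdout selection of the penalty parameter mu (assumptions:
# synthetic data, an 80/20 split of the measurements, log-spaced mu grid).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
M, N, K = 60, 128, 6                       # measurements, ambient dim, sparsity
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x_true + 0.01 * rng.standard_normal(M)   # noisy measurements

# Hold out 20% of the measurement rows for validation.
idx = rng.permutation(M)
train, val = idx[: int(0.8 * M)], idx[int(0.8 * M):]

def solve_unconstrained(mu):
    """Solve min_x ||x||_1 + mu * 0.5 * ||Phi_train x - y_train||_2^2."""
    x = cp.Variable(N)
    obj = cp.norm(x, 1) + mu * 0.5 * cp.sum_squares(Phi[train] @ x - y[train])
    cp.Problem(cp.Minimize(obj)).solve()
    return x.value

# Score each candidate mu by how well its solution predicts the held-out rows.
mus = np.logspace(0, 4, 9)
errs = [np.linalg.norm(Phi[val] @ solve_unconstrained(mu) - y[val]) for mu in mus]
best_mu = mus[int(np.argmin(errs))]
print("selected mu:", best_mu)
```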

For convex programming algorithms, the most common choices of $J$ and $H$ are as follows: $J(x) = \|x\|_1$, the $\ell_1$-norm of $x$, and $H(\Phi x, y) = \tfrac{1}{2}\|\Phi x - y\|_2^2$, one half of the squared $\ell_2$-norm of the error between the observed measurements and the linear projections of the target vector $x$. In statistics, minimizing this $H$ subject to $\|x\|_1 \le \delta$ is known as the Lasso problem. More generally, $J(\cdot)$ acts as a regularization term and can be replaced by other, more complex functions; for example, the desired signal may be piecewise constant and simultaneously have a sparse representation under a known basis transform $\Psi$. In this case, we may use a mixed regularization term:

$$J(x) = \mathrm{TV}(x) + \lambda \|x\|_1$$
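These formulations map directly onto a convex modeling language such as CVXPY. The following sketch is not part of the original module; the small synthetic problem sizes, the Gaussian $\Phi$, the tolerance $\epsilon$, the weight $\lambda$, and the default solver settings are all illustrative assumptions. It sets up the noiseless equality-constrained problem, the noise-constrained problem, and the mixed $\mathrm{TV} + \lambda\|x\|_1$ regularizer.

```python
# Minimal CVXPY sketch of the three formulations above (illustrative
# assumptions: small synthetic sizes, Gaussian Phi, default solver).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
M, N, K = 40, 100, 5                      # measurements, ambient dim, sparsity
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x_true                          # noiseless measurements

x = cp.Variable(N)

# (1) Noiseless case: min ||x||_1  s.t.  y = Phi x  (basis pursuit).
cp.Problem(cp.Minimize(cp.norm(x, 1)), [Phi @ x == y]).solve()
x_bp = x.value

# (2) Noisy case: min ||x||_1  s.t.  0.5 * ||Phi x - y||_2^2 <= eps.
eps = 1e-3
cp.Problem(cp.Minimize(cp.norm(x, 1)),
           [0.5 * cp.sum_squares(Phi @ x - y) <= eps]).solve()

# (3) Mixed regularizer for piecewise-constant signals: TV(x) + lambda*||x||_1,
#     with the data fidelity kept here as an equality constraint.
lam = 0.1
cp.Problem(cp.Minimize(cp.tv(x) + lam * cp.norm(x, 1)),
           [Phi @ x == y]).solve()

print("basis pursuit recovery error:", np.linalg.norm(x_bp - x_true))
```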

It might be tempting to use conventional convex optimization packages for the above formulations ([link], [link], and [link]). Nevertheless, these problems pose two key challenges that are specific to practical problems encountered in CS: (i) real-world applications are invariably large-scale (an image with a resolution of 1024 × 1024 pixels leads to an optimization problem over a million variables, well beyond the reach of any standard optimization software package); (ii) the objective function is nonsmooth, and standard smoothing techniques do not yield very good results. Hence, for these problems, conventional algorithms (typically involving matrix factorizations) are not effective or even applicable. These unique challenges encountered in the context of CS have led to considerable interest in developing improved sparse recovery algorithms in the optimization community.

Linear programming

In the noiseless case, the $\ell_1$-minimization problem (obtained by substituting $J(x) = \|x\|_1$ in [link]) can be recast as a linear program (LP) with equality constraints. These can be solved in polynomial time ($O(N^3)$) using standard interior-point methods [link]. This was the first feasible reconstruction algorithm used for CS recovery and has strong theoretical guarantees, as shown earlier in this course. In the noisy case, the problem can be recast as a second-order cone program (SOCP) with quadratic constraints. Solving LPs and SOCPs is a principal thrust in optimization research; nevertheless, their application in practical CS problems is limited by the fact that both the signal dimension $N$ and the number of constraints $M$ can be very large in many scenarios. Note that both LPs and SOCPs correspond to the constrained formulations in [link] and [link] and are solved using interior-point methods.
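To make the LP recasting concrete, here is a hedged sketch (the synthetic data, the particular variable splitting, and the choice of SciPy's HiGHS backend are my own illustrative assumptions, not prescribed by the module). Introducing auxiliary bound variables $t$ with $-t \le x \le t$ turns $\min \|x\|_1$ subject to $\Phi x = y$ into the linear program $\min \mathbf{1}^\top t$ subject to $-t \le x \le t$, $\Phi x = y$.

```python
# L1 minimization with equality constraints recast as a linear program:
#   min_{x,t} sum(t)  s.t.  -t <= x <= t,  Phi x = y.
# (Illustrative sketch: synthetic data, SciPy's HiGHS solver as the backend.)
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
M, N, K = 40, 100, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x_true

# Decision vector z = [x; t], with linear objective 1^T t.
c = np.concatenate([np.zeros(N), np.ones(N)])

# Inequalities  x - t <= 0  and  -x - t <= 0  encode |x_i| <= t_i.
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)

# Equality constraints Phi x = y (t does not appear in them).
A_eq = np.hstack([Phi, np.zeros((M, N))])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=(None, None), method="highs")
x_hat = res.x[:N]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```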

Source:  OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011 Download for free at http://legacy.cnx.org/content/col11133/1.5