In this module we provide an overview of the relationship between $\ell_1$ minimization and random projections of the cross-polytope.

The analysis of $\ell_1$ minimization based on the restricted isometry property (RIP) described in "Signal recovery in noise" allows us to establish a variety of guarantees under different noise settings, but one drawback is that the analysis of how many measurements are actually required for a matrix to satisfy the RIP is relatively loose. An alternative approach to analyzing $\ell_1$ minimization algorithms is to examine them from a more geometric perspective. Towards this end, we define the closed $\ell_1$ ball, also known as the cross-polytope:

$$C^N = \left\{ x \in \mathbb{R}^N : \|x\|_1 \le 1 \right\}.$$

Note that $C^N$ is the convex hull of $2N$ points $\{p_i\}_{i=1}^{2N}$. Let $\Phi C^N \subseteq \mathbb{R}^M$ denote the convex polytope defined either as the convex hull of $\{\Phi p_i\}_{i=1}^{2N}$ or, equivalently, as

$$\Phi C^N = \left\{ y \in \mathbb{R}^M : y = \Phi x, \; x \in C^N \right\}.$$
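To make the projected polytope concrete, here is a minimal numerical sketch (not part of the original module; it uses NumPy and SciPy with small illustrative dimensions): the vertices $p_i$ of $C^N$ are $\pm e_i$, so the points $\Phi p_i$ are simply $\pm$ the columns of $\Phi$, and $\Phi C^N$ is their convex hull. Computing that hull shows how many of the $2N$ points survive as vertices ($0$-faces) after projection.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
N, M = 6, 2                          # illustrative sizes, chosen for this sketch
Phi = rng.standard_normal((M, N))    # random projection

# The 2N vertices p_i of C^N are +/- the standard basis vectors e_i,
# so the points Phi p_i are +/- the columns of Phi.
points = np.hstack([Phi, -Phi]).T    # shape (2N, M)

# Some of the 2N points may fall strictly inside the hull of the others
# and cease to be vertices of Phi C^N after projection.
hull = ConvexHull(points)
print(len(hull.vertices), "of", 2 * N, "vertices survive the projection")
```

In this two-dimensional picture, any column of $\Phi$ that lands inside the hull of the remaining points corresponds to a lost face of the cross-polytope.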

For any $x \in \Sigma_K = \{x : \|x\|_0 \le K\}$, we can associate a $K$-face of $C^N$ with the support and sign pattern of $x$. One can show that the number of $K$-faces of $\Phi C^N$ is precisely the number of index sets of size $K$ for which signals supported on them can be recovered by

$$\hat{x} = \mathop{\mathrm{argmin}}_{z} \|z\|_1 \quad \text{subject to} \quad z \in B(y)$$

with $B(y) = \{z : \Phi z = y\}$. Thus, $\ell_1$ minimization yields the same solution as $\ell_0$ minimization for all $x \in \Sigma_K$ if and only if the number of $K$-faces of $\Phi C^N$ is identical to the number of $K$-faces of $C^N$. Moreover, by counting the number of $K$-faces of $\Phi C^N$, we can quantify exactly what fraction of sparse vectors can be recovered using $\ell_1$ minimization with $\Phi$ as our sensing matrix. See [link], [link], [link], [link], [link] for more details and [link] for an overview of the implications of this body of work. Note also that by replacing the cross-polytope with certain other polytopes (the simplex and the hypercube), one can apply the same technique to obtain results concerning the recovery of more limited signal classes, such as sparse signals with nonnegative or bounded entries [link].
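The recovery program above can be illustrated numerically. The following is a minimal sketch (not the construction used in the references): it applies the standard reduction of basis pursuit to a linear program via the split $z = u - v$ with $u, v \ge 0$, solved with `scipy.optimize.linprog`; the sizes $N$, $M$, $K$ are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 40, 20, 3                              # illustrative problem sizes

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian sensing matrix

# K-sparse ground truth and its measurements y = Phi x
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x

# min ||z||_1 subject to Phi z = y, as a linear program:
# write z = u - v with u, v >= 0 and minimize 1^T u + 1^T v
# subject to [Phi, -Phi] [u; v] = y.
res = linprog(np.ones(2 * N), A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]

print(np.linalg.norm(x_hat - x))  # reconstruction error (expected tiny at these sizes)
```

With $K$ well below the relevant face-counting threshold for this $(M, N)$ pair, the minimizer coincides with the sparse ground truth on a randomly drawn instance with high probability.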

Given this result, one can then study random matrix constructions from this perspective to obtain probabilistic bounds on the number of $K$-faces of $\Phi C^N$ when $\Phi$ is generated at random, for instance with i.i.d. Gaussian entries. Under the assumption that $K = \rho M$ and $M = \gamma N$, one can obtain asymptotic results as $N \to \infty$. This analysis leads to the phase transition phenomenon, where for large problem sizes there are sharp thresholds dictating that the fraction of $K$-faces preserved will tend to either one or zero with high probability, depending on $\rho$ and $\gamma$ [link].
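The phase transition can be observed empirically even at modest problem sizes. Below is a small Monte Carlo sketch (the function name `recovery_fraction`, the trial count, and the problem sizes are all illustrative choices introduced here, not from the references): for fixed $\gamma = M/N$, it estimates the fraction of random $K$-sparse instances recovered exactly by basis pursuit at two values of $\rho = K/M$, one well below and one well above the transition.

```python
import numpy as np
from scipy.optimize import linprog

def recovery_fraction(N, M, K, trials=20, seed=1):
    """Estimate the fraction of random K-sparse signals recovered exactly
    by basis pursuit, drawing a fresh M x N Gaussian matrix per trial."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(trials):
        Phi = rng.standard_normal((M, N))
        x = np.zeros(N)
        x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
        y = Phi @ x
        # basis pursuit as an LP: z = u - v with u, v >= 0
        res = linprog(np.ones(2 * N), A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                      bounds=(0, None))
        z = res.x[:N] - res.x[N:]
        successes += np.linalg.norm(z - x) < 1e-4
    return successes / trials

# Fix gamma = M/N = 1/2 and vary rho = K/M across the transition.
low = recovery_fraction(N=50, M=25, K=3)    # rho = 0.12, below threshold
high = recovery_fraction(N=50, M=25, K=20)  # rho = 0.80, above threshold
print(low, high)
```

Sweeping $\rho$ on a finer grid (and increasing $N$ at fixed $\gamma$) sharpens the empirical success curve toward the 0/1 step behavior that the asymptotic theory predicts.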

These results provide sharp bounds on the minimum number of measurements required in the noiseless setting. In general, these bounds are significantly stronger than the corresponding measurement bounds obtained within the RIP-based framework given in "Noise-free signal recovery" , which tend to be extremely loose in terms of the constants involved. However, these sharper bounds also require somewhat more intricate analysis and typically more restrictive assumptions on Φ (such as it being Gaussian). Thus, one of the main strengths of the RIP-based analysis presented in "Noise-free signal recovery" and "Signal recovery in noise" is that it gives results for a broad class of matrices that can also be extended to noisy settings.

Source:  OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011 Download for free at http://legacy.cnx.org/content/col11133/1.5