In this module we demonstrate the difficulty of obtaining instance-optimal guarantees in the $\ell_2$ norm. We then show that it is much easier to obtain such guarantees in the probabilistic setting.

We now briefly return to the noise-free setting to take a closer look at instance-optimal guarantees for recovering non-sparse signals. To begin, recall that in Theorem 1 from "Noise-free signal recovery" we bounded the $\ell_2$ norm of the reconstruction error of

$$\hat{x} = \arg\min_z \|z\|_1 \quad \text{subject to} \quad z \in \mathcal{B}(y)$$

as

$$\|\hat{x} - x\|_2 \le C_0 \frac{\sigma_K(x)_1}{\sqrt{K}}$$

when $\mathcal{B}(y) = \{z : \Phi z = y\}$. One can generalize this result to measure the reconstruction error using the $\ell_p$ norm for any $p \in [1, 2]$. For example, by a slight modification of these arguments, one can also show that $\|\hat{x} - x\|_1 \le C_0 \sigma_K(x)_1$ (see [link]). This leads us to ask whether we might replace the bound for the $\ell_2$ error with a result of the form $\|\hat{x} - x\|_2 \le C \sigma_K(x)_2$. Unfortunately, obtaining such a result requires an unreasonably large number of measurements, as quantified by the following theorem of [link].
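Before stating the theorem, it may help to make the quantity $\sigma_K(x)_p$ concrete. The following sketch is our own illustration (the function name and the power-law test signal are arbitrary choices, not part of the original module); it computes the best $K$-term approximation error directly, since the best $K$-term approximation simply keeps the $K$ largest-magnitude entries.

```python
import numpy as np

def sigma_K(x, K, p=2):
    """Best K-term approximation error sigma_K(x)_p.

    The best K-term approximation of x keeps its K largest-magnitude
    entries, so the error is the l_p norm of the remaining entries.
    """
    idx = np.argsort(np.abs(x))[::-1]   # indices sorted by decreasing magnitude
    return np.linalg.norm(x[idx[K:]], ord=p)

# A compressible (power-law decaying) test signal
rng = np.random.default_rng(0)
N, K = 1000, 50
x = rng.standard_normal(N) * np.arange(1, N + 1) ** -1.5

print(sigma_K(x, K, p=1) / np.sqrt(K))  # the quantity controlling the l2 error bound above
print(sigma_K(x, K, p=2))               # the stronger l2 target discussed next
```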

(Theorem 5.1 of [link])

Suppose that $\Phi$ is an $M \times N$ matrix and that $\Delta: \mathbb{R}^M \to \mathbb{R}^N$ is a recovery algorithm that satisfies

$$\|x - \Delta(\Phi x)\|_2 \le C \sigma_K(x)_2$$

for some $K \ge 1$. Then $M \ge \left(1 - \sqrt{1 - 1/C^2}\right) N$.
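To get a feel for how demanding this is, the following snippet (our own illustration; the values of $C$ are arbitrary) evaluates the fraction $M/N$ forced by the theorem:

```python
import numpy as np

# The theorem forces M >= (1 - sqrt(1 - 1/C^2)) * N.  Evaluate this
# lower bound on M/N for a few illustrative instance-optimality constants C.
for C in [1.01, 1.1, 2.0, 10.0]:
    frac = 1 - np.sqrt(1 - 1 / C**2)
    print(f"C = {C:5.2f}  ->  M/N >= {frac:.3f}")
# As C approaches 1 the required fraction approaches 1, i.e., M must approach N.
```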

To prove this, we begin by letting $h \in \mathbb{R}^N$ denote any vector in $\mathcal{N}(\Phi)$. We write $h = h_\Lambda + h_{\Lambda^c}$, where $\Lambda$ is an arbitrary set of indices satisfying $|\Lambda| \le K$. Set $x = h_{\Lambda^c}$, and note that $\Phi x = \Phi h_{\Lambda^c} = \Phi h - \Phi h_\Lambda = -\Phi h_\Lambda$ since $h \in \mathcal{N}(\Phi)$. Since $h_\Lambda \in \Sigma_K$ (and hence $-h_\Lambda \in \Sigma_K$, so that $\sigma_K(-h_\Lambda)_2 = 0$), [link] implies that sparse vectors are recovered exactly, i.e., $\Delta(\Phi x) = \Delta(-\Phi h_\Lambda) = -h_\Lambda$. Hence, $\|x - \Delta(\Phi x)\|_2 = \|h_{\Lambda^c} - (-h_\Lambda)\|_2 = \|h\|_2$. Furthermore, we observe that $\sigma_K(x)_2 \le \|x\|_2$, since by definition $\sigma_K(x)_2 \le \|x - \tilde{x}\|_2$ for all $\tilde{x} \in \Sigma_K$, including $\tilde{x} = 0$. Thus $\|h\|_2 = \|x - \Delta(\Phi x)\|_2 \le C \sigma_K(x)_2 \le C \|x\|_2 = C \|h_{\Lambda^c}\|_2$. Since $\|h\|_2^2 = \|h_\Lambda\|_2^2 + \|h_{\Lambda^c}\|_2^2$, this yields

$$\|h_\Lambda\|_2^2 = \|h\|_2^2 - \|h_{\Lambda^c}\|_2^2 \le \|h\|_2^2 - \frac{1}{C^2}\|h\|_2^2 = \left(1 - \frac{1}{C^2}\right)\|h\|_2^2 .$$

This must hold for any vector $h \in \mathcal{N}(\Phi)$ and for any set of indices $\Lambda$ such that $|\Lambda| \le K$. In particular, let $\{v_i\}_{i=1}^{N-M}$ be an orthonormal basis for $\mathcal{N}(\Phi)$, and define the vectors $\{h_j\}_{j=1}^{N}$ as follows:

$$h_j = \sum_{i=1}^{N-M} v_i(j) \, v_i .$$

We note that $h_j = \sum_{i=1}^{N-M} \langle e_j, v_i \rangle v_i$, where $e_j$ denotes the vector of all zeros except for a 1 in the $j$-th entry. Thus we see that $h_j = P_{\mathcal{N}} e_j$, where $P_{\mathcal{N}}$ denotes the orthogonal projection onto $\mathcal{N}(\Phi)$. Since $\|P_{\mathcal{N}} e_j\|_2^2 + \|P_{\mathcal{N}^\perp} e_j\|_2^2 = \|e_j\|_2^2 = 1$, we have that $\|h_j\|_2 \le 1$. Thus, by setting $\Lambda = \{j\}$ for $h_j$ we observe that

$$\left( \sum_{i=1}^{N-M} |v_i(j)|^2 \right)^2 = |h_j(j)|^2 \le \left(1 - \frac{1}{C^2}\right)\|h_j\|_2^2 \le 1 - \frac{1}{C^2} .$$

Summing over $j = 1, 2, \ldots, N$, we obtain

$$N \sqrt{1 - 1/C^2} \ge \sum_{j=1}^{N} \sum_{i=1}^{N-M} |v_i(j)|^2 = \sum_{i=1}^{N-M} \sum_{j=1}^{N} |v_i(j)|^2 = \sum_{i=1}^{N-M} \|v_i\|_2^2 = N - M ,$$

and thus $M \ge \left(1 - \sqrt{1 - 1/C^2}\right) N$, as desired.
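As a sanity check on the null-space construction used in this proof, the following sketch (our own; the random $\Phi$ and the dimensions are arbitrary illustrative choices) builds an orthonormal basis for $\mathcal{N}(\Phi)$ from the SVD and verifies numerically that $h_j = P_{\mathcal{N}} e_j$, that $\|h_j\|_2 \le 1$, and that $\sum_j \sum_i |v_i(j)|^2 = N - M$:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 20, 50
Phi = rng.standard_normal((M, N))

# The last N - M right singular vectors of Phi form an orthonormal basis
# {v_i} for N(Phi) (a random Gaussian Phi is full rank with probability one).
_, _, Vt = np.linalg.svd(Phi)
V = Vt[M:]                    # shape (N - M, N); rows are the v_i

# Column j of H = V^T V is h_j = sum_i v_i(j) v_i = P_N e_j
H = V.T @ V
print(np.allclose(Phi @ H, 0))                          # each h_j lies in N(Phi)
print(np.all(np.linalg.norm(H, axis=0) <= 1 + 1e-12))   # ||h_j||_2 <= 1
print(np.isclose(np.sum(V**2), N - M))                  # sum_j sum_i |v_i(j)|^2 = N - M
```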

Thus, if we want a bound of the form [link] that holds for all signals $x$ with a constant $C \approx 1$, then regardless of what recovery algorithm we use we will need to take $M \approx N$ measurements. However, in a sense this result is overly pessimistic, and we will now see that the results we just established for signal recovery in noise can actually allow us to overcome this limitation by essentially treating the approximation error as noise.

Towards this end, notice that all of the results concerning $\ell_1$ minimization stated thus far are deterministic instance-optimal guarantees that apply simultaneously to all $x$ given any matrix that satisfies the restricted isometry property (RIP). This is an important theoretical property, but as noted in "Matrices that satisfy the RIP", in practice it is very difficult to obtain a deterministic guarantee that the matrix $\Phi$ satisfies the RIP. In particular, constructions that rely on randomness are only known to satisfy the RIP with high probability. As an example, recall Theorem 1 from "Matrices that satisfy the RIP", which opens the door to slightly weaker results that hold only with high probability.
