
Fix $\delta \in (0,1)$. Let $\Phi$ be an $M \times N$ random matrix whose entries $\phi_{ij}$ are i.i.d., drawn according to a strictly sub-Gaussian distribution with $c^2 = 1/M$. If

$$M \ge \kappa_1 K \log\left(\frac{N}{K}\right),$$

then $\Phi$ satisfies the RIP of order $K$ with the prescribed $\delta$ with probability exceeding $1 - 2e^{-\kappa_2 M}$, where $\kappa_1$ is arbitrary and $\kappa_2 = \frac{\delta^2}{2\kappa^*} - \frac{\log(42e/\delta)}{\kappa_1}$.
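
For illustration, here is a minimal sketch of this construction in Python/NumPy, taking the strictly sub-Gaussian distribution to be Gaussian with variance $1/M$ (one example the theorem covers). The value of kappa_1 below is a hypothetical choice, since the theorem leaves it arbitrary:

```python
import numpy as np

# Sketch: build an M x N matrix with i.i.d. N(0, 1/M) entries, a standard
# example of a strictly sub-Gaussian distribution with c^2 = 1/M.
N, K = 1000, 10
kappa_1 = 2.0                                    # hypothetical choice
M = int(np.ceil(kappa_1 * K * np.log(N / K)))    # M >= kappa_1 * K * log(N/K)

rng = np.random.default_rng(0)
Phi = rng.normal(loc=0.0, scale=1.0 / np.sqrt(M), size=(M, N))
print(M, Phi.shape)
```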

Even within the class of probabilistic results, there are two distinct flavors. The typical approach is to pair a probabilistic construction of a matrix that satisfies the RIP with high probability with the deterministic results established earlier in this chapter. This yields a procedure that, with high probability, satisfies a deterministic guarantee applying to all possible signals $x$. A weaker kind of result states that, given a signal $x$, we can draw a random matrix $\Phi$ and with high probability expect certain performance for that particular $x$. This type of guarantee is sometimes called instance-optimal in probability. The distinction is essentially whether or not we need to draw a new random $\Phi$ for each signal $x$. This may be an important distinction in practice, but if we assume for the moment that it is permissible to draw a new matrix $\Phi$ for each $x$, then we can see that [link] may be somewhat pessimistic. In order to establish our main result we will rely on the fact, previously used in "Matrices that satisfy the RIP", that sub-Gaussian matrices preserve the norm of an arbitrary vector with high probability. Specifically, a slight modification of Corollary 1 from "Matrices that satisfy the RIP" shows that for any $x \in \mathbb{R}^N$, if we choose $\Phi$ according to the procedure in [link], then we also have that

$$P\left(\|\Phi x\|_2^2 \ge 2\|x\|_2^2\right) \le \exp(-\kappa_3 M)$$

with $\kappa_3 = 4/\kappa^*$.
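
This tail bound is easy to probe numerically. The following sketch (assuming Gaussian entries with variance $1/M$ as above; the trial count is arbitrary) estimates the left-hand probability by Monte Carlo:

```python
import numpy as np

# Sketch: estimate P(||Phi x||_2^2 >= 2 ||x||_2^2) over random draws of Phi.
# Gaussian entries N(0, 1/M) are assumed; 1000 trials is an arbitrary choice.
M, N, trials = 100, 1000, 1000
rng = np.random.default_rng(1)
x = rng.normal(size=N)                           # an arbitrary fixed vector
exceed = 0
for _ in range(trials):
    Phi = rng.normal(scale=1.0 / np.sqrt(M), size=(M, N))
    exceed += np.linalg.norm(Phi @ x) ** 2 >= 2 * np.linalg.norm(x) ** 2
print("empirical exceedance rate:", exceed / trials)
```

Using this we obtain the following result.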

Let $x \in \mathbb{R}^N$ be fixed. Set $\delta_{2K} < \sqrt{2} - 1$. Suppose that $\Phi$ is an $M \times N$ sub-Gaussian random matrix with $M \ge \kappa_1 K \log(N/K)$. Suppose we obtain measurements of the form $y = \Phi x$. Set $\epsilon = 2\sigma_K(x)_2$. Then with probability exceeding $1 - 2\exp(-\kappa_2 M) - \exp(-\kappa_3 M)$, when $\mathcal{B}(y) = \{z : \|\Phi z - y\|_2 \le \epsilon\}$, the solution $\hat{x}$ to [link] obeys

$$\|\hat{x} - x\|_2 \le \frac{8\sqrt{1+\delta_{2K}} + 1 - (1+\sqrt{2})\,\delta_{2K}}{1 - (1+\sqrt{2})\,\delta_{2K}}\,\sigma_K(x)_2.$$

First we recall that, as noted above, from [link] we have that $\Phi$ will satisfy the RIP of order $2K$ with probability at least $1 - 2\exp(-\kappa_2 M)$. Next, let $\Lambda$ denote the index set corresponding to the $K$ entries of $x$ with largest magnitude, and write $x = x_\Lambda + x_{\Lambda^c}$. Since $x_\Lambda \in \Sigma_K$, we can write $\Phi x = \Phi x_\Lambda + \Phi x_{\Lambda^c} = \Phi x_\Lambda + e$. If $\Phi$ is sub-Gaussian, then from Lemma 2 from "Sub-Gaussian random variables" we have that $\Phi x_{\Lambda^c}$ is also sub-Gaussian, and one can apply [link] to obtain that with probability at least $1 - \exp(-\kappa_3 M)$, $\|\Phi x_{\Lambda^c}\|_2 \le 2\|x_{\Lambda^c}\|_2 = 2\sigma_K(x)_2$. Thus, applying the union bound, we have that with probability exceeding $1 - 2\exp(-\kappa_2 M) - \exp(-\kappa_3 M)$ the measurements satisfy $\|e\|_2 \le \epsilon$, which is exactly the condition needed to apply Theorem 1 from "Signal recovery in noise" to $x_\Lambda$. Since $x_\Lambda$ is $K$-sparse, $\sigma_K(x_\Lambda)_1 = 0$, and hence

$$\|\hat{x} - x_\Lambda\|_2 \le 2 C_2\, \sigma_K(x)_2.$$

From the triangle inequality we thus obtain

$$\|\hat{x} - x\|_2 = \|\hat{x} - x_\Lambda + x_\Lambda - x\|_2 \le \|\hat{x} - x_\Lambda\|_2 + \|x_\Lambda - x\|_2 \le (2C_2 + 1)\,\sigma_K(x)_2,$$

where the last step uses $\|x_\Lambda - x\|_2 = \sigma_K(x)_2$. This establishes the theorem.
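
To make the guarantee concrete, here is a minimal end-to-end sketch. It assumes a Gaussian $\Phi$ as above and uses the cvxpy package for the $\ell_1$ minimization step; the signal, dimensions, and the illustrative value of $\delta_{2K}$ at the end are hypothetical choices, and the oracle quantity $\sigma_K(x)_2$ is computed directly from $x$, exactly as the theorem assumes:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
N, K, M = 256, 8, 128

# A compressible signal: K large entries plus a small tail.
x = np.zeros(N)
x[:K] = 10 * rng.normal(size=K)
x += 0.01 * rng.normal(size=N)

Phi = rng.normal(scale=1.0 / np.sqrt(M), size=(M, N))
y = Phi @ x

# Oracle parameter choice from the theorem: eps = 2 * sigma_K(x)_2, where
# sigma_K(x)_2 is the l2 error of the best K-term approximation of x.
tail = np.sort(np.abs(x))[::-1][K:]
sigma_K = np.linalg.norm(tail)
eps = 2 * sigma_K

# l1 minimization over B(y) = {z : ||Phi z - y||_2 <= eps}.
z = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm(z, 1)),
                  [cp.norm(Phi @ z - y, 2) <= eps])
prob.solve()
x_hat = z.value

# Theorem's constant for an assumed delta_2K (illustrative only; delta_2K is
# not observable in practice).
delta = 0.2
C = (8 * np.sqrt(1 + delta) + 1 - (1 + np.sqrt(2)) * delta) \
    / (1 - (1 + np.sqrt(2)) * delta)
print("recovery error:       ", np.linalg.norm(x_hat - x))
print("bound C * sigma_K(x)_2:", C * sigma_K)
```

With $\epsilon$ set this way, the constraint set $\mathcal{B}(y)$ contains $x_\Lambda$ with high probability, which is what drives the proof above.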

Thus, although it is not possible to achieve a deterministic guarantee of the form in [link] without taking a prohibitively large number of measurements, such performance guarantees can hold with high probability while simultaneously requiring far fewer measurements than [link] would suggest. Note that the above result applies only when the parameter $\epsilon$ is selected correctly, which requires some limited knowledge of $x$, namely $\sigma_K(x)_2$. In practice this limitation can easily be overcome through a parameter selection technique such as cross-validation [link] (sketched below), but there also exist more intricate analyses of $\ell_1$ minimization showing that similar performance is possible without an oracle for parameter selection [link]. Note that [link] can also be generalized to handle other measurement matrices and the case where $x$ is compressible rather than sparse. Moreover, this proof technique applies to a variety of the greedy algorithms described later in this course that do not require knowledge of the noise level to establish similar results [link], [link].
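
For completeness, the cross-validation idea can be sketched as follows. This is an illustrative outline only, not the specific procedure of [link]: hold out a subset of the measurements, solve the recovery problem on the remainder for each candidate $\epsilon$, and keep the value whose solution best predicts the held-out measurements. The helper names below are hypothetical:

```python
import numpy as np
import cvxpy as cp

def recover(Phi, y, eps):
    """l1 minimization over B(y) = {z : ||Phi z - y||_2 <= eps}."""
    z = cp.Variable(Phi.shape[1])
    cp.Problem(cp.Minimize(cp.norm(z, 1)),
               [cp.norm(Phi @ z - y, 2) <= eps]).solve()
    return z.value

def cv_select_eps(Phi, y, candidates, holdout=0.2, seed=0):
    """Hypothetical helper: pick eps by holdout validation on the
    measurements. A careful implementation would rescale Phi after
    row-subsampling; this sketch ignores that for brevity."""
    rng = np.random.default_rng(seed)
    m = len(y)
    test = rng.choice(m, size=max(1, int(holdout * m)), replace=False)
    train = np.setdiff1d(np.arange(m), test)
    best_eps, best_err = None, np.inf
    for eps in candidates:
        x_hat = recover(Phi[train], y[train], eps)
        err = np.linalg.norm(Phi[test] @ x_hat - y[test])
        if err < best_err:
            best_eps, best_err = eps, err
    return best_eps
```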

Source:  OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011 Download for free at http://legacy.cnx.org/content/col11133/1.5
