
The second statement of the theorem differs from the first in the following respect: when K < M < 2K, there will necessarily exist K-sparse signals x that cannot be uniquely recovered from the M-dimensional measurement vector y = Φx. However, these signals form a set of measure zero within the set of all K-sparse signals and can safely be avoided if Φ is randomly generated independently of x.

Unfortunately, as discussed in Nonlinear Approximation from Approximation, solving this ℓ0 optimization problem is prohibitively complex. Yet another challenge is robustness; in the setting of Theorem "Recovery via ℓ0 optimization", the recovery may be very poorly conditioned. In fact, both of these considerations (computational complexity and robustness) can be addressed, but at the expense of slightly more measurements.
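To make the complexity concrete: ℓ0 recovery amounts to an exhaustive search over all N-choose-K candidate supports, solving a least-squares problem on each. The following is a minimal sketch of that search, assuming NumPy; the function name l0_recover and the toy dimensions are illustrative choices, not from the original text.

```python
# Brute-force l0 recovery: test every candidate support of size K.
# The search runs over C(N, K) supports, which grows combinatorially --
# exactly why this approach is prohibitively complex at realistic sizes.
import itertools
import numpy as np

def l0_recover(y, Phi, K, tol=1e-8):
    """Return a K-sparse x consistent with y = Phi @ x, by exhaustive search."""
    N = Phi.shape[1]
    for support in itertools.combinations(range(N), K):
        idx = list(support)
        coef = np.linalg.lstsq(Phi[:, idx], y, rcond=None)[0]
        if np.linalg.norm(Phi[:, idx] @ coef - y) < tol:  # consistent with y?
            x = np.zeros(N)
            x[idx] = coef
            return x
    return None

# Toy example: N = 20, K = 3, and M = 7 > 2K measurements, so the
# K-sparse solution is unique and the search recovers it exactly.
rng = np.random.default_rng(0)
N, K, M = 20, 3, 7
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N))
print(np.allclose(l0_recover(Phi @ x_true, Phi, K), x_true))  # True
```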

Recovery via convex optimization

The practical revelation that supports the new CS theory is that it is not necessary to solve the ℓ0-minimization problem to recover α. In fact, a much easier problem yields an equivalent solution (thanks again to the incoherence of the bases); we need only solve for the ℓ1-sparsest coefficients α that agree with the measurements y [link], [link], [link], [link], [link], [link], [link], [link]:

$$\hat{\alpha} = \arg\min \|\alpha\|_1 \quad \text{s.t.} \quad y = \Phi \Psi \alpha.$$
As discussed in Nonlinear Approximation from Approximation, this optimization problem, also known as Basis Pursuit [link], is significantly more approachable and can be solved with traditional linear programming techniques whose computational complexities are polynomial in N.
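As a concrete illustration of the linear-programming reduction: splitting α into nonnegative parts u and v with α = u − v makes both the ℓ1 objective and the equality constraint linear. The sketch below assumes SciPy and folds the product ΦΨ into a single matrix A; the helper name basis_pursuit is ours, not the authors'.

```python
# Basis Pursuit as a linear program: write alpha = u - v with u, v >= 0,
# so ||alpha||_1 = sum(u + v) and y = A(u - v) are both linear in (u, v).
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||alpha||_1 subject to y = A @ alpha as an LP."""
    M, N = A.shape
    c = np.ones(2 * N)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # equality constraint: A u - A v = y
    bounds = [(0, None)] * (2 * N)     # u >= 0, v >= 0
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds)
    u, v = res.x[:N], res.x[N:]
    return u - v                       # alpha
```

Here A = ΦΨ; for signals sparse in the identity basis, A is simply Φ. In practice specialized solvers are faster, but the LP formulation is what certifies the polynomial complexity mentioned above.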

There is no free lunch, however; according to the theory, more than K + 1 measurements are required in order to recover sparse signals via Basis Pursuit. Instead, one typically requires M ≥ cK measurements, where c > 1 is an oversampling factor. As an example, we quote a result asymptotic in N. For simplicity, we assume that the sparsity scales linearly with N; that is, K = SN, where we call S the sparsity rate.

Theorem

[link], [link], [link] Set K = SN with 0 < S ≪ 1. Then there exists an oversampling factor c(S) = O(log(1/S)), c(S) > 1, such that, for a K-sparse signal x in the basis Ψ, the following statements hold:

  1. The probability of recovering x via Basis Pursuit from (c(S) + ϵ)K random projections, ϵ > 0, converges to one as N → ∞.
  2. The probability of recovering x via Basis Pursuit from (c(S) − ϵ)K random projections, ϵ > 0, converges to zero as N → ∞.
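The threshold behavior described by the theorem can be observed numerically. Below is a hedged Monte Carlo sketch reusing the basis_pursuit helper from the previous snippet; the dimensions, trial count, and tolerance are arbitrary illustrative choices, not values from the cited papers.

```python
# Empirical success rate of Basis Pursuit as M grows past the
# oversampling threshold, for fixed N and K.
import numpy as np

rng = np.random.default_rng(1)
N, K, trials = 128, 8, 25
for M in (2 * K, 3 * K, 4 * K, 5 * K):
    successes = 0
    for _ in range(trials):
        x = np.zeros(N)
        x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
        Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random projections
        x_hat = basis_pursuit(Phi, Phi @ x)
        successes += np.linalg.norm(x_hat - x) < 1e-4 * np.linalg.norm(x)
    print(f"M = {M:3d}: recovered {successes}/{trials}")
```

The success rate jumps from near zero to near one over a narrow range of M, the finite-N shadow of the sharp asymptotic transition in the theorem.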

In an illuminating series of recent papers, Donoho and Tanner [link], [link], [link] have characterized the oversampling factor c(S) precisely (see also "The geometry of Compressed Sensing"). With appropriate oversampling, reconstruction via Basis Pursuit is also provably robust to measurement noise and quantization error [link].

We often use the abbreviated notation c to describe the oversampling factor required in various settings, even though c(S) depends on the sparsity rate S = K/N, and hence on the sparsity K and signal length N.

A CS recovery example on the Cameraman test image is shown in [link]. In this case, with M = 4K we achieve near-perfect recovery of the sparse measured image.

Compressive sensing reconstruction of the nonlinear approximation Cameraman image from [link] (b). Using M = 16384 random measurements of the K-term nonlinear approximation image (where K = 4096), we solve an ℓ1-minimization problem to obtain the reconstruction shown above. The MSE with respect to the measured image is 0.08, so the reconstruction is virtually perfect.
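The M = 4K regime of this experiment can be mimicked at toy scale on a synthetic K-sparse vector; below is a sketch reusing the basis_pursuit helper above (all dimensions are illustrative, not those of the image experiment).

```python
# Synthetic analogue of the Cameraman experiment: M = 4K random
# measurements of a K-sparse vector, recovered by l1 minimization.
import numpy as np

rng = np.random.default_rng(2)
N, K = 256, 16
M = 4 * K
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = basis_pursuit(Phi, Phi @ x)
print("MSE:", np.mean((x_hat - x) ** 2))  # near zero: near-perfect recovery
```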





Source: OpenStax, Concise signal models. OpenStax CNX. Sep 14, 2009. Download for free at http://cnx.org/content/col10635/1.4
