The second statement of the theorem differs from the first in the following respect: when $K<M<2K$ , there will necessarily exist $K$ -sparse signals $x$ that cannot be uniquely recovered from the $M$ -dimensional measurement vector $y=\Phi x$ . However, these signals form a set of measure zero within the set of all $K$ -sparse signals and can safely be avoided if $\Phi $ is randomly generated independently of $x$ .
Unfortunately, as discussed in Nonlinear Approximation from Approximation , solving this ${\ell}_{0}$ optimization problem is prohibitively complex. Yet another challenge is robustness: in the setting of Theorem "Recovery via ℓ 0 optimization" , the recovery may be very poorly conditioned. In fact, both of these considerations (computational complexity and robustness) can be addressed, but at the expense of slightly more measurements.
The practical revelation that supports the new CS theory is that it is not necessary to solve the ${\ell}_{0}$ -minimization problem to recover $\alpha $ . In fact, a much easier problem yields an equivalent solution (thanks again to the incoherency of the bases); we need only solve for the ${\ell}_{1}$ -sparsest coefficients $\alpha $ that agree with the measurements $y$ [link] , [link] , [link] , [link] , [link] , [link] , [link] , [link] .
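As a concrete sketch (not part of the original text), the ${\ell}_{1}$ problem can be cast as a linear program by splitting the coefficient vector into nonnegative parts. The example below uses SciPy's `linprog` and, for simplicity, assumes the sparsity basis $\Psi$ is the identity, so $\alpha = x$:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve  min ||x||_1  subject to  Phi @ x = y  as a linear program.

    Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v)
    and the equality constraint becomes [Phi, -Phi] @ [u; v] = y.
    """
    M, N = Phi.shape
    c = np.ones(2 * N)                     # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])          # Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:N] - res.x[N:]
```

Because any feasible $x$ (including the true sparse signal) is a candidate, the returned solution always has ${\ell}_{1}$ norm no larger than that of the true signal; under the incoherence conditions of the theory, the two coincide.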
There is no free lunch, however; according to the theory, more than $K+1$ measurements are required in order to recover sparse signals via Basis Pursuit. Instead, one typically requires $M\ge cK$ measurements, where $c>1$ is an oversampling factor . As an example, we quote a result asymptotic in $N$ . For simplicity, we assume that the sparsity scales linearly with $N$ ; that is, $K=SN$ , where we call $S$ the sparsity rate .
Theorem [link] , [link] , [link] Set $K=SN$ with $0<S\ll 1$ . Then there exists an oversampling factor $c\left(S\right)=O\left(log\left(1/S\right)\right)$ , $c\left(S\right)>1$ , such that, for a $K$ -sparse signal $x$ in the basis $\Psi $ , the following statements hold:
In an illuminating series of recent papers, Donoho and Tanner [link] , [link] , [link] have characterized the oversampling factor $c\left(S\right)$ precisely (see also "The geometry of Compressed Sensing" ). With appropriate oversampling, reconstruction via Basis Pursuit is also provably robust to measurement noise and quantization error [link] .
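One way to sketch this robustness in code (our own simplified formulation, not the one analyzed in the cited papers) is to relax the equality constraint so that the residual is only required to be small. Bounding the residual in the ${\ell}_{\infty }$ norm keeps the problem a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def robust_basis_pursuit(Phi, y, eps):
    """Solve  min ||x||_1  subject to  ||Phi @ x - y||_inf <= eps.

    An LP-friendly stand-in for noise-aware Basis Pursuit: split
    x = u - v with u, v >= 0 and impose the residual bound as two
    sets of linear inequalities.
    """
    M, N = Phi.shape
    c = np.ones(2 * N)                      # ||x||_1 = sum(u) + sum(v)
    A = np.hstack([Phi, -Phi])              # A @ [u; v] = Phi @ x
    A_ub = np.vstack([A, -A])               # Phi@x - y <= eps and y - Phi@x <= eps
    b_ub = np.concatenate([y + eps, eps - y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res.x[:N] - res.x[N:]
```

If `eps` is chosen at least as large as the worst-case noise level, the true sparse signal remains feasible, so the recovered solution is never "more expensive" in ${\ell}_{1}$ norm than the truth.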
We often use the abbreviated notation $c$ to describe the oversampling factor required in various settings even though $c\left(S\right)$ depends on the sparsity rate $S$ (and hence on the sparsity $K$ and signal length $N$ ).
A CS recovery example on the Cameraman test image is shown in [link] . In this case, with $M=4K$ we achieve near-perfect recovery of the sparse measured image.
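The Cameraman figure itself is not reproduced here, but the same $M=4K$ regime can be sketched on a small synthetic signal (the dimensions below are our own illustrative choices; the signal is sparse in the identity basis, and Basis Pursuit is again posed as a linear program):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, K = 64, 8
M = 4 * K                                   # oversampling factor c = 4

# K-sparse signal (sparse in the identity basis, i.e. Psi = I)
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Random Gaussian measurement matrix and measurements
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Basis Pursuit as an LP: x = u - v with u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative recovery error: {rel_err:.2e}")
```

With these proportions ( $M/N=1/2$ , $K/M=1/4$ ) the recovery is typically exact up to solver precision, mirroring the near-perfect image recovery reported for $M=4K$ .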