Additionally, one may view the ${\ell}_{0}/{\ell}_{1}$ equivalence problem geometrically. In particular, given the measurements $y=\Phi x$ , we have an $(N-M)$ -dimensional hyperplane ${\mathcal{H}}_{y}=\{{x}^{\text{'}}\in {\mathbb{R}}^{N}:y=\Phi {x}^{\text{'}}\}=\mathcal{N}\left(\Phi \right)+x$ of feasible signals that could account for the measurements $y$ . Supposing the original signal $x$ is $K$ -sparse, the ${\ell}_{1}$ recovery program will recover the correct solution $x$ if and only if $\parallel {x}^{\text{'}}{\parallel}_{1}>{\parallel x\parallel}_{1}$ for every other signal ${x}^{\text{'}}\in {\mathcal{H}}_{y}$ on the hyperplane. This happens only if the hyperplane ${\mathcal{H}}_{y}$ (which passes through $x$ ) does not “cut into” the ${\ell}_{1}$ -ball of radius ${\parallel x\parallel}_{1}$ . This ${\ell}_{1}$ -ball is a polytope, on which $x$ belongs to a $(K-1)$ -dimensional “face.” If $\Phi$ is a random matrix with i.i.d. Gaussian entries, then the hyperplane ${\mathcal{H}}_{y}$ will have random orientation. To answer the question of how $M$ must relate to $K$ in order to ensure reliable recovery, it helps to observe that a randomly generated hyperplane $\mathcal{H}$ has a greater chance of slicing into the ${\ell}_{1}$ ball as $\mathrm{dim}\left(\mathcal{H}\right)=N-M$ grows (or as $M$ shrinks), or as the dimension $K-1$ of the face on which $x$ lives grows. Such geometric arguments have been made precise by Donoho and Tanner [link] , [link] , [link] and used to establish a series of sharp bounds on CS recovery.
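The ${\ell}_{1}$ recovery program described above can be cast as a linear program via the standard positive/negative split ${x}^{\text{'}}=u-v$ with $u,v\ge 0$. The following minimal sketch (assuming NumPy and SciPy; the problem sizes, seed, and variable names are illustrative choices, not taken from the source) recovers a $K$-sparse signal from $M<N$ Gaussian measurements:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 50, 25, 3  # illustrative sizes (hypothetical)

# K-sparse signal and Gaussian measurement matrix
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Basis pursuit: minimize ||x'||_1 subject to Phi x' = y,
# written as an LP over (u, v) with x' = u - v, u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]

# Recovery error; small when l1 minimization finds the sparse solution
print(np.max(np.abs(x_hat - x)))
```

With $M$ comfortably larger than the sparsity level, the LP returns the original sparse vector rather than some other feasible point on the hyperplane ${\mathcal{H}}_{y}$.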
We have also identified [link] a fundamental connection between CS and the JL lemma. In order to make this connection, we considered the Restricted Isometry Property (RIP), which has been identified as a key property of the CS projection operator $\Phi$ to ensure stable signal recovery. We say $\Phi$ has RIP of order $K$ if for every $K$ -sparse signal $x$ , $\left(1-\epsilon \right){\parallel x\parallel}_{2}^{2}\le {\parallel \Phi x\parallel}_{2}^{2}\le \left(1+\epsilon \right){\parallel x\parallel}_{2}^{2}$ .
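A rough empirical check of this near-isometry on sparse signals can be made by sampling random $K$-sparse vectors and recording the worst observed distortion (a sketch assuming NumPy; sizes and trial count are illustrative, and sampling only probes the union of subspaces, whereas the RIP is a uniform statement over all $K$-sparse $x$):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 200, 80, 5  # illustrative sizes (hypothetical)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # unit expected column norms

# Worst observed deviation of ||Phi x||^2 / ||x||^2 from 1
# over randomly drawn K-sparse signals
worst = 0.0
for _ in range(2000):
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    ratio = np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2
    worst = max(worst, abs(ratio - 1.0))

print(worst)  # stays well below 1 for these sizes
```

The observed distortion shrinks as $M$ grows, consistent with the concentration arguments underlying both the JL lemma and the RIP.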
While the JL lemma concerns pairwise distances within a finite cloud of points, the RIP concerns isometric embedding of an infinite number of points (comprising a union of $K$ -dimensional subspaces in ${\mathbb{R}}^{N}$ ). However, the RIP can in fact be derived by constructing an effective sampling of $K$ -sparse signals in ${\mathbb{R}}^{N}$ , using the JL lemma to ensure isometric embeddings for each of these points, and then arguing that the RIP must hold true for all $K$ -sparse signals. (See [link] for the full details.)
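The finite-cloud guarantee of the JL lemma itself is easy to observe numerically. The sketch below (assuming NumPy; dimensions, point count, and tolerance are illustrative) projects a cloud of points and verifies that every pairwise squared distance is preserved to within a factor $1\pm\epsilon$:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
N, M, P = 1000, 200, 30  # ambient dim, projected dim, point count (hypothetical)
eps = 0.5

points = rng.standard_normal((P, N))
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
proj = points @ Phi.T

# Check the JL-style guarantee on every pair of points:
# squared distances preserved to within a (1 +/- eps) factor
ok = True
for i, j in combinations(range(P), 2):
    d2 = np.sum((points[i] - points[j]) ** 2)
    p2 = np.sum((proj[i] - proj[j]) ** 2)
    ok &= (1 - eps) * d2 <= p2 <= (1 + eps) * d2
print(ok)
```

Because only $O(\mathrm{log}\,P/{\epsilon}^{2})$ projected dimensions are needed for $P$ points, chaining this finite guarantee over a fine sampling of the sparse signal set is what yields the RIP.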
Finally, we have also shown that the JL lemma can lead to extensions of CS to other concise signal models. In particular, while conventional CS theory concerns sparse signal models, it is also possible to consider manifold-based signal models. Just as random projections can preserve the low-dimensional geometry (the union of hyperplanes) that corresponds to a sparse signal family, random projections can also guarantee a stable embedding of a low-dimensional signal manifold. We have the following result, which states that an RIP-like property holds for families of manifold-modeled signals.
Theorem: Let $\mathcal{M}$ be a compact $K$ -dimensional Riemannian submanifold of ${\mathbb{R}}^{N}$ having condition number $\frac{1}{\tau}$ , volume $V$ , and geodesic covering regularity $R$ . Fix $0<\epsilon <1$ and $0<\rho <1$ . Let $\Phi$ be a random $M\times N$ orthoprojector with $M=O\left(\frac{K\,\mathrm{log}\left(NVR{\tau}^{-1}{\epsilon}^{-1}\right)\mathrm{log}\left(1/\rho \right)}{{\epsilon}^{2}}\right)$ . Then with probability at least $1-\rho$ , for every pair of points ${x}_{1},{x}_{2}\in \mathcal{M}$ , $\left(1-\epsilon \right)\sqrt{\frac{M}{N}}\le \frac{{\parallel \Phi {x}_{1}-\Phi {x}_{2}\parallel}_{2}}{{\parallel {x}_{1}-{x}_{2}\parallel}_{2}}\le \left(1+\epsilon \right)\sqrt{\frac{M}{N}}$ .
The proof of this theorem appears in [link] and again involves the JL lemma. Due to the limited complexity of a manifold model, it is possible to adequately characterize the geometry using a sufficiently fine sampling of points drawn from the manifold and its tangent spaces. In essence, manifolds with higher volume or with greater curvature have more complexity and require a denser covering for application of the JL lemma; this leads to an increased number of measurements. The theorem also indicates that the requisite number of measurements depends on the geodesic covering regularity of the manifold, a minor technical concept which is also discussed in [link] .
This theorem establishes that, like the class of $K$ -sparse signals, a collection of signals described by a $K$ -dimensional manifold $\mathcal{M}\subset {\mathbb{R}}^{N}$ can have a stable embedding in an $M$ -dimensional measurement space. Moreover, the requisite number of random measurements $M$ is once again linearly proportional to the information level (or number of degrees of freedom) $K$ in the concise model. This has a number of possible implications for manifold-based signal processing. Manifold-modeled signals can be recovered from compressive measurements (using a customized recovery algorithm adapted to the manifold model, in contrast with sparsity-based recovery algorithms) [link] , [link] ; unknown parameters in parametric models can be estimated from compressive measurements; multi-class estimation/classification problems can be addressed [link] by considering multiple manifold models; and manifold learning algorithms may be efficiently executed by applying them simply to the projection of a manifold-modeled data set to a low-dimensional measurement space [link] . (As an example, [link] (d) shows the result of applying the ISOMAP algorithm on a random projection of a data set from ${\mathbb{R}}^{4096}$ down to ${\mathbb{R}}^{15}$ ; the underlying parameterization of the manifold is extracted with little sacrifice in accuracy.) In all of this it is not necessary to adapt the sensing protocol to the model; the only change from sparsity-based CS would be the methods for processing or decoding the measurements. In the future, more sophisticated concise models will likely lead to further improvements in signal understanding from compressive measurements.
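The stable-embedding behavior of the theorem can be illustrated on a toy one-dimensional manifold. The sketch below (assuming NumPy; the shifted-pulse family, all sizes, and the distortion threshold are illustrative constructions, not from the source) projects points of a translation manifold and measures how much pairwise distances distort:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 512, 150  # ambient and measurement dimensions (hypothetical)

# A 1-D signal manifold in R^N: Gaussian pulses indexed by their shift
shifts = np.linspace(100, 400, 60)
t = np.arange(N)
manifold = np.exp(-((t[None, :] - shifts[:, None]) ** 2) / (2 * 10.0 ** 2))

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
proj = manifold @ Phi.T

# Worst relative distortion of pairwise distances under the projection
d_orig = np.linalg.norm(manifold[:, None, :] - manifold[None, :, :], axis=2)
d_proj = np.linalg.norm(proj[:, None, :] - proj[None, :, :], axis=2)
iu = np.triu_indices(len(shifts), k=1)
distortion = np.max(np.abs(d_proj[iu] / d_orig[iu] - 1.0))
print(distortion)  # small: the manifold's geometry survives projection
```

Even though $M\ll N$, the pairwise geometry of the pulse family is approximately preserved, which is what makes parameter estimation and manifold learning from compressive measurements feasible.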