Recent years have seen a proliferation of novel techniques for what can loosely be termed “dimensionality reduction.” Like the tasks of approximation and compression discussed above, these methods extract low-dimensional information about a signal or collection of signals in some high-dimensional ambient space. Unlike approximation and compression, however, the goal of these methods is not always to maintain a faithful representation of each signal. Instead, the purpose may be to preserve some critical relationships among elements of a data set or to discover information about a manifold on which the data lives.
In this section, we review two general methods for dimensionality reduction. [link] begins with a brief overview of techniques for manifold learning. [link] then discusses the Johnson-Lindenstrauss (JL) lemma, which concerns the near-isometric embedding of a cloud of points as it is projected to a lower-dimensional space. Though at first glance the JL lemma does not pertain to any of the low-dimensional signal models we have previously discussed, we later see in Connections with dimensionality reduction that the JL lemma plays a critical role in the core theory of CS, and we also employ the JL lemma in developing a theory for isometric embeddings of manifolds.
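The JL lemma can be illustrated numerically: projecting a cloud of points through a suitably scaled random matrix approximately preserves all pairwise Euclidean distances. The following sketch (the particular dimensions, random seed, and Gaussian projection are illustrative choices, not taken from the text) shows the distance ratios concentrating near 1.

```python
import numpy as np

# Illustrative sketch of the JL lemma: a random Gaussian projection from
# R^N down to R^M approximately preserves the pairwise distances among a
# cloud of P points. P, N, M here are arbitrary illustrative values.
rng = np.random.default_rng(0)
P, N, M = 50, 1000, 300

X = rng.standard_normal((P, N))                 # P points in R^N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # scaled random projection
Y = X @ Phi.T                                   # projected points in R^M

def pairwise_dists(Z):
    # Euclidean distance between every pair of rows of Z
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

D_high = pairwise_dists(X)
D_low = pairwise_dists(Y)

# Ratios of projected to original distances cluster tightly around 1
mask = ~np.eye(P, dtype=bool)
ratios = D_low[mask] / D_high[mask]
print(ratios.min(), ratios.max())
```

Note that $M$ grows only logarithmically in the number of points $P$, which is what makes random projections useful when $N$ is very large.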
Several techniques have been proposed for solving a problem known as manifold learning, in which certain properties of a manifold are inferred from a discrete collection of points sampled from that manifold. A typical manifold learning setup is as follows: an algorithm is presented with a set of $P$ points sampled from a $K$ -dimensional submanifold of ${\mathbb{R}}^{N}$ . The goal of the algorithm is to produce a mapping of these $P$ points into some lower-dimensional space ${\mathbb{R}}^{M}$ (ideally, $M=K$ ) while preserving some characteristic property of the manifold. Example algorithms include ISOMAP [link] , Hessian Eigenmaps (HLLE) [link] , and Maximum Variance Unfolding (MVU) [link] , which attempt to learn isometric embeddings of the manifold (thus preserving pairwise geodesic distances in ${\mathbb{R}}^{M}$ ); Locally Linear Embedding (LLE) [link] , which attempts to preserve local linear neighborhood structures among the embedded points; Local Tangent Space Alignment (LTSA) [link] , which attempts to preserve local coordinates in each tangent space; and a method for charting a manifold [link] that attempts to preserve local neighborhood structures.
The internal mechanics of these algorithms differ depending on the objective criterion to be preserved, but as an example, the ISOMAP algorithm operates by first estimating the geodesic distance between each pair of points on the manifold (approximating each geodesic distance as a sum of Euclidean distances between pairs of the available sample points). After the $P\times P$ matrix of pairwise geodesic distances is constructed, a technique known as multidimensional scaling (MDS) uses an eigendecomposition of the distance matrix to determine the proper $M$ -dimensional embedding space. An example of using ISOMAP to learn a $2$ -dimensional manifold is shown in [link] .
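The multidimensional scaling step can be sketched in isolation. Classical MDS double-centers the matrix of squared distances and takes the top $M$ eigenvectors, scaled by the square roots of their eigenvalues, as the embedded coordinates. In the sketch below (a simplified illustration, not the full ISOMAP pipeline), a Euclidean distance matrix stands in for ISOMAP's estimated geodesic distances, and the data and $M=2$ target dimension are illustrative choices.

```python
import numpy as np

# Illustrative sketch of the classical MDS step used by ISOMAP: given a
# P x P matrix D of pairwise distances, an eigendecomposition of the
# doubly centered squared-distance matrix yields an M-dimensional embedding.
rng = np.random.default_rng(1)
P, M = 40, 2

# Ground-truth planar configuration; its Euclidean distance matrix plays
# the role of ISOMAP's estimated geodesic distance matrix.
X_true = rng.standard_normal((P, M))
diff = X_true[:, None, :] - X_true[None, :, :]
D = np.sqrt((diff ** 2).sum(-1))

# Double-center the squared distances: B = -1/2 * J D^2 J
J = np.eye(P) - np.ones((P, P)) / P       # centering matrix
B = -0.5 * J @ (D ** 2) @ J

# Top-M eigenpairs give the embedded coordinates
evals, evecs = np.linalg.eigh(B)          # eigenvalues in ascending order
idx = np.argsort(evals)[::-1][:M]
X_embed = evecs[:, idx] * np.sqrt(evals[idx])

# The embedding reproduces the pairwise distances (up to rigid motion)
diff2 = X_embed[:, None, :] - X_embed[None, :, :]
D_embed = np.sqrt((diff2 ** 2).sum(-1))
print(np.abs(D - D_embed).max())          # near zero
```

Because the input distances here are exactly realizable in ${\mathbb{R}}^{2}$, the recovery is essentially exact; with estimated geodesic distances, the embedding is only approximate.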