This collection reviews fundamental concepts underlying the use of concise models for signal processing. Topics are presented from a geometric perspective and include low-dimensional linear, sparse, and manifold-based signal models, approximation, compression, dimensionality reduction, and Compressed Sensing.

Recent years have seen a proliferation of novel techniques for what can loosely be termed “dimensionality reduction.” Like the tasks of approximation and compression discussed above, these methods involve some aspect in which low-dimensional information is extracted about a signal or collection of signals in some high-dimensional ambient space. Unlike the tasks of approximation and compression, however, the goal of these methods is not always to maintain a faithful representation of each signal. Instead, the purpose may be to preserve some critical relationships among elements of a data set or to discover information about a manifold on which the data lives.

In this section, we review two general methods for dimensionality reduction. [link] begins with a brief overview of techniques for manifold learning. [link] then discusses the Johnson-Lindenstrauss (JL) lemma, which concerns the isometric embedding of a cloud of points as it is projected to a lower-dimensional space. Though at first glance the JL lemma does not pertain to any of the low-dimensional signal models we have previously discussed, we later see in Connections with dimensionality reduction that the JL lemma plays a critical role in the core theory of CS, and we also employ the JL lemma in developing a theory for isometric embeddings of manifolds.
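To make the JL lemma concrete, the following minimal sketch (our illustration, not part of the original module) projects a cloud of P points from R^N into R^M using a random Gaussian matrix and reports how much the pairwise distances are distorted; the values of P, N, and M are arbitrary choices for demonstration.

import numpy as np

rng = np.random.default_rng(0)
P, N, M = 100, 1000, 200           # cloud size, ambient dim, reduced dim

X = rng.standard_normal((P, N))    # point cloud in R^N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # scaled random projection
Y = X @ Phi.T                      # projected cloud in R^M

def pairwise_distances(Z):
    # All pairwise Euclidean distances among the rows of Z.
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

d_high = pairwise_distances(X)
d_low = pairwise_distances(Y)
mask = ~np.eye(P, dtype=bool)      # ignore the zero diagonal
ratios = d_low[mask] / d_high[mask]
print(f"distance ratios lie in [{ratios.min():.3f}, {ratios.max():.3f}]")

For a suitable M, growing only logarithmically in P, the JL lemma guarantees that with high probability all of these ratios lie within 1 ± ε of unity.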

Manifold learning

Several techniques have been proposed for solving a problem known as manifold learning, in which certain properties of a manifold are inferred from a discrete collection of points sampled from that manifold. A typical manifold learning setup is as follows: an algorithm is presented with a set of P points sampled from a K-dimensional submanifold of R^N. The goal of the algorithm is to produce a mapping of these P points into some lower-dimensional space R^M (ideally, M = K) while preserving some characteristic property of the manifold. Example algorithms include ISOMAP [link], Hessian Eigenmaps (HLLE) [link], and Maximum Variance Unfolding (MVU) [link], which attempt to learn isometric embeddings of the manifold (thus preserving pairwise geodesic distances in R^M); Locally Linear Embedding (LLE) [link], which attempts to preserve local linear neighborhood structures among the embedded points; Local Tangent Space Alignment (LTSA) [link], which attempts to preserve local coordinates in each tangent space; and a method for charting a manifold [link] that attempts to preserve local neighborhood structures.
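As a concrete illustration of this setup (ours, not drawn from the original text), the sketch below samples P points from the "swiss roll", a K = 2 submanifold of R^3, and computes M = 2 embeddings using scikit-learn's implementations of two of the algorithms named above; the neighborhood size of 10 is an arbitrary illustrative choice.

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding

# Sample P points from a K = 2 submanifold (the swiss roll) of R^3.
P, M = 1000, 2
X, t = make_swiss_roll(n_samples=P, noise=0.05, random_state=0)

# Map the P points into R^M with two of the algorithms named above.
Y_isomap = Isomap(n_neighbors=10, n_components=M).fit_transform(X)
Y_lle = LocallyLinearEmbedding(n_neighbors=10,
                               n_components=M).fit_transform(X)

print(Y_isomap.shape, Y_lle.shape)  # both (P, M) = (1000, 2)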

The internal mechanics of these algorithms differ depending on the objective criterion to be preserved, but as an example, the ISOMAP algorithm operates by first estimating the geodesic distance between each pair of points on the manifold (approximating each geodesic distance by the sum of Euclidean distances between successive sample points along a shortest path through a neighborhood graph of the available samples). After the P × P matrix of pairwise geodesic distances is constructed, a technique known as multidimensional scaling uses an eigendecomposition of the distance matrix to determine the proper M-dimensional embedding space. An example of using ISOMAP to learn a 2-dimensional manifold is shown in [link].
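These steps can be written out directly. The following bare-bones sketch (our own code, with isomap_sketch a hypothetical name) assumes the neighborhood graph is connected, so that all pairwise graph distances are finite.

import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap_sketch(X, n_neighbors=10, M=2):
    # 1. Estimate geodesic distances: Euclidean edges between nearest
    #    neighbors, then shortest paths through the resulting graph.
    G = kneighbors_graph(X, n_neighbors, mode="distance")
    D = shortest_path(G, method="D", directed=False)  # P x P geodesics

    # 2. Classical multidimensional scaling: double-center the squared
    #    distance matrix to recover a matrix of inner products.
    P = D.shape[0]
    J = np.eye(P) - np.ones((P, P)) / P     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J

    # 3. The top-M eigenvectors of B, scaled by the square roots of
    #    their eigenvalues, give the M-dimensional embedding.
    vals, vecs = np.linalg.eigh(B)          # eigenvalues ascending
    idx = np.argsort(vals)[::-1][:M]        # keep the largest M
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

The final step is exactly the eigendecomposition mentioned above: double-centering converts squared distances into inner products, whose leading eigenvectors supply the embedding coordinates.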





Source: OpenStax, Concise signal models. OpenStax CNX. Sep 14, 2009. Download for free at http://cnx.org/content/col10635/1.4