
Suppose our inner product space is $V = \mathbb{R}^M$ or $\mathbb{C}^M$ with the standard inner product (which induces the 2-norm).

Re-examining what we have just derived, we can write our approximation $\hat{x} = P x = V c$, where $V$ is an $M \times N$ matrix given by

$$V = \begin{bmatrix} v_1 & v_2 & \cdots & v_N \end{bmatrix}$$

and $c$ is an $N \times 1$ vector given by

$$c = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_N \end{bmatrix}.$$

Given $x \in \mathbb{R}^M$ (or $\mathbb{C}^M$), our search for the closest approximation can be written as

$$\min_{c} \; \| x - V c \|_2$$

or as

$$\min_{c,\,e} \; \| e \|_2^2 \quad \text{subject to} \quad x = V c + e.$$

Using $V$, we can write the Gram matrix as $G = V^H V$ and the vector of inner products as $b = V^H x$. Thus, our solution can be written as

$$c = (V^H V)^{-1} V^H x,$$

which yields the formula

$$\hat{x} = V (V^H V)^{-1} V^H x.$$
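As a minimal NumPy sketch of this computation (the matrix sizes and random data below are illustrative assumptions, not part of the original text), we can form the Gram matrix $V^H V$ and solve the normal equations for $c$, then compare against NumPy's built-in least-squares solver:

```python
import numpy as np

# Illustrative sizes: N = 2 linearly independent vectors in R^5.
rng = np.random.default_rng(0)
M, N = 5, 2
V = rng.standard_normal((M, N))   # expansion vectors as columns
x = rng.standard_normal(M)        # vector to approximate

G = V.conj().T @ V                # Gram matrix G = V^H V
b = V.conj().T @ x                # b = V^H x
c = np.linalg.solve(G, b)         # c = (V^H V)^{-1} V^H x
x_hat = V @ c                     # closest approximation in span(V)

# np.linalg.lstsq solves the same minimization min_c ||x - Vc||_2,
# so its coefficients should agree with the normal-equations solution.
c_ref, *_ = np.linalg.lstsq(V, x, rcond=None)
```

Solving the normal equations directly is fine for small, well-conditioned problems; in practice `np.linalg.lstsq` (which uses an SVD-based method) is preferred for numerical robustness.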

The matrix $V^\dagger = (V^H V)^{-1} V^H$ is known as the “pseudo-inverse.” Why the name “pseudo-inverse”? Observe that

$$V^\dagger V = (V^H V)^{-1} V^H V = I.$$

Note that $\hat{x} = V V^\dagger x$. We can verify that $V V^\dagger$ is a projection matrix since

$$(V V^\dagger)(V V^\dagger) = V (V^H V)^{-1} V^H V (V^H V)^{-1} V^H = V (V^H V)^{-1} V^H = V V^\dagger.$$
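These two identities are easy to confirm numerically. The following sketch (sizes and random data are illustrative assumptions) builds $V^\dagger$ explicitly and checks both that it is a left inverse of $V$ and that $V V^\dagger$ is idempotent:

```python
import numpy as np

# Illustrative sizes: N = 3 linearly independent vectors in R^6.
rng = np.random.default_rng(1)
M, N = 6, 3
V = rng.standard_normal((M, N))

V_pinv = np.linalg.inv(V.conj().T @ V) @ V.conj().T  # (V^H V)^{-1} V^H
P = V @ V_pinv                                       # projection onto span(V)

# V_pinv is a left inverse: V_pinv @ V = I (N x N identity),
# and P is idempotent: P @ P = P, the defining property of a projection.
```

Note that $V V^\dagger$ is an $M \times M$ matrix but has rank $N$: it maps every vector into the $N$-dimensional span of the columns of $V$.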

Thus, given a set of $N$ linearly independent vectors in $\mathbb{R}^M$ or $\mathbb{C}^M$ ($N < M$), we can use the pseudo-inverse to project any vector onto the subspace defined by those vectors. This can be useful any time we have a problem of the form:

$$x = V c + e$$

where $x$ denotes a set of known “observations,” $V$ is a set of known “expansion vectors,” $c$ are the unknown coefficients, and $e$ represents an unknown “noise” vector. In this case, the least-squares estimate is given by

$$c = V^\dagger x, \qquad \hat{x} = V V^\dagger x.$$
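The estimation setup above can be sketched end-to-end in NumPy using `np.linalg.pinv`. The true coefficients, noise level, and problem sizes below are illustrative assumptions chosen so the least-squares estimate clearly recovers the coefficients:

```python
import numpy as np

# Synthetic problem: observations x = V c + e with small noise.
rng = np.random.default_rng(2)
M, N = 100, 3
V = rng.standard_normal((M, N))          # known expansion vectors
c_true = np.array([1.0, -2.0, 0.5])      # unknown (to the estimator)
e = 0.01 * rng.standard_normal(M)        # unknown noise
x = V @ c_true + e                       # known observations

c_hat = np.linalg.pinv(V) @ x            # c = V^dagger x
x_hat = V @ c_hat                        # projection of x onto span(V)

# The residual x - x_hat is orthogonal to every column of V,
# which is the defining property of the least-squares solution.
residual = x - x_hat
```

With many observations and low noise, `c_hat` lands close to `c_true`; the part of the noise lying outside the span of $V$ is rejected by the projection.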





Source:  OpenStax, Digital signal processing. OpenStax CNX. Dec 16, 2011 Download for free at http://cnx.org/content/col11172/1.4
