Definition of orthogonal vectors, sets, and subspaces; properties and benefits.

Recall that a set $S = \{s_1, \ldots, s_n\}$ is a basis for a subspace $X$ if $[S] = X$ and the elements of $S$ are linearly independent. If $S$ is a basis for $X$, then each $x \in X$ is uniquely determined by coefficients $(a_1, a_2, \ldots, a_n)$ such that $\sum_{i=1}^{n} a_i s_i = x$. In this sense, we could operate either with $x$ itself or with the representation vector $a = [a_1, \ldots, a_n] \in \mathbb{R}^n$. One might then wonder whether particular operations can be performed with the representation $a$ instead of the original vector $x$.

Example 1 Assume $x, y \in X$ have representations $a, b \in \mathbb{R}^n$ in a basis for $X$. Can we say that $\langle x, y \rangle = \langle a, b \rangle$?

For the particular example of $X = L_2[0,1]$, take $S = \{1, t, t^2\}$, so that $[S] = Q$, the set of all quadratic functions supported on $[0,1]$. Pick $x = 2 + t + t^2$ and $y = 1 + 2t + 3t^2$. If we label $s_1 = 1$, $s_2 = t$, $s_3 = t^2$, then the coefficient vectors for $x$ and $y$ are $a = [2\ 1\ 1]$ and $b = [1\ 2\ 3]$, respectively. Let us compute both inner products:

$$\langle x, y \rangle = \int_0^1 x(t)\, y(t)\, dt = \int_0^1 (2 + t + t^2)(1 + 2t + 3t^2)\, dt = \frac{187}{20} \approx 9.35, \qquad \langle a, b \rangle = 2 + 2 + 3 = 7.$$

Since $7 \neq 9.35$, we find that we fail to obtain the desired equivalence between vectors and their representations.
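As a sanity check, both inner products can be evaluated numerically. The following sketch (using numpy and scipy, which are an assumption of this example rather than part of the notes) integrates $x(t)y(t)$ over $[0,1]$ and compares the result with the Euclidean inner product of the coefficient vectors:

```python
import numpy as np
from scipy.integrate import quad

# The two quadratic functions from the example
x = lambda t: 2 + t + t**2
y = lambda t: 1 + 2*t + 3*t**2

# Inner product in L2[0, 1]: integrate the pointwise product
ip_xy, _ = quad(lambda t: x(t) * y(t), 0.0, 1.0)

# Euclidean inner product of the coefficient vectors
a = np.array([2, 1, 1])
b = np.array([1, 2, 3])
ip_ab = a @ b

print(ip_xy)  # 9.35 (= 187/20)
print(ip_ab)  # 7
```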

While this example was unsuccessful, simple conditions on the basis $S$ will yield this desired equivalence, plus many more useful properties.

Several definitions of orthogonality will be useful to us during the course.

Definition 1 A pair of vectors $x$ and $y$ in an inner product space are orthogonal (denoted $x \perp y$) if the inner product $\langle x, y \rangle = 0$.

Note that the zero vector $0$ is trivially orthogonal to all vectors.

Definition 2 Let $X$ be an inner product space. A set of vectors $S \subseteq X$ is orthogonal if $\langle x, y \rangle = 0$ for all $x, y \in S$, $x \neq y$.

Definition 3 Let $X$ be an inner product space. A set of vectors $S \subseteq X$ is orthonormal if $S$ is an orthogonal set and $\|s\| = \sqrt{\langle s, s \rangle} = 1$ for all $s \in S$.

Definition 4 A vector $x$ in an inner product space $X$ is orthogonal to a set $S \subseteq X$ (denoted $x \perp S$) if $x \perp y$ for all $y \in S$.

Definition 5 Let $X$ be an inner product space. Two sets $S_1 \subseteq X$ and $S_2 \subseteq X$ are orthogonal (denoted $S_1 \perp S_2$) if $x \perp y$ for all $x \in S_1$ and $y \in S_2$.

Definition 6 The orthogonal complement $S^\perp$ of a set $S$ is the set of all vectors that are orthogonal to $S$.
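In $\mathbb{R}^n$, the orthogonal complement of the span of a finite set can be computed as the null space of the matrix whose rows are the vectors of $S$. A minimal sketch, again assuming numpy and scipy:

```python
import numpy as np
from scipy.linalg import null_space

# S = {s1, s2} in R^3, stacked as the rows of a matrix
S = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# v lies in the orthogonal complement iff <s, v> = 0 for every s in S,
# i.e., iff S @ v = 0, so the complement is the null space of S
perp = null_space(S)
print(perp)  # one column spanning the complement; here the z-axis
```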

Benefits of orthogonality

Why is orthonormality good? For many reasons. One of them is the equivalence of inner products that we desired in the previous example. Another is that an orthonormal basis allows us to easily find the coefficients $a_1, \ldots, a_n$ of $x$ in the basis $S$.

Example 2 Let $x \in X$ and let $S$ be a basis for $X$ (i.e., $[S] = X$). We wish to find $a_1, \ldots, a_n$ such that $x = \sum_{i=1}^{n} a_i s_i$. Consider the inner products

$$\langle x, s_i \rangle = \left\langle \sum_{j=1}^{n} a_j s_j, s_i \right\rangle = \sum_{j=1}^{n} a_j \langle s_j, s_i \rangle,$$

due to the linearity of the inner product in its first argument. If $S$ is orthonormal, then $\langle s_j, s_i \rangle = 0$ for $i \neq j$. In that case the sum above becomes

$$\langle x, s_i \rangle = a_i \langle s_i, s_i \rangle = a_i \|s_i\|^2 = a_i,$$

due to the orthonormality of $S$. In other words, for an orthonormal basis $S$ one can find the basis coefficients as $a_i = \langle x, s_i \rangle$.
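As an illustration, the sketch below builds an orthonormal basis of $\mathbb{R}^n$ from the QR factorization of a random matrix (this particular construction is an assumption made for the example, not part of the notes) and recovers the coefficients of a vector using only inner products:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# The Q factor of a (generically full-rank) random matrix has orthonormal columns
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
basis = [Q[:, i] for i in range(n)]

x = rng.standard_normal(n)

# a_i = <x, s_i> gives the coefficients directly
a = np.array([x @ s for s in basis])

# Reconstruct x as sum_i a_i s_i and compare
x_rec = sum(a_i * s for a_i, s in zip(a, basis))
print(np.allclose(x, x_rec))  # True
```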

If $S$ is not orthonormal, then we can rewrite the sum above as the product of a row vector and a column vector as follows:

$$\langle x, s_i \rangle = \begin{bmatrix} \langle s_1, s_i \rangle & \langle s_2, s_i \rangle & \cdots & \langle s_n, s_i \rangle \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}.$$

We can then stack these equations for $i = 1, \ldots, n$ to obtain the following matrix-vector multiplication:

$$\underbrace{\begin{bmatrix} \langle x, s_1 \rangle \\ \langle x, s_2 \rangle \\ \vdots \\ \langle x, s_n \rangle \end{bmatrix}}_{\beta} = \underbrace{\begin{bmatrix} \langle s_1, s_1 \rangle & \langle s_2, s_1 \rangle & \cdots & \langle s_n, s_1 \rangle \\ \langle s_1, s_2 \rangle & \langle s_2, s_2 \rangle & \cdots & \langle s_n, s_2 \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle s_1, s_n \rangle & \langle s_2, s_n \rangle & \cdots & \langle s_n, s_n \rangle \end{bmatrix}}_{G} \underbrace{\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}}_{a}.$$

The nomenclature given above provides us with the matrix equation $\beta = G a$, where $\beta$ and $G$ have entries $\beta_i = \langle x, s_i \rangle$ and $G_{ij} = \langle s_j, s_i \rangle$, respectively.

Definition 7 The matrix $G$ above is called the Gram matrix (or Gramian) of the set $S$.

In the particular case of orthonormal $S$, it is easy to see that $G = I$, the identity matrix, and so $a = \beta$ as given earlier. For invertible Gramians $G$, one can compute the coefficients in vector form as $a = G^{-1} \beta$. For square matrices (like $G$), invertibility is linked to singularity.
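Returning to the quadratic example, the basis $S = \{1, t, t^2\}$ is not orthonormal, so the coefficients must be recovered by solving $\beta = G a$. A numerical sketch under the same numpy/scipy assumption:

```python
import numpy as np
from scipy.integrate import quad

# Inner product on L2[0, 1]
def ip(f, g):
    return quad(lambda t: f(t) * g(t), 0.0, 1.0)[0]

# Non-orthonormal basis S = {1, t, t^2} and the vector x = 2 + t + t^2
S = [lambda t: 1.0, lambda t: t, lambda t: t**2]
x = lambda t: 2 + t + t**2

n = len(S)
G = np.array([[ip(S[j], S[i]) for j in range(n)] for i in range(n)])  # G_ij = <s_j, s_i>
beta = np.array([ip(x, S[i]) for i in range(n)])                      # beta_i = <x, s_i>

a = np.linalg.solve(G, beta)  # a = G^{-1} beta
print(np.round(a, 6))  # [2. 1. 1.], the coefficients of x in S
```

For monomials on $[0,1]$ the Gram matrix is the notoriously ill-conditioned Hilbert matrix, which is one practical reason to prefer orthonormal bases when available.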

Definition 8 A singular matrix is a non-invertible square matrix. A non-singular matrix is an invertible square matrix.

Theorem 1 A matrix $G$ is singular if $G x = 0$ for some $x \neq 0$. A matrix $G$ is non-singular if $G x = 0$ only for $x = 0$.

The link between this notion of singularity and invertibility is straightforward: if $G$ is singular, then there is some $a \neq 0$ for which $G a = 0$. Consider the mapping $y = G x$; we would also have $y = G(x + a)$. Since $x \neq x + a$, one cannot "invert" the mapping provided by $G$ to recover its input from $y$.
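A concrete instance of this failure, using a small singular matrix chosen for illustration:

```python
import numpy as np

# A singular matrix: the second row is twice the first
G = np.array([[1.0, 2.0],
              [2.0, 4.0]])

a = np.array([2.0, -1.0])   # a nonzero vector with G @ a = 0
x = np.array([1.0, 1.0])

print(G @ a)        # [0. 0.]
print(G @ x)        # [3. 6.]
print(G @ (x + a))  # [3. 6.] -- two different inputs, same output: not invertible
```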

Theorem 2 $S$ is linearly independent if and only if $G$ is non-singular (i.e., $G x = 0$ if and only if $x = 0$).

Proof: We will prove an equivalent statement: $S$ is linearly dependent if and only if $G$ is singular, i.e., if and only if there exists a vector $x \neq 0$ such that $G x = 0$.

($\Rightarrow$) We first prove that if $S$ is linearly dependent then $G$ is singular. In this case there exists a set of scalars $\{a_i\} \subset \mathbb{R}$, with at least one nonzero, such that $\sum_{i=1}^{n} a_i s_i = 0$. We can then write $\left\langle \sum_{i=1}^{n} a_i s_i, s_j \right\rangle = \langle 0, s_j \rangle = 0$ for each $s_j$. Linearity allows us to take the sum and the scalars outside the inner product:

$$\sum_{i=1}^{n} a_i \langle s_i, s_j \rangle = 0.$$

We can rewrite this equation in terms of the entries of the Gram matrix as $\sum_{i=1}^{n} a_i G_{ji} = 0$. This sum, in turn, can be written as the vector inner product

$$\begin{bmatrix} G_{j1} & G_{j2} & \cdots & G_{jn} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = 0,$$

which is true for every value of $j$. We can therefore collect these equations into a matrix-vector product:

$$\begin{bmatrix} G_{11} & \cdots & G_{1n} \\ \vdots & \ddots & \vdots \\ G_{n1} & \cdots & G_{nn} \end{bmatrix} \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}.$$

Therefore we have found a nonzero vector $a$ for which $G a = 0$, and therefore $G$ is singular. Since all of the steps above are equivalences, we can traverse them in reverse to prove the opposite direction ($\Leftarrow$).

Pythagorean theorem

There are still more nice properties of orthogonal sets of vectors. The next one has well-known geometric applications.

Theorem 3 (Pythagorean theorem) If $x$ and $y$ are orthogonal ($x \perp y$), then $\|x\|^2 + \|y\|^2 = \|x + y\|^2$.

Proof:

$$\|x + y\|^2 = \langle x + y, x + y \rangle = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle.$$

Because $x$ and $y$ are orthogonal, $\langle x, y \rangle = \langle y, x \rangle = 0$, and we are left with $\langle x, x \rangle = \|x\|^2$ and $\langle y, y \rangle = \|y\|^2$. Thus, $\|x + y\|^2 = \|x\|^2 + \|y\|^2$.
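A quick numerical check of the theorem with an arbitrarily chosen pair of orthogonal vectors in $\mathbb{R}^3$:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0])
y = np.array([-2.0, 1.0, 3.0])

print(x @ y)  # 0.0, so x and y are orthogonal

lhs = np.linalg.norm(x)**2 + np.linalg.norm(y)**2
rhs = np.linalg.norm(x + y)**2
print(np.isclose(lhs, rhs))  # True: ||x||^2 + ||y||^2 = ||x + y||^2
```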

Source: OpenStax, Introduction to compressive sensing. OpenStax CNX. Mar 12, 2015. Download for free at http://legacy.cnx.org/content/col11355/1.4