When working with signals, it is often helpful to break a signal up into smaller, more manageable parts. Hopefully by now you have been exposed to the concept of eigenvectors and their use in decomposing a signal into one of its possible bases. By doing this we are able to simplify our calculations of signals and systems through eigenfunctions of LTI systems .
Now we would like to look at an alternative way to represent signals, through the use of an orthonormal basis . We can think of an orthonormal basis as a set of building blocks we use to construct functions. We will build up the signal/vector as a weighted sum of basis elements.
The complex sinusoids $\frac{1}{\sqrt{T}}e^{i{\omega}_{0}nt}$ for all $n$ , $-\infty < n < \infty$ , form an orthonormal basis for $L^{2}(\left[0, T\right])$ .
In our Fourier series equation, $$f(t)=\sum_{n=-\infty}^{\infty} {c}_{n}e^{i{\omega}_{0}nt}$$ the $\{{c}_{n}\}$ are just such a set of expansion weights: they are another representation of $f(t)$ .
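As a quick numerical sanity check (a sketch added here, not part of the original text), we can approximate the inner products of the normalized complex sinusoids on $[0,T]$ and confirm they have unit norm and are mutually orthogonal. The period `T` and the sample count are illustrative assumptions.

```python
import numpy as np

# Hypothetical period for illustration; w0 is the fundamental frequency
T = 2.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 100_000, endpoint=False)
dt = t[1] - t[0]

def phi(n):
    # n-th normalized complex sinusoid (1/sqrt(T)) * e^{i w0 n t}
    return np.exp(1j * w0 * n * t) / np.sqrt(T)

def inner(m, n):
    # Riemann-sum approximation of <phi_m, phi_n> over [0, T]
    return np.sum(phi(m) * np.conj(phi(n))) * dt

print(abs(inner(3, 3)))  # unit norm: approximately 1
print(abs(inner(3, 5)))  # orthogonality: approximately 0
```

The inner product here uses the conjugate of the second argument, as required for complex-valued signals.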
Recall our definition of a basis : A set of vectors $\{{b}_{i}\}$ in a vector space $S$ is a basis if (1) the ${b}_{i}$ are linearly independent, and (2) the ${b}_{i}$ span $S$ , i.e. every $x\in S$ can be written as a linear combination of the ${b}_{i}$ .
Condition 2 in the above definition says we can decompose any vector in terms of the $\{{b}_{i}\}$ . Condition 1 ensures that the decomposition is unique (think about this at home).
Let us look at a simple example in ${\mathbb{R}}^{2}$ , where we have the following vector: $$x=\left(\begin{array}{c}1\\ 2\end{array}\right)$$ Standard Basis: $\{{e}_{0}, {e}_{1}\}=\{\left(\begin{array}{c}1\\ 0\end{array}\right), \left(\begin{array}{c}0\\ 1\end{array}\right)\}$ $$x={e}_{0}+2{e}_{1}$$ Alternate Basis: $\{{h}_{0}, {h}_{1}\}=\{\left(\begin{array}{c}1\\ 1\end{array}\right), \left(\begin{array}{c}1\\ -1\end{array}\right)\}$ $$x=\frac{3}{2}{h}_{0}-\frac{1}{2}{h}_{1}$$
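The two expansions above are easy to check numerically (a small NumPy sketch, not part of the original text):

```python
import numpy as np

x = np.array([1.0, 2.0])

# Standard basis vectors
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# Alternate basis vectors
h0, h1 = np.array([1.0, 1.0]), np.array([1.0, -1.0])

# x = 1*e0 + 2*e1
print(1 * e0 + 2 * e1)      # [1. 2.]
# x = (3/2)*h0 - (1/2)*h1
print(1.5 * h0 - 0.5 * h1)  # [1. 2.]
```

Both weighted sums reproduce the same vector $x$, illustrating that a vector has one set of coefficients per basis.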
In general, given a basis $\{{b}_{0}, {b}_{1}\}$ and a vector $x\in {\mathbb{R}}^{2}$ , how do we find the ${\alpha}_{0}$ and ${\alpha}_{1}$ such that $$x={\alpha}_{0}{b}_{0}+{\alpha}_{1}{b}_{1}$$
Now let us address the question posed above about finding ${\alpha}_{i}$ 's in general for ${\mathbb{R}}^{2}$ . We start by rewriting $x={\alpha}_{0}{b}_{0}+{\alpha}_{1}{b}_{1}$ so that we can stack our ${b}_{i}$ 's as columns in a 2×2 matrix: $$x=\begin{pmatrix}{b}_{0} & {b}_{1}\end{pmatrix}\begin{pmatrix}{\alpha}_{0}\\ {\alpha}_{1}\end{pmatrix}$$
Here is a little more detail about the above equation, written out component by component: $$\begin{pmatrix}x\left[0\right]\\ x\left[1\right]\end{pmatrix}={\alpha}_{0}\begin{pmatrix}{b}_{0}\left[0\right]\\ {b}_{0}\left[1\right]\end{pmatrix}+{\alpha}_{1}\begin{pmatrix}{b}_{1}\left[0\right]\\ {b}_{1}\left[1\right]\end{pmatrix}=\begin{pmatrix}{b}_{0}\left[0\right] & {b}_{1}\left[0\right]\\ {b}_{0}\left[1\right] & {b}_{1}\left[1\right]\end{pmatrix}\begin{pmatrix}{\alpha}_{0}\\ {\alpha}_{1}\end{pmatrix}$$
To make notation simpler, we define the following two items from the above equations: the basis matrix $$B=\begin{pmatrix}{b}_{0} & {b}_{1}\end{pmatrix}$$ and the coefficient vector $$\alpha =\begin{pmatrix}{\alpha}_{0}\\ {\alpha}_{1}\end{pmatrix}$$ With this notation we can write the expansion compactly as $$x=B\alpha $$
Given the standard basis, $\{\begin{pmatrix}1\\ 0\\ \end{pmatrix}, \begin{pmatrix}0\\ 1\\ \end{pmatrix}\}$ , stacking the vectors as columns gives the following basis matrix: $$B=\begin{pmatrix}1 & 0\\ 0 & 1\\ \end{pmatrix}$$
To get the ${\alpha}_{i}$ 's, we solve for the coefficient vector in $x=B\alpha $ : $$\alpha =B^{(-1)}x$$ where $B^{(-1)}$ is the matrix inverse of $B$ .
Let us look at the standard basis first and try to calculate $\alpha $ from it. $$B=\begin{pmatrix}1 & 0\\ 0 & 1\\ \end{pmatrix}=I$$ where $I$ is the identity matrix . In order to solve for $\alpha $ , let us find the inverse of $B$ first (which is trivial in this case): $$B^{(-1)}=\begin{pmatrix}1 & 0\\ 0 & 1\\ \end{pmatrix}$$ Therefore we get $$\alpha =B^{(-1)}x=x$$
Let us look at an ever-so-slightly more complicated basis, $\{\begin{pmatrix}1\\ 1\\ \end{pmatrix}, \begin{pmatrix}1\\ -1\\ \end{pmatrix}\}=\{{h}_{0}, {h}_{1}\}$ . Then our basis matrix and inverse basis matrix become: $$B=\begin{pmatrix}1 & 1\\ 1 & -1\\ \end{pmatrix}$$ $$B^{(-1)}=\begin{pmatrix}\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{-1}{2}\\ \end{pmatrix}$$ and for this example it is given that $$x=\begin{pmatrix}3\\ 2\\ \end{pmatrix}$$ Now we solve for $\alpha $ $$\alpha =B^{(-1)}x=\begin{pmatrix}\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{-1}{2}\\ \end{pmatrix}\begin{pmatrix}3\\ 2\\ \end{pmatrix}=\begin{pmatrix}2.5\\ 0.5\\ \end{pmatrix}$$ and we get $$x=2.5{h}_{0}+0.5{h}_{1}$$
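The same computation can be done in a few lines of NumPy (a sketch, not part of the original text; `np.linalg.inv` performs the matrix inversion we did by hand):

```python
import numpy as np

# Basis vectors h0 = (1, 1) and h1 = (1, -1) stacked as columns of B
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])
x = np.array([3.0, 2.0])

# Coefficient vector: alpha = B^{-1} x
alpha = np.linalg.inv(B) @ x
print(alpha)      # [2.5 0.5]

# Verify the reconstruction x = alpha_0*h0 + alpha_1*h1
print(B @ alpha)  # [3. 2.]
```

In practice one would call `np.linalg.solve(B, x)` instead of forming the inverse explicitly, but the inverse mirrors the hand calculation above.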
Now we are given the following basis and $x$ : $$\{{b}_{0}, {b}_{1}\}=\{\begin{pmatrix}1\\ 2\\ \end{pmatrix}, \begin{pmatrix}3\\ 0\\ \end{pmatrix}\}$$ $$x=\begin{pmatrix}3\\ 2\\ \end{pmatrix}$$ For this problem, make a sketch of the basis vectors and then represent $x$ in terms of ${b}_{0}$ and ${b}_{1}$ .
In order to represent $x$ in terms of ${b}_{0}$ and ${b}_{1}$ , we follow the same steps we used in the above example. Stacking ${b}_{0}$ and ${b}_{1}$ as columns gives $$B=\begin{pmatrix}1 & 3\\ 2 & 0\\ \end{pmatrix}$$ $$B^{(-1)}=\begin{pmatrix}0 & \frac{1}{2}\\ \frac{1}{3} & \frac{-1}{6}\\ \end{pmatrix}$$ $$\alpha =B^{(-1)}x=\begin{pmatrix}1\\ \frac{2}{3}\\ \end{pmatrix}$$ And now we can write $x$ in terms of ${b}_{0}$ and ${b}_{1}$ : $$x={b}_{0}+\frac{2}{3}{b}_{1}$$ We can easily substitute in our known values of ${b}_{0}$ and ${b}_{1}$ to verify this result.
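This exercise can also be verified numerically (a sketch, not part of the original text; `np.linalg.solve` solves $B\alpha = x$ without forming the inverse):

```python
import numpy as np

b0 = np.array([1.0, 2.0])
b1 = np.array([3.0, 0.0])
x = np.array([3.0, 2.0])

# Stack the basis vectors as columns and solve B @ alpha = x
B = np.column_stack((b0, b1))
alpha = np.linalg.solve(B, x)
print(alpha)                          # [1.  0.667] approximately

# Check the expansion: x = b0 + (2/3) b1
print(alpha[0] * b0 + alpha[1] * b1)  # [3. 2.]
```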
We can also extend these ideas beyond ${\mathbb{R}}^{2}$ and look at them in ${\mathbb{R}}^{n}$ and ${\mathbb{C}}^{n}$ . The procedure extends naturally to higher (>2) dimensions: given a basis $\{{b}_{0}, {b}_{1}, \dots , {b}_{n-1}\}$ for ${\mathbb{R}}^{n}$ , we want to find $\{{\alpha}_{0}, {\alpha}_{1}, \dots , {\alpha}_{n-1}\}$ such that $$x=\sum_{i=0}^{n-1} {\alpha}_{i}{b}_{i}$$ As before, we stack the ${b}_{i}$ 's as columns of an n×n basis matrix $B$ and solve $x=B\alpha $ for $\alpha =B^{(-1)}x$ .
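In higher dimensions the computation is identical in form. Here is a sketch (not part of the original text) using a hypothetical, randomly generated basis for ${\mathbb{R}}^{3}$ — a random matrix is almost surely invertible, so its columns form a basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical basis for R^3: the columns of a random 3x3 matrix
B = rng.standard_normal((3, 3))
x = np.array([1.0, 2.0, 3.0])

# Solve B @ alpha = x for the expansion coefficients
alpha = np.linalg.solve(B, x)

# Reconstruct x as a weighted sum of the basis columns
x_hat = sum(alpha[i] * B[:, i] for i in range(3))
print(np.allclose(x_hat, x))  # True
```

The reconstruction loop makes the "weighted sum of basis elements" explicit; `B @ alpha` computes the same thing in one step.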