It is important to analyze the LMS algorithm to determine under what conditions it is stable, whether or not it converges to the Wiener solution, how quickly it converges, and how much degradation is suffered due to the noisy gradient. In particular, we need to know how to choose the parameter $\mu $.
Does ${W}^{k}$ approach the Wiener solution as $k\to \infty $? (Since ${W}^{k}$ is always somewhat random in the approximate gradient-based LMS algorithm, we ask whether the expected value of the filter coefficients converges to the Wiener solution.)
${X}^{k}$ and ${X}^{k-i}$ , ${X}^{k}$ and ${d}^{k-i}$ , and ${d}^{k}$ and ${d}^{k-i}$ are statistically independent for $i\neq 0$ . This assumption is obviously false, since ${X}^{k-1}$ is the same as ${X}^{k}$ except for shifting down the vector elements one place and adding one new sample. We make this assumption because otherwise it becomes extremely difficult to analyze the LMS algorithm. (The first good analysis not making this assumption is due to Macchi and Eweda.) Many simulations and much practical experience have shown that the results one obtains with analyses based on the patently false assumption above are quite accurate in most situations.
With the independence assumption, ${W}^{k}$ (which depends only on previous ${X}^{k-i}$ , ${d}^{k-i}$ ) is statistically independent of ${X}^{k}$ , and we can simplify $\langle {W}^{kT}{X}^{k}{X}^{k}\rangle $ .
Now ${W}^{kT}{X}^{k}{X}^{k}$ is a vector, and $$\langle {W}^{kT}{X}^{k}{X}^{k}\rangle =\langle {X}^{k}{X}^{kT}{W}^{k}\rangle =\langle {X}^{k}{X}^{kT}\rangle \langle {W}^{k}\rangle =R\langle {W}^{k}\rangle $$
Putting this back into our equation, $$\langle {W}^{k+1}\rangle =\langle {W}^{k}\rangle +2\mu (P-R\langle {W}^{k}\rangle )=(I-2\mu R)\langle {W}^{k}\rangle +2\mu P$$
If $\langle {W}^{k}\rangle $ converges, then as $k\to \infty $ , $\langle {W}^{k+1}\rangle \approx \langle {W}^{k}\rangle $ , and $$\langle {W}^{\infty }\rangle =(I-2\mu R)\langle {W}^{\infty }\rangle +2\mu P$$ $$2\mu R\langle {W}^{\infty }\rangle =2\mu P$$ $$R\langle {W}^{\infty }\rangle =P$$ or $$\langle {W}^{\infty }\rangle ={R}^{-1}P={W}_{\mathrm{opt}}$$ the Wiener solution!
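As a quick numeric sanity check, with assumed example values for $R$ , $P$ , and $\mu $ (none of which come from the text), we can iterate the mean-weight recursion and confirm that its fixed point is ${R}^{-1}P$ :

```python
import numpy as np

# Numeric sanity check with assumed example values for R, P, and mu:
# iterate the mean-weight recursion <W^{k+1}> = (I - 2 mu R)<W^k> + 2 mu P
# and confirm that its fixed point is the Wiener solution R^{-1} P.
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])    # hypothetical correlation matrix
P = np.array([1.0, 0.3])      # hypothetical cross-correlation vector
mu = 0.1                      # step size (inside the stability region)

W = np.zeros(2)
for _ in range(500):
    W = (np.eye(2) - 2 * mu * R) @ W + 2 * mu * P

print(W)                      # agrees with np.linalg.solve(R, P)
```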
So the LMS algorithm, if it converges, gives filter coefficients which on average are the Wiener coefficients! This is, of course, a desirable result.
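A short simulation illustrates this behavior. The toy system-identification setup below (the filter length, true coefficients, step size, and noise level are all assumed for illustration) runs the LMS update ${W}^{k+1}={W}^{k}+2\mu {\epsilon }_{k}{X}^{k}$ on white input, for which the Wiener solution equals the true system coefficients:

```python
import numpy as np

# Illustrative simulation (toy system-identification setup, all values
# assumed): run the LMS update W^{k+1} = W^k + 2 mu eps_k X^k and check
# that the coefficients converge, on average, to the Wiener solution.
rng = np.random.default_rng(0)
M = 4
w_true = np.array([1.0, -0.5, 0.25, 0.1])  # hypothetical unknown system
mu = 0.01                                  # well inside the stability bound
N = 20000

x = rng.standard_normal(N)                 # white input, so R = I and
d = np.convolve(x, w_true)[:N]             #  the Wiener solution is w_true
d += 0.01 * rng.standard_normal(N)         # small measurement noise

w = np.zeros(M)
for k in range(M, N):
    X = x[k:k - M:-1]                      # [x_k, x_{k-1}, ..., x_{k-M+1}]
    eps = d[k] - w @ X                     # a-priori error
    w = w + 2 * mu * eps * X               # LMS update

print(w)                                   # close to w_true
```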
But does $\langle {W}^{k}\rangle $ converge, and if so, under what conditions?
Let's rewrite the analysis in terms of $\langle {V}^{k}\rangle $ , the "mean coefficient error vector" $\langle {V}^{k}\rangle =\langle {W}^{k}\rangle -{W}_{\mathrm{opt}}$ , where ${W}_{\mathrm{opt}}$ is the Wiener filter: $$\langle {W}^{k+1}\rangle =\langle {W}^{k}\rangle -2\mu R\langle {W}^{k}\rangle +2\mu P$$ $$\langle {W}^{k+1}\rangle -{W}_{\mathrm{opt}}=\langle {W}^{k}\rangle -{W}_{\mathrm{opt}}-2\mu R\langle {W}^{k}\rangle +2\mu R{W}_{\mathrm{opt}}-2\mu R{W}_{\mathrm{opt}}+2\mu P$$ $$\langle {V}^{k+1}\rangle =\langle {V}^{k}\rangle -2\mu R\langle {V}^{k}\rangle -2\mu R{W}_{\mathrm{opt}}+2\mu P$$ Now ${W}_{\mathrm{opt}}={R}^{-1}P$ , so $$\langle {V}^{k+1}\rangle =\langle {V}^{k}\rangle -2\mu R\langle {V}^{k}\rangle -2\mu R{R}^{-1}P+2\mu P=(I-2\mu R)\langle {V}^{k}\rangle $$ We wish to know under what conditions $\langle {V}^{k}\rangle \to 0$ as $k\to \infty $ .
Since $R$ is positive definite, real, and symmetric, all of its eigenvalues are real and positive. Also, we can write $R$ as ${Q}^{-1}\Lambda Q$ , where $\Lambda $ is a diagonal matrix with diagonal entries ${\lambda }_{i}$ equal to the eigenvalues of $R$ , and $Q$ is a unitary matrix with rows equal to the eigenvectors corresponding to the eigenvalues of $R$ .
Using this fact, $$\langle {V}^{k+1}\rangle =(I-2\mu {Q}^{-1}\Lambda Q)\langle {V}^{k}\rangle $$ Multiplying both sides through on the left by $Q$ , we get $$Q\langle {V}^{k+1}\rangle =(Q-2\mu \Lambda Q)\langle {V}^{k}\rangle =(I-2\mu \Lambda )Q\langle {V}^{k}\rangle $$ Let ${V}^{\text{'}}=QV$ : $$\langle {V}^{\text{'}k+1}\rangle =(I-2\mu \Lambda )\langle {V}^{\text{'}k}\rangle $$ Note that ${V}^{\text{'}}$ is simply $V$ in a rotated coordinate set in ${\mathbb{R}}^{M}$ , so convergence of ${V}^{\text{'}}$ implies convergence of $V$ .
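The rotation can be checked numerically. The sketch below uses an assumed small correlation matrix and step size to verify that iterating the matrix recursion on $V$ and iterating the decoupled scalar recursions on ${V}^{\text{'}}=QV$ describe the same evolution:

```python
import numpy as np

# Sketch with assumed example values: diagonalize a small correlation
# matrix R and check that in the rotated coordinates V' = Q V the
# mean-error recursion decouples into independent scalar modes.
R = np.array([[2.0, 0.8],
              [0.8, 1.0]])           # hypothetical correlation matrix
lam, eigvecs = np.linalg.eigh(R)     # R = eigvecs @ diag(lam) @ eigvecs.T
Q = eigvecs.T                        # rows of Q are the eigenvectors
mu = 0.1                             # step size (within the stability bound)

V = np.array([1.0, -1.0])            # arbitrary initial mean error vector
Vp = Q @ V                           # rotated coordinates V' = Q V
for _ in range(5):
    V = (np.eye(2) - 2 * mu * R) @ V         # matrix recursion on V
    Vp = (1 - 2 * mu * lam) * Vp             # independent scalar recursions

print(Q @ V)   # matches Vp: the two descriptions agree
```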
Since $I-2\mu \Lambda $ is diagonal, all elements of ${V}^{\text{'}}$ evolve independently of each other. Convergence (stability) boils down to whether all $M$ of these scalar, first-order difference equations are stable, and thus $\to 0$ : $$\forall i, i\in \{1, \ldots , M\}\colon {V}_{i}^{\text{'}k+1}=(1-2\mu {\lambda }_{i}){V}_{i}^{\text{'}k}$$ These all converge to zero if $\left|1-2\mu {\lambda }_{i}\right|< 1$ for every $i$ , which requires $$0< \mu < \frac{1}{{\lambda }_{\mathrm{max}}}$$ In practice ${\lambda }_{\mathrm{max}}$ is not known, but since ${\lambda }_{\mathrm{max}}\leq \sum_{i} {\lambda }_{i}=\mathrm{tr}(R)$ , it suffices to require $\mu < \frac{1}{\mathrm{tr}(R)}$ .
For a correlation matrix, $\forall i, i\in \{1, \ldots , M\}\colon {r}_{\mathrm{ii}}=r(0)$ . So $\mathrm{tr}(R)=Mr(0)=M\langle {x}_{k}{x}_{k}\rangle $ . We can easily estimate $r(0)$ with $O(1)$ computations/sample, so in practice we might require $$\mu < \frac{1}{Mr(0)}$$ as a conservative bound, and perhaps adapt $\mu $ accordingly with time.
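One way to realize this rule in practice is sketched below; the forgetting factor and the safety margin below the bound are assumed design choices, not prescribed by the text:

```python
import numpy as np

# Sketch of the practical rule mu < 1/(M r(0)): estimate the input power
# r(0) = <x_k^2> with a running average costing O(1) per sample (the
# forgetting factor beta and the safety margin are assumed choices).
rng = np.random.default_rng(1)
M = 8                     # filter length
beta = 0.99               # forgetting factor for the power estimate
r0_hat = 1e-6             # small positive start to avoid dividing by zero

for x_k in 2.0 * rng.standard_normal(1000):   # input with power r(0) = 4
    r0_hat = beta * r0_hat + (1 - beta) * x_k ** 2

mu_bound = 1.0 / (M * r0_hat)   # conservative upper bound on mu
mu = 0.1 * mu_bound             # stay well below the bound (assumed margin)
print(r0_hat, mu)
```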
Each of the modes decays as $$(1-2\mu {\lambda }_{i})^{k}$$
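A consequence of this geometric decay is that the mode associated with the smallest eigenvalue of $R$ converges most slowly. The sketch below (with assumed eigenvalues and step size) makes this visible:

```python
import numpy as np

# Sketch with assumed eigenvalues: each rotated error mode decays
# geometrically as (1 - 2*mu*lambda_i)^k, so the mode associated with
# the smallest eigenvalue of R converges most slowly.
mu = 0.05
lam = np.array([0.2, 1.0, 3.0])            # hypothetical eigenvalues of R
k = np.arange(100)
modes = (1 - 2 * mu * lam[:, None]) ** k   # one geometric decay per mode

print(modes[:, -1])   # smallest eigenvalue -> largest remaining error
```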