
Analysis of the LMS algorithm

It is important to analyze the LMS algorithm to determine under what conditions it is stable, whether or not it converges to the Wiener solution, how quickly it converges, how much degradation is suffered due to the noisy gradient, etc. In particular, we need to know how to choose the step-size parameter $\mu$.

Mean of W

Does $E[W_k]$ approach the Wiener solution as $k \to \infty$? (Since $W_k$ is always somewhat random in the approximate gradient-based LMS algorithm, we ask whether the expected value of the filter coefficients converges to the Wiener solution.)

$$E[W_{k+1}] = E\left[W_k + 2\mu\epsilon_k X_k\right] = E[W_k] + 2\mu E[d_k X_k] - 2\mu E[X_k X_k^T W_k] = E[W_k] + 2\mu P - 2\mu E[X_k X_k^T W_k]$$
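As a concrete illustration, here is a minimal NumPy sketch of the LMS recursion analyzed above; the signal model, filter length, and step size are illustrative assumptions, not part of the original derivation.

```python
import numpy as np

# Minimal LMS sketch: W[k+1] = W[k] + 2*mu*eps[k]*X[k],
# with eps[k] = d[k] - X[k]^T W[k]. All signals here are illustrative.
rng = np.random.default_rng(0)
M = 4                                     # filter length
mu = 0.01                                 # step size (see the bounds derived below)
w_true = np.array([0.5, -0.3, 0.2, 0.1])  # hypothetical system to identify

x = rng.standard_normal(2000)             # input signal
d = np.convolve(x, w_true)[:len(x)]       # desired signal (noiseless for clarity)

W = np.zeros(M)
for k in range(M - 1, len(x)):
    X_k = x[k - M + 1:k + 1][::-1]        # most recent M samples, newest first
    eps = d[k] - X_k @ W                  # instantaneous error
    W = W + 2 * mu * eps * X_k            # noisy-gradient coefficient update
print(W)                                  # hovers around w_true
```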

Patently false assumption

$X_k$ and $X_{k-i}$, $X_k$ and $d_{k-i}$, and $d_k$ and $d_{k-i}$ are statistically independent for $i \neq 0$. This assumption is obviously false, since $X_{k-1}$ is the same as $X_k$ except for shifting down the vector elements one place and adding one new sample. We make this assumption because otherwise it becomes extremely difficult to analyze the LMS algorithm. (The first good analysis not making this assumption is due to Macchi and Eweda.) Many simulations and much practical experience have shown that the results one obtains with analyses based on the patently false assumption above are quite accurate in most situations.

With the independence assumption, $W_k$ (which depends only on previous $X_{k-i}$, $d_{k-i}$) is statistically independent of $X_k$, and we can simplify $E[X_k X_k^T W_k]$.

Now $E[X_k X_k^T W_k]$ is a vector, and

$$\left(E[X_k X_k^T W_k]\right)_j = E\!\left[\sum_{i=0}^{M-1} w_i^k x_{k-i} x_{k-j}\right] = \sum_{i=0}^{M-1} E\!\left[w_i^k x_{k-i} x_{k-j}\right] = \sum_{i=0}^{M-1} E[w_i^k]\, E[x_{k-i} x_{k-j}] = \sum_{i=0}^{M-1} E[w_i^k]\, r_{xx}(i-j) = \left(R\, E[W_k]\right)_j$$
where $R = E[X_k X_k^T]$ is the data correlation matrix.
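In practice $R$ is estimated from the data. A minimal sketch, assuming a wide-sense stationary input and using biased sample autocorrelations (the signal and sizes are illustrative):

```python
import numpy as np

# Estimate the M x M correlation matrix R, whose (i, j) entry is r_xx(i - j),
# from sample autocorrelations of the input x. Illustrative values throughout.
rng = np.random.default_rng(1)
M = 4
x = rng.standard_normal(10000)

# Biased sample autocorrelations r_xx(0), ..., r_xx(M - 1)
r = np.array([x[:len(x) - lag] @ x[lag:] / len(x) for lag in range(M)])

# Symmetric Toeplitz structure: R[i, j] = r_xx(|i - j|)
R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
print(R)   # approximately the identity for this white input
```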

Putting the simplification $E[X_k X_k^T W_k] = R\, E[W_k]$ back into our equation,

$$E[W_{k+1}] = E[W_k] + 2\mu P - 2\mu R\, E[W_k] = \left(I - 2\mu R\right) E[W_k] + 2\mu P$$
Now if $E[W_k]$ converges to a vector of finite magnitude ("convergence in the mean"), what does it converge to?

If $E[W_k]$ converges, then as $k \to \infty$, $E[W_{k+1}] \approx E[W_k] = W_\infty$, and
$$W_\infty = \left(I - 2\mu R\right) W_\infty + 2\mu P$$
$$2\mu R\, W_\infty = 2\mu P$$
$$R\, W_\infty = P$$
$$W_\infty = R^{-1} P = W_{opt},$$
the Wiener solution!

So the LMS algorithm, if it converges, gives filter coefficients which on average are the Wiener coefficients! This is, of course, a desirable result.
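This convergence-in-the-mean result can be checked empirically by averaging the learned coefficients over many independent trials and comparing against $R^{-1} P$. A sketch, with an illustrative white-input system-identification setup (for which $W_{opt}$ is simply the true system):

```python
import numpy as np

# Average final LMS coefficients over many independent trials and compare
# with the Wiener solution. For white input, R = I and P = w_true, so
# W_opt = R^{-1} P = w_true. All parameters are illustrative.
rng = np.random.default_rng(2)
M, mu, N, trials = 3, 0.005, 5000, 200
w_true = np.array([1.0, -0.5, 0.25])

W_avg = np.zeros(M)
for _ in range(trials):
    x = rng.standard_normal(N)
    d = np.convolve(x, w_true)[:N] + 0.1 * rng.standard_normal(N)  # noisy d[k]
    W = np.zeros(M)
    for k in range(M - 1, N):
        X_k = x[k - M + 1:k + 1][::-1]
        W += 2 * mu * (d[k] - X_k @ W) * X_k
    W_avg += W / trials

print(W_avg)   # close to w_true, i.e., the Wiener coefficients on average
```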

First-order stability

But does $E[W_k]$ converge, and if so, under what conditions?

Let's rewrite the analysis in terms of $V_k$, the "mean coefficient error vector" $V_k = E[W_k] - W_{opt}$, where $W_{opt}$ is the Wiener filter.
$$E[W_{k+1}] = E[W_k] - 2\mu R\, E[W_k] + 2\mu P$$
$$E[W_{k+1}] - W_{opt} = E[W_k] - W_{opt} - 2\mu R\, E[W_k] + 2\mu R\, W_{opt} - 2\mu R\, W_{opt} + 2\mu P$$
$$V_{k+1} = V_k - 2\mu R\, V_k - 2\mu R\, W_{opt} + 2\mu P$$
Now $W_{opt} = R^{-1} P$, so
$$V_{k+1} = V_k - 2\mu R\, V_k - 2\mu R\, R^{-1} P + 2\mu P = \left(I - 2\mu R\right) V_k$$
We wish to know under what conditions $V_k \to 0$.

Linear algebra fact

Since $R$ is positive definite, real, and symmetric, all of its eigenvalues are real and positive. Also, we can write $R$ as $Q^T \Lambda Q$, where $\Lambda$ is a diagonal matrix with diagonal entries $\lambda_i$ equal to the eigenvalues of $R$, and $Q$ is a unitary matrix with rows equal to the eigenvectors corresponding to the eigenvalues of $R$.

Using this fact,
$$V_{k+1} = \left(I - 2\mu Q^T \Lambda Q\right) V_k$$
Multiplying both sides through on the left by $Q$, we get
$$Q V_{k+1} = \left(Q - 2\mu \Lambda Q\right) V_k = \left(I - 2\mu \Lambda\right) Q V_k$$
Let $V' = Q V$:
$$V'_{k+1} = \left(I - 2\mu \Lambda\right) V'_k$$
Note that $V'$ is simply $V$ in a rotated coordinate set in $\mathbb{R}^M$, so convergence of $V'$ implies convergence of $V$.
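This decomposition and the resulting per-mode decay are easy to verify numerically. A sketch using NumPy's symmetric eigendecomposition (eigh returns eigenvectors as columns, so it is transposed to match the rows-as-eigenvectors convention above); the matrix and step size are illustrative:

```python
import numpy as np

# Verify R = Q^T Lambda Q with the rows of Q equal to the eigenvectors of R,
# and inspect the per-mode decay factors (1 - 2*mu*lambda_i). Illustrative R.
R = np.array([[2.0, 0.5, 0.1],
              [0.5, 2.0, 0.5],
              [0.1, 0.5, 2.0]])

lam, v = np.linalg.eigh(R)   # columns of v are eigenvectors of R
Q = v.T                      # rows of Q are the eigenvectors
Lam = np.diag(lam)
print(np.allclose(R, Q.T @ Lam @ Q))   # True: R = Q^T Lambda Q

# In rotated coordinates V' = Q V, each component evolves independently:
# V'_i[k+1] = (1 - 2*mu*lam_i) V'_i[k]
mu = 0.1
print(1 - 2 * mu * lam)      # per-mode geometric decay factors
```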

Since $I - 2\mu\Lambda$ is diagonal, all elements of $V'$ evolve independently of each other. Convergence (stability) boils down to whether all $M$ of these scalar, first-order difference equations are stable, and thus $\to 0$, for all $i \in \{1, 2, \ldots, M\}$:
$$V'_{i,k+1} = \left(1 - 2\mu\lambda_i\right) V'_{i,k}$$
These equations converge to zero if $\left|1 - 2\mu\lambda_i\right| < 1$, or $\mu\lambda_i < 1$ ($\mu$ and $\lambda_i$ are positive), so we require $\mu < \frac{1}{\lambda_i}$ for every $i$; so for convergence in the mean of the LMS adaptive filter, we require
$$\mu < \frac{1}{\lambda_{max}}$$
This is an elegant theoretical result, but in practice, we may not know $\lambda_{max}$, it may be time-varying, and we certainly won't want to compute it. However, another useful mathematical fact comes to the rescue:
$$\operatorname{tr}(R) = \sum_{i=1}^{M} r_{ii} = \sum_{i=1}^{M} \lambda_i \ge \lambda_{max},$$
since the eigenvalues are all positive and real.

For a correlation matrix, $r_{ii} = r(0)$ for $i \in \{1, \ldots, M\}$. So $\operatorname{tr}(R) = M\, r(0) = M\, E[x_k x_k]$. We can easily estimate $r(0)$ with $O(1)$ computations per sample, so in practice we might require
$$\mu < \frac{1}{M\, \hat{r}(0)}$$
as a conservative bound, and perhaps adapt $\mu$ accordingly over time.
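A sketch of this conservative rule, using a hypothetical exponentially weighted running estimate of $r(0)$; the forgetting factor and the extra safety margin are illustrative assumptions:

```python
import numpy as np

# Choose mu conservatively as mu < 1 / (M * r0_hat), where r0_hat is an
# O(1)-per-sample running estimate of the input power r(0) = E[x_k^2].
rng = np.random.default_rng(3)
M = 8
x = 2.0 * rng.standard_normal(5000)   # input with "unknown" power (here 4)

r0_hat, alpha = 0.0, 0.99             # exponentially weighted power estimate
for x_k in x:
    r0_hat = alpha * r0_hat + (1 - alpha) * x_k**2   # O(1) work per sample

mu_bound = 1.0 / (M * r0_hat)         # conservative bound mu < 1/(M r(0))
mu = 0.1 * mu_bound                   # back off further for safety (illustrative)
print(r0_hat, mu_bound, mu)
```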

Rate of convergence

Each of the modes decays as $\left(1 - 2\mu\lambda_i\right)^k$.

The initial rate of convergence is dominated by the fastest mode, $1 - 2\mu\lambda_{max}$. This is not surprising, since a gradient descent method goes "downhill" in the steepest direction.
The final rate of convergence is dominated by the slowest mode, $1 - 2\mu\lambda_{min}$. For small $\lambda_{min}$, it can take a long time for LMS to converge.
Note that the convergence behavior depends on the data (via $R$). LMS converges relatively quickly for roughly equal eigenvalues; unequal eigenvalues slow LMS down a lot, as the sketch below illustrates.
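A sketch comparing per-mode decay for roughly equal versus widely spread eigenvalues; the eigenvalue sets and step size are illustrative:

```python
import numpy as np

# Compare how fast the modes (1 - 2*mu*lam_i)^k die out for equal vs. spread
# eigenvalues. k90 counts iterations until a mode decays to 10% of its start.
mu = 0.05
lam_equal  = np.array([1.0, 1.0, 1.0])   # e.g., white input: equal eigenvalues
lam_spread = np.array([2.4, 0.5, 0.1])   # correlated input: large spread

for lam in (lam_equal, lam_spread):
    rates = 1 - 2 * mu * lam             # per-mode decay factors
    k90 = np.ceil(np.log(0.1) / np.log(np.abs(rates)))
    print(rates, k90)

# The slow mode (1 - 2*mu*lam_min) dominates the tail: with spread eigenvalues
# it takes far more iterations to die out, even though mu is the same.
```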





Source: OpenStax, Adaptive filters. OpenStax CNX. May 12, 2005. Download for free at http://cnx.org/content/col10280/1.1
