Adaptive filters

Recall the Wiener filter problem:

$x_k$, $d_k$ jointly wide-sense stationary.

Find $W$ minimizing the mean squared error $E[\epsilon_k^2]$, where
$$\epsilon_k = d_k - y_k = d_k - \sum_{i=0}^{M-1} w_i x_{k-i} = d_k - W_k^T X_k$$
$$X_k = \begin{pmatrix} x_k & x_{k-1} & \cdots & x_{k-M+1} \end{pmatrix}^T, \qquad W_k = \begin{pmatrix} w_0^k & w_1^k & \cdots & w_{M-1}^k \end{pmatrix}^T$$
The superscript denotes absolute time, and the subscript denotes time or a vector index.

The solution can be found by setting the gradient to zero:

$$\nabla_k = \frac{\partial E[\epsilon_k^2]}{\partial W} = E\!\left[2\epsilon_k \frac{\partial \epsilon_k}{\partial W}\right] = E[-2\epsilon_k X_k] = -2 E[d_k X_k] + 2 E[X_k X_k^T]\, W = -2P + 2RW$$
where $P = E[d_k X_k]$ is the cross-correlation vector and $R = E[X_k X_k^T]$ is the autocorrelation matrix.
$$W_{opt} = R^{-1} P$$
Alternatively, $W_{opt}$ can be found iteratively using a gradient descent technique:
$$W_{k+1} = W_k - \mu \nabla_k$$
In practice, we don't know $R$ and $P$ exactly, and in an adaptive context they may be slowly varying with time.
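
As a concrete illustration, here is a minimal numpy sketch of both routes to $W_{opt}$: solving the normal equations directly, and gradient descent on the MSE surface. The signal model (a white input driving a hypothetical 3-tap system) and the step size are invented for the example, not taken from the text.

```python
import numpy as np

# Minimal sketch (illustrative signal model): identify a hypothetical 3-tap
# system both by solving W_opt = R^{-1} P directly and by gradient descent
# W <- W - mu * grad on the MSE surface.
rng = np.random.default_rng(0)
M = 3
w_true = np.array([0.5, -0.3, 0.1])   # hypothetical system to identify
N = 50_000
x = rng.standard_normal(N)            # white, WSS input x_k
d = np.convolve(x, w_true)[:N]        # desired signal d_k

# Rows are X_k = [x_k, x_{k-1}, ..., x_{k-M+1}] (zero prehistory)
xpad = np.concatenate((np.zeros(M - 1), x))
X = np.stack([xpad[M - 1 - i : N + M - 1 - i] for i in range(M)], axis=1)

R = X.T @ X / N                       # sample estimate of R = E[X_k X_k^T]
P = X.T @ d / N                       # sample estimate of P = E[d_k X_k]

W_opt = np.linalg.solve(R, P)         # Wiener solution W_opt = R^{-1} P

W = np.zeros(M)                       # gradient descent to the same answer
mu = 0.1
for _ in range(500):
    W = W - mu * (2 * R @ W - 2 * P)  # grad = -2P + 2RW

print(W_opt)                          # both are close to w_true
print(W)
```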

To find the (approximate) Wiener filter, some approximations are necessary. As always, the key is to make the right approximations!

  • Approximate $R$ and $P$: RLS methods, as discussed last time.
  • Approximate the gradient: $\hat\nabla_k = \frac{\partial \epsilon_k^2}{\partial W}$

Note that $\epsilon_k^2$ itself is a very noisy approximation to $E[\epsilon_k^2]$. We can get a noisy approximation to the gradient by finding the gradient of $\epsilon_k^2$! Widrow and Hoff first published the LMS algorithm, based on this clever idea, in 1960.
$$\hat\nabla_k = \frac{\partial \epsilon_k^2}{\partial W} = 2\epsilon_k \frac{\partial \epsilon_k}{\partial W} = 2\epsilon_k \frac{\partial}{\partial W}\left(d_k - W_k^T X_k\right) = -2\epsilon_k X_k$$
This yields the LMS adaptive filter algorithm.
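
As a quick numerical sanity check of the "noisy but unbiased" claim, the sketch below (same invented signal model as before) evaluates the per-sample gradient $-2\epsilon_k X_k$ at a fixed, arbitrary $W$: each one scatters widely, but their average matches $-2P + 2RW$.

```python
import numpy as np

# Sketch: the instantaneous gradient -2*eps_k*X_k is very noisy sample to
# sample, but its average equals the gradient 2RW - 2P computed from the
# sample R and P. Signal model invented for illustration.
rng = np.random.default_rng(1)
M, N = 3, 100_000
w_true = np.array([0.5, -0.3, 0.1])
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N]
xpad = np.concatenate((np.zeros(M - 1), x))
X = np.stack([xpad[M - 1 - i : N + M - 1 - i] for i in range(M)], axis=1)
R, P = X.T @ X / N, X.T @ d / N

W = np.array([0.2, 0.0, 0.0])              # arbitrary operating point
eps = d - X @ W                            # eps_k for every sample
g = -2 * eps[:, None] * X                  # one noisy gradient per sample
print(g.std(axis=0))                       # large spread: each one is noisy...
print(g.mean(axis=0))                      # ...but the mean matches
print(2 * R @ W - 2 * P)                   # the (sample R, P) gradient
```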

The LMS adaptive filter algorithm

  • $y_k = W_k^T X_k = \sum_{i=0}^{M-1} w_i^k x_{k-i}$
  • $\epsilon_k = d_k - y_k$
  • $W_{k+1} = W_k - \mu \hat\nabla_k = W_k - \mu(-2\epsilon_k X_k) = W_k + 2\mu\epsilon_k X_k$, i.e., $w_i^{k+1} = w_i^k + 2\mu\epsilon_k x_{k-i}$ (a runnable sketch follows the list)
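
The three steps translate line for line into a short numpy loop. This is a sketch under an invented system-identification setup (white input, hypothetical 4-tap target, arbitrary $\mu$), not a tuned implementation.

```python
import numpy as np

# LMS adaptive filter: direct transcription of the three steps above.
# The signal model and mu are invented for illustration.
rng = np.random.default_rng(2)
M = 4
w_true = np.array([1.0, 0.5, -0.3, 0.1])  # hypothetical unknown system
N = 20_000
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

W = np.zeros(M)          # W_k, the adaptive tap weights
Xk = np.zeros(M)         # X_k = [x_k, x_{k-1}, ..., x_{k-M+1}]
mu = 0.01
for k in range(N):
    Xk = np.roll(Xk, 1)            # shift the delay line...
    Xk[0] = x[k]                   # ...and load the newest sample
    y = W @ Xk                     # step 1: y_k = W_k^T X_k
    eps = d[k] - y                 # step 2: eps_k = d_k - y_k
    W = W + 2 * mu * eps * Xk      # step 3: W_{k+1} = W_k + 2 mu eps_k X_k

print(W)                 # converges close to w_true
```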

The LMS algorithm is often called a stochastic gradient algorithm, since $\hat\nabla_k$ is a noisy gradient. This is by far the most commonly used adaptive filtering algorithm, because

  • it was the first
  • it is very simple
  • in practice it works well (except that sometimes it converges slowly)
  • it requires relatively little computation
  • it updates the tap weights every sample, so it continually adapts the filter
  • it tracks slow changes in the signal statistics well

Computational cost of LMS

                 To compute $y_k$    $\epsilon_k$    $W_{k+1}$    Total
  multiplies          $M$                $0$           $M+1$      $2M+1$
  adds                $M-1$              $1$           $M$        $2M$

So the LMS algorithm is $O(M)$ per sample. In fact, it is nicely balanced in that the filter computation and the adaptation require the same amount of computation.

Note that the parameter $\mu$ plays a very important role in the LMS algorithm. It can also be varied with time, but usually a constant $\mu$ ("convergence weight factor") is used, chosen after experimentation for a given application.

Tradeoffs

Large $\mu$: fast convergence, fast adaptivity.

Small $\mu$: accurate $W$ (less misadjustment error), stability.
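
To see the tradeoff numerically, the sketch below runs the LMS loop from the earlier example twice on the same data, once with a large and once with a small $\mu$; both values, like the signal model, are arbitrary illustrations.

```python
import numpy as np

# Sketch of the mu tradeoff: large mu adapts fast but leaves a larger
# steady-state (misadjustment) error; small mu converges slowly but settles
# closer to the optimum. Signal model and mu values invented for illustration.
rng = np.random.default_rng(3)
M = 4
w_true = np.array([1.0, 0.5, -0.3, 0.1])
N = 20_000
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.1 * rng.standard_normal(N)

def lms(mu):
    W, Xk = np.zeros(M), np.zeros(M)
    err = np.empty(N)
    for k in range(N):
        Xk = np.roll(Xk, 1)
        Xk[0] = x[k]
        eps = d[k] - W @ Xk
        W = W + 2 * mu * eps * Xk
        err[k] = eps
    return W, err

for mu in (0.05, 0.001):
    W, err = lms(mu)
    print(f"mu={mu}: ||W - w_true|| = {np.linalg.norm(W - w_true):.4f}, "
          f"steady-state MSE = {np.mean(err[-2000:] ** 2):.4f}")
```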

Source: OpenStax, "Adaptive filters," OpenStax CNX, May 12, 2005. Download for free at http://cnx.org/content/col10280/1.1