
The Kalman filter is just one of many adaptive filtering (or estimation) algorithms. Despite its elegant derivation and often excellent performance, the Kalman filter has two drawbacks:

  • The derivation, and hence the performance, of the Kalman filter depends on the accuracy of the a priori assumptions. The performance can be less than impressive if the assumptions are erroneous.
  • The Kalman filter is fairly computationally demanding, requiring $O(P^2)$ operations per sample. This can limit the utility of Kalman filters in high-rate, real-time applications.
As a popular alternative to the Kalman filter, we will investigate the so-called least-mean-square (LMS) adaptive filtering algorithm.

The principal advantages of LMS are

  • No prior assumptions are made regarding the signal to be estimated.
  • Computationally, LMS is very efficient, requiring only $O(P)$ operations per sample (see the sketch below).
The price we pay with LMS instead of a Kalman filter is that the rate of convergence and adaptation to sudden changes is slower for LMS than for the Kalman filter (with correct prior assumptions).
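
To preview where the $O(P)$ per-sample cost comes from, here is a minimal sketch of a single LMS weight update; the function name and variable names are our own, and the update rule itself is the standard LMS recursion, which this module has not yet derived:

```python
import numpy as np

def lms_update(w, x, d, mu):
    """One LMS step.  w: length-P weight vector; x: length-P input vector
    x_k; d: desired sample y_k; mu: step size.  Every line below costs
    O(P), so the whole per-sample update is O(P)."""
    y = np.dot(w, x)       # filter output y^_k = x_k' w  (O(P))
    e = d - y              # instantaneous error e_k
    return w + mu * e * x  # move the weights along e_k * x_k  (O(P))
```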

Adaptive filtering applications

Channel/system identification


Noise cancellation

Suppression of the maternal ECG component in fetal ECG.

Cancelling the maternal heartbeat in fetal electrocardiography (ECG): position of the leads.
The filter output y is an estimate of the maternal ECG signal present in the abdominal signal.
Results of the fetal ECG experiment (bandwidth, 3-35 Hz; sampling rate, 256 Hz): (a) reference input (chest lead); (b) primary input (abdominal lead); (c) noise-canceller output.
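
The structure implied by these figures is simple enough to sketch: an adaptive filter driven by the reference (chest) lead learns to predict the maternal component of the primary (abdominal) lead, and the residual is the fetal-signal estimate. Below is a minimal sketch with synthetic stand-in signals; every signal, the filter length, and the step size are invented for illustration, and the weight update is the LMS recursion previewed earlier:

```python
import numpy as np

n, p, mu = 5000, 8, 0.01
t = np.arange(n) / 256.0                  # 256 Hz sampling, as in the figure

# Stand-in signals: the chest lead sees only the "maternal" component; the
# abdominal lead sees the "fetal" signal plus a filtered maternal copy.
maternal = np.sin(2 * np.pi * 1.2 * t)
fetal = 0.2 * np.sin(2 * np.pi * 2.3 * t)
primary = fetal + np.convolve(maternal, [0.8, 0.3], mode="same")

w = np.zeros(p)
cleaned = np.zeros(n)
for k in range(p - 1, n):
    x = maternal[k - p + 1:k + 1][::-1]   # reference regressor at time k
    e = primary[k] - w @ x                # canceller output: fetal estimate
    w = w + mu * e * x                    # LMS update
    cleaned[k] = e
```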

Channel equalization


Adaptive controller


Here, the reference signal is the desired output. The adaptive controller adjusts the controller gains (filter weights) to keep them appropriate to the system as it changes over time.

Iterative minimization

Most adaptive filtering algorithms (LMS included) are modifications of standard iterative procedures for solving minimization problems in a real-time or on-line fashion. Therefore, before deriving the LMS algorithm, we will look at iterative methods of minimizing error criteria such as the MSE.

Consider the following setup: $x_k$ is the observation and $y_k$ is the signal to be estimated.

Linear estimator

$$\hat{y}_k = w_1 x_k + w_2 x_{k-1} + \dots + w_p x_{k-p+1}$$
Impulse response of the filter: $(\dots, 0, 0, w_1, w_2, \dots, w_p, 0, 0, \dots)$

Vector notation

$$\hat{y}_k = x_k^T w,$$
where $x_k = (x_k, x_{k-1}, \dots, x_{k-p+1})^T$ and $w = (w_1, w_2, \dots, w_p)^T$.
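
To make the vector notation concrete, here is a minimal NumPy sketch (the filter length and all numbers are made up) checking that $x_k^T w$ at a single instant agrees with the FIR-convolution view of the impulse response above:

```python
import numpy as np

# Made-up 3-tap filter and a short observation sequence.
w = np.array([0.5, 0.3, 0.2])                   # w_1, w_2, w_3
x_sig = np.array([1.0, -0.4, 0.9, 0.7, -1.2])   # x_0, x_1, ...

# Estimate at one instant: y^_k = x_k' w with x_k = [x_k, x_{k-1}, x_{k-2}].
k = 3
x_k = x_sig[k::-1][:3]                 # regressor vector at time k
y_hat_k = x_k @ w

# Equivalently, the full output is the FIR convolution with (w_1, ..., w_p).
y_hat_all = np.convolve(x_sig, w)[:len(x_sig)]
assert np.isclose(y_hat_all[k], y_hat_k)
```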

Error signal

$$e_k = y_k - \hat{y}_k = y_k - x_k^T w$$

Assumptions

$(x_k, y_k)$ are jointly stationary with zero mean.

MSE

$$E[e_k^2] = E[(y_k - x_k^T w)^2] = E[y_k^2] - 2 w^T E[x_k y_k] + w^T E[x_k x_k^T] w = R_{yy} - 2 w^T R_{xy} + w^T R_{xx} w,$$
where $R_{yy} = E[y_k^2]$ is the variance of $y_k$, $R_{xx} = E[x_k x_k^T]$ is the covariance matrix of $x_k$, and $R_{xy} = E[x_k y_k]$ is the cross-covariance between $x_k$ and $y_k$.
The MSE is quadratic in $w$, which implies that the MSE surface is "bowl" shaped with a unique minimum point.
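
To see the bowl numerically, here is a small sketch with invented second-order statistics (the matrices below are not from the text, just an illustration):

```python
import numpy as np

# Invented second-order statistics for a 2-tap filter (p = 2).
R_xx = np.array([[1.0, 0.5],
                 [0.5, 1.0]])   # E[x_k x_k'], positive definite
R_xy = np.array([0.7, 0.3])     # E[x_k y_k]
R_yy = 1.0                      # E[y_k^2]

def mse(w):
    """MSE(w) = R_yy - 2 w' R_xy + w' R_xx w."""
    return R_yy - 2 * w @ R_xy + w @ R_xx @ w

# Evaluating on a grid shows a convex bowl: a positive-definite R_xx
# guarantees a unique minimizer.
grid = np.linspace(-2.0, 2.0, 5)
surface = np.array([[mse(np.array([a, b])) for b in grid] for a in grid])
print(surface.min())
```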

Optimum filter

Minimize MSE:

$$\nabla_w E[e_k^2] = -2 R_{xy} + 2 R_{xx} w = 0 \quad\Longrightarrow\quad w_{\mathrm{opt}} = R_{xx}^{-1} R_{xy}$$
Notice that we can rewrite this condition as
$$E[x_k x_k^T] w = E[x_k y_k]$$
or
$$E[x_k (y_k - x_k^T w)] = E[x_k e_k] = 0,$$
which shows that the error signal is orthogonal to the input $x_k$ (by the orthogonality principle of minimum MSE estimators).
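
As a numerical sanity check on both the normal equations and the orthogonality principle, the following sketch simulates a jointly stationary pair (the "true" weights and noise level are invented), forms sample estimates of $R_{xx}$ and $R_{xy}$, solves for $w_{\mathrm{opt}}$, and verifies that the sample version of $E[x_k e_k]$ is essentially zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a jointly stationary zero-mean pair: y_k is an invented FIR
# filtering of x_k plus a little noise.
n, p = 10_000, 3
x = rng.standard_normal(n)
w_true = np.array([0.5, 0.3, 0.2])                 # our own "true" weights
y = np.convolve(x, w_true)[:n] + 0.1 * rng.standard_normal(n)

# Stack the regressor vectors x_k = [x_k, x_{k-1}, x_{k-2}].
X = np.column_stack([np.roll(x, i) for i in range(p)])[p - 1:]
Y = y[p - 1:]

# Sample estimates of R_xx and R_xy, then solve R_xx w = R_xy.
R_xx = X.T @ X / len(X)
R_xy = X.T @ Y / len(X)
w_opt = np.linalg.solve(R_xx, R_xy)

# Orthogonality check: the sample version of E[x_k e_k] is ~0.
e = Y - X @ w_opt
print(w_opt)                 # close to w_true
print(X.T @ e / len(X))      # ~[0, 0, 0]
```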

Steepest descent

Although we can easily determine $w_{\mathrm{opt}}$ by solving the system of equations
$$R_{xx} w = R_{xy},$$
let's look at an iterative procedure for solving this problem. This will set the stage for our adaptive filtering algorithm.

We want to minimize the MSE. The idea is simple: starting at some initial weight vector $w_0$, iteratively adjust its values so as to decrease the MSE.

In one dimension

We want to move $w_0$ towards the optimal vector $w_{\mathrm{opt}}$. In order to move in the correct direction, we must move downhill, i.e., in the direction opposite to the gradient of the MSE surface at the point $w_0$. Thus, a natural and simple adjustment takes the form
$$w_1 = w_0 - \frac{1}{2}\mu \left.\nabla_w E[e_k^2]\right|_{w = w_0},$$
where $\mu$ is the step size and tells us how far to move in the negative gradient direction.
Generalizing this idea to an iterative strategy, we get
$$w_k = w_{k-1} - \frac{1}{2}\mu \left.\nabla_w E[e_k^2]\right|_{w = w_{k-1}},$$
and we can repeatedly update $w$: $w_0, w_1, \dots, w_k$. Hopefully each subsequent $w_k$ is closer to $w_{\mathrm{opt}}$. Does the procedure converge? Can we adapt it to an on-line, real-time, dynamic situation in which the signals may not be stationary?
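
The convergence question can be explored numerically. Below is a sketch of the steepest-descent iteration on the same invented statistics as before; substituting $\nabla_w E[e_k^2] = -2 R_{xy} + 2 R_{xx} w$ into the update gives $w_k = w_{k-1} + \mu\,(R_{xy} - R_{xx} w_{k-1})$. The step size is chosen small enough to converge (the stability condition $0 < \mu < 2/\lambda_{\max}(R_{xx})$ is assumed here, not derived in this section):

```python
import numpy as np

# Same invented statistics as in the MSE sketch above.
R_xx = np.array([[1.0, 0.5],
                 [0.5, 1.0]])
R_xy = np.array([0.7, 0.3])
w_opt = np.linalg.solve(R_xx, R_xy)   # closed-form answer, for comparison

mu = 0.5          # step size; stable here since mu < 2 / lambda_max = 4/3
w = np.zeros(2)   # initial guess w_0
for k in range(100):
    # w_k = w_{k-1} - (mu/2) grad MSE = w_{k-1} + mu (R_xy - R_xx w_{k-1})
    w = w + mu * (R_xy - R_xx @ w)

print(w, w_opt)   # the iterates converge to w_opt
```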

Source: OpenStax, Statistical signal processing. OpenStax CNX. Jun 14, 2004. Download for free at http://cnx.org/content/col10232/1.1