
which can be minimized by solving

W C a = W A_d

with the normal equations

C^T W^T W C a = C^T W^T W A_d

where W is an L × L diagonal matrix with the weights w_k from [link] along the diagonal. A more general formulation of the approximation simply requires W^T W to be positive definite. Some authors define the weighted error in [link] using w_k rather than w_k^2. We use the latter to be consistent with the least squared error algorithms in Matlab [link].

Solving [link] is a direct method of designing an FIR filter using a weighted least squared error approximation. To minimize the sum of the squared error and get approximately the same result as minimizing the integral of the squared error, one must choose L to be 3 to 10 or more times the length of the filter being designed.
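As a concrete illustration, here is a minimal NumPy sketch of this direct method. It assumes an odd-length, even-symmetric linear-phase filter so that the amplitude is a cosine sum A(ω) = Σ_n a_n cos(ωn); the function name, grid density, and band edge below are illustrative rather than from the text.

    import numpy as np

    def wls_fir_amplitude(wk, Ad, w, N):
        # Weighted least squares fit of the amplitude A(w) = sum_n a_n cos(w n)
        # for an odd-length (N), even-symmetric linear-phase FIR filter.
        M = (N - 1) // 2
        C = np.cos(np.outer(wk, np.arange(M + 1)))     # L x (M+1) cosine matrix
        # Solve the weighted system W C a ~= W A_d without forming W explicitly.
        a, *_ = np.linalg.lstsq(w[:, None] * C, w * Ad, rcond=None)
        return a, C

    L, N = 400, 31                                     # grid much denser than filter length
    wk = np.linspace(0.0, np.pi, L)
    Ad = (wk <= 0.4 * np.pi).astype(float)             # hypothetical ideal lowpass
    a, C = wls_fir_amplitude(wk, Ad, np.ones(L), N)    # unit weights to start

Note that lstsq solves the overdetermined weighted system directly, in keeping with the numerical advice given below.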

Iterative algorithms to minimize the error

There is no simple direct method for finding the optimal approximation for any error power other than two. However, if the weighting coefficients w_k, as elements of W in [link], could be set equal to the elements of |A - A_d|, minimizing [link] would minimize the fourth power of |A - A_d|. This cannot be done in one step because we need the solution to find the weights! We can, however, pose an iterative algorithm which will first solve the problem in [link] with no weights, then calculate the error vector ϵ from [link], which is then used to calculate the weights in [link]. At each stage of the iteration, the weights are updated from the previous error and the problem is solved again. This process of successive approximations is called the iterative reweighted least squared error algorithm (IRLS).

The basic IRLS equations can also be derived by simply taking the gradient of the p-error with respect to the filter coefficients h or a and setting it equal to zero [link], [link]. These equations form the basis for the iterative algorithm.
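A sketch of that calculation in the notation above (with A = C a on the frequency grid): differentiating the p-th power error gives

\frac{\partial}{\partial a} \sum_{k=0}^{L-1} | C a - A_d |_k^{\,p} = p \, C^T \, \mathrm{diag}\!\left( | C a - A_d |^{p-2} \right) ( C a - A_d ) = 0

which is exactly the weighted normal equation in [link] with W^T W = \mathrm{diag}( | A - A_d |^{p-2} ).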

If the algorithm is a contraction mapping [link], the successive approximations will converge and the limit is the solution of the minimum L_4 approximation problem. If a general problem can be posed [link], [link], [link] as the solution of an equation in the form

x = f(x),

a successive approximation algorithm can be proposed which iteratively calculates x using

x_{m+1} = f(x_m)

starting with some initial x_0. The function f(·) maps x_m into x_{m+1} and, if

\lim_{m \to \infty} x_m = x^* \quad \text{where} \quad x^* = f(x^*),

then x^* is the fixed point of the mapping and a solution to [link]. The trick is to find a mapping that solves the desired problem, converges, and converges fast.
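As a toy example of such a successive approximation (unrelated to filter design), f(x) = cos(x) is a contraction near its fixed point, so the iteration converges from any starting point:

    import math

    x = 0.5                   # some starting x_0
    for m in range(50):
        x = math.cos(x)       # x_{m+1} = f(x_m)
    print(x)                  # converges to x* = 0.739085..., where x* = cos(x*)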

By setting the weights in [link] equal to

w(k) = | A(\omega_k) - A_d(\omega_k) |^{(p-2)/2},

the fixed point of a convergent algorithm minimizes

q = \sum_{k=0}^{L-1} | A(\omega_k) - A_d(\omega_k) |^p .
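Substituting these weights into the weighted squared error makes the claim explicit: each term becomes

w_k^2 \, | A - A_d |^2 = | A - A_d |^{p-2} \, | A - A_d |^2 = | A - A_d |^p ,

so the weighted sum of squares in [link] equals q.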

It has been shown [link] that weights always exist such that minimizing [link] also minimizes [link] . The problem is to find those weights efficiently.

Basic iterative reweighted least squares

The basic IRLS algorithm is started by initializing the weight matrix defined in [link] and [link] to unit weights with W_0 = I. Using these weights to start, the m-th iteration solves [link] for the filter coefficients with

a_m = [ C^T W_m^T W_m C ]^{-1} C^T W_m^T W_m A_d

This is a formal statement of the operation. In practice one should not invert the matrix; one should use a sophisticated numerical method [link] to solve the overdetermined equations in [link]. The error or residual vector [link] for the m-th iteration is found by

\epsilon_m = C a_m - A_d
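Collecting the pieces, here is a minimal NumPy sketch of the basic IRLS loop (the function name, the choice p = 4, and the fixed iteration count are illustrative; C and Ad are the cosine matrix and desired amplitude vector from the earlier sketch):

    import numpy as np

    def irls(C, Ad, p=4, iters=20):
        # Basic IRLS: iteratively minimize sum_k |C a - Ad|_k^p.
        w = np.ones(C.shape[0])                      # W_0 = I (unit weights)
        for m in range(iters):
            # Solve the weighted problem W_m C a ~= W_m A_d by least squares,
            # without forming or inverting the normal-equation matrix.
            a, *_ = np.linalg.lstsq(w[:, None] * C, w * Ad, rcond=None)
            eps = C @ a - Ad                         # residual for iteration m
            w = np.abs(eps) ** ((p - 2) / 2)         # update weights from the error
        return a

This plain iteration may fail to converge as p grows; a practical design would add a convergence test and a more cautious weight update.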
