$$\epsilon_m = C a_m - A_d$$

A new weighting vector is created from this error vector using [link] by

$$w_{m+1} = |\epsilon_m|^{(p-2)/2}$$

whose elements are the diagonal elements of the new weight matrix

$$W_{m+1} = \operatorname{diag}[w_{m+1}].$$

Using this weight matrix, we solve for the next vector of filter coefficients by going back to [link]; this defines the basic iterative process of the IRLS algorithm.
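To make the iteration concrete, here is a minimal sketch of this basic IRLS loop in Python with NumPy. The frequency-response matrix C, the desired response vector Ad, and the norm p are assumed to be given; the function name, the fixed iteration count, and the unweighted $L_2$ starting point are illustrative choices rather than prescriptions from the text.

```python
import numpy as np

def basic_irls(C, Ad, p, n_iter=50):
    """Basic IRLS: alternate a weighted least squares solve with a
    weight update built from the current error vector."""
    # Start from the unweighted least squares (L2) solution.
    a = np.linalg.lstsq(C, Ad, rcond=None)[0]
    for _ in range(n_iter):
        eps = C @ a - Ad                    # error vector eps_m = C a_m - Ad
        w = np.abs(eps) ** ((p - 2) / 2)    # weighting vector w_{m+1}
        # Scaling the rows of C and Ad by w is equivalent to applying
        # W = diag(w); the solve minimizes ||W (C a - Ad)||_2.
        a = np.linalg.lstsq(w[:, None] * C, w * Ad, rcond=None)[0]
    return a
```

Note that minimizing $\|W(Ca - A_d)\|_2^2$ with $W = \operatorname{diag}[|\epsilon|^{(p-2)/2}]$ sums terms $|\epsilon_i|^{p-2}\epsilon_i^2 = |\epsilon_i|^p$, which is why a fixed point of this map minimizes the $p$-norm of the error.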

It can easily be shown that the $a$ that minimizes [link] is a fixed point of this iterative map. Unfortunately, applied directly, this basic IRLS algorithm does not converge and/or has numerical problems for most practical cases [link]. There are three aspects that must be addressed. First, the IRLS algorithm must theoretically converge. Second, the solution of [link] must be numerically stable. Finally, even if the algorithm converges and is numerically stable, it must converge fast enough to be practical.

Both theory and experience indicate there are different convergence problems connected with several different ranges and values of $p$. In the range $2 \le p < 3$, virtually all methods converge [link], [link], [link]. In the range $3 \le p < \infty$, the basic algorithm diverges and the various methods discussed in this paper must be used. As $p$ becomes large compared to 2, the weights carry a larger contribution to the total minimization than the underlying least squared error minimization, the improvement at each iteration becomes smaller, and the likelihood of divergence becomes larger. For $p = \infty$ we can use to advantage the fact that the optimal approximation solution to [link] is unique but the weights in [link] that give that solution are not. In other words, different matrices $W$ give the same solution to [link] but will have different convergence properties. This allows certain alterations to the weights to improve convergence without harming the optimality of the results [link]. In the range $1 < p < 2$, both convergence and numerical problems exist because, in contrast to $p > 2$, the IRLS iterations are undoing what the underlying least squares is doing. In particular, the weights near frequencies with small errors become very large. Indeed, if the error happens to be zero, the weight becomes infinite because of the negative exponent in [link]. For $p = 1$ the solution to the optimization problem is not even unique. The various algorithms presented below are based on schemes to address these problems.
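As one illustration of such a scheme for $1 < p < 2$ (an assumption for this sketch, not a method given in this passage), the error magnitudes can be floored away from zero before the weights are computed, so that a zero error produces a large but finite weight:

```python
import numpy as np

def safeguarded_weights(eps, p, floor=1e-8):
    """Weight update for 1 < p < 2 with a small floor on |eps|.

    The exponent (p - 2)/2 is negative for p < 2, so a zero error
    would give an infinite weight; clipping |eps| away from zero
    (the floor value is an illustrative choice) keeps the weighted
    least squares step well conditioned.
    """
    return np.clip(np.abs(eps), floor, None) ** ((p - 2) / 2)
```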

The Karlovitz method

In order to achieve convergence, a second-order update is used which only partially changes the filter coefficients $a_m$ in [link] at each iteration. This is done by first calculating the unweighted $L_2$ approximation filter coefficients using Equation 6 from Chebyshev or Equal Ripple Error Approximation Filters as

$$a_0 = [C^T C]^{-1} C^T A_d.$$

The error or residual vector (Equation 1 from Chebyshev or Equal Ripple Error Approximation Filters) for the $m$th iteration is found, as in Equation 97 from Sampling, Up-Sampling, Down-Sampling, and Multi-Rate, by

$$\epsilon_m = C a_m - A_d$$

and the new weighting vector is created from this error vector using [link] by

$$w_{m+1} = |\epsilon_m|^{(p-2)/2}$$

whose elements are the diagonal elements of the new weight matrix

$$W_{m+1} = \operatorname{diag}[w_{m+1}].$$

This weight matrix is then used to calculate a temporary filter coefficient vector from the weighted form of [link] by

$$\hat{a}_{m+1} = [C^T W_{m+1}^T W_{m+1} C]^{-1} C^T W_{m+1}^T W_{m+1} A_d.$$
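Putting the pieces together, the following Python/NumPy sketch implements a Karlovitz-style iteration, where the temporary vector is blended with the previous one by a partial update of the form $a_{m+1} = \lambda \hat{a}_{m+1} + (1 - \lambda)\, a_m$. The fixed partial-update factor lam, the tolerance, and the iteration cap are illustrative assumptions; choosing $\lambda$ by a line search over the $p$-norm of the error is another possibility.

```python
import numpy as np

def karlovitz_irls(C, Ad, p, lam=0.5, n_iter=100, tol=1e-10):
    """Karlovitz-style IRLS with a partial (second-order) update.

    lam is the partial-update factor, 0 < lam <= 1; a fixed value is
    assumed here for simplicity.  tol and n_iter are illustrative
    stopping choices, not prescribed by the text.
    """
    # Unweighted L2 starting point: a0 = [C^T C]^{-1} C^T Ad
    a = np.linalg.lstsq(C, Ad, rcond=None)[0]
    for _ in range(n_iter):
        eps = C @ a - Ad                     # residual vector eps_m
        w = np.abs(eps) ** ((p - 2) / 2)     # weighting vector w_{m+1}
        # Temporary coefficients from the weighted normal equations;
        # row scaling by w is equivalent to applying W = diag(w).
        a_hat = np.linalg.lstsq(w[:, None] * C, w * Ad, rcond=None)[0]
        a_new = lam * a_hat + (1 - lam) * a  # partial update
        if np.linalg.norm(a_new - a) < tol:  # coefficients settled
            return a_new
        a = a_new
    return a
```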
