$$\hat{a}_{m+1} = \left[ C^T W_{m+1}^T W_{m+1} C \right]^{-1} C^T W_{m+1}^T W_{m+1} A_d .$$

The vector of filter coefficients that is actually used is only partially updated, using a form of adjustable step size, in the following second-order linearly weighted sum:

$$a_{m+1} = \lambda\, \hat{a}_{m+1} + (1 - \lambda)\, a_m$$

Using this filter coefficient vector, we solve for the next error vector by going back to [link], and this defines Karlovitz's IRLS algorithm [link].
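To make the iteration concrete, here is a minimal numpy sketch of Karlovitz's update with a fixed λ. The identifiers C, Ad, p, lam, and n_iter (basis matrix, desired response sampled on a frequency grid, error power, step size, and iteration count) are illustrative assumptions, not notation from the original development.

```python
import numpy as np

def karlovitz_irls(C, Ad, p, lam, n_iter=50):
    # Initial L2 (unweighted least squares) approximation.
    a = np.linalg.lstsq(C, Ad, rcond=None)[0]
    for _ in range(n_iter):
        e = C @ a - Ad                     # current error vector
        w = np.abs(e) ** ((p - 2) / 2)     # weights: ||w*e||_2^2 = sum |e|^p
        # Weighted least squares solution a_hat, solved via lstsq rather
        # than forming the normal equations explicitly.
        a_hat = np.linalg.lstsq(w[:, None] * C, w * Ad, rcond=None)[0]
        a = lam * a_hat + (1 - lam) * a    # partial (damped) coefficient update
    return a
```

The fixed lam here stands in for the per-iteration line search over λ discussed next.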

In this algorithm, λ is a convergence parameter that takes values 0 < λ ≤ 1. Karlovitz showed that for the proper λ, the IRLS algorithm using [link] always converges to the globally optimal L_p approximation for p an even integer in the range 4 ≤ p < ∞. At each iteration the L_p error has to be minimized over λ, which requires a line search. In other words, the full Karlovitz method requires a multi-dimensional weighted least squares minimization and a one-dimensional pth-power error minimization at each iteration. Extensions of Karlovitz's work [link] show the one-dimensional minimization is not necessary, but practice shows the number of required iterations increases considerably and robustness is lost.
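The per-iteration minimization over λ might be sketched as follows, using scipy's bounded scalar minimizer as a stand-in for whatever line search is actually used; the helper names lp_error and best_lambda are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lp_error(a, C, Ad, p):
    # The p-th power error that the one-dimensional search minimizes.
    return np.sum(np.abs(C @ a - Ad) ** p)

def best_lambda(a_hat, a_prev, C, Ad, p):
    # Search 0 < lam <= 1 for the step size that minimizes the L_p error
    # of the blended vector lam*a_hat + (1-lam)*a_prev.
    res = minimize_scalar(
        lambda lam: lp_error(lam * a_hat + (1 - lam) * a_prev, C, Ad, p),
        bounds=(1e-6, 1.0), method="bounded")
    return res.x
```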

Fletcher et al. [link] and later Kahng [link] independently derive the same second-order iterative algorithm by applying Newton's method. That approach gives a formula for λ as a function of p and is discussed later in this paper. Although the iteration count for convergence of the Karlovitz method is good, indeed perhaps the best of all, the minimization of λ at each iteration causes the algorithm to be very slow in execution.

Newton's methods

Both the new method in section 4.3 and Lawson's method use a second-order updating of the weights to obtain convergence of the basic IRLS algorithm. Fletcher et al. [link] and Kahng [link] use a linear summation for the updating, similar in form to [link], but apply it to the filter coefficients in the manner of Karlovitz rather than to the weights as Lawson did. Indeed, using our development of Karlovitz's method, we see that Kahng's method and Fletcher, Grant, and Hebden's method are simply a particular choice of λ as a function of p in Karlovitz's method. They derive

$$\lambda = \frac{1}{p - 1}$$

by using Newton's method to minimize ε in [link], which for [link] gives

$$a_m = \frac{\hat{a}_m + (p - 2)\, a_{m-1}}{p - 1} .$$

This defines Kahng's method, which he says always converges [link]. He also notes that the summation methods in [link] do not have the possible restarting problem that Lawson's method theoretically does. Because Kahng's algorithm is a form of Newton's method, its asymptotic convergence is very good but the initial convergence is poor and very sensitive to starting values.
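A sketch of Kahng's iteration follows: it is the Karlovitz loop shown earlier with λ fixed at 1/(p − 1), so no line search is needed. The identifiers are the same assumed names as before.

```python
import numpy as np

def kahng_irls(C, Ad, p, n_iter=30):
    a = np.linalg.lstsq(C, Ad, rcond=None)[0]   # initial L2 solution
    for _ in range(n_iter):
        e = C @ a - Ad
        w = np.abs(e) ** ((p - 2) / 2)          # IRLS weights
        a_hat = np.linalg.lstsq(w[:, None] * C, w * Ad, rcond=None)[0]
        a = (a_hat + (p - 2) * a) / (p - 1)     # Newton update, lam = 1/(p-1)
    return a
```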

A new robust IRLS method

A modification and generalization of an acceleration method suggested independently by Ekblom [link] and by Kahng [link] is developed here and combined with the Newton's method of Fletcher, Grant, and Hebden and of Kahng to give a robust, fast, and accurate IRLS algorithm [link], [link]. It overcomes the poor initial performance of the Newton's methods and the poor final performance of the RUL algorithms.

Rather than starting the iterations of the IRLS algorithms with the actual desired value of p, after the initial L_2 approximation the new algorithm starts with p = 2K, where K is a parameter between one and approximately two, chosen for the particular problem specifications. After the first iteration, the value of p is increased to p = 2K². It is increased by a factor of K at each iteration until it reaches the actual desired value. This keeps the value of p being approximated just ahead of the value achieved. This is similar to a homotopy where we vary the value of p from 2 to its final value. A small value of K gives very reliable convergence, because the approximation is achieved at each iteration, but requires a large number of iterations for p to reach its final value. A large value of K gives faster convergence for most filter specifications but fails for some. The rule that is used to choose p_m at the m-th iteration is

$$p_m = \min\left( p_{des},\ K\, p_{m-1} \right)$$

where p_des is the desired final value of p.
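A sketch of this p-update rule; the generator form and the names p_desired and K are assumptions, with K the factor described above.

```python
def p_schedule(p_desired, K):
    # p_1 = 2K, p_2 = 2K^2, ..., multiplying by K each iteration and
    # capping at the desired value: p_m = min(p_desired, K * p_{m-1}).
    p = 2.0
    while p < p_desired:
        p = min(p_desired, K * p)
        yield p
```

For example, list(p_schedule(10, 1.5)) produces [3.0, 4.5, 6.75, 10], so each iterate's target value of p stays just ahead of the value already achieved.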
