
Optimization theory is the branch of applied mathematics whose purpose is to consider a mathematical expression in order to find a set of parameters that either maximize or minimize it. Being an applied discipline, its problems usually arise from real-life situations in areas such as science, engineering, and finance (among many others). This section presents some basic concepts for completeness and is not meant to replace a treatise on the subject. The reader is encouraged to consult further references for more information.

Solution of linear weighted least squares problems

Consider the quadratic problem

$$\min_h \; \| d - Ch \|_2$$

which can be written as

$$\min_h \; (d - Ch)^T (d - Ch)$$

where the square root has been omitted since this problem is strictly convex. Therefore its unique (and thus global) solution is found at the point where the partial derivatives with respect to the optimization variable are equal to zero. That is,

$$\nabla_h \, (d - Ch)^T (d - Ch) = \nabla_h \left( d^T d - 2\, d^T C h + h^T C^T C h \right) = -2\, C^T d + 2\, C^T C h = 0 \;\;\Longrightarrow\;\; C^T C h = C^T d$$

The solution of [link] is given by

$$h = \left( C^T C \right)^{-1} C^T d$$

where the inverted term is referred to as the Moore-Penrose pseudoinverse of $C^T C$ [link], [link].
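As an illustration, the closed-form solution above can be computed in a few lines of NumPy. The matrices and sizes below are hypothetical placeholders; in practice a dedicated least-squares routine is preferred over forming $(C^T C)^{-1}$ explicitly.

```python
import numpy as np

# Hypothetical overdetermined system C h ≈ d.
rng = np.random.default_rng(0)
C = rng.standard_normal((20, 5))     # (L+1) x (M+1), full column rank
d = rng.standard_normal(20)

# Solve the normal equations C^T C h = C^T d.
h_normal = np.linalg.solve(C.T @ C, C.T @ d)

# Numerically safer equivalent using a least-squares solver.
h_lstsq, *_ = np.linalg.lstsq(C, d, rcond=None)

assert np.allclose(h_normal, h_lstsq)
```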

In the case of a weighted version of [link] ,

$$\min_h \; \| w \, (d - Ch) \|_2^2 = \sum_k w_k \, | d_k - C_k h |^2$$

where $C_k$ is the $k$-th row of $C$, one can write [link] as

$$\min_h \; \left( W (d - Ch) \right)^T W (d - Ch)$$

where $W = \operatorname{diag}(w)$ contains the weighting vector $w$. The solution is therefore given by

$$h = \left( C^T W^T W C \right)^{-1} C^T W^T W d$$
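A corresponding sketch for the weighted solution, again with hypothetical data; forming $W$ explicitly is shown only to mirror the formula, since scaling the rows of $C$ and $d$ by the weights is equivalent and cheaper.

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((20, 5))
d = rng.standard_normal(20)
w = rng.uniform(0.1, 1.0, size=20)    # positive weights

# h = (C^T W^T W C)^{-1} C^T W^T W d  with  W = diag(w)
W = np.diag(w)
h = np.linalg.solve(C.T @ W.T @ W @ C, C.T @ W.T @ W @ d)

# Equivalent: weight the rows and solve an ordinary least-squares problem.
h_rows, *_ = np.linalg.lstsq(w[:, None] * C, w * d, rcond=None)
assert np.allclose(h, h_rows)
```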

Newton's method and the approximation of linear systems in an $\ell_p$ sense

Newton's method and $\ell_p$ linear phase systems

Consider the problem

$$\min_a \; g(a) = \| A(\omega; a) - D(\omega) \|_p$$

for $a \in \mathbb{R}^{M+1}$. Problem [link] is equivalent to the better-posed problem

$$\min_a \; f(a) = g(a)^p = \| A(\omega; a) - D(\omega) \|_p^p = \sum_{i=0}^{L} | C_i a - D_i |^p$$

where $D_i = D(\omega_i)$, $\omega_i \in [0, \pi]$, $C_i = [\, C_{i,0}, \ldots, C_{i,M} \,]$, and

$$C = \begin{bmatrix} C_0 \\ \vdots \\ C_L \end{bmatrix}$$

The $ij$-th element of $C$ is given by $C_{i,j} = \cos \omega_i (M - j)$, where $0 \le i \le L$ and $0 \le j \le M$. From [link] we have that

$$\nabla f(a) = \begin{bmatrix} \dfrac{\partial}{\partial a_0} f(a) \\ \vdots \\ \dfrac{\partial}{\partial a_M} f(a) \end{bmatrix}$$

where $a_j$ is the $j$-th element of $a \in \mathbb{R}^{M+1}$ and

$$\frac{\partial}{\partial a_j} f(a) = \frac{\partial}{\partial a_j} \sum_{i=0}^{L} | C_i a - D_i |^p = \sum_{i=0}^{L} \frac{\partial}{\partial a_j} | C_i a - D_i |^p = p \sum_{i=0}^{L} | C_i a - D_i |^{p-1} \cdot \frac{\partial}{\partial a_j} | C_i a - D_i |$$

Now,

$$\frac{\partial}{\partial a_j} | C_i a - D_i | = \operatorname{sign}(C_i a - D_i) \cdot \frac{\partial}{\partial a_j} (C_i a - D_i) = C_{i,j} \operatorname{sign}(C_i a - D_i)$$

where

$$\operatorname{sign}(x) = \begin{cases} 1 & x > 0 \\ 0 & x = 0 \\ -1 & x < 0 \end{cases}$$

Note that

$$\lim_{u(a) \to 0^+} \frac{\partial}{\partial a_j} | u(a) |^p = \lim_{u(a) \to 0^-} \frac{\partial}{\partial a_j} | u(a) |^p = 0$$

Therefore the Jacobian of $f(a)$ is given by

$$\nabla f(a) = \begin{bmatrix} p \displaystyle\sum_{i=0}^{L} C_{i,0} \, | C_i a - D_i |^{p-1} \operatorname{sign}(C_i a - D_i) \\ \vdots \\ p \displaystyle\sum_{i=0}^{L} C_{i,M} \, | C_i a - D_i |^{p-1} \operatorname{sign}(C_i a - D_i) \end{bmatrix}$$
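The gradient formula can be transcribed directly into code. The double loop below follows the summation literally; a vectorized form appears later in this section. The function name is a hypothetical placeholder.

```python
import numpy as np

def grad_f_loop(a, C, D, p):
    """Entry j:  p * sum_i C[i, j] * |C_i a - D_i|**(p-1) * sign(C_i a - D_i)."""
    num_rows, num_cols = C.shape          # L+1 rows, M+1 columns
    grad = np.zeros(num_cols)
    for j in range(num_cols):
        for i in range(num_rows):
            r = C[i] @ a - D[i]
            grad[j] += p * C[i, j] * np.abs(r) ** (p - 1) * np.sign(r)
    return grad
```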

The Hessian of $f(a)$ is the matrix $\nabla^2 f(a)$ whose $jm$-th element ($0 \le j, m \le M$) is given by

$$\nabla^2_{j,m} f(a) = \frac{\partial^2}{\partial a_j \, \partial a_m} f(a) = \frac{\partial}{\partial a_m} \left[ \frac{\partial}{\partial a_j} f(a) \right] = \sum_{i=0}^{L} p \, C_{i,j} \, \frac{\partial}{\partial a_m} \left[ \, | C_i a - D_i |^{p-1} \operatorname{sign}(C_i a - D_i) \, \right] = \sum_{i=0}^{L} \alpha \, \frac{\partial}{\partial a_m} \left[ \, b(a) \, d(a) \, \right]$$

where the substitutions $\alpha = p \, C_{i,j}$, $b(a) = | C_i a - D_i |^{p-1}$, and $d(a) = \operatorname{sign}(C_i a - D_i)$ have been made for the sake of simplicity. We have

$$\frac{\partial}{\partial a_m} b(a) = \frac{\partial}{\partial a_m} | C_i a - D_i |^{p-1} = (p-1) \, C_{i,m} \, | C_i a - D_i |^{p-2} \operatorname{sign}(C_i a - D_i)$$

$$\frac{\partial}{\partial a_m} d(a) = \frac{\partial}{\partial a_m} \operatorname{sign}(C_i a - D_i) = 0$$

Note that the partial derivative of $d(a)$ is not defined at $C_i a - D_i = 0$. Therefore

$$\frac{\partial}{\partial a_m} \left[ \, b(a) \, d(a) \, \right] = b(a) \, \frac{\partial}{\partial a_m} d(a) + d(a) \, \frac{\partial}{\partial a_m} b(a) = (p-1) \, C_{i,m} \, | C_i a - D_i |^{p-2} \operatorname{sign}^2(C_i a - D_i)$$

Note that $\operatorname{sign}^2(C_i a - D_i) = 1$ whenever $C_i a - D_i \ne 0$. Then

$$\nabla^2_{j,m} f(a) = p \, (p-1) \sum_{i=0}^{L} C_{i,j} \, C_{i,m} \, | C_i a - D_i |^{p-2}$$

except at $C_i a - D_i = 0$, where it is not defined.
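A literal transcription of the Hessian entries follows the same pattern; points where $C_i a - D_i = 0$ (where the expression is undefined for $p < 2$) are not treated specially in this sketch.

```python
import numpy as np

def hess_f_loop(a, C, D, p):
    """Entry (j, m):  p*(p-1) * sum_i C[i, j] * C[i, m] * |C_i a - D_i|**(p-2)."""
    num_rows, num_cols = C.shape
    hess = np.zeros((num_cols, num_cols))
    for j in range(num_cols):
        for m in range(num_cols):
            for i in range(num_rows):
                r = C[i] @ a - D[i]
                hess[j, m] += p * (p - 1) * C[i, j] * C[i, m] * np.abs(r) ** (p - 2)
    return hess
```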

Based on [link] and [link], one can apply Newton's method to problem [link] as follows (a minimal code sketch is given after the list):

  • Given $a_0 \in \mathbb{R}^{M+1}$, $D \in \mathbb{R}^{L+1}$, $C \in \mathbb{R}^{(L+1) \times (M+1)}$
  • For $i = 0, 1, \ldots$
    1. Find $\nabla f(a_i)$.
    2. Find $\nabla^2 f(a_i)$.
    3. Solve $\nabla^2 f(a_i) \, s = - \nabla f(a_i)$ for $s$.
    4. Let $a_{+} = a_i + s$.
    5. Check for convergence and iterate if necessary.
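A minimal sketch of this iteration, assuming gradient and Hessian routines such as the ones sketched in this section are available; the stopping tolerance and iteration cap are arbitrary illustrative choices.

```python
import numpy as np

def newton_lp(a0, C, D, p, grad_f, hess_f, tol=1e-8, max_iter=50):
    """Newton's method for min_a sum_i |C_i a - D_i|^p (1 < p < inf)."""
    a = np.asarray(a0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad_f(a, C, D, p)
        H = hess_f(a, C, D, p)
        s = np.linalg.solve(H, -g)        # step 3: solve the Newton system
        a = a + s                         # step 4: update the estimate
        if np.linalg.norm(s) < tol:       # step 5: convergence check
            break
    return a
```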

Note that for problem [link] the Jacobian of $f(a)$ can be written as

$$\nabla f(a) = p \, C^T y$$

where

$$y = | C a_i - D |^{p-1} \operatorname{sign}(C a_i - D) = | C a_i - D |^{p-2} \, (C a_i - D)$$

with the absolute value, power, and sign applied elementwise.
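In code this vectorized gradient is a one-liner and agrees with the loop version sketched earlier (hypothetical function name):

```python
import numpy as np

def grad_f(a, C, D, p):
    """Vectorized gradient:  p * C^T y,  with  y = |C a - D|^(p-2) * (C a - D)."""
    r = C @ a - D
    y = np.abs(r) ** (p - 2) * r          # elementwise; assumes no zero residual when p < 2
    return p * C.T @ y
```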

Also,

$$\nabla^2_{j,m} f(a) = p \, (p-1) \, C_j^T Z \, C_m$$

where

$$Z = \operatorname{diag}\!\left( | C a_i - D |^{p-2} \right)$$

and

$$C_j = \begin{bmatrix} C_{0,j} \\ \vdots \\ C_{L,j} \end{bmatrix}$$

Therefore

$$\nabla^2 f(a) = (p^2 - p) \, C^T Z \, C$$

From [link], the Hessian $\nabla^2 f(a)$ can be expressed as

$$\nabla^2 f(a) = (p^2 - p) \, C^T W^T W C$$

where

$$W = \operatorname{diag}\!\left( | C a_i - D |^{\frac{p-2}{2}} \right)$$
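Similarly, the Hessian can be assembled from the weights on the diagonal of $W$; the sketch below scales the rows of $C$ elementwise instead of forming the diagonal matrix explicitly.

```python
import numpy as np

def hess_f(a, C, D, p):
    """Vectorized Hessian:  (p^2 - p) * C^T W^T W C,  W = diag(|C a - D|^((p-2)/2))."""
    r = C @ a - D
    w = np.abs(r) ** ((p - 2) / 2)        # diagonal entries of W
    WC = w[:, None] * C                   # W C without building diag(w)
    return (p ** 2 - p) * WC.T @ WC
```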

The matrix $C \in \mathbb{R}^{(L+1) \times (M+1)}$ is given by

$$C = \begin{bmatrix}
\cos M\omega_0 & \cos (M-1)\omega_0 & \cdots & \cos (M-j)\omega_0 & \cdots & \cos \omega_0 & 1 \\
\cos M\omega_1 & \cos (M-1)\omega_1 & \cdots & \cos (M-j)\omega_1 & \cdots & \cos \omega_1 & 1 \\
\vdots & \vdots & & \vdots & & \vdots & \vdots \\
\cos M\omega_i & \cos (M-1)\omega_i & \cdots & \cos (M-j)\omega_i & \cdots & \cos \omega_i & 1 \\
\vdots & \vdots & & \vdots & & \vdots & \vdots \\
\cos M\omega_{L-1} & \cos (M-1)\omega_{L-1} & \cdots & \cos (M-j)\omega_{L-1} & \cdots & \cos \omega_{L-1} & 1 \\
\cos M\omega_L & \cos (M-1)\omega_L & \cdots & \cos (M-j)\omega_L & \cdots & \cos \omega_L & 1
\end{bmatrix}$$
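The matrix $C$ can be built directly from a chosen frequency grid; the order $M$ and grid density below are hypothetical.

```python
import numpy as np

M = 10                                    # a has M+1 coefficients
L = 200                                   # L+1 frequency samples
omega = np.linspace(0.0, np.pi, L + 1)

# C[i, j] = cos(omega_i * (M - j)),  0 <= i <= L,  0 <= j <= M
cols = np.arange(M + 1)
C = np.cos(np.outer(omega, M - cols))     # shape (L+1, M+1); last column is all ones
```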

The matrix $H = \nabla^2 f(a)$ is positive definite (for $p > 1$). To see this, write $H = (p^2 - p) \, K^T K$ where $K = W C$; since $p^2 - p > 0$ for $p > 1$, it suffices to consider $K^T K$. Let $z \in \mathbb{R}^{M+1}$, $z \ne 0$. Then

$$z^T H z = (p^2 - p) \, z^T K^T K z = (p^2 - p) \, \| K z \|_2^2 > 0$$

unless $z \in \mathcal{N}(K)$. But since $W$ is diagonal and $C$ has full column rank, $\mathcal{N}(K) = \{ 0 \}$. Thus $z^T H z > 0$ for all $z \ne 0$ (with equality only if $z = 0$), and so $H$ is positive definite.
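This positive-definiteness argument can also be checked numerically: a Cholesky factorization succeeds only for symmetric positive definite matrices. The sketch below reuses the matrix `C` and the `hess_f` routine sketched above, with hypothetical data.

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.standard_normal(C.shape[0])       # C as constructed above
a = rng.standard_normal(C.shape[1])
p = 3.0

H = hess_f(a, C, D, p)                    # (p^2 - p) * C^T W^T W C
np.linalg.cholesky(H)                     # raises LinAlgError if H were not positive definite
```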

Newton's method and $\ell_p$ complex linear systems

Consider the problem

$$\min_x \; e(x) = \| A x - b \|_p^p$$

where $A \in \mathbb{C}^{m \times n}$, $x \in \mathbb{R}^n$ and $b \in \mathbb{C}^m$. One can write [link] in terms of the real and imaginary parts of $A$ and $b$,

$$e(x) = \sum_{i=1}^{m} | A_i x - b_i |^p = \sum_{i=1}^{m} \left| \operatorname{Re}\{ A_i x - b_i \} + j \operatorname{Im}\{ A_i x - b_i \} \right|^p = \sum_{i=1}^{m} \left| (R_i x - \alpha_i) + j (Z_i x - \gamma_i) \right|^p = \sum_{i=1}^{m} \left[ (R_i x - \alpha_i)^2 + (Z_i x - \gamma_i)^2 \right]^{p/2} = \sum_{i=1}^{m} g_i(x)^{p/2}$$

where $A = R + jZ$ and $b = \alpha + j\gamma$. The gradient $\nabla e(x)$ is the vector whose $k$-th element is given by

$$\frac{\partial}{\partial x_k} e(x) = \frac{p}{2} \sum_{i=1}^{m} \frac{\partial g_i(x)}{\partial x_k} \, g_i(x)^{\frac{p-2}{2}} = \frac{p}{2} \, q_k(x) \, \hat{g}(x)$$

where $\hat{g}(x)$ is the column vector with entries $g_i(x)^{\frac{p-2}{2}}$ and $q_k$ is the row vector whose $i$-th element is

$$q_{k,i}(x) = \frac{\partial g_i(x)}{\partial x_k} = 2 \, (R_i x - \alpha_i) \, R_{ik} + 2 \, (Z_i x - \gamma_i) \, Z_{ik} = 2 \, R_{ik} R_i x + 2 \, Z_{ik} Z_i x - \left[ 2 \, \alpha_i R_{ik} + 2 \, \gamma_i Z_{ik} \right]$$

Therefore one can express the gradient of $e(x)$ by $\nabla e(x) = \frac{p}{2} Q \, \hat{g}$, where $Q = [\, q_{k,i} \,]$ as above. Note that one can also write the gradient in vector form as follows

$$\nabla e(x) = p \left[ R^T \operatorname{diag}(R x - \alpha) + Z^T \operatorname{diag}(Z x - \gamma) \right] \cdot \left[ (R x - \alpha)^2 + (Z x - \gamma)^2 \right]^{\frac{p-2}{2}}$$
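A sketch of this vector form of the gradient for the complex case, splitting $A$ and $b$ into real and imaginary parts; the squaring and the power $(p-2)/2$ are applied elementwise, and the function name is a hypothetical placeholder.

```python
import numpy as np

def grad_e(x, A, b, p):
    """Gradient of e(x) = ||A x - b||_p^p for complex A, b and real x."""
    R, Z = A.real, A.imag
    alpha, gamma = b.real, b.imag
    u = R @ x - alpha                        # R x - alpha
    v = Z @ x - gamma                        # Z x - gamma
    s = (u ** 2 + v ** 2) ** ((p - 2) / 2)   # g(x)^((p-2)/2), elementwise
    return p * (R.T @ (u * s) + Z.T @ (v * s))
```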

The Hessian $H(x)$ is the matrix of second derivatives whose $kl$-th entry is given by

$$H_{k,l}(x) = \frac{\partial^2}{\partial x_k \, \partial x_l} e(x) = \frac{\partial}{\partial x_l} \left[ \frac{p}{2} \sum_{i=1}^{m} q_{k,i}(x) \, g_i(x)^{\frac{p-2}{2}} \right] = \frac{p}{2} \sum_{i=1}^{m} \left[ q_{k,i}(x) \, \frac{\partial}{\partial x_l} g_i(x)^{\frac{p-2}{2}} + g_i(x)^{\frac{p-2}{2}} \, \frac{\partial}{\partial x_l} q_{k,i}(x) \right]$$

Now,

$$\frac{\partial}{\partial x_l} g_i(x)^{\frac{p-2}{2}} = \frac{p-2}{2} \, \frac{\partial g_i(x)}{\partial x_l} \, g_i(x)^{\frac{p-4}{2}} = \frac{p-2}{2} \, q_{l,i}(x) \, g_i(x)^{\frac{p-4}{2}}$$

$$\frac{\partial}{\partial x_l} q_{k,i}(x) = 2 \, R_{ik} R_{il} + 2 \, Z_{ik} Z_{il}$$

Substituting [link] and [link] into [link] we obtain

$$H_{k,l}(x) = \frac{p \, (p-2)}{4} \sum_{i=1}^{m} q_{k,i}(x) \, q_{l,i}(x) \, g_i(x)^{\frac{p-4}{2}} + p \sum_{i=1}^{m} \left( R_{ik} R_{il} + Z_{ik} Z_{il} \right) g_i(x)^{\frac{p-2}{2}}$$

Note that H ( x ) can be written in matrix form as

$$H(x) = \frac{p \, (p-2)}{4} \, Q \operatorname{diag}\!\left( g(x)^{\frac{p-4}{2}} \right) Q^T + p \left[ R^T \operatorname{diag}\!\left( g(x)^{\frac{p-2}{2}} \right) R + Z^T \operatorname{diag}\!\left( g(x)^{\frac{p-2}{2}} \right) Z \right]$$
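The matrix form of the Hessian translates directly; here `Q` has one column per data row $i$ with entries $q_{k,i}(x)$, and the diagonal scalings are applied by elementwise multiplication. This is a hypothetical sketch that assumes $g_i(x) > 0$ for every $i$.

```python
import numpy as np

def hess_e(x, A, b, p):
    """Hessian of e(x) = ||A x - b||_p^p for complex A, b and real x."""
    R, Z = A.real, A.imag
    alpha, gamma = b.real, b.imag
    u = R @ x - alpha
    v = Z @ x - gamma
    g = u ** 2 + v ** 2                   # g_i(x), assumed nonzero
    Q = 2.0 * (R.T * u + Z.T * v)         # Q[k, i] = q_{k,i}(x), shape n x m
    H = (p * (p - 2) / 4.0) * (Q * g ** ((p - 4) / 2)) @ Q.T
    H += p * (R.T * g ** ((p - 2) / 2)) @ R
    H += p * (Z.T * g ** ((p - 2) / 2)) @ Z
    return H
```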

Therefore to solve [link] one can use Newton's method as follows: given an initial point $x_0$, each iteration gives a new estimate $x_{+}$ according to the formulas

$$H(x_c) \, s = - \nabla e(x_c)$$

$$x_{+} = x_c + s$$

where $H(x_c)$ and $\nabla e(x_c)$ correspond to the Hessian and gradient of $e(x)$ as defined previously, evaluated at the current point $x_c$. Since the $p$-norm is convex for $1 < p < \infty$, problem [link] is convex. Therefore Newton's method will converge to the global minimizer $x^\star$ as long as $H(x_c)$ is not ill-conditioned.
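Finally, a sketch of the Newton iteration for the complex case, relying on the `grad_e` and `hess_e` sketches above; the starting point, tolerance, and iteration cap are arbitrary illustrative choices.

```python
import numpy as np

def newton_lp_complex(x0, A, b, p, tol=1e-8, max_iter=50):
    """Newton's method for min_x ||A x - b||_p^p, with complex A, b and real x."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad_e(x, A, b, p)            # gradient sketch above
        H = hess_e(x, A, b, p)            # Hessian sketch above
        s = np.linalg.solve(H, -g)
        x = x + s
        if np.linalg.norm(s) < tol:
            break
    return x
```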

Source:  OpenStax, Iterative design of l_p digital filters. OpenStax CNX. Dec 07, 2011 Download for free at http://cnx.org/content/col11383/1.1