
Example 2 Let $X = \mathbb{R}^2$; solve

$$x_0 = \operatorname*{arg\,max}_{x = [x_1\ x_2]^T} f(x) = x_1 + x_2 \quad \text{subject to} \quad x_1^2 + x_2^2 = 1.$$

In words, we look for the point on the unit circle in $\mathbb{R}^2$ that has the largest sum of coordinates. It is easy to see by inspection that this point is given by $x_0 = \left[\tfrac{\sqrt{2}}{2}\ \tfrac{\sqrt{2}}{2}\right]^T$. The feasible set can be given in terms of the single equality constraint

$$g(x) = x_1^2 + x_2^2 - 1 = x^T x - 1 = \langle x, x \rangle - 1 = \langle x, Ix \rangle - 1.$$

From previous work, we know that the gradient is given by $\nabla g(x) = (I + I^*)x = 2Ix = 2x$. We can also write the objective function as

$$f(x) = \left\langle \begin{bmatrix} 1 \\ 1 \end{bmatrix}, x \right\rangle,$$

which has gradient $\nabla f(x) = [1\ 1]^T$. Thus, we can write the tangent space in this case as

$$T_\Omega(x_0) = \{h \in X : \langle \nabla g(x_0), h \rangle = 0\} = \{h \in X : \langle x_0, h \rangle = 0\} = \left\{h : \tfrac{\sqrt{2}}{2} h_1 + \tfrac{\sqrt{2}}{2} h_2 = 0\right\}.$$

Therefore, we can write the tangent space as $T_\Omega(x_0) = \{h \in X : h_1 = -h_2\}$. It is easy to see at this point that for any such $h \in T_\Omega(x_0)$ we will have $\langle \nabla f(x_0), h \rangle = 0$, as stated by Theorem  [link] .
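As a sanity check, the following sketch verifies this example numerically. It is not part of the original text and assumes numpy and scipy are available; scipy's SLSQP solver handles the equality constraint directly.

```python
# A minimal numerical check of Example 2 (an added sketch, not from the text).
# We maximize x1 + x2 on the unit circle by minimizing the negated objective.
import numpy as np
from scipy.optimize import minimize

f = lambda x: -(x[0] + x[1])                  # negate: scipy minimizes
g = {"type": "eq", "fun": lambda x: x[0]**2 + x[1]**2 - 1}

res = minimize(f, x0=np.array([1.0, 0.0]), constraints=[g], method="SLSQP")
print(res.x)                                  # ~ [0.7071, 0.7071] = [sqrt(2)/2, sqrt(2)/2]

# The gradient of f, [1, 1], is orthogonal to the tangent direction [1, -1]:
print(np.dot([1.0, 1.0], [1.0, -1.0]))        # 0.0
```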

Lagrange multipliers

In this section, we will develop a method to solve optimization problems with equality constraints; the key tool is a result about linear functionals.

Lemma 1 Let $f_0, f_1, f_2, \ldots, f_n$ be linear functionals on a Hilbert space $X$ and suppose $f_0(x) = 0$ for all $x \in X$ such that $f_i(x) = 0$, $i = 1, 2, \ldots, n$. Then there exist constants $\lambda_1, \lambda_2, \ldots, \lambda_n$ such that $f_0 = \lambda_1 f_1 + \lambda_2 f_2 + \ldots + \lambda_n f_n$.

Since our functionals are linear, we have $f_0, f_1, \ldots, f_n \in X^*$. Define the subspace $M = \operatorname{span}(\{f_1, f_2, \ldots, f_n\})$. Since $M$ is finite-dimensional, it is closed. We can therefore define its orthogonal complement:

$$M^\perp = \{f \in X^* : \langle f, f_i \rangle = 0,\ i = 1, 2, \ldots, n\}.$$

Since Hilbert spaces are self-dual, for each functional $f_i$ there exists $w_i \in X$ such that $f_i(x) = \langle x, w_i \rangle$; therefore, we can rewrite the space above in terms of its dual equivalent

$$M^\perp \cong \{w \in X : \langle w, w_i \rangle = 0,\ i = 1, 2, \ldots, n\}.$$

Now since $\langle w, w_i \rangle = f_i(w)$, it follows that for all $w \in M^\perp$ we have $f_i(w) = 0$, $i = 1, \ldots, n$. Therefore, the hypothesis of the Lemma implies that for all $w \in M^\perp$ we have $f_0(w) = 0 = \langle w, w_0 \rangle$, where $w_0$ is the representer of $f_0$. This implies that $w_0 \in (M^\perp)^\perp = M$, due to $M$ being closed. Returning to the dual space $X^*$, this implies that $f_0 \in M = \operatorname{span}(\{f_1, \ldots, f_n\})$, and so we can write $f_0 = \lambda_1 f_1 + \lambda_2 f_2 + \ldots + \lambda_n f_n$.
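To make the lemma concrete, here is a small finite-dimensional illustration (an added sketch, not from the original text, assuming numpy). In $X = \mathbb{R}^3$ every linear functional is an inner product with a representer vector, and the multipliers $\lambda_i$ can be recovered by expressing $w_0$ in the span of the $w_i$:

```python
# Finite-dimensional illustration of Lemma 1 (hypothetical example data).
# f0 is built to vanish wherever f1 and f2 do, so its representer w0
# lies in span{w1, w2}; least squares recovers the multipliers exactly.
import numpy as np

w1 = np.array([1.0, 0.0, 1.0])   # representer of f1
w2 = np.array([0.0, 1.0, 1.0])   # representer of f2
w0 = 2.0 * w1 - 1.0 * w2         # f0 = 2 f1 - f2 by construction

A = np.column_stack([w1, w2])
lam, *_ = np.linalg.lstsq(A, w0, rcond=None)
print(lam)                       # [ 2. -1.]  -> lambda_1 = 2, lambda_2 = -1
```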

Theorem  [link] shows that we can apply Lemma  [link] to the constrained optimization problem: the differentials $\delta f(x_0; \cdot)$ and $\delta g_i(x_0; \cdot)$ are linear functionals on $X$, and the former vanishes whenever the latter all do. Thus, at an extremum $x_0 \in X$ of the constrained program there exist constants $\lambda_1, \ldots, \lambda_n$ such that for all $h \in X$,

$$\begin{aligned}
\delta f(x_0; h) &= \sum_{i=1}^n \lambda_i\, \delta g_i(x_0; h), \\
\langle \nabla f(x_0), h \rangle &= \sum_{i=1}^n \lambda_i \langle \nabla g_i(x_0), h \rangle, \\
\left\langle \nabla f(x_0) - \sum_{i=1}^n \lambda_i \nabla g_i(x_0), h \right\rangle &= 0,
\end{aligned}$$

which is equivalent to $\nabla f(x_0) - \sum_{i=1}^n \lambda_i \nabla g_i(x_0) = 0$ or, absorbing a sign change into the constants $\lambda_i$, $\nabla f(x_0) + \sum_{i=1}^n \lambda_i \nabla g_i(x_0) = 0$. The left-hand side can be recognized as the gradient of the Lagrangian function

$$L(x, \lambda) = f(x) + \sum_{i=1}^n \lambda_i g_i(x).$$

Thus, at the extremum the directional derivative of the Lagrangian must be zero in all directions or, equivalently, the gradient of the Lagrangian must be zero. The results obtained in this section can be collected into a proof of the following theorem.

Theorem 2 If $x_0 \in X$ is an extremum of a functional $f$ subject to the constraints $\{g_i\}_{i=1}^n$, then there exist scalars $\lambda_1, \ldots, \lambda_n$ such that the Lagrangian $L(x, \lambda) = f(x) + \sum_{i=1}^n \lambda_i g_i(x)$ is stationary at $x_0$, i.e., for all $h \in X$ we have $\delta L(x_0, \lambda; h) = 0$, i.e., $\nabla L(x_0, \lambda) = 0$.

The constants $\lambda_1, \lambda_2, \ldots, \lambda_n$ are known as Lagrange multipliers.
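As an illustration of Theorem 2, the following sketch (an addition to the text, assuming sympy is available) revisits Example 2 through the Lagrangian: we form $L(x, \lambda) = f(x) + \lambda g(x)$ and solve the stationarity equations together with the constraint.

```python
# Symbolic sketch of Theorem 2 applied to Example 2's problem.
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam", real=True)
f = x1 + x2
g = x1**2 + x2**2 - 1
L = f + lam * g                       # Lagrangian L(x, lambda)

# Stationarity in x plus the constraint itself:
eqs = [sp.diff(L, v) for v in (x1, x2)] + [g]
print(sp.solve(eqs, [x1, x2, lam]))
# Two stationary points: (sqrt(2)/2, sqrt(2)/2) and (-sqrt(2)/2, -sqrt(2)/2);
# the first is the maximizer found geometrically in Example 2.
```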

Example 3 We want to minimize $f(x) = x_1^2 + x_2^2$ subject to $2x_1 + x_2 = 3$. The constraint function is

$$g(x) = 2x_1 + x_2 - 3.$$

From earlier we know that $\nabla f(x) = 2x$, while we can rewrite the constraint as

$$g(x) = \left\langle \begin{bmatrix} 2 \\ 1 \end{bmatrix}, x \right\rangle - 3,$$

so that $\nabla g(x) = [2\ 1]^T$. Therefore, the condition on the gradient of the Lagrangian at the extremum results in the equation

$$\nabla f(x) + \lambda \nabla g(x) = 0, \quad 2x + \lambda \begin{bmatrix} 2 \\ 1 \end{bmatrix} = 0, \quad \begin{bmatrix} 2x_1 + 2\lambda \\ 2x_2 + \lambda \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix};$$

the solution to this equation is $x_1 = -\lambda$, $x_2 = -\frac{1}{2}\lambda$. To solve for the value of the Lagrange multiplier $\lambda$, we plug the solution into the constraint:

$$2(-\lambda) + \left(-\tfrac{1}{2}\lambda\right) - 3 = 0,$$

which gives $\lambda = -\frac{6}{5}$. Therefore, we end up with the solution $x_1 = \frac{6}{5}$, $x_2 = \frac{3}{5}$.
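The computation can be double-checked symbolically; this added sketch assumes sympy and simply solves the stationarity equations of the Lagrangian from Example 3 together with the constraint.

```python
# Symbolic check of Example 3 (an added sketch, not from the original text).
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam", real=True)
L = x1**2 + x2**2 + lam * (2*x1 + x2 - 3)     # Lagrangian for Example 3

sol = sp.solve([sp.diff(L, x1), sp.diff(L, x2), 2*x1 + x2 - 3],
               [x1, x2, lam])
print(sol)   # {x1: 6/5, x2: 3/5, lam: -6/5}, matching the text
```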

Source:  OpenStax, Signal theory. OpenStax CNX. Oct 18, 2013 Download for free at http://legacy.cnx.org/content/col11542/1.3