Introduces the theory of constrained optimization, including Lagrange multipliers.

In constrained optimization, we seek to minimize or maximize an objective function only over a set of inputs $\Omega \subseteq X$ that can be written in the following form:

$$\Omega = \left\{ x \in X : g_1(x) = 0,\ h_1(x) \le 0,\ g_2(x) = 0,\ h_2(x) \le 0,\ g_3(x) = 0,\ h_3(x) \le 0,\ \ldots \right\}.$$

In words, $\Omega$ is a set described by a collection of equalities and inequalities on $x$. The constraints $g_i(x) = 0$ are said to be equality constraints, and the constraints $h_i(x) \le 0$ are said to be inequality constraints.

Example 1 Minimize $f(x)$ subject to $x_3 = 0$, $1 \le x_1 \le 2$, and $1 \le x_2 \le 2$, drawn in [link].

Figure: An example of a feasible set $\Omega$ that can be expressed using equality and inequality constraints.

In this optimization problem, the feasible set $\Omega$ can be written in terms of one equality constraint, $g(x) = x_3$, and four inequality constraints:

$$h_1(x) = 1 - x_1, \quad h_2(x) = x_1 - 2, \quad h_3(x) = 1 - x_2, \quad h_4(x) = x_2 - 2.$$
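To make this concrete, below is a minimal numerical sketch of Example 1 using scipy.optimize.minimize. The source leaves the objective $f$ unspecified, so the quadratic $f(x) = \|x - (0,0,1)\|^2$ used here is purely an assumption, chosen so that the unconstrained minimizer $(0,0,1)$ lies outside $\Omega$. Note that SciPy expects inequality constraints in the form $c(x) \ge 0$, so each constraint $h_i(x) \le 0$ from the text is passed as $-h_i(x) \ge 0$.

```python
# A minimal sketch of Example 1 (assumed objective, not from the source).
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Assumed objective: squared distance to (0, 0, 1), which lies outside the feasible set.
    return np.sum((x - np.array([0.0, 0.0, 1.0]))**2)

# Equality constraint g(x) = x_3 and the four inequality constraints
# h_i(x) <= 0 from the text, passed to SciPy as -h_i(x) >= 0.
constraints = [
    {"type": "eq",   "fun": lambda x: x[2]},        # g(x)    = x_3
    {"type": "ineq", "fun": lambda x: x[0] - 1.0},  # -h_1(x) = x_1 - 1
    {"type": "ineq", "fun": lambda x: 2.0 - x[0]},  # -h_2(x) = 2 - x_1
    {"type": "ineq", "fun": lambda x: x[1] - 1.0},  # -h_3(x) = x_2 - 1
    {"type": "ineq", "fun": lambda x: 2.0 - x[1]},  # -h_4(x) = 2 - x_2
]

x_init = np.array([1.5, 1.5, 0.5])  # arbitrary starting point
result = minimize(f, x_init, constraints=constraints)
print(result.x)
```

With this choice of $f$, the solver returns approximately $(1, 1, 0)$, the point of $\Omega$ closest to $(0, 0, 1)$.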

Definition 1 The points $x \in \Omega$ are called feasible points, and the set $\Omega$ is called the feasible set.

In the sequel, we will assume that $f$, $g_i$, and $h_i$ are continuous and Fréchet-differentiable (i.e., have continuous gradients).

We will first consider problems where the feasible set $\Omega$ can be expressed in terms of equality constraints only.

Tangent space

For a given feasible point $x_0 \in \Omega$, the tangent space gives us the set of directions in which one can move from $x_0$ while still staying within the feasible set $\Omega$. The two examples below show tangent spaces in the cases where the set $\Omega$ corresponds to a curve in $\mathbb{R}^2$ and a nonlinear manifold in $\mathbb{R}^3$.

Figure: Examples of tangent spaces and gradients.

The tangent space at $x_0$ can be expressed formally in terms of the derivatives of the equality constraints.

Definition 2 The tangent space to the feasible set $\Omega$ with equality constraints $g_1, \ldots, g_n$ at a feasible point $x_0 \in \Omega$ is given by

$$T_\Omega(x_0) = \{ d \in X : \delta g_i(x_0; d - x_0) = 0,\ i = 1, 2, \ldots, n \} = \{ d \in X : \langle \nabla g_i(x_0), d - x_0 \rangle = 0,\ i = 1, 2, \ldots, n \}.$$

Requiring all the directional derivatives of the equality constraints to be zero in the direction of the tangent $d - x_0$ guarantees that the value of each equality constraint remains at zero, therefore guaranteeing that $d$ remains a feasible point, i.e., $d \in \Omega$.
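As a numeric illustration (not from the source), these tangent directions can be computed as the null space of the matrix whose rows are the constraint gradients $\nabla g_i(x_0)$. The constraint below, $g(x) = x_1^2 + x_2^2 + x_3^2 - 1$ (the unit sphere in $\mathbb{R}^3$), is an assumed example.

```python
# Sketch: tangent directions as the null space of the constraint gradients.
import numpy as np
from scipy.linalg import null_space

def grad_g(x):
    # Gradient of the assumed constraint g(x) = ||x||^2 - 1.
    return 2.0 * x

x0 = np.array([0.0, 0.0, 1.0])   # feasible point: g(x0) = 0
J = grad_g(x0).reshape(1, -1)    # one row per constraint gradient
directions = null_space(J)       # orthonormal basis for tangent directions

print(directions.T)
# Each basis vector d - x0 satisfies <grad g(x0), d - x0> = 0;
# at the "north pole" these are the horizontal directions e_1 and e_2.
```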

Definition 3 A point $x_0$ satisfying the constraints $g_1(x_0) = 0, g_2(x_0) = 0, \ldots, g_n(x_0) = 0$ is said to be a regular point if the $n$ linear functionals $\delta g_1(x_0; h), \delta g_2(x_0; h), \ldots, \delta g_n(x_0; h)$ (i.e., the derivatives of the equality constraints) are linearly independent as functionals of $h$.
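In the finite-dimensional case, regularity is straightforward to check numerically: $x_0$ is regular exactly when the matrix whose rows are the gradients $\nabla g_i(x_0)$ has full row rank $n$. The sketch below, with hypothetical gradient values, illustrates the test.

```python
# Sketch: checking regularity via the rank of the constraint gradients.
import numpy as np

def is_regular(grads):
    """grads: n x dim array whose rows are the gradients of g_i at x_0."""
    grads = np.atleast_2d(grads)
    return np.linalg.matrix_rank(grads) == grads.shape[0]

# Two constraints in R^3 with independent gradients: x_0 is regular.
print(is_regular(np.array([[1.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0]])))   # True

# A duplicated constraint makes the gradients dependent: not regular.
print(is_regular(np.array([[1.0, 0.0, 0.0],
                           [2.0, 0.0, 0.0]])))   # False
```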

Theorem 1 If $x_0$ is an extremum of $f(x)$ subject to the constraints $g_1(x_0) = 0, g_2(x_0) = 0, \ldots, g_n(x_0) = 0$, and $x_0$ is a regular point of $\{g_i\}_{i=1}^{n}$, then for any $h \in X$ such that $\delta g_i(x_0; h) = 0$ for all $i = 1, 2, \ldots, n$, we must have $\delta f(x_0; h) = 0$.

One can rewrite this theorem in terms of gradients as: if $\langle \nabla g_i(x_0), h \rangle = 0$ for all $i$, then $\langle \nabla f(x_0), h \rangle = 0$. More intuition can be obtained by defining the translated tangent space at $x_0$ as follows:

$$\widetilde{T}_\Omega(x_0) = \{ d \in X : \langle \nabla g_i(x_0), d \rangle = 0,\ i = 1, 2, \ldots, n \}.$$

Thus, the theorem above can be written as: if $h \in \widetilde{T}_\Omega(x_0)$, then $\langle \nabla f(x_0), h \rangle = 0$. In words, we expect the derivative of the objective function $f$ at the constrained extremum $x_0$ to be zero-valued in the directions in which we can move from $x_0$ and remain in the feasible set $\Omega$, that is, in the directions $h \in \widetilde{T}_\Omega(x_0)$. Thus, we can write that the constrained optimum $x_0$ must obey $\nabla f(x_0) \perp \widetilde{T}_\Omega(x_0)$, which implies $\nabla f(x_0) \perp T_\Omega(x_0)$.
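The sketch below checks this orthogonality condition on an assumed toy problem (not from the source): minimizing $f(x) = x_1 + x_2$ over the unit circle $g(x) = x_1^2 + x_2^2 - 1 = 0$, whose constrained minimizer is $x^* = (-1/\sqrt{2}, -1/\sqrt{2})$.

```python
# Sketch: at a constrained extremum, grad f is orthogonal to every
# tangent direction.  Assumed problem: minimize f(x) = x_1 + x_2 on the
# unit circle g(x) = ||x||^2 - 1 = 0.
import numpy as np

x_star = np.array([-1.0, -1.0]) / np.sqrt(2.0)  # constrained minimizer

grad_f = np.array([1.0, 1.0])  # gradient of f (constant)
grad_g = 2.0 * x_star          # gradient of g at x_star

# A tangent direction h satisfies <grad g(x*), h> = 0: rotate grad_g by 90 degrees.
h = np.array([-grad_g[1], grad_g[0]])

print(np.dot(grad_g, h))  # 0 by construction: h is a tangent direction
print(np.dot(grad_f, h))  # ~0: grad f(x*) is orthogonal to the tangent space
```

Here $\nabla f(x^*) = (1, 1)$ is a scalar multiple of $\nabla g(x^*) = (-\sqrt{2}, -\sqrt{2})$, so it annihilates every tangent direction, exactly as the theorem predicts.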
