
By their nature, steepest descent and hill climbing methods use only local information. This is because the update from a point x [ k ] depends only on the value of x [ k ] and on the value of its derivative evaluated at that point. This can be a problem, since if the objective function has many minima, the steepest descent algorithm may become “trapped” at a minimum that is not (globally) the smallest. These are called local minima. To see how this can happen, consider the problem of finding the value of x that minimizes the function

$$J(x) = e^{-0.1|x|}\,\sin(x).$$

Applying the chain rule, the derivative is

$$\frac{dJ(x)}{dx} = e^{-0.1|x|}\cos(x) - 0.1\,e^{-0.1|x|}\sin(x)\,\operatorname{sign}(x),$$

where

$$\operatorname{sign}(x) = \begin{cases} 1 & x > 0 \\ -1 & x < 0 \end{cases}$$

is the formal derivative of | x | . Solving directly for the minimum point is nontrivial (try it!). Yet implementing a steepest descent search for the minimum can be done in a straightforward manner using the iteration

$$x[k+1] = x[k] - \mu\, e^{-0.1|x[k]|}\left(\cos(x[k]) - 0.1\,\sin(x[k])\,\operatorname{sign}(x[k])\right).$$

To be concrete, replace the update equation in polyconverge.m with

x(k+1)=x(k)-mu*exp(-0.1*abs(x(k)))*(cos(x(k))...
       -0.1*sin(x(k))*sign(x(k)));

Implement the steepest descent strategy to find the minimum of J ( x ) in [link] , modeling the program after polyconverge.m . Run the program for different values of mu , N , and x(1) , and answer the same questions as in Exercise [link] .
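The text's polyconverge.m is a MATLAB program; purely as an illustrative cross-check, the same iteration can be sketched in Python. The step size mu, iteration count N, and starting points below are arbitrary choices for the sketch, not values prescribed by the text:

```python
import math

def steepest_descent(x0, mu=0.1, N=500):
    """Iterate x[k+1] = x[k] - mu * dJ/dx at x[k], where
    J(x) = exp(-0.1*|x|) * sin(x)."""
    x = x0
    for _ in range(N):
        # Gradient from the chain-rule derivative in the text;
        # copysign(1, x) plays the role of sign(x)
        grad = math.exp(-0.1 * abs(x)) * (
            math.cos(x) - 0.1 * math.sin(x) * math.copysign(1.0, x))
        x = x - mu * grad
    return x

# Different initializations descend into different valleys
print(steepest_descent(4.0))   # settles near the minimum just below 5
print(steepest_descent(-1.0))  # settles in a valley at negative x
```

Running the sketch from several starting points makes the "trapping" behavior concrete: each run stops at the bottom of whichever valley contains its initialization.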

One way to understand the behavior of steepest descent algorithms is to plot the error surface , which is basically a plot of the objective as a function of the variable that is being optimized. [link] (a) displays clearly the single global minimum of the objective function [link] while [link] (b) shows the many minima of the objective function defined by [link] . As will be clear to anyone who has attempted Exercise  [link] , initializing within any one of the valleys causes the algorithm to descend to the bottom of that valley. Although a true steepest descent algorithm can never climb over a peak to enter another valley (even if the minimum there is lower), this can sometimes happen in practice when there is a significant amount of noise in the measurement of the downhill direction.
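The text's figures plot these error surfaces; as a rough numeric stand-in (the grid range [-20, 20] and the 0.01 spacing are illustrative choices, not from the text), one can sample J(x) on a grid and count its interior local minima:

```python
import math

def J(x):
    """Objective from the text: J(x) = exp(-0.1*|x|) * sin(x)."""
    return math.exp(-0.1 * abs(x)) * math.sin(x)

# Sample the "error surface" on a dense grid over [-20, 20]
xs = [-20 + 0.01 * i for i in range(4001)]
ys = [J(x) for x in xs]

# An interior grid point is a local minimum if it is below both neighbors
minima = [xs[i] for i in range(1, len(xs) - 1)
          if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]]
print(len(minima))          # several distinct valleys in this window
print([round(m, 2) for m in minima])
```

Each entry of `minima` is the bottom of one valley; a steepest descent run initialized anywhere in that valley can be expected to stop near the corresponding entry.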

Essentially, the algorithm gradually descends the error surface by moving in the (locally) downhill direction, and different initial estimates may lead to different minima. This underscores one of the limitations of steepest descent methods—if there are many minima, then it is important to initialize near an acceptable one. In some problems such prior information may easily be obtained, while in others it may be truly unknown.

The examples of this section are somewhat simple because they involve static functions. Most applications in communication systems deal with signals that evolve over time, and the next section applies the steepest descent idea in a dynamic setting to the problem of Automatic Gain Control (AGC). The AGC provides a simple setting in which all three of the major issues in optimization must be addressed: setting the goal, choosing a method of solution, and verifying that the method is successful.

Error surfaces corresponding to (a) the objective function Equation 13 and (b) the objective function Equation 18.





Source:  OpenStax, Software receiver design. OpenStax CNX. Aug 13, 2013 Download for free at http://cnx.org/content/col11510/1.3
