
The term $a^2 r^2(kT)$ inside the parentheses is equal to $s^2[k]$. The term $a r^2(kT)$ outside the parentheses is not directly available to the assessment mechanism, though it can reasonably be approximated by $s^2[k]/a$. Substituting the derivative into [link] and evaluating at $a = a[k]$ gives the algorithm

$$ a[k+1] = a[k] - \mu \, \mathrm{avg}\left\{ \left( s^2[k] - s^2 \right) \frac{s^2[k]}{a[k]} \right\}. $$

Care must be taken when implementing [link] to ensure that $a[k]$ does not approach zero, since it appears in the denominator of the update term.
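As an illustrative sketch (not from the text), the update above can be simulated in Python with NumPy. The unit-power Gaussian input, target power `ds`, stepsize `mu`, averaging length `lenavg`, and the small guard threshold on the division are all assumed values chosen for demonstration:

```python
import numpy as np

# Sketch of the "least squares" AGC gradient update; all parameter
# values here are illustrative, not prescribed by the text.
rng = np.random.default_rng(0)       # arbitrary seed
n = 10000
r = rng.standard_normal(n)           # received signal r(kT), unit power
ds = 0.15                            # desired output power s^2
mu = 0.001                           # stepsize
lenavg = 10                          # averaging length
a = np.zeros(n)
a[0] = 1.0                           # start a[k] well away from zero
s = np.zeros(n)
avec = np.zeros(lenavg)              # recent update terms, to be averaged

for k in range(n - 1):
    s[k] = a[k] * r[k]               # AGC output
    ak = a[k] if abs(a[k]) > 1e-6 else 1e-6   # guard the division by a[k]
    avec = np.roll(avec, 1)
    avec[0] = (s[k] ** 2 - ds) * s[k] ** 2 / ak
    a[k + 1] = a[k] - mu * avec.mean()
```

The guard on `ak` is one simple way to honor the caution above about $a[k]$ approaching zero; the trajectory of `a` settles to a steady small-jitter value.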

Of course, $J_{LS}(a)$ of [link] is not the only possible goal for the AGC problem. What is important is not the exact form of the performance function, but where the performance function has its optimal points. Another performance function that has a similar error surface (peek ahead to [link]) is

$$ J_N(a) = \mathrm{avg}\left\{ |a| \left( \frac{s^2[k]}{3} - s^2 \right) \right\} = \mathrm{avg}\left\{ |a| \left( \frac{a^2 r^2(kT)}{3} - s^2 \right) \right\}. $$

Taking the derivative gives

$$ \frac{dJ_N(a)}{da} = \frac{d\,\mathrm{avg}\left\{ |a| \left( a^2 r^2(kT)/3 - s^2 \right) \right\}}{da} \approx \mathrm{avg}\left\{ \frac{d\, |a| \left( a^2 r^2(kT)/3 - s^2 \right)}{da} \right\} = \mathrm{avg}\left\{ \mathrm{sgn}(a) \left( a^2 r^2(kT) - s^2 \right) \right\}, $$

where the last equality follows because $\frac{d}{da}\left[|a|\left(\frac{a^2 r^2(kT)}{3} - s^2\right)\right] = \mathrm{sgn}(a)\left(\frac{a^2 r^2(kT)}{3} - s^2\right) + |a|\,\frac{2 a r^2(kT)}{3}$, and $|a|\, 2a = \mathrm{sgn}(a)\, 2a^2$,

and where the approximation arises from swapping the order of the differentiation and the averaging, and the derivative of $|\cdot|$ is the signum or sign function, which holds as long as the argument is nonzero. Evaluating this at $a = a[k]$ and substituting into [link] gives another AGC algorithm

$$ a[k+1] = a[k] - \mu \, \mathrm{avg}\left\{ \mathrm{sgn}(a[k]) \left( s^2[k] - s^2 \right) \right\}. $$

Consider the “logic” of this algorithm. Suppose that $a$ is positive, so that $\mathrm{sgn}(a[k]) = 1$. Since the target power $s^2$ is fixed,

$$ \mathrm{avg}\left\{ \mathrm{sgn}(a[k]) \left( s^2[k] - s^2 \right) \right\} = \mathrm{avg}\left\{ s^2[k] - s^2 \right\} = \mathrm{avg}\left\{ s^2[k] \right\} - s^2. $$

Thus, if the average energy in $s[k]$ exceeds $s^2$, $a$ is decreased. If the average energy in $s[k]$ is less than $s^2$, $a$ is increased. The update ceases when $\mathrm{avg}\{s^2[k]\} \approx s^2$, that is, where $a^2 \approx s^2 / \mathrm{avg}\{r^2(kT)\}$, as desired. (An analogous logic applies when $a$ is negative.)
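This fixed-point logic can be checked numerically. The sketch below (my own illustration, not from the text) runs a single-sample version of the naive update, i.e., with an averaging length of one, on a unit-power input; the seed and parameter values are arbitrary:

```python
import numpy as np

# Single-sample (averaging length 1) naive AGC update; parameters illustrative.
rng = np.random.default_rng(1)
n = 20000
r = rng.standard_normal(n)      # unit-power input, so avg{r^2} is about 1
ds = 0.15                       # desired output power s^2
mu = 0.001                      # stepsize

a = 1.0
for k in range(n):
    s = a * r[k]                              # AGC output s[k]
    a = a - mu * np.sign(a) * (s ** 2 - ds)   # naive update
# the update ceases (on average) when a^2 * avg{r^2} reaches ds
print(a ** 2)   # settles near ds = 0.15
```

Since $\mathrm{avg}\{r^2\} \approx 1$ here, the gain settles where $a^2 \approx$ `ds`, matching the reasoning above.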

The two performance functions [link] and [link] define the updates for the two adaptive elements in [link] and [link]. $J_{LS}(a)$ minimizes the square of the deviation of the power in $s[k]$ from the desired power $s^2$. This is a kind of “least square” performance function (hence the subscript LS). Such squared-error objectives are common, and will reappear in phase tracking algorithms in Chapter [link], in clock recovery algorithms in Chapter [link], and in equalization algorithms in Chapter [link]. On the other hand, the algorithm resulting from $J_N(a)$ has a clear logical interpretation (the N stands for “naive”), and the update is simpler, since [link] has fewer terms and no divisions.

To experiment concretely with these algorithms, agcgrad.m provides an implementation in Matlab. It is easy to control the rate at which $a[k]$ changes by choice of stepsize: a larger $\mu$ allows $a[k]$ to change faster, while a smaller $\mu$ allows greater smoothing. Thus, $\mu$ can be chosen by the system designer to trade off the bandwidth of $a[k]$ (the speed at which $a[k]$ can track variations in the energy levels of the incoming signal) versus the amount of jitter or noise. Similarly, the length over which the averaging is done (specified by the parameter lenavg) will also affect the speed of adaptation; longer averages imply slower moving, smoother estimates, while shorter averages imply faster moving, more jittery estimates.
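The speed-versus-smoothness tradeoff can be seen by running the same naive update with two different stepsizes. This is an illustrative sketch (my own, not from the text); the helper `run_agc`, the seed, and the stepsize values are assumptions chosen for demonstration:

```python
import numpy as np

def run_agc(mu, r, ds=0.15):
    """Naive AGC (single-sample average) with stepsize mu; returns the gain trajectory."""
    a = np.zeros(len(r))
    a[0] = 1.0
    for k in range(len(r) - 1):
        s = a[k] * r[k]
        a[k + 1] = a[k] - mu * np.sign(a[k]) * (s ** 2 - ds)
    return a

rng = np.random.default_rng(3)
r = rng.standard_normal(5000)      # unit-power input (illustrative)
a_slow = run_agc(0.001, r)         # small mu: smooth but slow to adapt
a_fast = run_agc(0.01, r)          # large mu: fast to adapt but jittery
astar = np.sqrt(0.15)              # gain at which avg{s^2} equals ds

# after 500 steps the large-stepsize gain is much closer to astar
print(abs(a_fast[500] - astar) < abs(a_slow[500] - astar))   # True
```

Plotting the two trajectories would also show the larger stepsize producing visibly noisier steady-state behavior, the jitter referred to above.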

n=10000;                           % number of steps in simulation
vr=1.0;                            % power of the input
r=sqrt(vr)*randn(n,1);             % generate random inputs
ds=0.15;                           % desired power of output
mu=0.001;                          % algorithm stepsize
lenavg=10;                         % length over which to average
a=zeros(n,1); a(1)=1;              % initialize AGC parameter
s=zeros(n,1);                      % initialize outputs
avec=zeros(1,lenavg);              % vector to store terms for averaging
for k=1:n-1
  s(k)=a(k)*r(k);                  % normalize by a(k)
  avec=[sign(a(k))*(s(k)^2-ds),avec(1:lenavg-1)];  % incorporate new update into avec
  a(k+1)=a(k)-mu*mean(avec);       % average adaptive update of a(k)
end
agcgrad.m minimizes the performance function $J(a) = \mathrm{avg}\{ |a| ( (1/3) a^2 r^2 - \mathtt{ds} ) \}$ by choice of $a$
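For readers working outside Matlab, a near line-for-line Python/NumPy port of agcgrad.m might look as follows (same variable names; the seeded generator replaces Matlab's `randn` and is an arbitrary choice):

```python
import numpy as np

# Python/NumPy port of agcgrad.m (illustrative; seed is arbitrary).
n = 10000                            # number of steps in simulation
vr = 1.0                             # power of the input
rng = np.random.default_rng(2)
r = np.sqrt(vr) * rng.standard_normal(n)   # generate random inputs
ds = 0.15                            # desired power of output
mu = 0.001                           # algorithm stepsize
lenavg = 10                          # length over which to average
a = np.zeros(n)
a[0] = 1.0                           # initialize AGC parameter
s = np.zeros(n)                      # initialize outputs
avec = np.zeros(lenavg)              # vector to store terms for averaging
for k in range(n - 1):
    s[k] = a[k] * r[k]               # normalize by a[k]
    # incorporate the newest update term, discarding the oldest
    avec = np.r_[np.sign(a[k]) * (s[k] ** 2 - ds), avec[:-1]]
    a[k + 1] = a[k] - mu * avec.mean()   # average adaptive update of a[k]
```

As in the Matlab version, the gain converges to where the average output power matches `ds`, i.e., $a \approx \sqrt{\mathtt{ds}}$ for this unit-power input.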





Source:  OpenStax, Software receiver design. OpenStax CNX. Aug 13, 2013 Download for free at http://cnx.org/content/col11510/1.3
