$$ ||x||_p = \left( \sum_n |x(n)|^p \right)^{1/p} $$

and finding the x that minimizes this p-norm while satisfying Ax = b.

It has been shown that this is equivalent to solving a minimum weighted norm problem for specific weights:

$$ ||x||_p = \left( \sum_n w(n)^2 \, |x(n)|^2 \right)^{1/2} $$
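To see why such weights can work (a one-line check, not in the original text): choosing $w(n) = |x(n)|^{(p-2)/2}$ makes the weighted sum reproduce the p-norm sum,

$$ \sum_n w(n)^2 \, |x(n)|^2 = \sum_n |x(n)|^{p-2} \, |x(n)|^2 = \sum_n |x(n)|^p $$

so each IRLS pass minimizes a weighted 2-norm with the weights computed from the previous iterate.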

The development follows the same arguments as in the previous section but uses the formula [link], [link] derived in [link]:

$$ x = [W^T W]^{-1} A^T \left( A \, [W^T W]^{-1} A^T \right)^{-1} b $$

with the weights, $w(n)$, being the diagonal of the matrix $W$ in the iterative algorithm, giving the minimum weighted solution norm in the same way that [link] gives the minimum weighted equation error.

A Matlab program that implements these ideas applied to our pseudoinverse problem with more unknowns than equations (case 3a) is:

% m-file IRLS2.m to find the optimal solution to Ax=b
%  minimizing the L_p norm ||x||_p, using IRLS.
%  Newton iterative update of solution, x, for  M < N.
%  For 2<p<infty, use homotopy parameter K = 1.01 to 2
%  For 0<p<2, use K = approx 0.7 to 0.9
%  csb 10/20/2012
function x = IRLS2(A,b,p,K,KK)
if nargin < 5, KK = 10;  end;
if nargin < 4, K  = .8;  end;
if nargin < 3, p  = 1.1; end;
pk = 2;                                 % Initial homotopy value
x  = pinv(A)*b;                         % Initial L_2 solution
E  = [];
for k = 1:KK
   if p >= 2, pk = min([p, K*pk]);      % Homotopy update of p
      else pk = max([p, K*pk]); end
   W  = diag(abs(x).^((2-pk)/2)+0.00001);  % norm weights for IRLS
   AW = A*W;                            % applying new weights
   x1 = W*AW'*((AW*AW')\b);             % Weighted L_2 solution
   q  = 1/(pk-1);                       % Newton's parameter
   if p >= 2, x = q*x1 + (1-q)*x; nn=p; % Newton's partial update for p>2
      else x = x1; nn=1; end            % no Newton's partial update for p<2
   ee = norm(x,nn);  E = [E ee];        % norm at each iteration
end;
plot(E)
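A minimal usage sketch of this program (the data below are made up for illustration):

% Hypothetical example: a 3-by-8 underdetermined system (M < N)
A = randn(3,8);                  % random wide matrix, assumed full row rank
b = randn(3,1);
x = IRLS2(A, b, 1.1, 0.8, 20);   % approximate minimum L_1.1-norm solution
norm(A*x - b)                    % residual; should be near zero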

This approach is useful in sparse signal processing and for frame representation.

The Chebyshev, Minimax, or $L_\infty$ Approximation

The Chebyshev optimization problem minimizes the maximum error:

$$ \epsilon_m = \max_n | \epsilon(n) | $$

This is particularly important in filter design. The Remez exchange algorithm, applied to filter design as the Parks-McClellan algorithm, is very efficient [link]. An interesting result is that the limit of an $||x||_p$ optimization as $p \to \infty$ is the Chebyshev optimal solution. So the Chebyshev optimal, the minimax optimal, and the $L_\infty$ optimal solutions are all the same [link], [link].
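A quick numerical check of this limit (the error vector here is made up):

% Watch ||e||_p approach the maximum magnitude max|e(n)| as p grows
e = [0.3; -0.9; 0.5; 0.1];
for p = [2 4 10 100]
   fprintf('p = %5g   ||e||_p   = %8.5f\n', p, norm(e,p));
end
fprintf('p =  inf   ||e||_inf = %8.5f\n', norm(e,inf));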

A particularly powerful theorem which characterizes a solution to Ax = b is given by Cheney [link] in Chapter 2 of his book:

  • A Characterization Theorem: For an M by N real matrix A with M > N, every minimax solution x is a minimax solution of an appropriate subsystem of N + 1 of the M equations. This optimal minimax solution will have at least N + 1 equal-magnitude errors, and they will be larger than any of the errors of the other equations.

This is a powerful statement: out of the M equations, an optimal minimax solution will have at least N + 1 maximum-magnitude errors, and they are the minimum size possible. What this theorem does not state is which of the M equations are the N + 1 appropriate ones. Cheney develops an algorithm based on this theorem which finds these equations and exactly calculates this optimal solution in a finite number of steps. He shows how this can be combined with the minimum $||e||_p$ solution, using a large p, to make an efficient solver for a minimax or Chebyshev solution.
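The equal-magnitude-error structure is easy to observe numerically. Here is a minimal sketch, not Cheney's algorithm, that computes a minimax solution by posing it as a linear program; it assumes Matlab's linprog (Optimization Toolbox) is available and uses made-up data:

% Minimax solution of overdetermined Ax = b as a linear program:
% minimize t subject to |A*x - b| <= t, with variable z = [x; t]
A = [1 1; 2 -1; 1 3; -1 2];   b = [2; 1; 4; 1];   % M = 4, N = 2 (made up)
[M,N] = size(A);
f  = [zeros(N,1); 1];                    % cost: minimize t
Ai = [ A, -ones(M,1); -A, -ones(M,1)];   % A*x - t <= b  and  -A*x - t <= -b
bi = [ b; -b];
z  = linprog(f, Ai, bi);
x  = z(1:N);   t = z(N+1);
e  = A*x - b                             % expect at least N+1 = 3 errors with |e(n)| = t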

This theorem is similar to the Alternation Theorem [link] but more general and, therefore, somewhat more difficult to implement.

The $L_1$ Approximation and Sparsity

The sparsity optimization is to minimize the number of non-zero terms in a vector. A "pseudonorm", $||x||_0$, is sometimes used to denote this measure of sparsity. It is not convex, so it is not really a norm, but the convex norm $||x||_1$ (the limit of the convex $||x||_p$ norms as $p \to 1$) is close enough to $||x||_0$ to give the same sparsity of solution [link]. Finding a sparse solution is not easy, but iterative reweighted least squares (IRLS) [link], [link], weighted norms [link], [link], and a more recent method called Basis Pursuit [link], [link] are possibilities.
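As an illustration of the Basis Pursuit idea (a sketch, not the reference implementation from [link]): minimizing $||x||_1$ subject to Ax = b can be posed as a linear program by splitting x into nonnegative parts, x = u - v. This assumes linprog is available; the data are made up:

% Basis Pursuit sketch: min ||x||_1  s.t.  A*x = b, via an LP in [u; v]
A = [1 0 2 -1 0 1; 0 1 -1 2 1 0; 2 1 0 0 -1 1];   % 3 equations, 6 unknowns
b = [1; 2; 0];
N   = size(A,2);
f   = ones(2*N,1);                    % cost: sum(u) + sum(v) = ||x||_1
Aeq = [A, -A];   beq = b;             % A*(u - v) = b
lb  = zeros(2*N,1);                   % u >= 0, v >= 0
uv  = linprog(f, [], [], Aeq, beq, lb);
x   = uv(1:N) - uv(N+1:end)           % L_1-minimal solution, typically sparse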

This approximation is often used with an underdetermined set of equations (Case 3a) to obtain a sparse solution x .
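A sketch of this use of the IRLS2 program above, with made-up data containing a known sparse solution:

% Underdetermined example with a hidden 2-sparse solution (made up)
A  = randn(4,10);
x0 = zeros(10,1);  x0([3 7]) = [2; -1];    % the sparse solution planted in b
b  = A*x0;
xs = IRLS2(A, b, 1.1, 0.8, 30);            % p near 1 promotes sparsity
x2 = pinv(A)*b;                            % minimum L_2 solution for comparison
[sum(abs(xs) > 1e-3), sum(abs(x2) > 1e-3)] % xs is usually much sparser than x2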

Using the IRLS algorithm to minimize the $l_p$ equation error often gives a sparse error if one exists. Using the algorithm in the illustrated Matlab program with p = 1.1 on the problem in Cheney [link] gives a zero error in equation 4, while no larger value of p gives any zero errors.





Source:  OpenStax, Basic vector space methods in signal and systems theory. OpenStax CNX. Dec 19, 2012 Download for free at http://cnx.org/content/col10636/1.5
