and finding x to minimize this p-norm while satisfying Ax = b.
It has been shown that this is equivalent to solving a least weighted norm problem for specific weights.
The development follows the same arguments as in the previous section, but uses the formula [link] , [link] derived in [link] ,
with the weights, w, being the diagonal of the matrix, W, in the iterative algorithm to give the minimum weighted solution norm in the same way that [link] gives the minimum weighted equation error.
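As a minimal illustration of this weighted minimum-norm formula, x = W (AW)^T ((AW)(AW)^T)^{-1} b, here is a Python sketch (not from the original text) for an assumed single equation with two unknowns; in this 1 x N case the matrix (AW)(AW)^T is a scalar, so no matrix inverse is needed:

```python
# Minimum weighted-norm solution of an underdetermined system,
# sketched for one equation a.x = b (M = 1, N = 2).  This reduces
# x = W (AW)^T ((AW)(AW)^T)^{-1} b to a scalar division.

def weighted_min_norm(a, b, w):
    """x_i = w_i^2 a_i b / sum_j (w_j a_j)^2; satisfies sum_i a_i x_i = b."""
    s = sum((wj * aj) ** 2 for wj, aj in zip(w, a))
    return [wi ** 2 * ai * b / s for wi, ai in zip(w, a)]

a, b = [1.0, 1.0], 2.0

# Equal weights reproduce the ordinary minimum-norm (pseudoinverse) solution.
print(weighted_min_norm(a, b, [1.0, 1.0]))   # -> [1.0, 1.0]

# Unequal weights steer the solution toward the heavily weighted component,
# while the constraint x1 + x2 = 2 is still satisfied exactly.
print(weighted_min_norm(a, b, [2.0, 1.0]))   # -> [1.6, 0.4]
```

Changing the weights moves the solution along the constraint set, which is exactly how the IRLS iteration below steers the l_2 machinery toward an l_p solution.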
A Matlab program that implements these ideas applied to our pseudoinverse problem with more unknowns than equations (case 3a) is:
% m-file IRLS2.m to find the optimal solution to Ax=b
% minimizing the L_p norm ||x||_p, using IRLS.
% Newton iterative update of solution, x, for M < N.
% For 2<p<infty, use homotopy parameter K = 1.01 to 2
% For 0<p<2, use K = approx 0.7 to 0.9
% csb 10/20/2012
function x = IRLS2(A,b,p,K,KK)
   if nargin < 5, KK = 10;  end;
   if nargin < 4, K  = .8;  end;
   if nargin < 3, p  = 1.1; end;
   pk = 2;                                   % Initial homotopy value
   x  = pinv(A)*b;                           % Initial L_2 solution
   E  = [];
   for k = 1:KK
      if p >= 2, pk = min([p, K*pk]);        % Homotopy update of p
      else       pk = max([p, K*pk]); end
      W  = diag(abs(x).^((2-pk)/2)+0.00001); % Norm weights for IRLS
      AW = A*W;                              % Apply new weights
      x1 = W*AW'*((AW*AW')\b);               % Weighted L_2 solution
      q  = 1/(pk-1);                         % Newton's parameter
      if p >= 2, x = q*x1 + (1-q)*x; nn = p; % Newton's partial update for p>2
      else       x = x1; nn = 1; end         % No partial update for p<2
      ee = norm(x,nn); E = [E ee];           % Norm at each iteration
   end;
   plot(E)
This approach is useful in sparse signal processing and for frame representation.
The Chebyshev optimization problem minimizes the maximum error: min over x of ||Ax - b||_infinity = min over x of the maximum of |(Ax)_n - b_n| over n.
This is particularly important in filter design. The Remez exchange algorithm applied to filter design as the Parks-McClellan algorithm is very efficient [link] . An interesting result is that the limit of an l_p optimization as p goes to infinity is the Chebyshev optimal solution. So the Chebyshev optimal, the minimax optimal, and the l_infinity optimal solutions are all the same [link] , [link] .
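This limiting behavior can be checked numerically. The following Python sketch (a hypothetical one-unknown example, not from the text) minimizes the l_p error of the overdetermined system x = 0, x = 1, x = 4 for growing p and shows the solution moving from the l_2 answer (the mean, 5/3) toward the minimax answer, 2:

```python
# Demo: the l_p solution of an overdetermined 1-unknown system
# approaches the Chebyshev (minimax) solution as p grows.

def lp_cost(x, targets, p):
    """Sum of p-th powers of the residuals |x - t_i|."""
    return sum(abs(x - t) ** p for t in targets)

def argmin_scalar(f, lo, hi, iters=200):
    # Ternary search; valid because the l_p cost is convex for p >= 1.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

targets = [0.0, 1.0, 4.0]
for p in (2, 10, 100):
    x = argmin_scalar(lambda t: lp_cost(t, targets, p), -1.0, 5.0)
    print(p, round(x, 3))
# p = 2 gives the mean 5/3; large p approaches the minimax point 2,
# where the two extreme residuals |x - 0| and |x - 4| balance.
```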
A particularly powerful theorem which characterizes a Chebyshev (minimax) solution of Ax = b is given by Cheney [link] in Chapter 2 of his book:
This is a powerful statement: an optimal minimax solution will have, out of the M equations, at least N + 1 errors of maximum magnitude, and they are the minimum size possible. What this theorem does not state is which N + 1 of the M equations are the appropriate ones. Cheney develops an algorithm based on this theorem which finds these equations and exactly calculates this optimal solution in a finite number of steps. He shows how this can be combined with the minimum l_p solution, using a large p, to make an efficient solver for a minimax or Chebyshev solution.
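The theorem is easy to verify on a small example. This Python sketch (a hypothetical instance, not from Cheney) uses N = 1 unknown and M = 3 equations, x = 0, x = 1, x = 4; for one unknown the minimax solution is the midpoint of the extreme targets, and the theorem predicts at least N + 1 = 2 residuals at the maximum magnitude:

```python
# Check that the minimax solution of x = 0, x = 1, x = 4 has
# N + 1 = 2 residuals attaining the maximum error magnitude.

targets = [0.0, 1.0, 4.0]
x = (min(targets) + max(targets)) / 2        # minimax solution for 1 unknown
residuals = [abs(x - t) for t in targets]
emax = max(residuals)
n_at_max = sum(1 for r in residuals if abs(r - emax) < 1e-12)
print(x, residuals, n_at_max)                # 2.0 [2.0, 1.0, 2.0] 2
```

The two extreme equations pin the solution; no choice of x can make both of those residuals smaller at once, which is exactly the "minimum size possible" part of the statement.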
This theorem is similar to the Alternation Theorem [link] but more general and, therefore, somewhat more difficult to implement.
The sparsity optimization is to minimize the number of non-zero terms in a vector. A “pseudonorm", ||x||_0, is sometimes used to denote this measure of sparsity. It is not convex, so it is not really a norm, but the convex l_1 norm is close enough (in the limit) to the l_0 pseudonorm to give the same sparsity of solution [link] . Finding a sparse solution is not easy, but iteratively reweighted least squares (IRLS) [link] , [link] , weighted norms [link] , [link] , and a more recent method called Basis Pursuit [link] , [link] are possibilities.
This approximation is often used with an underdetermined set of equations (Case 3a) to obtain a sparse solution x.
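A Python sketch (a hypothetical example, not from the text) of how IRLS reweighting produces sparsity: for the assumed single equation x1 + 2*x2 = 2, the minimum l_2-norm solution [0.4, 0.8] has no zeros, but iterating the weighted minimum-norm formula with l_1-style weights drives the solution to the sparse [0, 1]:

```python
# IRLS toward the minimum l_1-norm (sparse) solution of x1 + 2*x2 = 2.

a, b = [1.0, 2.0], 2.0

def weighted_min_norm(a, b, w):
    """Minimum weighted-norm solution of one equation a.x = b."""
    s = sum((wj * aj) ** 2 for wj, aj in zip(w, a))
    return [wi ** 2 * ai * b / s for wi, ai in zip(w, a)]

x = weighted_min_norm(a, b, [1.0, 1.0])       # l_2 start: [0.4, 0.8]
for _ in range(60):                           # IRLS iterations with p = 1:
    w = [abs(xi) ** 0.5 + 1e-8 for xi in x]   # w_i = |x_i|^((2-p)/2), plus
    x = weighted_min_norm(a, b, w)            # a tiny floor to avoid 0/0
print([round(xi, 6) for xi in x])             # approaches the sparse [0, 1]
```

Each iteration keeps the constraint satisfied exactly while shrinking the weight, and hence the size, of the smaller component, so the iteration converges to a solution with a true zero (up to the small regularizing floor).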
Using the IRLS algorithm to minimize the l_p equation error with a small p often gives a sparse error if one exists. Using the algorithm in the illustrated Matlab program with a small p on the problem in Cheney [link] gives a zero error in equation 4, while using a larger p gives no zeros.