
We now turn to the question of generating good n-term approximations from a general dictionary. We shall assume that the dictionary $D$ is complete in the Hilbert space $H$. This means that every element of $H$ can be approximated arbitrarily well by linear combinations of the elements of $D$. Since the dictionary is no longer an orthogonal basis, as was considered above, we need to revisit how to find good n-term approximations. Because of redundancy within the dictionary, we cannot simply pick the largest coefficients as we did with a basis. Greedy algorithms are a method to generate good n-term approximations.

  • General Greedy Algorithm Given $f$, we want to generate an $n$-term approximation to $f$ of the form
    $$s_n = \sum_{j=1}^{n} c_j g_j.$$
    The general steps are as follows:
    • Initialize: (approximation) $s_0 = 0$, (residual) $r_0 = f$, approximation set $\Lambda_0 = \emptyset$.
    • Search $D$ for some $g \in D$, then add $g$ to the set $\Lambda$.
    • Use $\{g_1, g_2, \ldots, g_n\}$ to find the new approximation $s_n$.
    At stage $n$, we have $s_n$, $r_n = f - s_n$, and $\Lambda_n = \{g_1, g_2, \ldots, g_n\}$. There are many types of greedy algorithms. We describe the three most common in the case where $H$ is a Hilbert space. However, there are analogues of these for $L_p$.
  • Pure Greedy Algorithm (PGA) From $r_n$ choose $g_{n+1} := \operatorname{argmax}_{g \in D} |\langle r_n(f), g \rangle|$ (the $g$ with the largest inner product against the residual). Then
    $$s_{n+1} = s_n + \langle r_n(f), g_{n+1} \rangle g_{n+1},$$
    $$r_{n+1} = f - s_n - \langle f - s_n, g_{n+1} \rangle g_{n+1} = f - s_{n+1}.$$
    This method is similar to a steepest descent algorithm for decreasing the error.
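    The PGA steps above can be sketched in a few lines. This is a minimal illustration, assuming the dictionary is stored as the rows of a matrix with unit-norm atoms; the names `pure_greedy`, `D`, and `n_steps` are ours, not from the text.

    ```python
    import numpy as np

    def pure_greedy(f, D, n_steps):
        """Pure Greedy Algorithm (sketch): D is a (num_atoms, dim) array whose
        rows are unit-norm dictionary elements; f is the target vector."""
        s = np.zeros_like(f)   # s_0 = 0
        r = f.copy()           # r_0 = f
        for _ in range(n_steps):
            inner = D @ r                       # <r_n, g> for every atom g
            k = int(np.argmax(np.abs(inner)))   # g maximizing |<r_n, g>|
            s = s + inner[k] * D[k]             # s_{n+1} = s_n + <r_n, g> g
            r = f - s                           # r_{n+1} = f - s_{n+1}
        return s
    ```

    When the dictionary happens to be an orthonormal basis, this reduces to picking off the largest coefficients one at a time.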
  • Orthogonal Greedy Algorithm (OGA) From $r_n$ choose $g_{n+1} := \operatorname{argmax}_{g \in D} |\langle r_n(f), g \rangle|$ as in the PGA. Set
    $$V_{n+1} := \operatorname{span}\{g_1, g_2, \ldots, g_{n+1}\},$$
    $$s_{n+1} := P_{V_{n+1}} f = \sum_{j=1}^{n+1} \alpha_j g_j,$$
    where $P_V$ denotes the orthogonal projection onto the space $V$. We can find $s_{n+1} = P_{V_{n+1}} f$ by solving the linear system of equations
    $$\sum_{j=1}^{n+1} \alpha_j \langle g_j, g_k \rangle = \langle f, g_k \rangle, \quad k = 1, \ldots, n+1.$$
    Then, $r_{n+1} = f - s_{n+1}$.
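    A sketch of the OGA, under the same assumptions as before (unit-norm atoms stored as rows of `D`; the helper names are ours). The projection step solves exactly the Gram system above.

    ```python
    import numpy as np

    def orthogonal_greedy(f, D, n_steps):
        """Orthogonal Greedy Algorithm (sketch): at each step, select the atom
        most correlated with the residual, then project f onto the span of
        all atoms selected so far."""
        idx = []               # indices of the selected atoms g_1, ..., g_{n+1}
        r = f.copy()           # r_0 = f
        s = np.zeros_like(f)
        for _ in range(n_steps):
            inner = D @ r
            k = int(np.argmax(np.abs(inner)))
            if k not in idx:
                idx.append(k)
            G = D[idx]                     # rows are the selected atoms
            # Normal equations: sum_j alpha_j <g_j, g_k> = <f, g_k>
            alpha = np.linalg.solve(G @ G.T, G @ f)
            s = alpha @ G                  # s_{n+1} = P_{V_{n+1}} f
            r = f - s
        return s
    ```

    Because each iterate is a full orthogonal projection, the OGA residual is always orthogonal to every atom selected so far, unlike the PGA residual.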
  • Relaxed Greedy Algorithm (RGA) From $r_n$ choose $g_{n+1}$ in some way (for example, by one of the earlier selection rules) and then define
    $$s_{n+1}(f) = \alpha s_n + \beta g_{n+1}.$$
    Unlike the PGA, here we do not take a full step in the chosen direction. For example, one way to proceed is to define
    $$(\alpha^*, \beta^*, g^*) := \operatorname{arginf}_{\alpha, \beta, g} \| f - (\alpha s_n + \beta g) \|.$$
    This type of greedy algorithm is known to perform the best compared with the previous two.
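    One concrete RGA variant can be sketched as follows. The relaxation weight $\alpha = 1 - 1/(n+1)$ is one classical convention, not prescribed by the text, and $\beta$ is chosen by a one-dimensional line search along the selected atom; as before, unit-norm atoms and all names are our assumptions.

    ```python
    import numpy as np

    def relaxed_greedy(f, D, n_steps):
        """Relaxed Greedy Algorithm (sketch) with alpha = 1 - 1/(n+1) and
        beta chosen to minimize ||f - alpha*s_n - beta*g|| for the chosen g."""
        s = np.zeros_like(f)
        for n in range(n_steps):
            r = f - s
            inner = D @ r
            g = D[int(np.argmax(np.abs(inner)))]   # atom most correlated with r_n
            alpha = 1.0 - 1.0 / (n + 1.0)          # shrink the old approximation
            beta = np.dot(f - alpha * s, g)        # optimal step for unit-norm g
            s = alpha * s + beta * g               # s_{n+1} = alpha s_n + beta g
        return s
    ```

    The shrinkage of $s_n$ before each new step is what distinguishes the RGA from the PGA and is the source of its better worst-case guarantees.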

Measuring performance

Given $X$, $D$, it is not practical to minimize $\sigma_n(f)_X$ by searching over all the possibilities. The greedy approximation gives an $n$-term solution with less computation, but does it perform well?

Let

$$\mathcal{L}_1(D) := \Big\{ f \in X : f = \sum_{g \in D} c_g g, \ \sum_{g \in D} |c_g| \le M \Big\},$$

where the smallest such $M$ is the $\mathcal{L}_1$ norm of $f$, denoted $|f|_{\mathcal{L}_1}$.

For OGA or RGA as described above, we have

$$\| f - s_n(f) \| \le C n^{-1/2} |f|_{\mathcal{L}_1}.$$

Remark 5 This is similar to $\sigma_n(x)_{\ell_2} \le n^{-1/2} \|x\|_{\ell_1}$ ($n$-term approximation in a basis), but it is not always quite as good.
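The basis-case bound in Remark 5 is easy to check numerically. As a quick illustration of our own (not from the text): for a vector $x \in \ell_1$, the best $n$-term $\ell_2$ error, obtained by keeping the $n$ largest-magnitude entries, never exceeds $n^{-1/2}\|x\|_{\ell_1}$.

```python
import numpy as np

def sigma_n_l2(x, n):
    """Best n-term l2 approximation error: keep the n largest-magnitude
    entries of x and measure the l2 norm of what remains."""
    tail = np.sort(np.abs(x))[::-1][n:]   # entries beyond the n largest
    return float(np.sqrt(np.sum(tail ** 2)))

# x_k = 1/k for k = 1..100 is summable, so x lies in l1.
x = 1.0 / np.arange(1, 101)
norm1 = float(np.sum(np.abs(x)))
```

For this $x$, `sigma_n_l2(x, n)` stays below `norm1 / sqrt(n)` for every $n$, as the remark predicts.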





Source:  OpenStax, Compressive sensing. OpenStax CNX. Sep 21, 2007 Download for free at http://cnx.org/content/col10458/1.1