As opposed to solving a (possibly computationally expensive) convex optimization program, an alternative flavor of sparse recovery is to apply methods of sparse approximation . Recall that the goal of sparse recovery is to recover the sparsest vector $x$ which explains the linear measurements $y$ . In other words, we aim to solve the (nonconvex) problem:

$\min_{\mathcal{I}} |\mathcal{I}| \quad \text{such that} \quad y = \sum_{i \in \mathcal{I}} x_i {\phi}_{i},$
where $\mathcal{I}$ denotes a particular subset of the indices $i=1,...,N$ , and ${\phi}_{i}$ denotes the ${i}^{\mathrm{th}}$ column of $\Phi $ . It is well known that searching over the power set formed by the columns of $\Phi $ for the optimal subset ${\mathcal{I}}^{*}$ with smallest cardinality is NP-hard. Instead, classical sparse approximation methods tackle this problem by greedily selecting columns of $\Phi $ and forming successively better approximations to $y$ .
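To make the combinatorial cost concrete, here is a small illustrative sketch (not from the text): a brute-force search over column subsets via least squares. The matrix, sparsity level, and tolerance are arbitrary choices made here; the point is that the search must examine up to $\binom{N}{k}$ subsets for each candidate support size $k$, which is exactly what makes the exhaustive approach infeasible beyond tiny $N$.

```python
import itertools
import numpy as np

def brute_force_sparsest(Phi, y, tol=1e-8):
    """Find the smallest column subset I with y = Phi[:, I] @ x, by exhaustive search."""
    N = Phi.shape[1]
    for k in range(1, N + 1):                        # supports of growing size
        for I in itertools.combinations(range(N), k):
            cols = Phi[:, list(I)]
            x_I, *_ = np.linalg.lstsq(cols, y, rcond=None)
            if np.linalg.norm(y - cols @ x_I) < tol:  # exact fit found
                x = np.zeros(N)
                x[list(I)] = x_I
                return x, I
    return None, None

# Hypothetical small instance: a 6x10 Gaussian matrix and a 2-sparse signal.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((6, 10))
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]  # 2-sparse ground truth
y = Phi @ x_true
x_hat, support = brute_force_sparsest(Phi, y)
```

Even at this toy scale the search tries every singleton before moving on to the 45 possible pairs; doubling $N$ roughly quadruples the work at $k=2$ and the growth is exponential in $k$.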
Matching Pursuit (MP), named and introduced to the signal processing community by Mallat and Zhang [link] , [link] , is an iterative greedy algorithm that decomposes a signal into a linear combination of elements from a dictionary. In sparse recovery, this dictionary is merely the sampling matrix $\Phi \in {\mathbb{R}}^{M\times N}$ ; we seek a sparse representation ( $x$ ) of our “signal” $y$ .
MP is conceptually very simple. A key quantity in MP is the residual $r\in {\mathbb{R}}^{M}$ ; the residual represents the as-yet “unexplained” portion of the measurements. At each iteration of the algorithm, we select a vector from the dictionary that is maximally correlated with the residual $r$ :

${\lambda}_{k} = \arg\max_{i=1,\dots,N} \frac{|\langle r, {\phi}_{i} \rangle|}{\| {\phi}_{i} \|}.$
Once this column is selected, we possess a “better” representation of the signal, since a new coefficient indexed by ${\lambda}_{k}$ has been added to our signal approximation. Thus, assuming unit-norm columns, we update both the approximation and the residual as follows:

${\widehat{x}}_{{\lambda}_{k}} \leftarrow {\widehat{x}}_{{\lambda}_{k}} + \langle r, {\phi}_{{\lambda}_{k}} \rangle, \qquad r \leftarrow r - \langle r, {\phi}_{{\lambda}_{k}} \rangle {\phi}_{{\lambda}_{k}},$
and repeat the iteration. A suitable halting criterion is that the norm of $r$ falls below a prescribed threshold. MP is described in pseudocode form below.
Inputs: Measurement matrix $\Phi $ , signal measurements $y$
Outputs: Sparse signal estimate $\widehat{x}$
Initialize: ${\widehat{x}}_{0}=0$ , $r=y$ , $i=0$ .
while halting criterion false do
1. $i\leftarrow i+1$
2. $b\leftarrow {\Phi}^{T}r$ {form residual signal estimate}
3. ${\widehat{x}}_{i}\leftarrow {\widehat{x}}_{i-1}+\mathbf{T}\left(b,1\right)$ {update largest magnitude coefficient}
4. $r\leftarrow y-\Phi {\widehat{x}}_{i}$ {update measurement residual}
end while
return $\widehat{x}\leftarrow {\widehat{x}}_{i}$

Here $\mathbf{T}\left(b,1\right)$ denotes the thresholding operator that keeps the largest-magnitude entry of $b$ and sets the remaining entries to zero.
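The pseudocode above translates directly into a short NumPy sketch (an illustration, not part of the original text). The halting threshold `tol`, the iteration cap, and the unit-norm-column assumption are choices made here; the test signal and matrix are likewise hypothetical.

```python
import numpy as np

def matching_pursuit(Phi, y, tol=1e-6, max_iter=5000):
    """MP as in the pseudocode; assumes the columns of Phi have unit norm."""
    x_hat = np.zeros(Phi.shape[1])
    r = y.astype(float)                  # residual: "unexplained" part of y
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:      # halting criterion
            break
        b = Phi.T @ r                    # step 2: residual signal estimate
        lam = np.argmax(np.abs(b))       # index of largest-magnitude correlation
        x_hat[lam] += b[lam]             # step 3: update that coefficient only
        r = r - b[lam] * Phi[:, lam]     # step 4: remove its contribution
    return x_hat

# Hypothetical example: 20 measurements of a 3-sparse signal in R^50.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 50))
Phi /= np.linalg.norm(Phi, axis=0)       # unit-norm columns, as assumed above
x_true = np.zeros(50); x_true[[3, 12, 40]] = [2.0, -1.0, 0.5]
y = Phi @ x_true
x_hat = matching_pursuit(Phi, y)
```

Note that steps 3 and 4 together are equivalent to the coefficient and residual updates given earlier: only the coefficient at the selected index changes, and its contribution is subtracted from the residual.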
Although MP is intuitive and can find an accurate approximation of the signal, it possesses two major drawbacks: (i) it offers no guarantees in terms of recovery error; indeed, it does not exploit the special structure present in the dictionary $\Phi $ ; (ii) the number of iterations required can be quite large. The complexity of MP is $O\left(MNT\right)$ [link] , where $T$ is the number of MP iterations.
Matching Pursuit (MP) can prove to be computationally infeasible for many problems, since the complexity of MP grows linearly in the number of iterations $T$ . By employing a simple modification of MP, known as Orthogonal Matching Pursuit (OMP), the maximum number of iterations can be upper bounded as follows. At any iteration, instead of subtracting the contribution of the single dictionary element with which the residual $r$ is maximally correlated, we compute the projection of $y$ onto the subspace orthogonal to the linear span of all currently selected dictionary elements. This quantity better represents the “unexplained” portion of the measurements and becomes the new residual, and the process is repeated. If ${\Phi}_{\Omega}$ is the submatrix formed by the columns of $\Phi $ selected up to time step $t$ , the following operations are performed:

${x}_{t} = \arg\min_{\tilde{x}} {\| y - {\Phi}_{\Omega} \tilde{x} \|}_{2}, \qquad r = y - {\Phi}_{\Omega} {x}_{t} = \left(I - {\Phi}_{\Omega} {\Phi}_{\Omega}^{\dagger}\right) y,$

where ${\Phi}_{\Omega}^{\dagger}$ denotes the pseudoinverse of ${\Phi}_{\Omega}$ . Since the new residual is orthogonal to every previously selected column, no column is ever selected twice, which bounds the total number of iterations.
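The orthogonal variant described above can be sketched in a few lines of NumPy (an illustration, not from the text): each iteration refits all coefficients by least squares on the selected columns, so the residual stays orthogonal to their span. The choice to stop after exactly `k` iterations, and the example matrix and signal, are assumptions of this sketch.

```python
import numpy as np

def omp(Phi, y, k):
    """Greedy selection with least-squares refitting on the selected columns."""
    support = []
    r = y.astype(float)
    for _ in range(k):
        lam = int(np.argmax(np.abs(Phi.T @ r)))   # most correlated column
        support.append(lam)
        # Least-squares coefficients on the selected submatrix Phi_Omega
        x_omega, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_omega         # orthogonal to span(Phi_Omega)
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = x_omega
    return x_hat

# Hypothetical example: 30 measurements of a 3-sparse signal in R^50.
rng = np.random.default_rng(2)
Phi = rng.standard_normal((30, 50))
Phi /= np.linalg.norm(Phi, axis=0)                # unit-norm columns
x_true = np.zeros(50); x_true[[5, 17, 33]] = [1.0, -2.0, 0.7]
y = Phi @ x_true
x_hat = omp(Phi, y, k=3)
```

The least-squares refit is the only difference from plain MP, but it is what guarantees no column is picked twice: after the refit, the residual has zero correlation with every column already in the support.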