Define $H$ as the set of all discrete-valued functions $h:\{1,...,N\}\to \{1,...,m\}$ . Note that $H$ is a finite set of size ${m}^{N}$ . Each function $h\in H$ can be specified by a binary characteristic matrix $\phi \left(h\right)$ of size $m\times N$ , whose ${i}^{\mathrm{th}}$ column is a binary vector containing a single 1 in row $h\left(i\right)$ . To construct the overall sampling matrix $\Phi $ , we choose $d$ functions ${h}_{1},...,{h}_{d}$ independently from the uniform distribution on $H$ , and vertically concatenate their characteristic matrices. Thus, if $M=md$ , $\Phi $ is a binary matrix of size $M\times N$ with each column containing exactly $d$ ones.
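As a concrete illustration, this construction can be sketched in a few lines of NumPy. The function name `sampling_matrix` and all parameter values below are illustrative, not from the original text:

```python
import numpy as np

def sampling_matrix(N, m, d, rng):
    """Stack d characteristic matrices of hash functions drawn
    uniformly from H: column i of each m x N block has a single 1
    in row h(i)."""
    blocks = []
    for _ in range(d):
        # One uniformly random h: {1,...,N} -> {1,...,m} (0-indexed here)
        h = rng.integers(0, m, size=N)
        block = np.zeros((m, N), dtype=int)
        block[h, np.arange(N)] = 1   # place the single 1 per column
        blocks.append(block)
    return np.vstack(blocks)         # M = m*d rows in total

rng = np.random.default_rng(0)
Phi = sampling_matrix(N=12, m=4, d=3, rng=rng)
```

Each column of the resulting `Phi` contains exactly `d` ones, one per vertical block, matching the $M=md$ construction above.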
Now, given any signal $x$ , we acquire linear measurements $y=\Phi x$ . These measurements are easy to visualize via the following two properties. First, the coefficients of the measurement vector $y$ are naturally grouped according to the “mother” binary functions $\{{h}_{1},...,{h}_{d}\}$ . Second, consider the ${i}^{\mathrm{th}}$ coefficient of the measurement vector $y$ , which corresponds to the mother binary function $h$ . Then, the expression for ${y}_{i}$ is simply given by:
$${y}_{i}=\sum_{j:h\left(j\right)=i}{x}_{j},$$
where the sum ranges over all indices $j$ that $h$ maps to the row of $\Phi $ associated with ${y}_{i}$ .
In other words, for a fixed signal coefficient index $j$ , each measurement ${y}_{i}$ as expressed above consists of an observation of ${x}_{j}$ corrupted by other signal coefficients mapped to the same $i$ by the function $h$ . Signal recovery essentially consists of estimating the signal values from these “corrupted” observations.
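A small NumPy sketch makes this corruption structure explicit for a single hash function (all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 10, 4
h = rng.integers(0, m, size=N)   # one random hash function h
x = rng.random(N)                # arbitrary test signal

# Measurement i sums every coefficient hashed to bucket i:
#   y_i = sum over j with h(j) = i of x_j
y = np.zeros(m)
for j in range(N):
    y[h[j]] += x[j]

# The measurement containing x_0 equals x_0 plus its collisions
j = 0
collisions = sum(x[k] for k in range(N) if k != j and h[k] == h[j])
assert np.isclose(y[h[j]], x[j] + collisions)
```

The final check confirms that each measurement is the coefficient of interest plus the "corruption" contributed by colliding coefficients.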
The count-min algorithm is useful in the special case where the entries of the original signal are positive. Given measurements $y$ using the sampling matrix $\Phi $ as constructed above, the estimate of the ${j}^{\mathrm{th}}$ signal entry is given by:
$${\widehat{x}}_{j}=\min_{i:{\Phi }_{ij}=1}\,{y}_{i}.$$
Intuitively, this means that the estimate of ${x}_{j}$ is formed by simply looking at all measurements that consist of ${x}_{j}$ corrupted by other signal values, and picking the one with the smallest magnitude. Despite the simplicity of this algorithm, it is accompanied by an arguably powerful instance-optimality guarantee: if $d=C\log N$ and $m=4K/\alpha $ , then with high probability, the recovered signal $\widehat{x}$ satisfies:
$${\parallel x-\widehat{x}\parallel }_{\infty }\le \frac{\alpha }{K}{\parallel x-{x}^{*}\parallel }_{1},$$
where ${x}^{*}$ represents the best $K$ -term approximation of $x$ in the ${\ell}_{1}$ sense.
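Putting the pieces together, count-min recovery is a one-line minimum once the measurements are arranged per hash function. The following NumPy sketch uses arbitrary illustrative parameters; because the signal is nonnegative, every bucket overestimates the true coefficient, so the minimum can never fall below it:

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, d, K = 1000, 100, 8, 5
hashes = rng.integers(0, m, size=(d, N))   # d hash functions h_1,...,h_d

# K-sparse signal with positive entries
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.uniform(1.0, 10.0, size=K)

# Measurements per hash function: y[l, i] = sum of x_j with h_l(j) = i
y = np.zeros((d, m))
for l in range(d):
    np.add.at(y[l], hashes[l], x)

# Count-min estimate: minimum over the d measurements that contain x_j
x_hat = y[np.arange(d)[:, None], hashes].min(axis=0)
```

Since every bucket holding $x_j$ equals $x_j$ plus a nonnegative collision term, `x_hat >= x` holds entrywise, and with these (illustrative) parameters almost all coordinates are recovered exactly.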
For the general setting in which the coefficients of the original signal can be either positive or negative, a similar algorithm known as count-median can be used. Instead of picking the minimum of the measurements, we compute the median of all those measurements that contain a corrupted version of ${x}_{j}$ and declare it the signal coefficient estimate, i.e.,
$${\widehat{x}}_{j}=\underset{i:{\Phi }_{ij}=1}{\mathrm{median}}\;{y}_{i}.$$
The recovery guarantees for count-median are similar to those for count-min, with a different value of the failure probability constant. An important feature of both count-min and count-median is that they require the measurements to be perfectly noiseless, in contrast to optimization/greedy algorithms, which can tolerate small amounts of measurement noise.
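A hedged NumPy sketch of count-median follows; choosing an odd $d$ keeps the median equal to one of the actual measurements, and all names and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, d, K = 1000, 100, 9, 5   # odd d: the median is an actual measurement
hashes = rng.integers(0, m, size=(d, N))

# K-sparse signal with entries of either sign
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=K) * rng.uniform(1.0, 10.0, size=K)

# Measurements per hash function, as in count-min
y = np.zeros((d, m))
for l in range(d):
    np.add.at(y[l], hashes[l], x)

# Count-median estimate: median of the d measurements that contain x_j
x_hat = np.median(y[np.arange(d)[:, None], hashes], axis=0)
```

The median is robust to signs: a coordinate's estimate is wrong only if a majority of its $d$ buckets suffer collisions, which is unlikely when $m$ is large relative to the sparsity $K$.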
Although we ultimately wish to recover a sparse signal from a small number of linear measurements in both of these settings, there are some important differences between such settings and the compressive sensing setting studied in this course. First, in these settings it is natural to assume that the designer of the reconstruction algorithm also has full control over $\Phi $ , and is thus free to choose $\Phi $ in a manner that reduces the amount of computation required to perform recovery. For example, it is often useful to design $\Phi $ so that it has few nonzeros, i.e., the sensing matrix itself is also sparse [link], [link], [link]. In general, most methods involve careful construction of the sensing matrix $\Phi $ , in contrast with the optimization and greedy methods that work with any matrix satisfying a generic condition such as the restricted isometry property. This additional degree of freedom can lead to significantly faster algorithms [link], [link], [link], [link].
Second, note that the computational complexity of all the convex methods and greedy algorithms described above is always at least linear in $N$ , since in order to recover $x$ we must at least incur the computational cost of reading out all $N$ entries of $x$ . This may be acceptable in many typical compressive sensing applications, but it becomes impractical when $N$ is extremely large, as in the network monitoring example. In this context, one may seek to develop algorithms whose complexity is linear only in the length of the representation of the signal, i.e., its sparsity $K$ . In this case the algorithm does not return a complete reconstruction of $x$ but instead returns only its $K$ largest elements (and their indices). As surprising as it may seem, such algorithms are indeed possible. See [link], [link] for examples.