Let $\Phi :{\mathbb{R}}^{N}\to {\mathbb{R}}^{M}$ denote a sensing matrix and $\Delta :{\mathbb{R}}^{M}\to {\mathbb{R}}^{N}$ denote a recovery algorithm. We say that the pair $(\Phi ,\Delta )$ is $C$ -stable if for any $x\in {\Sigma}_{K}$ and any $e\in {\mathbb{R}}^{M}$ we have that
${\|\Delta (\Phi x+e)-x\|}_{2}\le C{\|e\|}_{2}.$
This definition simply says that if we add a small amount of noise to the measurements, then the impact of this on the recovered signal should not be arbitrarily large. [link] below demonstrates that the existence of any decoding algorithm (potentially impractical) that can stably recover from noisy measurements requires that $\Phi $ satisfy the lower bound of [link] with a constant determined by $C$ .
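To make the definition concrete, here is a small numerical sketch in Python. The decoder `Delta` below is hypothetical (not part of the text): it is an oracle least-squares decoder that knows the true support, so it is only stable for signals on that fixed support, with a stability constant governed by the smallest singular value of the corresponding submatrix of $\Phi$. It still illustrates the inequality in the definition.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 30, 100, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# hypothetical oracle decoder: least squares on a known support T
T = np.arange(K)
def Delta(y):
    xhat = np.zeros(N)
    xhat[T], *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
    return xhat

x = np.zeros(N)
x[T] = rng.standard_normal(K)       # a K-sparse signal supported on T
e = 0.01 * rng.standard_normal(M)   # measurement noise

# for this decoder, C is 1 over the smallest singular value of Phi
# restricted to T, since Delta(Phi x + e) - x equals pinv(Phi_T) @ e
C = 1.0 / np.linalg.svd(Phi[:, T], compute_uv=False).min()
err = np.linalg.norm(Delta(Phi @ x + e) - x)
print(err <= C * np.linalg.norm(e))   # the stability bound holds
```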
If the pair $(\Phi ,\Delta )$ is $C$ -stable, then
$\frac{1}{C}{\|x\|}_{2}\le {\|\Phi x\|}_{2}$
for all $x\in {\Sigma}_{2K}$ .
Pick any $x,z\in {\Sigma}_{K}$ . Define
${e}_{x}=\frac{\Phi \left(z-x\right)}{2}\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}{e}_{z}=\frac{\Phi \left(x-z\right)}{2},$
and note that
$\Phi x+{e}_{x}=\Phi z+{e}_{z}=\frac{\Phi \left(x+z\right)}{2}.$
Let $\widehat{x}=\Delta (\Phi x+{e}_{x})=\Delta (\Phi z+{e}_{z})$ . From the triangle inequality and the definition of $C$ -stability, we have that
${\|x-z\|}_{2}\le {\|x-\widehat{x}\|}_{2}+{\|\widehat{x}-z\|}_{2}\le C{\|{e}_{x}\|}_{2}+C{\|{e}_{z}\|}_{2}=C{\|\Phi x-\Phi z\|}_{2}.$
Since this holds for any $x,z\in {\Sigma}_{K}$ , and every element of ${\Sigma}_{2K}$ can be written as $x-z$ for some $x,z\in {\Sigma}_{K}$ , the result follows.
Note that as $C\to 1$ , we have that $\Phi $ must satisfy the lower bound of [link] with ${\delta}_{K}=1-1/{C}^{2}\to 0$ . Thus, if we desire to reduce the impact of noise in our recovered signal then we must adjust $\Phi $ so that it satisfies the lower bound of [link] with a tighter constant.
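To spell out where this constant comes from: squaring the lemma's conclusion and comparing it term by term with the lower RIP inequality identifies the constant,

```latex
\frac{1}{C^2}\|x\|_2^2 \le \|\Phi x\|_2^2
\quad\text{is the lower RIP bound}\quad
(1-\delta_K)\|x\|_2^2 \le \|\Phi x\|_2^2
\quad\text{with}\quad
1-\delta_K = \frac{1}{C^2},
\quad\text{i.e.}\quad
\delta_K = 1-\frac{1}{C^2}.
```

so a stability constant $C$ close to $1$ forces $\delta_K$ close to $0$.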
One might respond to this result by arguing that since the upper bound is not necessary, we can avoid redesigning $\Phi $ : as long as $\Phi $ satisfies the RIP with ${\delta}_{2K}<1$ , a rescaled version $\alpha \Phi $ will satisfy [link] for any constant $C$ . In settings where the size of the noise is independent of our choice of $\Phi $ , this is a valid point: by scaling $\Phi $ we are simply adjusting the gain on the “signal” part of our measurements, and if increasing this gain does not impact the noise, then we can achieve arbitrarily high signal-to-noise ratios, so that eventually the noise is negligible compared to the signal.
However, in practice we will typically not be able to rescale $\Phi $ to be arbitrarily large. Moreover, in many practical settings the noise is not independent of $\Phi $ . For example, suppose that the noise vector $e$ represents quantization noise produced by a finite dynamic range quantizer with $B$ bits. Suppose the measurements lie in the interval $[-T,T]$ , and we have adjusted the quantizer to capture this range. If we rescale $\Phi $ by $\alpha $ , then the measurements now lie in the interval $[-\alpha T,\alpha T]$ , and we must scale the dynamic range of our quantizer by $\alpha $ . In this case the resulting quantization error is simply $\alpha e$ , and we have achieved no reduction in the reconstruction error.
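This scaling argument can be checked numerically. The sketch below uses an assumed helper `quantize` (a simple uniform round-to-nearest quantizer, not from the text) and shows that rescaling the measurements by $\alpha$ while matching the quantizer's dynamic range simply scales the quantization error by $\alpha$:

```python
import numpy as np

def quantize(y, B, T):
    """Assumed uniform B-bit quantizer with dynamic range [-T, T]:
    rounds to the nearest multiple of the step size, then clips."""
    step = 2 * T / 2 ** B
    return np.clip(np.round(y / step) * step, -T, T)

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 100)) / np.sqrt(20)
x = np.zeros(100)
x[:5] = rng.standard_normal(5)        # a 5-sparse signal

y = Phi @ x
T = np.max(np.abs(y))                  # quantizer matched to the range of y
e = quantize(y, B=4, T=T) - y          # quantization noise

alpha = 10.0
y2 = alpha * y                         # measurements from the rescaled Phi
e2 = quantize(y2, B=4, T=alpha * T) - y2   # quantizer range must grow too

print(np.allclose(e2, alpha * e))      # the noise scales by alpha as well
```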
We can also consider how many measurements are necessary to achieve the RIP. If we ignore the impact of $\delta $ and focus only on the dimensions of the problem ( $N$ , $M$ , and $K$ ) then we can establish a simple lower bound. We first provide a preliminary lemma that we will need in the proof of the main theorem.
Let $K$ and $N$ be given with $K<N/2$ . There exists a set $X\subset {\Sigma}_{K}$ such that for any $x\in X$ we have ${\|x\|}_{2}\le \sqrt{K}$ and for any $x,z\in X$ with $x\ne z$ ,
${\|x-z\|}_{2}\ge \sqrt{K/2},$
and
$log\left|X\right|\ge \frac{K}{2}log\left(\frac{N}{K}\right).$
We will begin by considering the set
$U=\left\{x\in {\left\{0,+1,-1\right\}}^{N}:{\|x\|}_{0}=K\right\}.$
By construction, ${\|x\|}_{2}^{2}=K$ for all $x\in U$ . Thus if we construct $X$ by picking elements from $U$ then we automatically have ${\|x\|}_{2}\le \sqrt{K}$ .
Next, observe that $\left|U\right|=\left(\genfrac{}{}{0pt}{}{N}{K}\right){2}^{K}$ . Note also that ${\|x-z\|}_{0}\le {\|x-z\|}_{2}^{2}$ , and thus if ${\|x-z\|}_{2}^{2}\le K/2$ then ${\|x-z\|}_{0}\le K/2$ . From this we observe that for any fixed $x\in U$ ,
$\left|\left\{z\in U:{\|x-z\|}_{2}^{2}\le K/2\right\}\right|\le \left(\genfrac{}{}{0pt}{}{N}{K/2}\right){3}^{K/2}.$
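As a quick sanity check on this counting, a short Python sketch (assuming nothing beyond the definitions in this proof) can enumerate $U$ for small $N$ and $K$ and verify both the cardinality formula and the norm of its elements:

```python
from itertools import combinations, product
from math import comb

# enumerate U = {x in {0,+1,-1}^N : ||x||_0 = K} for small N, K
N, K = 6, 2
U = []
for support in combinations(range(N), K):     # choose the K nonzero positions
    for signs in product((+1, -1), repeat=K):  # choose each nonzero sign
        x = [0] * N
        for i, s in zip(support, signs):
            x[i] = s
        U.append(tuple(x))

print(len(U) == comb(N, K) * 2 ** K)               # |U| = C(N,K) 2^K
print(all(sum(v * v for v in x) == K for x in U))  # ||x||_2^2 = K for all x
```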