Deblurring or deconvolution aims to recover the unknown image $\overline{u}\left(x\right)$ from $f\left(x\right)$ and $k\left(x\right)$ based on ( ). When $k\left(x\right)$ is unknown or only an estimate of it is available, recovering $\overline{u}\left(x\right)$ from $f\left(x\right)$ is called blind deconvolution. Throughout this module, we assume that $k\left(x\right)$ is known and $\omega \left(x\right)$ is either Gaussian or impulsive noise. When $k\left(x\right)$ equals the Dirac delta, the recovery of $\overline{u}\left(x\right)$ reduces to a pure denoising problem. In the rest of this section, we review the TV-based variational models for image restoration and introduce the notation needed for the analysis.
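As an illustrative sketch (not part of the original module), the degradation model $f=k\ast \overline{u}+\omega $ with Gaussian noise can be simulated numerically. The test image, the box-blur kernel, and the noise level below are all our own assumptions; periodic (circular) convolution is implemented via the FFT:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64
u_bar = np.zeros((n, n))
u_bar[16:48, 16:48] = 1.0              # piecewise-constant test image (assumed)

# Normalized 5x5 box-blur kernel, rolled so its center sits at the origin,
# which is the convention required for circular convolution via the FFT.
k = np.zeros((n, n))
k[:5, :5] = 1.0 / 25.0
k = np.roll(k, (-2, -2), axis=(0, 1))

# Circular convolution plus additive Gaussian noise: f = k * u_bar + omega.
f = np.real(np.fft.ifft2(np.fft.fft2(k) * np.fft.fft2(u_bar)))
f += 0.01 * rng.standard_normal((n, n))

print(f.shape)  # (64, 64)
```

Inside the constant region of the test image, the blur leaves values near 1, so only the noise perturbs the observation there.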
The TV regularization was first proposed by Rudin, Osher and Fatemi in for image denoising, and then extended to image deblurring in . The TV of $u$ is defined as
$\mathrm{TV}\left(u\right)={\int }_{\Omega }|\nabla u\left(x\right)|\phantom{\rule{0.2em}{0ex}}dx.$
When $\nabla u\left(x\right)$ does not exist, the TV is defined using a dual formulation , which is equivalent to ( ) when $u$ is differentiable. We point out that, in practical computation, discrete forms of the regularization are always used, where differential operators are replaced by certain finite difference operators. We refer to TV regularization and its variants as TV-like regularization. In comparison to Tikhonov-like regularization, the homogeneous penalty on image smoothness in TV-like regularization better preserves sharp edges and object boundaries, which are usually the most important features to recover. Variational models with TV regularization and ${\ell}_{2}$ fidelity have been widely studied in image restoration; see e.g. , and references therein. For ${\ell}_{1}$ fidelity with TV regularization, its geometric properties are analyzed in , , . The superiority of TV over Tikhonov-like regularization was analyzed in , for recovering images containing piecewise smooth objects.
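The discrete isotropic TV described above can be sketched in a few lines of Python. The forward-difference scheme and the replicate (Neumann) boundary handling below are common choices, not prescriptions from the original text:

```python
import numpy as np

def tv(U):
    """Discrete isotropic TV with forward differences and replicate
    (Neumann) boundary handling: sum over pixels of |grad u|."""
    dx = np.diff(U, axis=1, append=U[:, -1:])  # horizontal forward difference
    dy = np.diff(U, axis=0, append=U[-1:, :])  # vertical forward difference
    return np.sum(np.sqrt(dx**2 + dy**2))

U = np.zeros((8, 8))
U[2:6, 2:6] = 1.0          # a 4x4 block of ones
print(tv(U))               # roughly the perimeter of the block, ~15.41
```

For a piecewise-constant image the value is driven entirely by the jumps across edges, which is why TV penalizes oscillations but tolerates sharp boundaries.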
Besides Tikhonov and TV-like regularization, there are other well-studied regularizers in the literature, e.g. the Mumford-Shah regularization . In this module, we concentrate on TV-like regularization. We derive fast algorithms, study their convergence, and examine their performance.
As before, we let $\parallel \cdot \parallel $ be the 2-norm. In practice, we always discretize an image defined on $\Omega $ and vectorize the two-dimensional digitalized image into a long one-dimensional vector. We assume that $\Omega $ is a square region in ${\mathbb{R}}^{2}$ . Specifically, we first discretize $u\left(x\right)$ into a digital image represented by a matrix $U\in {\mathbb{R}}^{n\times n}$ . Then we vectorize $U$ column by column into a vector $u\in {\mathbb{R}}^{{n}^{2}}$ , i.e.
${u}_{i}={U}_{pq},\phantom{\rule{1em}{0ex}}i=1,\dots ,{n}^{2},$
where ${u}_{i}$ denotes the $i$ th component of $u$ , ${U}_{pq}$ is the component of $U$ in the $p$ th row and $q$ th column, and $p$ and $q$ are determined by $i=(q-1)n+p$ and $1\le p,q\le n$ . Other quantities such as the convolution kernel $k\left(x\right)$ , the additive noise $\omega \left(x\right)$ , and the observation $f\left(x\right)$ are all discretized correspondingly. Now we present the discrete forms of the previously presented equations. The discrete form of ( ) is
$f=K\overline{u}+\omega ,$
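The column-by-column vectorization described above is exactly column-major (Fortran) ordering, so a NumPy sketch of it is a one-liner; the small test matrix is our own example:

```python
import numpy as np

U = np.arange(1, 10).reshape(3, 3)     # 3x3 test image, rows [1,2,3],[4,5,6],[7,8,9]
# Column-by-column vectorization: u_i = U_{pq} with i = (q-1)*n + p.
u = U.flatten(order="F")
print(u)                               # [1 4 7 2 5 8 3 6 9]
U_back = u.reshape(3, 3, order="F")    # the inverse operation recovers U
```

Using `order="F"` throughout keeps the indexing consistent with $i=(q-1)n+p$; mixing it with NumPy's default row-major order is a common source of bugs.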
where in this case, $\overline{u},\omega ,f\in {\mathbb{R}}^{{n}^{2}}$ are all vectors representing, respectively, the discrete forms of the original image, the additive noise, and the blurry and noisy observation, and $K\in {\mathbb{R}}^{{n}^{2}\times {n}^{2}}$ is a convolution matrix representing the kernel $k\left(x\right)$ . The gradient $\nabla u\left(x\right)$ is replaced by a certain first-order finite difference at pixel $i$ . Let ${D}_{i}\in {\mathbb{R}}^{2\times {n}^{2}}$ be a first-order local finite difference matrix at pixel $i$ in the horizontal and vertical directions. E.g. when the forward finite difference is used, we have
${D}_{i}u={\left({u}_{i+n}-{u}_{i},\phantom{\rule{0.2em}{0ex}}{u}_{i+1}-{u}_{i}\right)}^{\top },$
with appropriate adjustments at boundary pixels.
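The action of the forward-difference operator ${D}_{i}$ on a column-major vectorized image can be mimicked directly, since moving one column right adds $n$ to the index and moving one row down adds $1$. The function `D` and the zero boundary handling below are our own assumptions for illustration:

```python
import numpy as np

n = 4
U = np.arange(n * n, dtype=float).reshape(n, n)   # U[p, q] = 4*p + q
u = U.flatten(order="F")                          # i = (q-1)*n + p (0-based here)

def D(i, u, n):
    """Forward differences at pixel i (0-based) in the horizontal and
    vertical directions, mimicking the 2 x n^2 matrix D_i acting on u.
    Differences across the image boundary are set to zero (assumed)."""
    p, q = i % n, i // n                          # row and column of pixel i
    dh = u[i + n] - u[i] if q < n - 1 else 0.0    # horizontal: next column
    dv = u[i + 1] - u[i] if p < n - 1 else 0.0    # vertical: next row
    return np.array([dh, dv])

print(D(0, u, n))   # [1. 4.]: column step changes U by 1, row step by 4
```

In practice one evaluates all the differences at once with shifted array slices rather than forming the sparse matrices ${D}_{i}$ explicitly.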