# 5.1 Convex optimization-based methods  (Page 2/2)


A newer algorithm called “l1_ls" [link] is based on an interior-point method that uses a preconditioned conjugate gradient (PCG) solver to approximately solve the linear systems arising in a truncated-Newton framework. Because the algorithm exploits the structure of the Hessian to construct its preconditioner, it is a second-order method. Computational results show that about a hundred PCG steps suffice for accurate reconstruction. This method is typically slower than first-order methods, but can be faster in cases where the true target signal is highly sparse.

## Fixed-point continuation

As opposed to solving the constrained formulation, an alternate approach is to solve the unconstrained formulation in [link] . A widely used method for solving ${\ell }_{1}$ -minimization problems of the form

$\underset{x}{min}\phantom{\rule{0.277778em}{0ex}}\mu {\parallel x\parallel }_{1}+H\left(x\right),$

for a convex and differentiable $H$ , is an iterative procedure based on shrinkage (also called soft thresholding; see [link] below). In the context of solving [link] with a quadratic $H$ , this method was independently proposed and analyzed in [link] , [link] , [link] , [link] , and then further studied or extended in [link] , [link] , [link] , [link] , [link] , [link] . Shrinkage is a classic method used in wavelet-based image denoising. The shrinkage operator on any scalar component can be defined as follows:

$\mathrm{shrink}\left(t,\alpha \right)=\left\{\begin{array}{cc}t-\alpha \hfill & \mathrm{if}\phantom{\rule{3.33333pt}{0ex}}t>\alpha ,\hfill \\ 0\hfill & \mathrm{if}\phantom{\rule{3.33333pt}{0ex}}-\alpha \le t\le \alpha ,\hfill \\ t+\alpha \hfill & \mathrm{if}\phantom{\rule{3.33333pt}{0ex}}t<-\alpha .\hfill \end{array}\right.$
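As a concrete illustration, the operator above can be applied componentwise to a vector in a single line. This is a minimal NumPy sketch; the function name `shrink` simply follows the notation above.

```python
import numpy as np

def shrink(t, alpha):
    """Soft-thresholding (shrinkage) operator, applied componentwise:
    entries with |t_i| <= alpha are set to zero, and the remaining
    entries are moved toward zero by alpha."""
    return np.sign(t) * np.maximum(np.abs(t) - alpha, 0.0)
```

For example, `shrink(np.array([3.0, 0.5, -2.0]), 1.0)` returns `[2.0, 0.0, -1.0]`: the middle entry falls inside the dead zone $[-\alpha, \alpha]$ and is zeroed, while the others are pulled toward zero by $\alpha$.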

This concept can be used effectively to solve [link] . In particular, the basic algorithm can be written as the following fixed-point iteration: for $i=1,...,N$ , the ${i}^{\mathrm{th}}$ coefficient of $x$ at the ${\left(k+1\right)}^{\mathrm{th}}$ iteration is given by

${x}_{i}^{k+1}=\mathrm{shrink}\left({\left({x}^{k}-\tau \nabla H\left({x}^{k}\right)\right)}_{i},\mu \tau \right)$

where $\tau >0$ serves as a step-length for gradient descent (which may vary with $k$ ) and $\mu$ is as specified by the user. It is easy to see that the larger $\mu$ is, the larger the allowable distance between ${x}^{k+1}$ and ${x}^{k}$ . For a quadratic penalty term $H\left(·\right)$ , the gradient $\nabla H$ can be easily computed as a linear function of ${x}^{k}$ ; thus each iteration of [link] essentially boils down to a small number of matrix-vector multiplications.

The simplicity of this iterative approach is appealing from both a computational and a code-design standpoint. Various modifications, enhancements, and generalizations have been proposed, both to improve the efficiency of the basic iteration in [link] and to extend its applicability to various kinds of $J$ [link] , [link] , [link] . In practice, the basic iteration in [link] is not effective without a continuation (or path-following) strategy [link] , [link] , in which a gradually decreasing sequence of values of the parameter $\mu$ guides the intermediate iterates toward the final optimal solution.

This procedure is known as continuation ; in [link] , the performance of an algorithm known as Fixed-Point Continuation (FPC) compares favorably with that of two similar methods, Gradient Projection for Sparse Reconstruction (GPSR) [link] and “l1_ls” [link] . A key aspect of solving the unconstrained optimization problem is the choice of the parameter $\mu$ . As discussed above, for CS recovery $\mu$ may be chosen by trial and error; for the noiseless constrained formulation, we may solve the corresponding unconstrained minimization by choosing a large value of $\mu$ .

In the case of recovery from noisy compressive measurements, a commonly used choice for the convex cost function $H\left(x\right)$ is the squared ${\ell }_{2}$ norm of the residual. Thus we have:

$\begin{array}{cc}\hfill H\left(x\right)& ={\parallel y-\Phi x\parallel }_{2}^{2}\hfill \\ \hfill \nabla H\left(x\right)& =2{\Phi }^{\top }\left(\Phi x-y\right).\hfill \end{array}$

For this particular choice of penalty function, [link] reduces to the following iteration:

${x}_{i}^{k+1}=\mathrm{shrink}\left({\left({x}^{k}+2\tau {\Phi }^{\top }\left(y-\Phi {x}^{k}\right)\right)}_{i},\mu \tau \right),$

which is run until convergence to a fixed point. The algorithm is detailed in pseudocode form below.

Inputs: CS matrix $\Phi$ , signal measurements $y$ , step size $\tau$ , parameter sequence ${\mu }_{k}$
Outputs: Signal estimate $\stackrel{^}{x}$
Initialize: $\stackrel{^}{x}=0$ , $r=y$ , $k=0$
while halting criterion false do
1. $k←k+1$
2. $x←\stackrel{^}{x}+2\tau {\Phi }^{T}r$ {take a gradient step}
3. $\stackrel{^}{x}←\mathrm{shrink}\left(x,{\mu }_{k}\tau \right)$ {perform soft thresholding}
4. $r←y-\Phi \stackrel{^}{x}$ {update measurement residual}
end while
return $\stackrel{^}{x}$
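The pseudocode above translates almost line for line into NumPy. The following is a minimal sketch, not the reference FPC implementation: the function name `fpc`, the fixed inner iteration count, and the default step size (chosen so that the gradient step is a contraction, since $\nabla H$ is Lipschitz with constant $2{\parallel \Phi \parallel }_{2}^{2}$) are our own illustrative choices.

```python
import numpy as np

def shrink(t, alpha):
    # Componentwise soft thresholding.
    return np.sign(t) * np.maximum(np.abs(t) - alpha, 0.0)

def fpc(Phi, y, mu_seq, tau=None, inner_iters=100):
    """Fixed-point continuation sketch: run the shrinkage iteration
    for each mu in a decreasing sequence (continuation), warm-starting
    each stage from the previous estimate."""
    if tau is None:
        # grad H(x) = 2 Phi^T (Phi x - y) is Lipschitz with constant
        # 2 ||Phi||_2^2; this step size keeps the iteration stable.
        tau = 0.99 / (2.0 * np.linalg.norm(Phi, 2) ** 2)
    x_hat = np.zeros(Phi.shape[1])
    for mu in mu_seq:
        for _ in range(inner_iters):
            r = y - Phi @ x_hat                  # measurement residual
            x = x_hat + 2.0 * tau * Phi.T @ r    # gradient step on H
            x_hat = shrink(x, mu * tau)          # soft thresholding
    return x_hat
```

A typical call passes a decreasing sequence such as `mu_seq=[1.0, 0.1, 0.01, 0.001]`; the large early values keep the iterates sparse, while the small final values drive the residual down.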

## Bregman iteration methods

It turns out that the solution to the constrained optimization problem in [link] can be obtained efficiently by solving a small number of unconstrained problems of the form [link] . These subproblems are commonly referred to as Bregman iterations . A simple version can be written as follows:

$\begin{array}{cc}\hfill {y}^{k+1}& ={y}^{k}+y-\Phi {x}^{k}\hfill \\ \hfill {x}^{k+1}& =\underset{x}{argmin}\phantom{\rule{3.33333pt}{0ex}}J\left(x\right)+\frac{\mu }{2}{\parallel \Phi x-{y}^{k+1}\parallel }^{2}.\hfill \end{array}$

The problem in the second step can be solved by the algorithms reviewed above. Bregman iterations were introduced in [link] for constrained total variation minimization problems, and were proved to converge for closed, convex functions $J\left(x\right)$ . In [link] , they are applied to [link] for $J\left(x\right)={\parallel x\parallel }_{1}$ and shown to converge in a finite number of steps for any $\mu >0$ . For moderate $\mu$ , the number of outer iterations needed is typically fewer than five. Compared to directly solving the unconstrained problem in [link] with a very large $\mu$ , Bregman iterations are often more stable and sometimes much faster.
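The two-step scheme above can be sketched in a few lines of NumPy, using the shrinkage iteration from earlier in this section as the inner solver. This is an illustrative sketch under stated assumptions, not a production implementation: the function names (`bregman`, `l1_subproblem`), the fixed iteration counts, and the step-size rule are our own choices.

```python
import numpy as np

def shrink(t, alpha):
    # Componentwise soft thresholding.
    return np.sign(t) * np.maximum(np.abs(t) - alpha, 0.0)

def l1_subproblem(Phi, b, mu, iters=500):
    """Approximately solve min_x ||x||_1 + (mu/2)||Phi x - b||^2
    by the shrinkage iteration. The gradient of the smooth term is
    mu * Phi^T (Phi x - b), Lipschitz with constant mu*||Phi||_2^2."""
    tau = 0.99 / (mu * np.linalg.norm(Phi, 2) ** 2)
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = mu * Phi.T @ (Phi @ x - b)
        x = shrink(x - tau * grad, tau)
    return x

def bregman(Phi, y, mu=1.0, outer_iters=5):
    """Bregman iterations: add the residual y - Phi x^k back into the
    data term, then re-solve the unconstrained subproblem."""
    yk = np.zeros_like(y)
    x = np.zeros(Phi.shape[1])
    for _ in range(outer_iters):
        yk = yk + (y - Phi @ x)   # y^{k+1} = y^k + y - Phi x^k
        x = l1_subproblem(Phi, yk, mu)
    return x
```

Note how the "adding back the residual" step compensates for the bias that soft thresholding introduces in each subproblem solution, which is why a handful of outer iterations usually suffice.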

## Discussion

All of the methods discussed in this section minimize a convex function (usually the ${\ell }_{1}$ -norm) over a convex (possibly unbounded) set, which guarantees convergence to the global optimum. In other words, provided that the sampling matrix $\Phi$ satisfies the conditions specified in "Signal recovery via ${\ell }_{1}$ minimization" , convex optimization methods will recover the underlying signal $x$ . In addition, convex relaxation methods guarantee stable recovery by reformulating the recovery problem as either a second-order cone program (SOCP) or an unconstrained formulation.
