
A newer algorithm called “l1_ls” [link] is based on an interior-point method that uses a preconditioned conjugate gradient (PCG) solver to approximately solve linear systems in a truncated-Newton framework. The algorithm exploits the structure of the Hessian to construct its preconditioner; thus, this is a second-order method. Computational results show that about a hundred PCG steps suffice for an accurate reconstruction. This method is typically slower than first-order methods, but can be faster in cases where the true target signal is highly sparse.

Fixed-point continuation

As opposed to solving the constrained formulation, an alternative approach is to solve the unconstrained formulation in [link]. A widely used method for solving ℓ1-minimization problems of the form

$$\min_x \; \mu \|x\|_1 + H(x),$$

for a convex and differentiable H , is an iterative procedure based on shrinkage (also called soft thresholding; see [link] below). In the context of solving [link] with a quadratic H , this method was independently proposed and analyzed in [link] , [link] , [link] , [link] , and then further studied or extended in [link] , [link] , [link] , [link] , [link] , [link] . Shrinkage is a classic method used in wavelet-based image denoising. The shrinkage operator on any scalar component can be defined as follows:

$$\operatorname{shrink}(t, \alpha) = \begin{cases} t - \alpha, & \text{if } t > \alpha, \\ 0, & \text{if } -\alpha \le t \le \alpha, \\ t + \alpha, & \text{if } t < -\alpha. \end{cases}$$
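Equivalently, shrink(t, α) = sign(t) · max(|t| − α, 0), which admits a one-line vectorized implementation. As a minimal NumPy sketch, the operator can be applied elementwise to a vector:

import numpy as np

def shrink(t, alpha):
    # Soft thresholding: sign(t) * max(|t| - alpha, 0), applied elementwise.
    return np.sign(t) * np.maximum(np.abs(t) - alpha, 0.0)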

This concept can be used effectively to solve [link]. In particular, the basic algorithm can be written as the following fixed-point iteration: for i = 1, …, N, the i-th coefficient of x at the (k+1)-th iteration is given by

$$x_i^{k+1} = \operatorname{shrink}\!\left( \left( x^k - \tau \nabla H(x^k) \right)_i, \; \mu\tau \right),$$

where τ > 0 serves as a step length for gradient descent (and may vary with k) and μ is specified by the user. It is easy to see that the larger μ is, the larger the allowable distance between the iterates x^{k+1} and x^k. For a quadratic penalty term H(·), the gradient ∇H can be computed cheaply as a linear function of x^k; thus each iteration of [link] essentially boils down to a small number of matrix-vector multiplications.

The simplicity of the iterative approach is quite appealing, both from a computational and from a code-design standpoint. Various modifications, enhancements, and generalizations of this approach have been proposed, both to improve the efficiency of the basic iteration in [link] and to extend its applicability to various kinds of J [link], [link], [link]. In practice, the basic iteration in [link] is not effective without a continuation (or path-following) strategy [link], [link], in which a gradually decreasing sequence of values of the parameter μ guides the intermediate iterates towards the final optimal solution.

This procedure is known as continuation; in [link], the performance of an algorithm known as Fixed-Point Continuation (FPC) compares favorably with that of similar methods such as Gradient Projection for Sparse Reconstruction (GPSR) [link] and “l1_ls” [link]. A key aspect of solving the unconstrained optimization problem is the choice of the parameter μ. As discussed above, for CS recovery μ may be chosen by trial and error; for the noiseless constrained formulation, we may solve the corresponding unconstrained minimization by choosing a large value for μ.

In the case of recovery from noisy compressive measurements, a commonly used choice for the convex cost function H(x) is the squared ℓ2-norm of the residual. Thus we have:

$$H(x) = \| y - \Phi x \|_2^2, \qquad \nabla H(x) = -2 \Phi^T ( y - \Phi x ).$$

For this particular choice of penalty function, [link] reduces to the following iteration:

$$x_i^{k+1} = \operatorname{shrink}\!\left( \left( x^k + 2\tau \Phi^T ( y - \Phi x^k ) \right)_i, \; \mu\tau \right),$$

which is run until convergence to a fixed point. The algorithm is detailed in pseudocode form below.


Inputs: CS matrix Φ, signal measurements y, parameter sequence μ_k
Outputs: Signal estimate x̂
Initialize: x̂ = 0, r = y, k = 0
while halting criterion false do
    1. k ← k + 1
    2. x ← x̂ + 2τ Φ^T r {take a gradient step}
    3. x̂ ← shrink(x, μ_k τ) {perform soft thresholding}
    4. r ← y − Φ x̂ {update measurement residual}
end while
return x̂
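As a concrete illustration, the following NumPy sketch implements the pseudocode above, reusing the shrink function defined earlier. The decreasing sequence of μ values realizes the continuation strategy; the step-size rule, inner-iteration count, and stopping tolerance are illustrative assumptions rather than prescriptions from the text.

def fpc(Phi, y, mu_seq, tau=None, inner_iters=100, tol=1e-6):
    # Fixed-point continuation for min_x mu*||x||_1 + ||y - Phi x||_2^2,
    # sweeping mu down through mu_seq and warm-starting each subproblem.
    _, N = Phi.shape
    if tau is None:
        # Illustrative step size: tau < 1 / (2 ||Phi||_2^2) keeps the
        # gradient step nonexpansive for H(x) = ||y - Phi x||_2^2.
        tau = 0.99 / (2.0 * np.linalg.norm(Phi, 2) ** 2)
    x_hat = np.zeros(N)
    for mu in mu_seq:
        for _ in range(inner_iters):
            r = y - Phi @ x_hat                      # measurement residual
            x = x_hat + 2.0 * tau * (Phi.T @ r)      # gradient step
            x_new = shrink(x, mu * tau)              # soft thresholding
            if np.linalg.norm(x_new - x_hat) <= tol * max(np.linalg.norm(x_hat), 1.0):
                x_hat = x_new
                break
            x_hat = x_new
    return x_hat

# Illustrative usage on a synthetic sparse-recovery problem:
rng = np.random.default_rng(0)
M, N, K = 64, 256, 8
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
y = Phi @ x_true
mu_seq = np.max(np.abs(Phi.T @ y)) * 0.8 ** np.arange(25)  # decreasing mu values
x_hat = fpc(Phi, y, mu_seq)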

Bregman iteration methods

It turns out that an efficient method for obtaining the solution to the constrained optimization problem in [link] can be devised by solving a small number of unconstrained problems of the form in [link]. These subproblems are commonly referred to as Bregman iterations. A simple version can be written as follows:

$$y^{k+1} = y^k + \left( y - \Phi x^k \right), \qquad x^{k+1} = \arg\min_x \; J(x) + \frac{\mu}{2} \left\| \Phi x - y^{k+1} \right\|_2^2 .$$

The problem in the second step can be solved by the algorithms reviewed above. Bregman iterations were introduced in [link] for constrained total-variation minimization problems and were proved to converge for closed, convex functions J(x). In [link], the technique is applied to [link] with J(x) = ‖x‖_1 and shown to converge in a finite number of steps for any μ > 0. For moderate μ, the number of iterations needed is typically fewer than five. Compared with the alternative of solving [link] by directly attacking the unconstrained problem in [link] with a very large μ, Bregman iterations are often more stable and sometimes much faster.
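A minimal sketch of these Bregman iterations, reusing the fpc routine above as the subproblem solver, might look as follows; the value of μ, the number of outer iterations, and the inner continuation schedule are illustrative assumptions, and the 2/μ rescaling simply converts the subproblem into the form that fpc minimizes.

def bregman(Phi, y, mu=10.0, outer_iters=5):
    # Bregman iterations for min_x ||x||_1 subject to Phi x = y.
    y_k = np.zeros_like(y)
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(outer_iters):
        y_k = y_k + (y - Phi @ x_hat)   # add the residual back into the data
        # Subproblem: min_x ||x||_1 + (mu/2) ||Phi x - y_k||_2^2, which equals
        # min_x (2/mu) ||x||_1 + ||Phi x - y_k||_2^2 up to a constant scaling.
        mu_final = 2.0 / mu
        mu_seq = np.geomspace(10.0 * mu_final, mu_final, 10)
        x_hat = fpc(Phi, y_k, mu_seq)
    return x_hat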

Discussion

All of the methods discussed in this section optimize a convex function (usually the ℓ1-norm) over a convex (possibly unbounded) set. This implies guaranteed convergence to the global optimum. In other words, provided that the sampling matrix Φ satisfies the conditions specified in "Signal recovery via ℓ1 minimization", convex optimization methods will recover the underlying signal x. In addition, convex relaxation methods guarantee stable recovery by reformulating the recovery problem as either the SOCP or the unconstrained formulation.

Source:  OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011 Download for free at http://legacy.cnx.org/content/col11133/1.5