This module provides an overview of the application of Bayesian methods to compressive sensing and sparse recovery.

Setup

Throughout this course, we have almost exclusively worked within a deterministic signal framework; that is, our signal x is fixed and belongs to a known set of signals. In this section, we depart from this framework and assume that the sparse (or compressible) signal of interest arises from a known probability distribution, i.e., we assume sparsity-promoting priors on the elements of x, and recover from the stochastic measurements y = Φx a probability distribution on each nonzero element of x. Such an approach falls under the purview of Bayesian methods for sparse recovery.
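As a simple illustration of this setup, the following sketch generates a sparse signal x and its measurements y = Φx; the dimensions, sparsity level, and the prior placed on the nonzero entries are illustrative choices, not taken from the text.

```python
import numpy as np

# Minimal sketch of the measurement model y = Phi @ x with a sparse x.
# N, M, K and the Gaussian prior on the nonzeros are illustrative assumptions.
rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                         # signal length, measurements, sparsity

x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.normal(size=K)              # nonzero entries drawn from a prior

Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # sensing matrix
y = Phi @ x                                  # measurements
```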

The algorithms discussed in this section represent a departure from the conventional sparse recovery techniques typically used in compressive sensing (CS). We note that none of these algorithms come with guarantees on the number of measurements required or on the fidelity of signal reconstruction; indeed, in a Bayesian signal modeling framework there is no well-defined notion of “reconstruction error”. However, such methods do provide insight into developing recovery algorithms for rich classes of signals, and may be of considerable practical interest.

Sparse recovery via belief propagation

As we will see later in this course, there are significant parallels to be drawn between error correcting codes and sparse recovery [link]. In particular, sparse codes such as LDPC codes have had great success. The advantages that sparse coding matrices offer, namely efficient encoding of signals and low-complexity decoding algorithms, carry over to CS encoding and decoding when a sparse sensing matrix Φ is used. The sparsity of the Φ matrix is analogous to the sparsity of LDPC coding graphs.

Factor graph depicting the relationship between the variables involved in CS decoding using BP. Variable nodes are black and the constraint nodes are white.

A sensing matrix Φ that defines the relation between the signal x and measurements y can be represented as a bipartite graph of signal coefficient nodes x(i) and measurement nodes y(i) [link], [link]. The factor graph in [link] represents the relationship between the signal coefficients and measurements in the CS decoding problem.
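To make the bipartite-graph view concrete, the short sketch below builds a sparse Φ and reads off the factor-graph edges, one per nonzero entry of Φ; the dimensions and density are hypothetical.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Hypothetical sketch: a sparse sensing matrix Phi viewed as a bipartite factor graph.
# An edge connects measurement node j to coefficient node i whenever Phi[j, i] != 0.
N, M = 20, 8
Phi = sparse_random(M, N, density=0.15, random_state=1, format="csr")

edges = list(zip(*Phi.nonzero()))   # (measurement node j, coefficient node i) pairs
print(f"{len(edges)} edges in the factor graph")
```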

The choice of signal probability density is of practical interest. In many applications, the signals of interest need to be modeled as being compressible (as opposed to being strictly sparse). This behavior is modeled by a two-state Gaussian mixture distribution, with each signal coefficient taking either a “large” or “small” coefficient value state. With the elements of x assumed i.i.d. under this model, small coefficients occur far more frequently than large coefficients. Other distributions besides the two-state Gaussian mixture may also be used to model the coefficients, e.g., an i.i.d. Laplace prior on the coefficients of x.
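A minimal sketch of such a two-state Gaussian mixture prior is given below; the mixing weight and variances are illustrative, with the “small” state made far more probable than the “large” state.

```python
import numpy as np

# Sketch of a two-state Gaussian mixture prior for compressible coefficients.
# p_large and the two standard deviations are illustrative assumptions.
rng = np.random.default_rng(2)
N = 1000
p_large = 0.05                      # probability of the "large" state
sigma_large, sigma_small = 10.0, 0.1

state = rng.random(N) < p_large     # True -> large-coefficient state
x = np.where(state,
             rng.normal(0, sigma_large, N),
             rng.normal(0, sigma_small, N))
```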

The ultimate goal is to estimate (i.e., decode) x, given y and Φ. The decoding problem takes the form of a Bayesian inference problem in which we want to approximate the marginal distributions of each of the x(i) coefficients conditioned on the observed measurements y(i). We can then obtain the maximum likelihood (ML) or maximum a posteriori (MAP) estimates of the coefficients from these distributions. This sort of inference can be solved using a variety of methods; for example, the popular belief propagation (BP) method [link] can be applied to solve for the coefficients approximately. Although exact inference in arbitrary graphical models is an NP-hard problem, inference using BP can be employed when Φ is sparse enough, i.e., when most of the entries in the matrix are equal to zero.
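The scalar computation below is a toy illustration of the kind of Bayesian update a BP variable node performs: a single coefficient with a two-state Gaussian mixture prior is observed in Gaussian noise, and its posterior state probability and posterior-mean (MMSE) estimate are computed. All numerical values are illustrative assumptions, not part of the algorithm in the references.

```python
import numpy as np
from scipy.stats import norm

# Toy scalar example of the Bayesian update underlying BP decoding.
p_large, s_large, s_small = 0.05, 10.0, 0.1   # mixture prior parameters (assumed)
s_noise = 1.0                                  # measurement noise std (assumed)
z = 3.2                                        # noisy observation of the coefficient

# Evidence for each state: z ~ N(0, s_k^2 + s_noise^2) under state k.
ev_large = p_large * norm.pdf(z, 0, np.hypot(s_large, s_noise))
ev_small = (1 - p_large) * norm.pdf(z, 0, np.hypot(s_small, s_noise))
post_large = ev_large / (ev_large + ev_small)  # posterior prob. of "large" state

# MMSE estimate: mixture of per-state shrinkage (Wiener) estimates.
shrink = lambda s: s**2 / (s**2 + s_noise**2) * z
x_mmse = post_large * shrink(s_large) + (1 - post_large) * shrink(s_small)
print(post_large, x_mmse)
```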

Sparse Bayesian learning

Another probabilistic approach to estimating the components of x uses Relevance Vector Machines (RVMs). An RVM is essentially a Bayesian learning method that produces sparse classification by linearly weighting a small number of fixed basis functions from a large dictionary of potential candidates (for more details the interested reader may refer to [link], [link]). From the CS perspective, we may view this as a method to determine the elements of a sparse x which linearly weight the basis functions comprising the columns of Φ.

The RVM setup employs a hierarchy of priors: first, a zero-mean Gaussian prior is assigned to each of the N elements of x; subsequently, a Gamma prior is assigned to the inverse-variance α_i of the i-th Gaussian prior. Each α_i therefore controls the strength of the prior on its associated weight x_i. If x is the sparse vector to be reconstructed, its associated Gaussian prior is given by:

p(x \mid \alpha) = \prod_{i=1}^{N} \mathcal{N}(x_i \mid 0, \alpha_i^{-1})

and the Gamma prior on α is written as:

p(\alpha \mid a, b) = \prod_{i=1}^{N} \Gamma(\alpha_i \mid a, b)

The overall prior on x can be analytically evaluated to be the Student-t distribution, which can be designed to peak at x_i = 0 with an appropriate choice of a and b. This enables the desired solution x to be sparse. The RVM approach can be visualized using a graphical model similar to the one in "Sparse recovery via belief propagation". Using the observed measurements y, the posterior density on each x_i is estimated by an iterative algorithm (e.g., Markov chain Monte Carlo (MCMC) methods). For a detailed analysis of the RVM with a measurement noise prior, refer to [link], [link].
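As a hedged numerical check of this statement, the sketch below integrates the Gaussian prior against the Gamma hyperprior for a single coefficient and compares the result to the corresponding Student-t density (2a degrees of freedom, scale sqrt(b/a)); the hyperparameter values are illustrative.

```python
import numpy as np
from scipy import integrate, stats

# Check that the Gaussian-Gamma hierarchy marginalizes to a Student-t prior.
a, b = 2.0, 1.0        # illustrative hyperparameters
x = 1.3                # point at which the marginal prior is evaluated

# Marginal prior: integrate N(x | 0, alpha^-1) * Gamma(alpha | a, rate=b) over alpha.
integrand = lambda alpha: (np.sqrt(alpha / (2 * np.pi)) * np.exp(-alpha * x**2 / 2)
                           * stats.gamma.pdf(alpha, a, scale=1 / b))
p_marginal, _ = integrate.quad(integrand, 0, np.inf)

# Closed form: Student-t with 2a degrees of freedom and scale sqrt(b/a).
p_student = stats.t.pdf(x, df=2 * a, scale=np.sqrt(b / a))
print(p_marginal, p_student)   # the two values agree
```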

Alternatively, we can eliminate the need to set the hyperparameters a and b as follows. Assuming Gaussian measurement noise with mean 0 and variance σ^2, we can directly form the marginal log-likelihood for α and maximize it by the EM algorithm (or by direct differentiation) to find estimates for α:

\mathcal{L}(\alpha) = \log p(y \mid \alpha, \sigma^2) = \log \int p(y \mid x, \sigma^2) \, p(x \mid \alpha) \, dx .
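Since both densities under the integral are Gaussian, this marginal likelihood has the closed form log N(y; 0, C) with C = σ²I + Φ diag(α)⁻¹ Φᵀ. The sketch below evaluates it directly; Φ, y, α, and σ² are illustrative placeholders rather than outputs of any particular algorithm.

```python
import numpy as np

# Hedged sketch: the RVM marginal log-likelihood L(alpha) = log N(y; 0, C),
# with C = sigma^2 I + Phi diag(alpha)^-1 Phi^T.
def marginal_log_likelihood(alpha, Phi, y, sigma2):
    M = Phi.shape[0]
    C = sigma2 * np.eye(M) + Phi @ np.diag(1.0 / alpha) @ Phi.T
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (M * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

rng = np.random.default_rng(3)
M, N = 16, 64
Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # placeholder sensing matrix
y = rng.normal(size=M)                       # placeholder measurements
alpha = np.ones(N)                           # placeholder hyperparameters
print(marginal_log_likelihood(alpha, Phi, y, sigma2=0.1))
```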

Bayesian compressive sensing

Unfortunately, evaluating the log-likelihood in the original RVM setup involves inverting an N × N matrix, making the algorithm's complexity O(N^3). A fast alternative algorithm for the RVM is available which monotonically maximizes the marginal likelihoods of the priors by gradient ascent, resulting in an algorithm with complexity O(NM^2). Here, basis functions are sequentially added and deleted, thus building the model up constructively, and the true sparsity of the signal x is exploited to minimize model complexity. This is known as Fast Marginal Likelihood Maximization, and is employed by the Bayesian Compressive Sensing (BCS) algorithm [link] to efficiently evaluate the posterior densities of the x_i.

A key advantage of the BCS algorithm is that it enables evaluation of “error bars” on each estimated coefficient of x; these give us an idea of the (in)accuracy of the estimates. The error bars can be used to adaptively select the linear projections (i.e., the rows of the matrix Φ) to reduce uncertainty in the signal. This provides an intriguing connection between CS and machine learning techniques such as experimental design and active learning [link], [link].
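For concreteness, under the Gaussian noise and Gaussian prior assumptions above, the posterior on x is Gaussian with covariance Σ = (A + ΦᵀΦ/σ²)⁻¹, where A = diag(α), and the error bars are the square roots of its diagonal. A minimal sketch follows, with α and σ² treated as given, whereas a full RVM/BCS run would estimate them.

```python
import numpy as np

# Sketch of the "error bars": posterior mean and per-coefficient uncertainties
# of x under a Gaussian prior p(x | alpha) and Gaussian measurement noise.
def posterior_error_bars(alpha, Phi, y, sigma2):
    A = np.diag(alpha)
    Sigma = np.linalg.inv(A + Phi.T @ Phi / sigma2)   # posterior covariance
    mean = Sigma @ Phi.T @ y / sigma2                 # posterior mean of x
    return mean, np.sqrt(np.diag(Sigma))              # estimate and error bars
```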

Source:  OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011 Download for free at http://legacy.cnx.org/content/col11133/1.5