Compressive sensing (CS) is an evolving area in signal acquisition and reconstruction with many applications [1][2][3]. In CS, the goal is to efficiently measure and then reconstruct a signal under the assumption that the signal is sparse but the number and locations of its nonzeros are unknown. A linear CS problem is modeled as $y = Ax_s + e$, where $y \in \mathbb{R}^M$ contains the measurements, $x_s \in \mathbb{R}^N$ is the sparse solution, and $e$ is the noise, with $M \ll N$ [4][5][6]. Here $A = \Phi\Psi$, where $\Phi$ is the sensing matrix and $\Psi$ is a proper basis in which $x_s$ is sparse. There are three main approaches to solving for $x_s$: greedy-based, convex-based, and sparse Bayesian learning (SBL) algorithms. Here, we consider the SBL approach. Specifically, we place a Gaussian-Bernoulli prior on the solution to promote sparsity and then use variational Bayes (VB) inference to estimate the variables and parameters of the model. In the Gaussian-Bernoulli model, the sparse solution is defined as $x_s = (s \circ x)$, where $s$ is a binary support-learning vector, $x$ accounts for the values of the solution, and "$\circ$" denotes the element-wise product [7,8].

It turns out that using VB inference for the CS problem suffers from overfitting, mainly when the number of measurements is small. For example, for the CluSS-VB algorithm, Yu et al. [8] pointed out that the solution may tend to become non-sparse. In this work, we propose a VB-based SBL algorithm that uses a simple criterion to remove this effect and force the solution to be sparse. We also discuss and compare the update rules obtained from SBL using a fully hierarchical Bayesian approach via Markov chain Monte Carlo (MCMC) [7], the expectation-maximization (EM) algorithm, and VB inference. As expected, there is a very close relationship among all these algorithms, and we provide some intuition on how the equations of one approach map to those of another. We also provide simulation results comparing the performance of these algorithms on CS problems.
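To make the measurement model concrete, the following is a minimal sketch of generating a Gaussian-Bernoulli sparse signal and its noisy linear measurements. The dimensions, sparsity level, noise standard deviation, and the choice of an i.i.d. Gaussian sensing matrix with $\Psi = I$ (so $A = \Phi$) are illustrative assumptions, not choices prescribed by this work.

```python
# Sketch of the CS model y = A x_s + e with x_s = s o x (element-wise).
# All constants below (N, M, p_active, sigma_e) are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

N, M = 256, 64          # signal length and number of measurements, M << N
p_active = 0.05         # probability that a support entry s_i = 1
sigma_e = 0.01          # measurement-noise standard deviation

# Gaussian-Bernoulli signal: binary support vector s and Gaussian values x.
s = rng.random(N) < p_active          # s in {0,1}^N, support indicator
x = rng.standard_normal(N)            # dense Gaussian amplitudes
x_s = s * x                           # element-wise product: sparse solution

# Sensing: with Psi = I, A = Phi is drawn as an i.i.d. Gaussian matrix
# with scaled columns, a common choice in CS experiments.
A = rng.standard_normal((M, N)) / np.sqrt(M)

e = sigma_e * rng.standard_normal(M)  # additive noise
y = A @ x_s + e                       # the M noisy linear measurements

print(f"nonzeros: {s.sum()} of {N}, measurements: {M}")
```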
G-OSBL (VB): An SBL algorithm using VB

Suppose there is a model with parameters $\Theta$, hidden variables collected in $x$, and a set of observations denoted by $y$. The mean-field approximation to the joint posterior density $p(x, \Theta \mid y)$ can be represented as $p(x, \Theta \mid y) \approx q_x(x)\,q_\Theta(\Theta)$. The lower bound on the model log marginal likelihood can then be iteratively optimized by the following updates [9]
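In their generic mean-field form, these coordinate-ascent updates set each factor proportional to the exponentiated expectation of the log joint density under the other factor; this is the standard VB result, and the model-specific update rules follow by substituting the Gaussian-Bernoulli joint density (the exact parameterization in [9] may differ in detail):

$$ q_x(x) \propto \exp\!\left\{ \mathbb{E}_{q_\Theta(\Theta)}\!\left[ \ln p(y, x, \Theta) \right] \right\}, \qquad q_\Theta(\Theta) \propto \exp\!\left\{ \mathbb{E}_{q_x(x)}\!\left[ \ln p(y, x, \Theta) \right] \right\}. $$

Each update cannot decrease the lower bound, so alternating the two until the bound stabilizes yields the approximate posterior.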