2017 51st Asilomar Conference on Signals, Systems, and Computers
DOI: 10.1109/acssc.2017.8335470

Sparse Bayesian learning using variational Bayes inference based on a greedy criterion

Abstract: Compressive sensing (CS) is an evolving area in signal acquisition and reconstruction with many applications [1][2][3]. In CS the goal is to efficiently measure and then reconstruct a signal under the assumption that the signal is sparse, but the number and locations of its nonzeros are unknown. A linear CS problem is modeled as y = Ax_s + e, where y ∈ ℝ^M contains the measurements, x_s ∈ ℝ^N is the sparse solution, e is the noise, and M ≪ N [4][5][6]. Here A = ΦΨ, where Φ is the sensing matrix and Ψ is a proper basis in…
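As a concrete illustration of the model above, here is a minimal sketch that draws a Gaussian sensing matrix and generates measurements y = Ax_s + e. The dimensions M and N, the sparsity level K, the identity sparsifying basis, and the noise scale are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the CS measurement model y = A x_s + e from the abstract.
# M, N, K, the Gaussian sensing matrix, and the noise scale are assumptions.
import numpy as np

rng = np.random.default_rng(0)

M, N, K = 64, 256, 8                               # M measurements, N-dim signal, K nonzeros (M << N)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)     # sensing matrix
Psi = np.eye(N)                                    # sparsifying basis (identity for simplicity)
A = Phi @ Psi                                      # measurement matrix A = Phi @ Psi

x_s = np.zeros(N)                                  # sparse solution: K nonzeros at unknown locations
support = rng.choice(N, size=K, replace=False)
x_s[support] = rng.standard_normal(K)

e = 0.01 * rng.standard_normal(M)                  # measurement noise
y = A @ x_s + e                                    # linear CS measurements
```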

Cited by 5 publications (2 citation statements) | References 11 publications (17 reference statements)
"…The measurement matrix can be defined as A = ΦΨ, where Φ is the sensing design matrix and Ψ is a proper sparsifying basis. There exist various approaches to solve for x_s in (1), including greedy-based, convex-based, thresholding-based, and sparse Bayesian learning (SBL) algorithms [27–64]. Typically, the performance of CS reconstruction is measured in terms of the mean-squared reconstruction error.…"
Section: Introduction
Confidence: 99%
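To make the greedy family concrete, the sketch below implements orthogonal matching pursuit (OMP), one standard greedy CS solver, and evaluates the mean-squared reconstruction error mentioned above. OMP is shown only as a representative baseline, not the paper's algorithm; it reuses A, y, x_s, and K from the model sketch above.

```python
# Orthogonal matching pursuit (OMP): one representative greedy CS solver.
# Not the paper's algorithm; reuses A, y, x_s, K from the model sketch above.
import numpy as np

def omp(A, y, k):
    """Greedily select k columns of A, then least-squares fit on that support."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit restricted to the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, K)
mse = np.mean((x_hat - x_s) ** 2)   # mean-squared reconstruction error
print(f"reconstruction MSE: {mse:.2e}")
```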
"…A prior favoring sparsity or compressibility in x_s can be represented in the SBL framework via a Gaussian-inverse Gamma (GiG), Laplace-inverse Gamma (LiG), or Bernoulli–Gaussian-inverse Gamma (BGiG) prior, the last often referred to as a spike-and-slab prior [27, 46–59]. The inference on the parameters and hidden variables of these models is usually performed using Markov chain Monte Carlo (MCMC) or variational Bayes (VB) [27, 45–52].…"
Section: Introduction
Confidence: 99%
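The sketch below spells out one such hierarchy: variational Bayes inference for SBL under a Gaussian-inverse Gamma (GiG) prior, using the standard mean-field updates. The hyperparameters a, b, c, d and the fixed iteration count are illustrative assumptions, and the code implements plain VB-SBL, not the paper's greedy selection criterion. It reuses A and y from the model sketch above.

```python
# Standard mean-field VB for SBL with a Gaussian-inverse Gamma (GiG) prior:
# x_i ~ N(0, 1/alpha_i), alpha_i ~ Gamma(a, b), noise precision beta ~ Gamma(c, d).
# A sketch of plain VB-SBL, not the paper's greedy VB scheme; reuses A, y above.
import numpy as np

def vb_sbl_gig(A, y, n_iter=50, a=1e-6, b=1e-6, c=1e-6, d=1e-6):
    M, N = A.shape
    alpha = np.ones(N)          # E[alpha_i]: per-coefficient precisions
    beta = 1.0                  # E[beta]: noise precision
    for _ in range(n_iter):
        # q(x) = N(mu, Sigma): Gaussian factor over the sparse coefficients.
        Sigma = np.linalg.inv(beta * A.T @ A + np.diag(alpha))
        mu = beta * Sigma @ A.T @ y
        # q(alpha_i) = Gamma(a + 1/2, b + (mu_i^2 + Sigma_ii)/2);
        # a large E[alpha_i] effectively prunes coefficient i.
        alpha = (a + 0.5) / (b + 0.5 * (mu**2 + np.diag(Sigma)))
        # q(beta) = Gamma(c + M/2, d + E[||y - Ax||^2]/2).
        resid = y - A @ mu
        e_res = resid @ resid + np.trace(A @ Sigma @ A.T)
        beta = (c + 0.5 * M) / (d + 0.5 * e_res)
    return mu, alpha, beta

x_vb, alpha, beta = vb_sbl_gig(A, y)
```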