2013
DOI: 10.1109/tsp.2013.2272287

Expectation-Maximization Gaussian-Mixture Approximate Message Passing

Abstract: When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were known a priori, then one could use computationally efficient approximate message passing (AMP) techniques for nearly minimum MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like LASSO (which is nearly minimax optimal) at the cost …
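The truncated abstract sets up the trade-off the paper targets: AMP is nearly MMSE-optimal when the coefficient prior is known, while prior-agnostic methods such as LASSO trade MSE for robustness. As a rough illustration of the EM-GM-AMP idea named in the title, the sketch below runs an AMP loop whose scalar denoiser assumes a Bernoulli Gaussian-mixture prior and refits the sparsity rate between iterations in an EM-like fashion. The function names, initializations, noise-variance update, and the fact that only the sparsity rate (not the mixture means, variances, or weights) is refit are simplifying assumptions for this sketch, not the paper's actual algorithm.

```python
import numpy as np

def gm_denoiser(r, tau, lam, w, mu, var):
    """Posterior mean/variance of x given r = x + N(0, tau), under the prior
    p(x) = (1 - lam) * delta(x) + lam * sum_k w[k] * Normal(x; mu[k], var[k])."""
    rk = r[:, None]                                        # (N, 1)
    s = var[None, :] + tau                                 # marginal variance of r under component k
    log_nz = (np.log(lam * w)[None, :]
              - 0.5 * (np.log(2 * np.pi * s) + (rk - mu[None, :]) ** 2 / s))
    log_z0 = np.log(1.0 - lam) - 0.5 * (np.log(2 * np.pi * tau) + r ** 2 / tau)
    m = np.maximum(log_nz.max(axis=1), log_z0)             # log-sum-exp stabilization
    nz, z0 = np.exp(log_nz - m[:, None]), np.exp(log_z0 - m)
    norm = nz.sum(axis=1) + z0
    beta = nz / norm[:, None]                              # posterior weight of each nonzero component
    pi = 1.0 - z0 / norm                                   # posterior probability that x != 0
    nu = 1.0 / (1.0 / tau + 1.0 / var)                     # per-component posterior variance
    gamma = nu[None, :] * (rk / tau + (mu / var)[None, :]) # per-component posterior mean
    xhat = (beta * gamma).sum(axis=1)
    xvar = (beta * (nu[None, :] + gamma ** 2)).sum(axis=1) - xhat ** 2
    return xhat, xvar, pi

def em_gm_amp_sketch(y, A, K=3, iters=50):
    """Toy AMP loop with a Gaussian-mixture denoiser and a crude EM refit of the
    sparsity rate. Initializations and the noise-variance update are heuristics."""
    M, N = A.shape
    lam, w = 0.1, np.ones(K) / K                           # assumed initial GM prior parameters
    mu, var = np.linspace(-1.0, 1.0, K), np.ones(K)
    xhat = np.zeros(N)
    z, tau = y.copy(), float(np.var(y))
    for _ in range(iters):
        r = xhat + A.T @ z                                 # pseudo-data seen by the denoiser
        xhat, xvar, pi = gm_denoiser(r, tau, lam, w, mu, var)
        z = y - A @ xhat + (N / M) * (xvar.mean() / tau) * z   # Onsager-corrected residual
        tau = np.linalg.norm(z) ** 2 / M                   # empirical effective noise variance
        lam = float(np.clip(pi.mean(), 1e-6, 1 - 1e-6))    # EM-style update of the sparsity rate
    return xhat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, N, k = 120, 256, 15
    A = rng.standard_normal((M, N)) / np.sqrt(M)           # i.i.d. Gaussian sensing matrix
    x = np.zeros(N); x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    y = A @ x + 0.01 * rng.standard_normal(M)
    xhat = em_gm_amp_sketch(y, A)
    print("NMSE (dB):", 10 * np.log10(np.sum((xhat - x) ** 2) / np.sum(x ** 2)))
```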

Cited by 409 publications (273 citation statements)
References 37 publications
“…In this section, we outline a methodology that takes a given set of BiG-AMP parameterized priors and tunes the parameter vector using an expectation-maximization (EM) [37] based approach, with the goal of maximizing the likelihood, i.e., finding the maximum-likelihood estimate of the parameter vector. The approach presented here can be considered a generalization of the GAMP-based work [38] to BiG-AMP.…”
Section: A. Parameter Tuning Via Expectation Maximization (citation type: mentioning; confidence: 99%)
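As context for the EM-based tuning described in this excerpt, the generic EM iteration for a parameter vector given observations and hidden variables is the surrogate maximization below; the symbols are generic placeholders, since the excerpt's own notation was lost in extraction.

```latex
\hat{\theta}^{(t+1)} \;=\; \arg\max_{\theta}\;
  \mathbb{E}_{X \sim p(\cdot \mid Y;\, \hat{\theta}^{(t)})}\!\left[ \ln p(X, Y;\, \theta) \right]
```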
“…In addition, we propose an adaptive damping [36] mechanism, an expectation-maximization (EM)-based [37] method of tuning the parameters of , , and (in case they are unknown), and methods to select the rank (in case it is unknown). In the case that , , and/or are completely unknown, they can be modeled as Gaussian-mixtures with mean/variance/weight parameters learned via EM [38]. In Part II [1], we detail the application of BiG-AMP to matrix completion, robust PCA, and dictionary learning, and present the results of an extensive numerical investigation into the performance of BiG-AMP in each application.…”
(citation type: mentioning; confidence: 99%)
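The adaptive damping mentioned in this excerpt is, in its generic form, a convex combination of consecutive iterates; one common form is sketched below with a hypothetical damping factor β, which BiG-AMP adapts per iteration (the specific adaptation rule is not given in the excerpt).

```latex
\hat{x}^{(t)} \;\leftarrow\; \beta\,\hat{x}^{(t)} + (1-\beta)\,\hat{x}^{(t-1)},
\qquad \beta \in (0, 1]
```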
“…In order to promote sparsity, we assign Bernoulli-Laplace priors to these vectors. Note that this kind of prior has been used successfully in different applications [5,6,7]. Based on these works, the following probability density function (pdf) is chosen as prior for m_{i,k}…”
Section: MP Vector m_k (citation type: mentioning; confidence: 99%)
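The excerpt cuts off before the pdf itself; the standard Bernoulli-Laplace form below is a reasonable guess at what is meant, with a spike weight ω and Laplace rate λ that are assumed symbols rather than the cited paper's notation.

```latex
p(m_{i,k}) \;=\; (1-\omega)\,\delta(m_{i,k}) \;+\; \omega\,\frac{\lambda}{2}\,e^{-\lambda\,|m_{i,k}|}
```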
“…One example of M could be a model that mimics the ocean and/or the atmosphere dynamics. In traditional DA settings, prior errors are described by Gaussian distributions, where, for the k-th mixture component, $x^b_k \in \mathbb{R}^{n \times 1}$ is the mean, $B_k \in \mathbb{R}^{n \times n}$ is the background error covariance, and $\alpha^b_k$ is the prior weight, for $1 \le k \le K$. These weights can be estimated, for instance, using the Expectation Maximization algorithm [6][7][8]. In sequential methods, it is common to assume Gaussian errors over observations $y \in \mathbb{R}^{m \times 1}$, $y \sim \mathcal{N}(H(x), R)$…”
Section: Introduction (citation type: mentioning; confidence: 99%)
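Assembling the quantities named in this excerpt, the Gaussian-mixture background prior and the Gaussian observation model it describes can be written as below; this is a direct transcription of the stated components, not additional structure from the cited paper.

```latex
p(x) \;=\; \sum_{k=1}^{K} \alpha^b_k\, \mathcal{N}\!\left(x;\; x^b_k,\; B_k\right),
\qquad y \sim \mathcal{N}\!\left(H(x),\; R\right)
```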