When recovering a sparse signal from noisy compressive linear measurements,
the distribution of the signal's non-zero coefficients can have a profound
effect on recovery mean-squared error (MSE). If this distribution were a priori
known, then one could use computationally efficient approximate message passing
(AMP) techniques for nearly minimum MSE (MMSE) recovery. In practice, though,
the distribution is unknown, motivating the use of robust algorithms like
LASSO---which is nearly minimax optimal---at the cost of significantly larger
MSE for non-least-favorable distributions. As an alternative, we propose an
empirical-Bayesian technique that simultaneously learns the signal distribution
while MMSE-recovering the signal---according to the learned
distribution---using AMP. In particular, we model the non-zero distribution as
a Gaussian mixture, and learn its parameters through expectation maximization,
using AMP to implement the expectation step. Numerical experiments on a wide
range of signal classes confirm the state-of-the-art performance of our
approach, in both reconstruction error and runtime, in the high-dimensional
regime, for most (but not all) sensing operators.
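The expectation step above reduces to a per-coefficient MMSE denoising under the learned Gaussian-mixture prior. The following is a minimal standalone sketch of that denoiser for an AWGN-corrupted observation r = x + N(0, tau); the function name and variable names are illustrative, not the paper's code.

```python
import numpy as np

def gm_mmse_denoise(r, tau, w, theta, phi):
    """E[x | r] for prior x ~ sum_k w[k]*N(theta[k], phi[k]) and
    observation r = x + N(0, tau), computed componentwise."""
    r = np.asarray(r, dtype=float)[:, None]
    var = phi + tau                      # variance of r under component k
    # responsibilities: beta_k(r) proportional to w_k * N(r; theta_k, phi_k + tau)
    log_beta = np.log(w) - 0.5 * np.log(2 * np.pi * var) - 0.5 * (r - theta) ** 2 / var
    beta = np.exp(log_beta - log_beta.max(axis=1, keepdims=True))
    beta /= beta.sum(axis=1, keepdims=True)
    # per-component posterior means, then mix
    gamma = (r * phi + tau * theta) / (phi + tau)
    return (beta * gamma).sum(axis=1)
```

With a single zero-mean unit-variance component and tau = 1, the estimate is the familiar Wiener shrinkage r/2.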
The generalized approximate message passing (GAMP) algorithm is an efficient method for MAP or approximate-MMSE estimation of x observed through a noisy version of the transform coefficients z = Ax. In fact, for large zero-mean i.i.d. sub-Gaussian A, GAMP is characterized by a state evolution whose fixed points, when unique, are optimal. For generic A, however, GAMP may diverge. In this paper, we propose adaptive-damping and mean-removal strategies that aim to prevent divergence. Numerical results demonstrate significantly enhanced robustness to non-zero-mean, rank-deficient, column-correlated, and ill-conditioned A.
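As a toy illustration of why damping helps, consider a fixed-point iteration whose plain form diverges but whose damped form converges. This is a generic sketch, not the paper's adaptive scheme, which additionally adapts the damping factor online and removes the mean of A.

```python
import numpy as np

def damped_iteration(f, x0, beta=0.3, iters=200, tol=1e-10):
    """Fixed-point iteration x <- (1 - beta)*x + beta*f(x).
    A small beta trades speed for robustness, mirroring the role of
    damping inside GAMP's internal updates (illustrative)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_new = (1.0 - beta) * x + beta * f(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = -3x + 4 has fixed point x = 1, but the plain
# iteration x <- f(x) diverges (|slope| = 3 > 1). With beta = 0.3 the
# damped map has slope -0.2, so the iteration contracts to x = 1.
```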
The approximate message passing (AMP) algorithm originally proposed by Donoho, Maleki, and Montanari yields a computationally attractive solution to the usual ℓ1-regularized least-squares problem faced in compressed sensing, whose solution is known to be robust to the signal distribution. When the signal is drawn i.i.d. from a marginal distribution that is not least-favorable, better performance can be attained using a Bayesian variation of AMP. The latter, however, assumes that the distribution is perfectly known. In this paper, we navigate the space between these two extremes by modeling the signal as i.i.d. Bernoulli-Gaussian (BG) with unknown prior sparsity, mean, and variance, and the noise as zero-mean Gaussian with unknown variance, and we simultaneously reconstruct the signal while learning the prior signal and noise parameters. To accomplish this task, we embed the BG-AMP algorithm within an expectation-maximization (EM) framework. Numerical experiments confirm the excellent performance of our proposed EM-BG-AMP on a range of signal types.
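For reference, the ℓ1 (soft-threshold) AMP iteration of Donoho, Maleki, and Montanari can be sketched in a few lines. The threshold rule tau = alpha*||z||/sqrt(m) and the default alpha below are illustrative tuning choices, not the paper's parameterization.

```python
import numpy as np

def soft_threshold(r, tau):
    """Componentwise soft threshold: the proximal operator of tau*||.||_1."""
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

def amp_lasso(y, A, alpha=1.5, iters=50):
    """Soft-threshold AMP for sparse recovery from y = A x + noise."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.asarray(y, dtype=float).copy()
    for _ in range(iters):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)   # noise-level proxy
        r = x + A.T @ z                                # AMP pseudo-data
        x = soft_threshold(r, tau)
        # Onsager correction: eta'(r) = 1 on the support of x
        z = y - A @ x + (z / m) * np.count_nonzero(x)
    return x
```

For large i.i.d. Gaussian A, the pseudo-data r behaves like the true signal plus AWGN, which is what makes the Bayesian (denoiser-swapping) variations discussed above possible.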
We propose two novel approaches for the recovery of an (approximately) sparse signal from noisy linear measurements in the case that the signal is a priori known to be non-negative and obey given linear equality constraints, such as a simplex signal. This problem arises in, e.g., hyperspectral imaging, portfolio optimization, density estimation, and certain cases of compressive imaging. Our first approach solves a linearly constrained non-negative version of LASSO using the max-sum version of the generalized approximate message passing (GAMP) algorithm, where we consider both quadratic and absolute loss, and where we propose a novel approach to tuning the LASSO regularization parameter via the expectation maximization (EM) algorithm. Our second approach is based on the sum-product version of the GAMP algorithm, where we propose the use of a Bernoulli non-negative Gaussian-mixture signal prior and a Laplacian likelihood and propose an EM-based approach to learning the underlying statistical parameters. In both approaches, the linear equality constraints are enforced by augmenting GAMP's generalized-linear observation model with noiseless pseudo-measurements. Extensive numerical experiments demonstrate the state-of-the-art performance of our proposed approaches.
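The pseudo-measurement idea above amounts to appending extra rows to the observation model. As a minimal sketch (using plain least squares in place of GAMP, and a large weight in place of a truly noiseless measurement), here is how a simplex-sum constraint 1ᵀx = 1 can be folded into the linear model; all names and the weight value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 5
A = rng.standard_normal((m, n))
x_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])   # lies on the simplex
y = A @ x_true                                  # noiseless measurements

w = 1e6                                         # large weight ~ noiseless pseudo-measurement
A_aug = np.vstack([A, w * np.ones((1, n))])     # appended row enforces 1^T x = 1
y_aug = np.append(y, w * 1.0)
x_hat, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
```

GAMP would consume the same augmented (A_aug, y_aug) pair, with the pseudo-measurement row assigned zero noise variance rather than a finite weight.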
In cosparse analysis compressive sensing (CS), one seeks to estimate a non-sparse signal vector from noisy sub-Nyquist linear measurements by exploiting the knowledge that a given linear transform of the signal is cosparse, i.e., has sufficiently many zeros. We propose a novel approach to cosparse analysis CS based on the generalized approximate message passing (GAMP) algorithm. Unlike other AMP-based approaches to this problem, ours works with a wide range of analysis operators and regularizers. In addition, we propose a novel ℓ0-like soft-thresholder based on MMSE denoising for a spike-and-slab distribution with an infinite-variance slab. Numerical experiments on synthetic and practical datasets demonstrate advantages over existing AMP-based, greedy, and reweighted-ℓ1 approaches.
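To convey the flavor of an ℓ0-like thresholder, the sketch below computes the MMSE shrinkage E[x|r] = r · Pr{slab | r} for a spike-and-slab prior whose slab is flat (the infinite-variance limit). The single constant omega absorbs the improper slab height and the spike weight; this parameterization is illustrative and may differ from the paper's exact form.

```python
import numpy as np

def l0_like_shrink(r, tau, omega=1.0):
    """Shrinkage for r = x + N(0, tau) under a spike-and-slab prior with a
    flat (infinite-variance) slab. Near-identity for |r| >> sqrt(tau),
    strong attenuation near r = 0 (unlike the soft threshold's uniform
    shift), which gives it its ell_0-like character."""
    r = np.asarray(r, dtype=float)
    spike_lik = np.exp(-r**2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)
    p_slab = 1.0 / (1.0 + omega * spike_lik)       # posterior slab probability
    return r * p_slab
```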
The goal of hyperspectral unmixing is to decompose an electromagnetic spectral dataset measured over M spectral bands and T pixels into N constituent material spectra (or "endmembers") with corresponding spatial abundances. In this paper, we propose a novel approach to hyperspectral unmixing based on loopy belief propagation (BP) that enables the exploitation of spectral coherence in the endmembers and spatial coherence in the abundances. In particular, we partition the factor graph into spectral coherence, spatial coherence, and bilinear subgraphs, and pass messages between them using a "turbo" approach. To perform message passing within the bilinear subgraph, we employ the bilinear generalized approximate message passing algorithm (BiG-AMP), a recently proposed belief-propagation-based approach to matrix factorization. Furthermore, we propose an expectation-maximization (EM) strategy to tune the prior parameters and a model-order selection strategy to select the number of materials N. Numerical experiments conducted with both synthetic and real-world data show favorable unmixing performance relative to existing methods.
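The bilinear subgraph above corresponds to the standard linear mixing model, in which the M×T data matrix factors into endmember spectra times abundances. A small synthetic sketch of that generative model (dimensions and names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M, T, N = 50, 30, 3                       # bands, pixels, materials

S = rng.uniform(size=(M, N))              # endmember spectra (M x N), nonnegative
A = rng.dirichlet(np.ones(N), size=T).T   # abundances (N x T); each column is on the simplex
Y = S @ A                                  # noiseless data: each pixel is a convex
                                           # combination of the N endmember spectra
```

Unmixing is the inverse problem: recover S and A from a noisy Y, which is exactly the matrix factorization handled by BiG-AMP within the bilinear subgraph.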