2022
DOI: 10.1561/2200000092

A Unifying Tutorial on Approximate Message Passing

Cited by 25 publications (14 citation statements) · References: 0 publications

Citation statements, ordered by relevance:
“…Another important class of algorithms uses approximate message passing (AMP), which is a powerful technique that has been utilized extensively in high-dimensional statistics [38]. Variants of AMP have successfully been devised with theoretical guarantees in several inverse problems with generative priors, including linear forward models [86], [39], spiked matrix recovery [6], and phase retrieval [5].…”
Section: E. Further Developments
confidence: 99%
“…The reformulation of τ ISTA iterations onto a τ -layer neural network with parameters that can be further tuned, as described in (38), (39), and (40), is referred to as LISTA. It is often referred to as a model-based learning method, because the network architecture is specifically defined according to a particular measurement model (the linear model), optimization procedure (Lasso), and iterative solver (ISTA).…”
Section: B. The Unfolding Principle: LISTA
confidence: 99%
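The equations (38)-(40) cited in this excerpt belong to the citing paper and are not reproduced here. As a rough illustration of the unfolding idea, here is a minimal NumPy sketch of plain ISTA for the Lasso (an illustrative toy under stated assumptions, not the cited paper's exact formulation); the comments mark the quantities that LISTA promotes to per-layer learnable parameters.

```python
# Plain ISTA for the Lasso: min_x 0.5*||y - A x||^2 + lam*||x||_1.
# Each iteration has the form x <- soft(W1 @ y + W2 @ x, theta); LISTA keeps
# this layer structure but learns (W1, W2, theta) per layer from data.
import numpy as np

def soft_threshold(v, theta):
    """Proximal operator of the l1 norm (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista(A, y, lam, num_iters=200):
    L = np.linalg.norm(A, ord=2) ** 2          # Lipschitz constant of the gradient
    W1 = A.T / L                               # fixed here; learnable in LISTA
    W2 = np.eye(A.shape[1]) - A.T @ A / L      # fixed here; learnable in LISTA
    theta = lam / L                            # fixed here; per-layer in LISTA
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):                 # one iteration = one "layer"
        x = soft_threshold(W1 @ y + W2 @ x, theta)
    return x

# Tiny usage example on a random sparse-recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, y, lam=0.05)
```

Training the unfolded network end to end on (y, x) pairs is what makes the method "model-based": the architecture itself encodes the linear measurement model, the Lasso objective, and the ISTA update.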
“…To save space and to avoid introducing further notation, we only state an informal theorem of this result here, and we refer to the statement of the GAMP algorithm in [17, Section 4, Equation (55)] and its corresponding state evolution in [17, Section 4, Equations (57)-(58)]. For the GAMP algorithm in [17, Section 4, Equation (55)] and its state evolution in [17, Section 4, Equations (57)-(58)], the asymptotic equivalence of its iterates and its state evolution (see, for example, the asymptotic statement in [17, Section 4, Theorem 4.2]) can be upgraded to exponentially fast concentration with rates of…”
Section: Theorem 1 [VAMP and GVAMP Concentration]
confidence: 99%
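The GAMP equations (55) and (57)-(58) of [17] are not reproduced in this excerpt. As background for what a "state evolution" tracks, the sketch below computes a generic scalar state-evolution recursion by Monte Carlo for AMP in the linear model with a soft-threshold denoiser and a Bernoulli-Gaussian prior; this is a standard illustrative recursion, not the GAMP state evolution referenced above.

```python
# Generic scalar state evolution for AMP in the linear model y = A x + noise,
# with sampling ratio delta = m/n and noise variance sigma2. It tracks the
# effective noise level tau_t of the pseudo-observation X + tau_t * Z via
#   tau_{t+1}^2 = sigma2 + (1/delta) * E[(eta(X + tau_t Z; theta*tau_t) - X)^2],
# where eta is the soft-threshold denoiser, X follows the signal prior, and
# Z ~ N(0, 1).
import numpy as np

def soft_threshold(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def state_evolution(delta, sigma2, theta, eps, num_iters=30, mc=500_000, seed=0):
    """Monte Carlo state evolution for a Bernoulli(eps)-Gaussian prior."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(mc) * (rng.random(mc) < eps)   # sparse signal draws
    Z = rng.standard_normal(mc)
    tau2 = sigma2 + np.mean(X**2) / delta                  # tau_0^2 for x^0 = 0
    for _ in range(num_iters):
        tau = np.sqrt(tau2)
        mse = np.mean((soft_threshold(X + tau * Z, theta * tau) - X) ** 2)
        tau2 = sigma2 + mse / delta                        # iterate the recursion
    return tau2

# Example: 25% sampling ratio, 5% sparsity.
print(state_evolution(delta=0.25, sigma2=1e-4, theta=1.5, eps=0.05))
```

Concentration results of the kind quoted above quantify, at finite n, how tightly functionals of the AMP iterates cluster around the deterministic predictions of such a recursion.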
“…AMP-style algorithms can accommodate a range of estimation procedures for the models in (1.1)-(1.2), including maximum a posteriori (MAP) and minimum mean squared error (MMSE) estimation. See [17] for a tutorial on AMP.…”
Section: Introduction
confidence: 99%
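To make the MAP/MMSE distinction concrete at the level of a single AMP step, the minimal sketch below (generic scalar priors chosen for illustration; not taken from [17]) shows the two corresponding denoisers applied to the pseudo-observation r = x + τz.

```python
# MMSE estimation plugs the posterior mean E[X | r] into each AMP iteration;
# MAP estimation plugs in the posterior mode. Two simple scalar examples:
import numpy as np

def mmse_denoiser_gaussian(r, tau2, s2):
    """Posterior mean E[X | X + N(0, tau2) = r] under a N(0, s2) prior."""
    return (s2 / (s2 + tau2)) * r               # linear shrinkage

def map_denoiser_laplace(r, tau2, b):
    """Posterior mode under a Laplace(scale=b) prior: soft threshold at tau2/b."""
    theta = tau2 / b
    return np.sign(r) * np.maximum(np.abs(r) - theta, 0.0)

r = np.linspace(-3.0, 3.0, 7)
print(mmse_denoiser_gaussian(r, tau2=0.5, s2=1.0))
print(map_denoiser_laplace(r, tau2=0.5, b=1.0))
```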
“…In [AHMN23], it is shown that in the GOE case, the asymptotic behavior of ⟨µ, x⋆⟩, where x⋆ is now the LCP solution, can be evaluated with the help of an Approximate Message Passing (AMP) technique. Such techniques have recently aroused an intense and growing research effort in the fields of statistical physics, communication theory, or statistical Machine Learning [FVRS22]. In a word, given a function h : R × R × N → R and a random symmetric n × n so-called measurement matrix W, a standard AMP algorithm is an iterative algorithm of the form x^{t+1} = W h(x^t, η, t) + a “correction” term, where η = [η_i]_{i=1}^n ∈ R^n is a parameter vector, and where h(x^t, η, t) = [h(x^t_i, η_i, t)…”
Section: Introduction
confidence: 99%
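The quoted iteration can be made concrete with a short NumPy sketch. This is a minimal rendering under simplifying assumptions: h = tanh is a stand-in denoiser with the η- and t-dependence dropped, W is a GOE-normalized symmetric matrix, and the quoted “correction” is taken to be the standard Onsager term -b_t h(x^{t-1}) with b_t = (1/n) Σ_i h'(x_i^t).

```python
# Minimal symmetric AMP: x^{t+1} = W h(x^t) - b_t * h(x^{t-1}), with Onsager
# coefficient b_t = (1/n) * sum_i h'(x_i^t) and componentwise h = tanh.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# GOE-like symmetric matrix: off-diagonal entries with variance ~ 1/n.
G = rng.standard_normal((n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2.0)

def h(v):
    return np.tanh(v)

def h_prime(v):
    return 1.0 - np.tanh(v) ** 2

x_prev = np.zeros(n)
x = rng.standard_normal(n)             # non-degenerate initialization
for t in range(20):
    b = np.mean(h_prime(x))            # Onsager coefficient b_t
    x_next = W @ h(x) - b * h(x_prev)  # W h(x^t) plus the correction term
    x_prev, x = x, x_next
```

The Onsager term is precisely the correction that, in the large-n limit, keeps each iterate distributed like a scaled signal plus Gaussian noise, so that the algorithm's behavior is summarized by a low-dimensional state evolution recursion.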