2022
DOI: 10.48550/arxiv.2208.03313
Preprint
A Non-Asymptotic Framework for Approximate Message Passing in Spiked Models

Abstract: Approximate message passing (AMP) emerges as an effective iterative paradigm for solving high-dimensional statistical problems. However, prior AMP theory, which focused mostly on high-dimensional asymptotics, fell short of predicting the AMP dynamics when the number of iterations surpasses o(log n / log log n) (with n the problem dimension). To address this inadequacy, this paper develops a non-asymptotic framework for understanding AMP in spiked matrix estimation. Built upon a new decomposition of AMP updates and co…
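The AMP iteration the abstract refers to can be sketched concretely. The snippet below is a minimal illustrative implementation for a spiked Wigner model with a tanh denoiser; the model setup, denoiser choice, and parameters are assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

# Hypothetical sketch: AMP for a spiked Wigner model Y = (lam/n) v v^T + W,
# where v is a +/-1 spike and W is symmetric Gaussian noise.
rng = np.random.default_rng(0)
n, lam = 500, 2.0
v = np.sign(rng.standard_normal(n))            # +/-1 signal vector
W = rng.standard_normal((n, n)) / np.sqrt(n)
W = (W + W.T) / np.sqrt(2)                     # symmetrize; entries ~ N(0, 1/n)
Y = (lam / n) * np.outer(v, v) + W

eta = np.tanh                                  # denoiser matched to the +/-1 prior
x_prev = np.zeros(n)
x = rng.standard_normal(n)                     # uninformative random initialization
for t in range(20):
    s = eta(x)
    b = np.mean(1 - s**2)                      # Onsager correction: average of eta'(x)
    x, x_prev = Y @ s - b * eta(x_prev), x     # AMP update with memory term

# Normalized overlap between the estimate and the planted spike (sign-invariant)
overlap = abs(eta(x) @ v) / (np.linalg.norm(eta(x)) * np.sqrt(n))
print(f"normalized overlap with spike: {overlap:.2f}")
```

Above the spectral threshold (lam > 1 in this normalization), the overlap should rise well away from zero within a handful of iterations; the Onsager term `b * eta(x_prev)` is what distinguishes AMP from naive power iteration.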

Cited by 3 publications (3 citation statements)
References 87 publications
“…For example, when the denoisers are non-separable [19], to settings where the distributional parameters of the noise and signal must be learned [18], or to AMP algorithms that work under more general assumptions on the measurement matrix [11,15,16,34,43]. Another interesting direction for future study is whether the analysis of Li and Wei [25] can be used for the type of AMP algorithms studied here to improve the dependence between the problem size and number of iterations for which the concentration results hold.…”
Section: Discussion
confidence: 99%
“…Recent work by Li and Wei [25] extends the finite sample analysis of [38] to a related AMP algorithm for spiked matrix estimation. This work shows rates exponential in N for up to O(N / poly log N) iterations by using novel proof methods, improving on the O(log N / log log N) iteration guarantees found in this paper and in [38].…”
Section: Related Work
confidence: 99%
“…To improve the estimation accuracy, the local refinement stage invokes nonconvex optimization algorithms like alternating minimization [21,22], gradient descent [2,7], manifold-based optimization [35], block coordinate descent [37], etc. These were motivated in part by the effectiveness of nonconvex optimization in solving nonconvex low-complexity problems [32,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60]; see an overview of recent developments in [33]. Various statistical and computational guarantees have been provided for these algorithms, all of which have been shown to run in polynomial time.…”
Section: Prior Art
confidence: 99%