2018
DOI: 10.1073/pnas.1705490115

Near-optimal matrix recovery from random linear measurements

Abstract (Significance): Various problems of science and engineering can be reduced to recovery of an unknown matrix from a small number of random linear measurements. We present two matrix recovery algorithms based on approximate message passing, a framework originally developed for sparse vector recovery problems. Our algorithms typically converge exponentially fast. Matrix recovery algorithms can be compared in terms of the number of measurements required for successful recovery. One of our algorithms requires the same …
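The paper's own AMP-based algorithms are only summarized above, but the measurement model they address is easy to make concrete: m scalar measurements ⟨A_k, X⟩ of an n×n matrix whose low rank makes recovery possible with m far below n². The sketch below is illustrative only and uses generic rank-projected gradient descent (iterative hard thresholding), not the algorithms proposed in the paper; the dimensions, step size, and iteration count are arbitrary choices for the demo.

```python
# Illustrative sketch: recover a low-rank matrix X from random linear
# measurements y_k = <A_k, X>. Uses generic rank-projected gradient descent
# (iterative hard thresholding), NOT the paper's AMP-based algorithms.
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 400                      # matrix size, rank, number of measurements

# Ground-truth rank-r matrix and Gaussian measurement matrices A_k
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((m, n, n)) / np.sqrt(m)   # normalized so A*A ~ identity
y = np.einsum('kij,ij->k', A, X_true)             # y_k = <A_k, X_true>

def rank_project(M, r):
    """Keep only the top-r singular components of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = np.zeros((n, n))
for _ in range(300):
    resid = y - np.einsum('kij,ij->k', A, X)      # measurement residual
    grad = np.einsum('k,kij->ij', resid, A)       # adjoint of the measurement operator
    X = rank_project(X + grad, r)                 # gradient step, then rank projection

print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```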


Cited by 10 publications (6 citation statements)
References 45 publications
“…This algorithm was implemented in [Don13] and partly motivated the predictions of [DGM13]. A recent detailed study (and generalizations) can be found in [RG17], showing that its phase transition matches the one of nuclear norm minimization, predicted in [DGM13] and proved in [OTH13,ALMT14].…”
Section: Vignette #1: Matrix Compressed Sensing (mentioning)
confidence: 96%
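For context on the comparison made in the quoted passage, the nuclear-norm-minimization baseline recovers a low-rank matrix by solving a convex program: minimize ‖X‖_* subject to consistency with the linear measurements. The sketch below is illustrative only, with hypothetical small dimensions, generic Gaussian measurements, and cvxpy assumed as the solver interface; it is not code from any of the cited works.

```python
# Generic sketch of nuclear-norm minimization for matrix compressed sensing:
# minimize ||X||_* subject to <A_k, X> = y_k. Hypothetical dimensions.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 15, 1, 120
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((m, n, n))
y = np.einsum('kij,ij->k', A, X_true)             # y_k = <A_k, X_true>

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(A[k], X)) == y[k] for k in range(m)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()
print("relative error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```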
“…Thus, if this low-rank signal structure can be exploited by a linear inference algorithm, then bilinear inference can be accomplished. This is precisely what was proposed in [36], building on the nonseparable-denoising version of the AMP algorithm from [37]. A rigorous analysis of "lifted AMP" was presented in [38].…”
Section: B. Prior Work (mentioning)
confidence: 80%
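The "lifting" step described in the quoted passage, which turns a bilinear inference problem into a linear one over a low-rank matrix, rests on a simple identity: the bilinear measurement b^T M_k c equals the linear measurement ⟨M_k, X⟩ of the rank-1 matrix X = b c^T. The snippet below checks this identity with hypothetical dimensions and is purely illustrative.

```python
# Minimal check of the lifting identity: a bilinear measurement y_k = b^T M_k c
# is exactly a linear measurement <M_k, X> of the rank-1 lifted matrix X = b c^T.
import numpy as np

rng = np.random.default_rng(1)
p, q, m = 5, 4, 3
b, c = rng.standard_normal(p), rng.standard_normal(q)
M = rng.standard_normal((m, p, q))

y_bilinear = np.array([b @ M[k] @ c for k in range(m)])   # bilinear measurements
X = np.outer(b, c)                                        # rank-1 lift
y_linear = np.einsum('kij,ij->k', M, X)                   # linear measurements of X

assert np.allclose(y_bilinear, y_linear)
print("bilinear and lifted linear measurements agree")
```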
“…(63). The power of these tools is further enhanced by new computational methods that allow improved design of experiments, such as recommender systems (62) and compressed sensing (64,65). Finally, approaches leveraging allelic series of sgRNAs with different inhibition/activation levels (35), or with paired CRISPRi/a systems to look for suppressive interactions (19) may be able to exploit single-cell phenotypic heterogeneity to more finely dissect gene-level perturbation responses at scale.…”
Section: Discussion (mentioning)
confidence: 99%
“…CRISPRa K562 or HUDEP2 cells were concentrated by cytocentrifugation at 400g for 8 minutes onto glass slides using a Shandon Cytospin 3 (Thermo Fisher Scientific). Slides were fixed in methanol for 30 seconds, stained in May-Grünwald solution (Sigma-Aldrich) for 10 minutes, and stained in a 1:5 dilution of Giemsa solution (Sigma-Aldrich) in distilled water for 20 minutes. Stained slides were then rinsed in distilled water and allowed to dry before being covered with a coverslip.…”
Section: Cytology (mentioning)
confidence: 99%