2020
DOI: 10.48550/arxiv.2006.01065
Preprint

Hadamard Wirtinger Flow for Sparse Phase Retrieval

Abstract: We consider the problem of reconstructing an n-dimensional k-sparse signal from a set of magnitude-only measurements. Formulating the problem as an unregularized empirical risk minimization task, we study the sample complexity performance of gradient descent with Hadamard parametrization, which we call Hadamard Wirtinger flow (HWF). Provided knowledge of the signal sparsity k, we prove that a single step of HWF is able to recover the support from $O(k(x^{*}_{\max})^{-2} \log n)$ samples, where $x^{*}_{\max}$ is the largest co…
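
For intuition, below is a minimal numerical sketch of the general approach the abstract describes: gradient descent on the unregularized intensity loss under a Hadamard-type parametrization. The specific parametrization ($x = u \odot u - v \odot v$), the seed-coordinate initialization, the step size, and the problem sizes are illustrative assumptions, not necessarily the paper's exact HWF recipe.

```python
# Sketch: gradient descent on the unregularized intensity loss
#   L(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2
# with a Hadamard-type parametrization x = u * u - v * v (entrywise).
# Parametrization variant, initialization, step size, and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n, k, m = 100, 3, 600                        # dimension, sparsity, number of measurements
x_star = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_star[support] = rng.standard_normal(k)
x_star /= np.linalg.norm(x_star)             # normalize so that mean(y) ~ ||x*||^2 = 1

A = rng.standard_normal((m, n))              # Gaussian sensing vectors a_i as rows
y = (A @ x_star) ** 2                        # magnitude-only (intensity) measurements

# Seed coordinate: (1/m) sum_j y_j A_{ji}^2 is largest on the support in expectation.
i0 = int(np.argmax((y[:, None] * A**2).mean(axis=0)))

alpha = 1e-3                                 # small scale -> implicit bias toward sparsity
u = alpha * np.ones(n)
v = alpha * np.ones(n)
u[i0] = np.mean(y) ** 0.25                   # so that x0[i0] ~ sqrt(mean(y)) ~ ||x*||

eta, T = 0.02, 5000
for _ in range(T):
    x = u * u - v * v
    Ax = A @ x
    grad_x = A.T @ ((Ax**2 - y) * Ax) / m    # dL/dx
    # chain rule through x = u*u - v*v: dL/du = 2u*grad_x, dL/dv = -2v*grad_x
    u, v = u - eta * 2 * u * grad_x, v + eta * 2 * v * grad_x

x_hat = u * u - v * v
err = min(np.linalg.norm(x_hat - x_star), np.linalg.norm(x_hat + x_star))
print("relative error (up to global sign, ||x*|| = 1):", err)
print("true support:", np.sort(support))
print("top-k coordinates of |x_hat|:", np.sort(np.argsort(-np.abs(x_hat))[:k]))
```

The small initialization scale alpha is what produces the sparsity-inducing implicit bias in this sketch: off-support coordinates of u and v can only change multiplicatively, so they stay close to their tiny starting values while the seed and support coordinates fit the measurements.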

Cited by 5 publications (27 citation statements)
References 37 publications
“…If the step sizes of the gradient descent updates are fixed at $\mu \in \mathbb{R}_+$ for all $l$ and $\mu = \mu\, y^{(0)}_{2} \leq \frac{2}{\beta}$, then the inequality relation in (20) can be modified to…”
Section: B. Exact Recovery Guarantee and Convergence Rate (mentioning)
confidence: 99%
“…Several variants of WF have been introduced to improve its sample complexity of $O(N \log N)$ [4], [19]. However, the exact recovery theory of the original WF and all of its variants [4], [5], [18]–[20] relies on the assumption that the forward map is Gaussian. This poses a fundamental limitation for imaging applications since the forward models are almost always deterministic.…”
Section: Introduction (mentioning)
confidence: 99%
“…Learning over-parameterized models, which have more parameters than the problem's intrinsic dimension, is becoming a crucial topic in machine learning [1–11]. While classical learning theories suggest that over-parameterized models tend to overfit [12], recent advances have shown that an optimization algorithm may produce an implicit bias that regularizes the solution with desired properties. This type of result has led to new insights and a better understanding of gradient descent for solving several fundamental problems, including logistic regression on linearly separable data [1], compressive sensing [2, 3], sparse phase retrieval [4], nonlinear least squares [5], low-rank (deep) matrix factorization [6–9], and deep linear neural networks [10, 11].…”
Section: Introduction (mentioning)
confidence: 99%
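
The implicit bias discussed in this snippet can be seen in a toy experiment in the spirit of the cited compressive-sensing results: gradient descent on a plain least-squares loss, over-parameterized through $x = u \odot u - v \odot v$ and started from a small initialization, tends to land near the sparse solution of an underdetermined linear system rather than the minimum-$\ell_2$-norm solution. The problem sizes, step size, initialization scale, and iteration count below are illustrative assumptions.

```python
# Toy illustration of implicit bias from over-parameterization (a sketch only).
# Gradient descent on 0.5 * ||A x - y||^2 with x = u*u - v*v and a small
# initialization tends to land near the sparse x* of an underdetermined system,
# even though no explicit sparsity penalty is used.
import numpy as np

rng = np.random.default_rng(1)

n, k, m = 400, 3, 150                       # ambient dim, sparsity, measurements (m < n)
x_star = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_star[support] = rng.choice([-1.0, 1.0], size=k)

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_star

alpha, eta, T = 1e-4, 0.05, 10000           # small init scale alpha drives the sparsity bias
u = alpha * np.ones(n)
v = alpha * np.ones(n)

for _ in range(T):
    x = u * u - v * v
    grad_x = A.T @ (A @ x - y)              # gradient of the least-squares loss w.r.t. x
    u, v = u - eta * 2 * u * grad_x, v + eta * 2 * v * grad_x

x_hat = u * u - v * v
x_minnorm = np.linalg.pinv(A) @ y           # minimum-l2-norm solution of A x = y
rel = lambda z: np.linalg.norm(z - x_star) / np.linalg.norm(x_star)
print("over-parameterized GD error:", rel(x_hat))
print("min-l2-norm solution error: ", rel(x_minnorm))
```

For contrast, plain gradient descent on x itself started from zero would converge to the dense minimum-norm solution computed above; the reparametrization and the small initialization are what change the effective regularization, without any explicit sparsity penalty.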