2016
DOI: 10.1007/s00025-016-0564-5

The Local Convexity of Solving Systems of Quadratic Equations

Abstract: This paper considers the recovery of a rank-r positive semidefinite matrix XX^T ∈ R^{n×n} from m scalar measurements of the form y_i := a_i^T XX^T a_i (i.e., quadratic measurements of X). Such problems arise in a variety of applications, including covariance sketching of high-dimensional data streams, quadratic regression, quantum state tomography, among others. A natural approach to this problem is to minimize the loss function f(U) = Σ_i (y_i − a_i^T UU^T a_i)^2, which has an entire manifold of solutions given by {X…
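A minimal sketch of the measurement model and loss described in the abstract, written in NumPy. The dimensions, the Gaussian sensing vectors, and all variable names are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 200                      # assumed sizes, for illustration only

X = rng.standard_normal((n, r))           # ground-truth factor; the target matrix is X @ X.T
A = rng.standard_normal((m, n))           # sensing vectors a_i stacked as rows
y = np.sum((A @ X) ** 2, axis=1)          # y_i = a_i^T X X^T a_i = ||X^T a_i||^2

def loss(U):
    """f(U) = sum_i (y_i - a_i^T U U^T a_i)^2."""
    residuals = y - np.sum((A @ U) ** 2, axis=1)
    return np.sum(residuals ** 2)

# Any U = X @ Q with Q an r x r orthogonal matrix attains zero loss, which is the
# manifold of solutions the abstract refers to.
Q, _ = np.linalg.qr(rng.standard_normal((r, r)))
print(loss(X), loss(X @ Q))               # both are (numerically) zero
```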

Cited by 43 publications (55 citation statements)
References 30 publications (51 reference statements)
“…Proposition 2 demonstrates that the distance of TAF's successive iterates to x is monotonically decreasing once the algorithm enters a small-size neighborhood around x. This neighborhood is commonly referred to as the basin of attraction; see further discussions in [19], [33], [6], [37], [39]. In other words, as soon as one lands within the basin of attraction, TAF's iterates remain in this region and will be attracted to x exponentially fast.…”
Section: B. Exact Recovery From Noiseless Data
confidence: 94%
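A hedged illustration of the distance notion behind the quoted statement: for rank-1 quadratic measurements the signal x is identifiable only up to a global sign, so the distance of an iterate z_t to x is measured modulo that sign. This is a generic helper, not the TAF algorithm of the cited paper.

```python
import numpy as np

def dist_to_signal(z, x):
    """dist(z, x) = min(||z - x||, ||z + x||): distance up to the global sign ambiguity."""
    return min(np.linalg.norm(z - x), np.linalg.norm(z + x))

# Inside the basin of attraction, the quoted claim is that
#   dist(z_{t+1}, x) <= rho * dist(z_t, x)   for some rho < 1,
# i.e. the iterates contract toward x at a linear (geometrically fast) rate.
```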
“…To better understand the challenge, recall the GD rule (59). When m is exceedingly large, the negative gradient concentrates around the population-level gradient, which forms a reliable search direction.…”
Section: Truncation For Computational and Statistical Benefits
confidence: 99%
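As a rough sketch of the point being made, here is a plain gradient step for the squared-intensity loss commonly used in this literature; the citing paper's actual "GD rule (59)" is not reproduced in the excerpt, so the loss, step size, and function names below are assumptions.

```python
import numpy as np

def gradient(z, A, y):
    """Empirical gradient of f(z) = (1/(4m)) * sum_i ((a_i^T z)^2 - y_i)^2,
    i.e. (1/m) * sum_i ((a_i^T z)^2 - y_i) * (a_i^T z) * a_i."""
    Az = A @ z
    return A.T @ ((Az ** 2 - y) * Az) / len(y)

def gd_step(z, A, y, step_size=0.1):
    # With m large, this empirical gradient concentrates around its
    # population-level counterpart and thus gives a reliable search direction.
    return z - step_size * gradient(z, A, y)
```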
“…a constant fraction of measurements are outliers). It is obvious that the original GD iterates (59) are not robust, since the residual r_{t,i} := (a_i^T x_t)^2 − y_i can be perturbed arbitrarily if i ∈ S. Hence, we instead include only a subset of the samples when forming the search direction, yielding a truncated GD update rule…”
Section: Truncation For Removing Sparse Outliers
confidence: 99%
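A hedged sketch of the truncation idea in the quoted passage: before forming the search direction, drop the samples whose residual r_{t,i} = (a_i^T x_t)^2 − y_i is anomalously large, since those are the measurements an outlier can perturb arbitrarily. The particular trimming rule below (keep a fixed fraction with the smallest |residual|) is an illustrative choice, not necessarily the rule used in the cited work.

```python
import numpy as np

def truncated_gd_step(x, A, y, step_size=0.1, keep=0.8):
    residuals = (A @ x) ** 2 - y                    # r_{t,i} = (a_i^T x_t)^2 - y_i
    k = int(keep * len(y))
    trusted = np.argsort(np.abs(residuals))[:k]     # indices with the smallest |residuals|
    A_k, r_k = A[trusted], residuals[trusted]
    grad = A_k.T @ (r_k * (A_k @ x)) / k            # gradient formed from trusted samples only
    return x - step_size * grad
```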
“…3) Tangentially related work: Some other tangentially related work includes: (i) computing an approximate rank-r approximation of any matrix from its random sketches [37], [38]; (ii) compressed covariance estimation using different sketching matrices for each data vector, but without the low-rank assumption [39]; and (iii) a generalization of low-rank covariance sketching [40]: it attempts to recover an n × r matrix U* from measurements y_i = ‖a_i^T U*‖^2 with r ≪ n. When r = 1, this is the standard PR problem. In the general case, this is related to covariance sketching described above, but not to our problem.…”
Section: I-dmentioning
confidence: 99%