2022
DOI: 10.1287/opre.2021.2106
Nonconvex Low-Rank Tensor Completion from Noisy Data

Abstract: This paper investigates a problem of broad practical interest, namely, the reconstruction of a large-dimensional low-rank tensor from highly incomplete and randomly corrupted observations of its entries. Although a number of papers have been dedicated to this tensor completion problem, prior algorithms either are computationally too expensive for large-scale applications or come with suboptimal statistical performance. Motivated by this, we propose a fast two-stage nonconvex algorithm—a gradient method followi…
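
As context for the abstract's two-stage template (an initialization step followed by gradient iterations), the following is a minimal sketch in the simpler symmetric matrix setting; the paper's actual algorithm operates on tensors, and its precise initialization, loss, and step sizes are not reproduced on this page, so all constants here are illustrative.

```python
# Minimal sketch of the generic two-stage template: spectral initialization,
# then gradient descent on a factorized least-squares loss over observed
# entries. Matrix case only, for brevity; not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)
n, r, p, sigma = 100, 3, 0.3, 0.01           # size, rank, sampling rate, noise

U_star = rng.normal(size=(n, r)) / np.sqrt(n)
M_star = U_star @ U_star.T                   # planted rank-r ground truth
mask = rng.random((n, n)) < p                # Bernoulli(p) observation pattern
Y = mask * (M_star + sigma * rng.normal(size=(n, n)))

# Stage 1: spectral initialization from the inverse-propensity-rescaled data.
w, V = np.linalg.eigh(Y / p)
top = np.argsort(w)[::-1][:r]
U = V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Stage 2: gradient descent on f(U) = (1/2p) * ||mask * (U U^T - Y)||_F^2.
eta = 0.2
for _ in range(300):
    S = mask * (U @ U.T - Y)
    U -= eta * ((S + S.T) @ U) / p           # grad f(U) = (S + S^T) U / p

rel_err = np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star)
print(f"relative recovery error: {rel_err:.2e}")
```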

Cited by 43 publications (72 citation statements) · References 60 publications
“…Proposition 1 [24, Propositions 4.6.1, 4.6.2]. For a symmetric matrix $M \in \mathbb{R}^{N \times N}$, a vector $x \in \mathbb{R}^N$ with $\|x\|_2 = 1$ is an eigenvector of $M$ if and only if it is a critical point of the population risk (5). Moreover, denote $\{\lambda_n\}_{n=1}^{N}$, with $\lambda_1 > \lambda_2 \ge \lambda_3 \ge \cdots \ge \lambda_N$, as the eigenvalues of $M$ and $\{v_n\}_{n=1}^{N}$ as the associated eigenvectors.…”
Section: Results (citation type: mentioning, confidence: 99%)
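
For intuition, this eigenvector characterization can be reproduced on the canonical Rayleigh-quotient objective; the risk (5) of the quoting paper is not shown on this page, so the display below is a standard stand-in rather than its exact objective.

```latex
% Stand-in objective (assumed; Eq. (5) of the quoting paper is not shown here):
% minimize f(x) = -(1/2) x^T M x subject to ||x||_2 = 1.
\mathcal{L}(x,\mu) = -\tfrac{1}{2} x^{\top} M x + \tfrac{\mu}{2}\bigl(\|x\|_2^2 - 1\bigr),
\qquad
\nabla_x \mathcal{L} = -Mx + \mu x = 0
\;\Longrightarrow\;
Mx = \mu x, \quad \mu = x^{\top} M x .
```

So a unit-norm $x$ is stationary exactly when it is an eigenvector of $M$, with the multiplier equal to the Rayleigh quotient; the converse direction follows by taking $\mu$ equal to the corresponding eigenvalue.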
“…For any small constant $\delta \in (0, 1]$, it follows from [17, Lemma 13] that $\|Y - \mathbb{E}Y\| \le \delta \|x\|_2^2$ with probability at least $1 - c_1 N^{-c_2}$, provided that $m \ge C\delta^{-2} N \log(N)$ for some constant $C$. Therefore, by setting $\epsilon = C\delta \|x\|_2^2 \le \eta = 0.11\, d_{\min}$, we can conclude that the two conditions in (7) and (8) hold with probability at least $1 - c_1 N^{-c_2}$ as long as the number of measurements satisfies $m \ge C d_{\min}^{-2} \|x\|_2^4 N \log(N)$. Moreover, it follows from Corollary III.1 that the distance between the empirical local minimum and the population local minimum is on the order of $d_{\min}^{-1} \delta \|x\|_2^2$.…”
Section: Phase Retrieval (citation type: mentioning, confidence: 99%)
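
As a sanity check on the quoted concentration step, here is a minimal numerical sketch. It assumes the standard Gaussian phase-retrieval data matrix $Y = \frac{1}{m}\sum_{i}(a_i^\top x)^2 a_i a_i^\top$, for which $\mathbb{E}Y = \|x\|_2^2 I + 2xx^\top$; the exact $Y$ and constants in the quoting paper may differ.

```python
# Empirical check that ||Y - E[Y]|| stays below delta * ||x||_2^2 once
# m >~ delta^-2 N log N, for the (assumed) Gaussian phase-retrieval matrix
# Y = (1/m) sum_i (a_i^T x)^2 a_i a_i^T with E[Y] = ||x||^2 I + 2 x x^T.
import numpy as np

rng = np.random.default_rng(1)
N, delta = 50, 0.1
x = rng.normal(size=N)
m = int(10 * delta**-2 * N * np.log(N))        # illustrative constant C = 10

A = rng.normal(size=(m, N))                    # rows are the sensing vectors a_i
Y = (A * ((A @ x) ** 2)[:, None]).T @ A / m    # (1/m) sum_i (a_i^T x)^2 a_i a_i^T
EY = np.dot(x, x) * np.eye(N) + 2.0 * np.outer(x, x)

dev = np.linalg.norm(Y - EY, 2)                # spectral-norm deviation
print(f"||Y - EY|| = {dev:.3f},  delta * ||x||_2^2 = {delta * np.dot(x, x):.3f}")
```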
“…For the former, several papers use SI after training (Nakkiran et al., 2015; Yaguchi et al., 2019; Yang et al., 2020), while Ioannou et al. (2016) argue for initializing factors as though they were single layers, which we find inferior to SI in some cases. Outside deep learning, spectral methods have also been shown to yield better initializations for certain matrix and tensor problems (Keshavan et al., 2010; Cai et al., 2019). For regularization, Gray et al. (2019) suggest compression-rate scaling (CRS), which scales weight decay using the reduction in parameter count; this is justified via the usual Bayesian understanding of $\ell_2$-regularization (Murphy, 2012).…”
Section: Related Work (citation type: mentioning, confidence: 99%)
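
For concreteness, spectral initialization (SI) of a factorized layer is commonly implemented as a truncated SVD of the trained weight matrix. The sketch below follows that common recipe; the cited works' exact procedures may differ, and `spectral_init` is an illustrative name, not an API from any of them.

```python
# Hedged sketch of spectral initialization (SI) for a factorized layer:
# seed low-rank factors from the truncated SVD of a trained weight matrix,
# splitting singular values evenly between the two factors.
import numpy as np

def spectral_init(W: np.ndarray, rank: int):
    """Return U (out x r) and V (r x in) with U @ V close to W at the given rank."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    root_s = np.sqrt(s[:rank])
    return U[:, :rank] * root_s, root_s[:, None] * Vt[:rank]

W = np.random.default_rng(2).normal(size=(256, 512))   # stand-in trained weights
U, V = spectral_init(W, rank=32)
print(np.linalg.norm(W - U @ V) / np.linalg.norm(W))   # relative approximation error
```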