2012
DOI: 10.1109/tit.2011.2171521

A Geometric Approach to Low-Rank Matrix Completion

Abstract: The low-rank matrix completion problem can be succinctly stated as follows: given a subset of the entries of a matrix, find a low-rank matrix consistent with the observations. While several low-complexity algorithms for matrix completion have been proposed so far, it remains an open problem to devise search procedures with provable performance guarantees for a broad class of matrix models. The standard approach to the problem, which involves the minimization of an objective function defined using the Frobenius…
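As a rough illustration of the problem stated in the abstract (not the paper's own algorithm), one can sketch gradient descent on a factored form M ≈ U Vᵀ, minimizing the squared error over observed entries only. All names, sizes, and hyperparameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 15, 2

# Ground-truth rank-r matrix and a random observation mask (~60% observed).
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.6

# Small random initialization of the two factors.
U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((n, r))
step = 0.01

for _ in range(5000):
    R = mask * (U @ V.T - M)                 # residual on observed entries only
    U, V = U - step * (R @ V), V - step * (R.T @ U)

# Relative error on the observed entries after fitting.
err = np.linalg.norm(mask * (U @ V.T - M)) / np.linalg.norm(mask * M)
```

This factored formulation is non-convex, which is exactly why the abstract highlights the difficulty of provable guarantees for such search procedures.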

Cited by 44 publications (38 citation statements). References 33 publications.
“…Recently considered applications include matrix completion problems [8,21,12,33], truss optimization [26], finite-element discretization of Cosserat rods [27], matrix mean computation [6,4], and independent component analysis [31,30]. Research efforts to develop and analyze optimization methods on manifolds can be traced back to the work of Luenberger [20]; they concern, among others, steepest-descent methods [20], conjugate gradients [32], Newton's method [32,3], and trust-region methods [1,5]; see also [2] for an overview.…”
Section: Introduction (mentioning)
confidence: 99%
“…These algorithms usually use non-convex formulations instead of the convex nuclear norm minimization (2), and they can be fast and have comparable recoverability to those based on nuclear norm minimization in practice; see, e.g., [6,21,42,56,71].…”
(mentioning)
confidence: 99%
“…Here, the notations U_{m,1} and G_{m,1} follow the convention in [62], [63]. Note that each element of U_{m,1} is a unit-norm vector, while each element of G_{m,1} is a one-dimensional subspace of R^m.…”
Section: Preliminaries on Manifolds (mentioning)
confidence: 99%
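The distinction drawn in the statement above, between unit-norm vectors and the one-dimensional subspaces they span, can be checked in a few lines. This is a hedged sketch (NumPy assumed; the projector comparison is illustrative, not a construction from the cited papers):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])

# u and -u are two distinct unit-norm vectors (two elements of U_{m,1}),
# but they span the same line through the origin (one element of G_{m,1}).
# A subspace is characterized by its orthogonal projector, which is
# invariant under the sign flip.
P_plus = np.outer(u, u)     # projector onto span(u)
P_minus = np.outer(-u, -u)  # projector onto span(-u)
same_subspace = np.allclose(P_plus, P_minus)
```

Working with projectors (or equivalence classes under sign) is the standard way to pass from the sphere to the Grassmannian of lines.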
“…Our results show that a gradient descent method will not converge to any stationary point other than a global minimizer. More recently, the rank-one decomposition problem where λ_2 = λ_3 = ⋯ = λ_m = 0 was studied in [63]. Our proof technique is significantly different, as the effects of the eigenspaces corresponding to λ_2, …, λ_m must be accounted for in the rank-one approximation problem.…”
Section: Convergence of Primitive Simco (mentioning)
confidence: 99%