Proceedings of the 25th International Conference on Machine Learning (ICML '08), 2008
DOI: 10.1145/1390156.1390239
Rank minimization via online learning

Abstract: Minimum rank problems arise frequently in machine learning applications and are notoriously difficult to solve due to the non-convex nature of the rank objective. In this paper, we present the first online learning approach for the problem of rank minimization of matrices over polyhedral sets. In particular, we present two online learning algorithms for rank minimization: our first algorithm is a multiplicative update method based on a generalized experts framework, while our second algorithm is a novel applic…
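
As a rough illustration of the multiplicative-update, experts-style algorithm the abstract mentions (and not the paper's actual method), the sketch below runs the classic multiplicative weights update on a toy linear feasibility problem over the probability simplex. The feasibility formulation, the vertex oracle, and the parameters eta and T are assumptions made for this example.

```python
import numpy as np

def mwu_feasibility(A, b, eta=0.1, T=500):
    """Multiplicative weights sketch for the feasibility problem
    'find x in the probability simplex with A @ x >= b'.
    Illustrative only: the polytope, the oracle, and the parameters
    are assumptions, not the algorithm from the paper."""
    m, n = A.shape
    w = np.ones(m)                      # one weight per constraint ("expert")
    x_avg = np.zeros(n)
    for _ in range(T):
        p = w / w.sum()                 # distribution over constraints
        # Oracle step: maximize the weighted combined constraint
        # p^T (A x - b) over the simplex; a linear objective is
        # maximized at a vertex of the simplex.
        scores = p @ A
        x = np.zeros(n)
        x[np.argmax(scores)] = 1.0
        margins = A @ x - b             # slack of each constraint at x
        # Multiplicative update: upweight constraints with small or
        # negative slack so the oracle focuses on them next round.
        w *= np.exp(-eta * np.clip(margins, -1.0, 1.0))
        x_avg += x
    return x_avg / T                    # averaged iterate is near-feasible
```

The averaged iterate is the usual MWU-style output: when the system is feasible, it approximately satisfies all constraints at once, with the approximation error shrinking as T grows.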

Cited by 57 publications (51 citation statements)
References 18 publications
Year Published: 2010–2023
“…A similar result applies for semidefinite programming (based on an existing primal MWU-based SDP algorithm (Arora et al., 2005b)) as well as other optimizations for which the MWU applies, such as rank minimization (Meka et al., 2008), etc.…”
Section: Find Feasible X(t) In
Citation type: mentioning
confidence: 70%
“…Equation 7 restricts the representation such that it does not contain any part of the original vector itself. This problem turns out to be non-convex and NP-hard [7]. Nevertheless, recent developments in the optimization field have provided heuristics that are able to find relaxed solutions that are sparse [8].…”
Section: Interaction Motion As Sparse Subspace Separation
Citation type: mentioning
confidence: 99%
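
The relaxation alluded to in the statement above is typically the l1 convex surrogate for the NP-hard l0 sparsity objective, with the self-representation coefficient forced to zero. Below is a minimal sketch under those assumptions, using a plain ISTA (proximal gradient) solver; the regularization weight lam and iteration count n_iter are made up for illustration, and this is not the cited paper's exact formulation.

```python
import numpy as np

def sparse_self_representation(X, lam=0.1, n_iter=500):
    """ISTA sketch of the l1-relaxed sparse self-representation:
    for each column y_j of X, approximately solve
        min_c 0.5*||y_j - X c||^2 + lam*||c||_1   subject to c_j = 0,
    the convex surrogate for the NP-hard l0 problem mentioned above."""
    n = X.shape[1]
    C = np.zeros((n, n))
    L = np.linalg.norm(X, 2) ** 2       # Lipschitz constant of the smooth gradient
    step = 1.0 / L
    for j in range(n):
        y = X[:, j]
        c = np.zeros(n)
        for _ in range(n_iter):
            grad = X.T @ (X @ c - y)    # gradient of the least-squares term
            c = c - step * grad
            # Soft-thresholding: the proximal map of the l1 penalty.
            c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)
            c[j] = 0.0                  # the vector may not represent itself
        C[:, j] = c
    return C                            # column j holds the sparse code of X[:, j]
```

Each column of the returned matrix gives the sparse combination of the other data points that best reconstructs that point, which is the building block of sparse-subspace-style separation.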
“…It has since been used in boosting and in the generalized experts framework in machine learning [16], [23], [3]. And, this approach has been used to create an efficient algorithm for low-rank matrix approximation with guarantees [25]. To the best of our knowledge, our paper is the first to exploit these ideas to improve the running time for any MDP algorithms.…”
Section: Introduction
Citation type: mentioning
confidence: 99%