2018
DOI: 10.1214/17-aos1666

Robust low-rank matrix estimation

Abstract: Many results have been proved for various nuclear norm penalized estimators of the uniform sampling matrix completion problem. However, most of these estimators are not robust: in most cases the quadratic loss function or its modifications are used. We consider robust nuclear norm penalized estimators using two well-known robust loss functions: the absolute value loss and the Huber loss. Under several conditions on the sparsity of the problem (i.e. the rank of the parameter matrix) and on the regularity …
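As a rough, illustrative sketch of the kind of estimator the abstract describes (not the paper's estimator or its analysis), the Python snippet below minimizes a Huber data-fit term over the observed entries plus a nuclear norm penalty by proximal gradient descent, with singular value soft-thresholding as the proximal step. The function names, step size, regularization level and toy data are all assumptions made for this example.

import numpy as np

def huber_grad(r, delta=1.0):
    """Gradient of the Huber loss applied entrywise to residuals r."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def svd_soft_threshold(M, tau):
    """Singular value soft-thresholding: proximal operator of tau * ||.||_* ."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def huber_matrix_completion(Y, mask, lam=1.0, delta=1.0, step=1.0, n_iter=200):
    """Proximal gradient descent for (illustrative objective only)
       min_A  sum_{(i,j) observed} huber(A_ij - Y_ij) + lam * ||A||_* ."""
    A = np.zeros_like(Y)
    for _ in range(n_iter):
        resid = (A - Y) * mask               # residuals on observed entries, zero elsewhere
        grad = huber_grad(resid, delta)      # Huber gradient, vanishes off the mask
        A = svd_soft_threshold(A - step * grad, step * lam)
    return A

# Toy usage: a rank-2 matrix, half the entries observed, a few gross outliers.
rng = np.random.default_rng(0)
L = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
mask = rng.random(L.shape) < 0.5
Y = L + np.where(rng.random(L.shape) < 0.05, 10.0, 0.0)
A_hat = huber_matrix_completion(Y * mask, mask, lam=2.0, delta=1.0, step=0.5)

The absolute value loss mentioned in the abstract could be handled along similar lines, e.g. via smoothing or an ADMM-type splitting, at the cost of a non-smooth data-fit term.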


Cited by 30 publications (56 citation statements). References 16 publications.
“…This condition has been extensively studied in Learning theory (cf. [5,66,49,7,64,23]). We can identify mainly two approaches to study this condition: when the class F is convex and the loss function is "strongly convex", then the risk function inherits this property and automatically satisfies the Bernstein condition (cf.…”
Section: A Review of the Bernstein and Margin Conditions (mentioning)
Confidence: 99%
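For reference, one standard way to state the Bernstein condition referred to here is the following; the constants B, β and the notation f*, ℓ_f are generic and not taken from the quoted paper:

\[
  \mathbb{E}\bigl[(\ell_f - \ell_{f^\ast})^2\bigr]
  \;\le\; B\,\bigl(\mathbb{E}[\ell_f - \ell_{f^\ast}]\bigr)^{\beta}
  \qquad \text{for all } f \in F,
\]

for some B > 0 and 0 < β ≤ 1, where f* minimizes the risk over F and ℓ_f denotes the loss of f. In the strongly convex setting described in the quote, the condition is typically verified with β = 1.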
“…As the Lipschitz property allows one to make only weak assumptions on the outputs, these losses have been quite popular in robust statistics [17]. Empirical risk minimizers (ERM) based on Lipschitz losses such as the Huber loss have recently received considerable attention [45,15,2].…”
Section: Introduction (mentioning)
Confidence: 99%
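To make the Lipschitz point concrete, here is a minimal numpy check (the function name and the choice δ = 1 are mine, not from the cited works) that the Huber loss is quadratic near zero and δ-Lipschitz in the residual, which is what limits the influence of any single corrupted output on the empirical risk:

import numpy as np

def huber(r, delta=1.0):
    """Huber loss: 0.5*r**2 for |r| <= delta, linear with slope delta beyond."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

# delta-Lipschitz in the residual: |huber(r) - huber(r')| <= delta * |r - r'|,
# so shifting one output y_i changes the empirical risk by at most (delta/n) * |shift|.
r = np.linspace(-10.0, 10.0, 2001)
slopes = np.abs(np.diff(huber(r)) / np.diff(r))
assert slopes.max() <= 1.0 + 1e-9   # numerical check with delta = 1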
“…This boundedness is not satisfied in linear regression with unbounded design, so the results of [2] do not apply to basic examples such as linear regression with a Gaussian design. To bypass this restriction, the global condition is relaxed into a "local" one as in [15,42]; see Assumption 4 below. The main constraint in our results on ERM is the assumption on the design. This constraint can be relaxed by considering alternative estimators based on the "median-of-means" (MOM) principle of [37,9,18,1] and the minmax procedure of [3,5].…”
(mentioning)
Confidence: 99%
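The median-of-means principle mentioned at the end of this statement can be sketched in a few lines; the function name and block count below are illustrative and not taken from the cited references:

import numpy as np

def median_of_means(x, n_blocks=10, rng=None):
    """Median-of-means estimate of E[X]: shuffle the sample, split it into blocks,
    average within each block, and return the median of the block means."""
    rng = np.random.default_rng(rng)
    x = rng.permutation(np.asarray(x, dtype=float))
    blocks = np.array_split(x, n_blocks)
    return float(np.median([b.mean() for b in blocks]))

# A few gross outliers drag the empirical mean but leave the median-of-means
# estimate essentially unchanged, since they corrupt only a handful of blocks.
rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
x[:5] = 1e4
print(np.mean(x), median_of_means(x, n_blocks=20, rng=2))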