2016
DOI: 10.48550/arxiv.1605.01288
Preprint
Fast rates with high probability in exp-concave statistical learning

Abstract: We present an algorithm for the statistical learning setting with a bounded exp-concave loss in d dimensions that obtains excess risk O(d log(1/δ)/n) with probability at least 1 − δ. The core technique is to boost the confidence of recent in-expectation O(d/n) excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-concavity. We also show that with probability 1 − δ the standard ERM method obtains excess risk O(d(log(n) …
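The confidence-boosting idea in the abstract — turn an in-expectation excess-risk guarantee for ERM into a high-probability one without losing the rate — can be sketched generically: run ERM on several disjoint subsamples and select the candidate with the smallest empirical risk on a held-out validation split. The sketch below is a minimal illustration of that wrapper pattern, not the paper's exact procedure; the squared-loss ERM, fold count `k`, and data sizes are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def erm_least_squares(X, y):
    # ERM for the squared loss (exp-concave on a bounded domain):
    # ordinary least squares on the given subsample.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def boosted_erm(X, y, k=5):
    """Generic confidence-boosting sketch (not the paper's algorithm):
    run ERM on k disjoint folds, then pick the candidate with the
    smallest empirical risk on a held-out validation split."""
    n = len(y)
    n_val = n // (k + 1)
    X_val, y_val = X[:n_val], y[:n_val]          # held-out validation split
    folds = np.array_split(np.arange(n_val, n), k)
    candidates = [erm_least_squares(X[idx], y[idx]) for idx in folds]
    val_risks = [np.mean((X_val @ w - y_val) ** 2) for w in candidates]
    return candidates[int(np.argmin(val_risks))]

# Synthetic d-dimensional regression problem (illustrative values).
n, d = 2000, 5
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)
w_hat = boosted_erm(X, y)
print(np.linalg.norm(w_hat - w_star))
```

Since at least one fold's ERM is good with constant probability, the best-of-k selection fails only if all k folds fail, which drives the failure probability down exponentially in k — the mechanism behind the log(1/δ) factor in the stated rate.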

Cited by 6 publications (11 citation statements)
References 10 publications
“…Our online-to-batch procedure only provides upper-bounds in expectation. Obtaining high-probability bounds is more challenging and universal conversion methods such as the one of [Mehta, 2016] may not work for improper procedures.…”
Section: Discussion
confidence: 99%
“…With the tool of the Rademacher complexity R_n, Srebro et al. [28] demonstrated an O(R_n/√n) risk bound for ERM, and many papers strengthened the theory further [3, 2]. For nonconvex but exp-concave objectives, Koren & Levy [14] and Mehta [17] derived a risk bound of O(1/n). Under certain stronger conditions, a tighter O(1/n²) risk bound has been shown [39].…”
Section: Related Work
confidence: 99%
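The exp-concavity property underlying these O(1/n) rates says that f is α-exp-concave when w ↦ exp(−α f(w)) is concave. As a quick illustration, the sketch below numerically checks midpoint concavity for a one-dimensional bounded squared loss; the bound B, the parameter α = 1/(8B²), and the sample point (x, y) are assumptions chosen for the example, not values from the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)

# A function f is alpha-exp-concave if w -> exp(-alpha * f(w)) is concave.
# Sketch: check midpoint concavity for f(w) = (w*x - y)^2 on |w| <= B.
B = 1.0                  # assumed bound on |w| and |y|
alpha = 1 / (8 * B**2)   # sufficient parameter: |w*x - y| <= 2B implies concavity

def f(w, x, y):
    return (w * x - y) ** 2

def is_midpoint_concave(x, y, trials=10_000):
    a = rng.uniform(-B, B, trials)
    b = rng.uniform(-B, B, trials)
    g = lambda w: np.exp(-alpha * f(w, x, y))
    # Midpoint concavity: g((a+b)/2) >= (g(a) + g(b)) / 2, up to float tolerance.
    return bool(np.all(g((a + b) / 2) >= 0.5 * (g(a) + g(b)) - 1e-12))

print(is_midpoint_concave(x=0.7, y=0.3))
```

Because exp(−α u²) has second derivative exp(−α u²)(4α²u² − 2α), it is concave exactly when u² ≤ 1/(2α); with |u| = |w·x − y| ≤ 2B, any α ≤ 1/(8B²) works, and composition with the linear map w ↦ w·x preserves this.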
“…26]. Under exp-concave distributions, Mehta [30] also recently made use of this technique. Problems arise, however, when the losses can be potentially heavy-tailed.…”
Section: Challenges Under Weak Convexity
confidence: 99%