2022
DOI: 10.1109/tit.2022.3156592
Exponential Savings in Agnostic Active Learning Through Abstention

Abstract: We consider the problem of stochastic convex optimization with exp-concave losses using Empirical Risk Minimization in a convex class. Answering a question raised in several prior works, we provide an excess risk bound valid for a wide class of bounded exp-concave losses, where d is the dimension of the convex reference set, n is the sample size, and δ is the confidence level. Our result is based on a unified geometric assumption on the gradient of losses and the notion of local norms.
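For context, exp-concavity has a standard textbook definition, reproduced below as background; it is not quoted from the paper itself.

% Standard definition of exp-concavity (background; not taken from the
% paper's text). A loss \ell is \eta-exp-concave on a convex set W if,
% for some \eta > 0, the following map is concave on W:
\[
  w \;\longmapsto\; \exp\bigl(-\eta\,\ell(w)\bigr).
\]
% Examples: the log loss is 1-exp-concave, and the squared loss
% (\langle w, x \rangle - y)^2 is exp-concave whenever w, x, y are bounded.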

Cited by 5 publications (2 citation statements, both classified as mentioning) · References 56 publications
“…Since f^(mid) outputs 2-sparse convex combinations of elements of the dictionary G, similarly to the above analysis of Audibert's star algorithm, it is enough to establish that f^(mid) satisfies the offset condition. For the midpoint estimator, this fact is already implicit in the proofs of Puchkin and Zhivotovskiy (2021) in the context of active learning. While, admittedly, the direct analysis of the midpoint estimator is no more difficult than establishing the below lemma, for exposition purposes, let us demonstrate that f^(min) does indeed satisfy the offset condition.…”
Section: Model Selection Aggregation · Citation type: mentioning
Confidence: 95%
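The offset condition referenced in this excerpt is, in the aggregation literature, a deterministic inequality of roughly the following shape; the constant c and the exact formulation used by the citing paper may differ, so the version below is illustrative only.

% Illustrative form of the offset condition (the constant c and the exact
% statement in the citing paper may differ). An improper estimator
% \widehat{f} built from (X_1, Y_1), ..., (X_n, Y_n) satisfies the offset
% condition with parameter c > 0 if, for every g in the dictionary G,
\[
  \sum_{i=1}^{n}\Bigl[\bigl(\widehat{f}(X_i)-Y_i\bigr)^{2}
      -\bigl(g(X_i)-Y_i\bigr)^{2}\Bigr]
  \;\le\;
  -\,c\sum_{i=1}^{n}\bigl(\widehat{f}(X_i)-g(X_i)\bigr)^{2}.
\]
% The negative quadratic term on the right is the "offset"; it can hold
% only because \widehat{f} is improper (it may leave G), and it is what
% drives fast O(1/n)-type rates via offset Rademacher complexity arguments.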
“…Exploiting negative regret. The possibility of getting a negative excess risk has been recently explicitly exploited by Puchkin and Zhivotovskiy (2022) in the setup of active learning with abstentions. In the context of online to batch conversion of online learning algorithms, a similar idea is exploited in Section 4.1.…”
Section: Related Work · Citation type: mentioning
Confidence: 99%
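As background for why negative regret translates into negative excess risk, the following is the standard online-to-batch conversion, stated here for context; Section 4.1 of the citing paper may use a refined variant.

% Standard online-to-batch conversion (a textbook fact; not quoted from
% the cited works). An online algorithm producing predictors f_1, ..., f_n
% on an i.i.d. sample Z_1, ..., Z_n has regret
\[
  R_n \;=\; \sum_{t=1}^{n} \ell(f_t, Z_t)
        \;-\; \min_{f}\,\sum_{t=1}^{n} \ell(f, Z_t),
\]
% and for a convex loss the averaged predictor \bar{f} = n^{-1}\sum_t f_t
% satisfies, in expectation over the sample,
\[
  \mathbb{E}\,L(\bar{f}) \;-\; \min_{f}\,L(f)
  \;\le\; \frac{\mathbb{E}\,R_n}{n},
  \qquad L(f) := \mathbb{E}\,\ell(f, Z).
\]
% Hence negative regret yields a negative excess-risk bound, which is
% possible only for improper predictors that may leave the reference class.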