2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI: 10.1109/allerton.2015.7447165

Active learning from noisy and abstention feedback

Abstract: An active learner is given an instance space, a label space and a hypothesis class, where one of the hypotheses in the class assigns ground truth labels to instances. Additionally, the learner has access to a labeling oracle, which it can interactively query for the label of any example in the instance space. The goal of the learner is to find a good estimate of the hypothesis in the hypothesis class that generates the ground truth labels while making as few interactive queries to the oracle as possible. This w…

Cited by 14 publications (19 citation statements). References 9 publications.
“…Its running time is O(log |H|), where H is the hypothesis/function space. A variation of the model was studied by Yan et al [44]. Here, the instance space, image space and hypothesis space are, respectively, [0, 1], {0, 1} and…”
Section: Related Work
confidence: 99%
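The citation above describes learning with O(log |H|) queries over a hypothesis space of threshold functions on [0, 1] with labels in {0, 1}. A minimal sketch of that setting is a binary search over a discretized grid of candidate thresholds; the function names and the grid discretization below are our own illustration, assuming a noiseless oracle and a threshold strictly inside (0, 1):

```python
def learn_threshold(oracle, grid_size=1024):
    """Binary search for a threshold classifier on a grid over [0, 1].

    With a noiseless labeling oracle this uses O(log |H|) queries,
    where |H| = grid_size is the number of candidate thresholds.
    Illustrative sketch only; assumes oracle(0) = 0 and oracle(1) = 1,
    i.e. the true threshold lies in the interior of [0, 1].
    """
    lo, hi = 0, grid_size  # true threshold lies in (lo/grid_size, hi/grid_size]
    queries = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(mid / grid_size) == 1:  # label 1 => threshold <= mid/grid_size
            hi = mid
        else:                             # label 0 => threshold >  mid/grid_size
            lo = mid
    return hi / grid_size, queries
```

For a grid of 1024 candidates this halves the candidate set each round, so exactly log2(1024) = 10 queries locate the threshold to within one grid cell.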
“…Its running time is O(log |H|), where H is the hypothesis/function space. A variation of the model was studied by Yan et al [44].…”
Section: Related Work
confidence: 99%
“…Deep learning is also used in other works like [38][39][40]. A number of approaches rely on active learning techniques [41][42][43][44].…”
Section: Data Errors
confidence: 99%
“…Our algorithm is statistically consistent under very mild conditions: when the abstention rate is nondecreasing as we get closer to the decision boundary. Under slightly stronger conditions as in [24], our algorithm has the same query complexity. However, if the abstention rate of the labeler increases strictly monotonically close to the decision boundary, then our algorithm adapts and does substantially better.…”
Section: Introduction
confidence: 96%
“…The setting of active learning with an abstaining noisy labeler was first considered by [24], who looked at learning binary threshold classifiers based on queries to a labeler whose abstention rate is higher closer to the decision boundary. They primarily looked at the case when the abstention rate at a distance ∆ from the decision boundary is less than 1 − Θ(∆^α), and the rate of label flips at the same distance is less than 1/2 − Θ(∆^β); under these conditions, they provided an active learning algorithm that, given parameters α and β, outputs a classifier with error ε using Õ(ε^(−α−2β)) queries to the labeler.…”
Section: Introduction
confidence: 99%
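The noise model quoted above (abstention rate bounded by 1 − Θ(∆^α) and flip rate by 1/2 − Θ(∆^β) at distance ∆ from the boundary) can be simulated directly. The sketch below is a naive bisection-with-majority-vote baseline under that model, not the algorithm from the paper; the threshold value, constants, and function names are our own assumptions:

```python
import random

def noisy_abstaining_labeler(x, theta=0.37, alpha=1.0, beta=0.2, rng=random):
    """Simulated labeler for a threshold theta on [0, 1] (hypothetical model).

    At distance d = |x - theta| from the boundary it abstains with
    probability 1 - d**alpha and, when it does answer, flips the true
    label with probability 1/2 - d**beta / 2 (so noise is worst at the
    boundary, matching the 1/2 - Theta(d^beta) bound in the quote).
    """
    d = abs(x - theta)
    if rng.random() < max(0.0, 1.0 - d ** alpha):
        return None  # abstention
    flip_prob = max(0.0, 0.5 - 0.5 * d ** beta)
    true_label = 1 if x >= theta else 0
    return 1 - true_label if rng.random() < flip_prob else true_label

def estimate_threshold(labeler, eps=0.01, votes=200):
    """Bisection with repeated queries and majority vote per midpoint.

    A naive baseline for illustration only: it keeps querying until it
    collects `votes` non-abstaining answers, then trusts the majority.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        answers = []
        while len(answers) < votes:
            y = labeler(mid)
            if y is not None:   # discard abstentions
                answers.append(y)
        if 2 * sum(answers) >= len(answers):  # majority label 1 => theta <= mid
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

Note how the abstention model drives query cost: near the boundary almost every query is an abstention, so collecting a fixed number of informative answers becomes increasingly expensive, which is exactly the regime where the quoted Õ(ε^(−α−2β)) bound bites.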