DOI: 10.1007/978-3-540-74958-5_14

Dual Strategy Active Learning

Abstract: Active Learning methods rely on static strategies for sampling unlabeled point(s). These strategies range from uncertainty sampling and density estimation to multi-factor methods with learn-once-use-always model parameters. This paper proposes a dynamic approach, called DUAL, where the strategy selection parameters are adaptively updated based on estimated future residual error reduction after each actively sampled point. The objective of DUAL is to outperform static strategies over a large operating…
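To make the mechanism concrete, here is a minimal Python sketch of a DUAL-style acquisition step under stated assumptions: the scoring functions, the scikit-learn-style `predict_proba` interface, and the `update_weight` rule are illustrative stand-ins, not the paper's exact estimators (the paper drives its mixing weight from estimated future residual error reduction, which is only crudely proxied here).

```python
import numpy as np

def dual_query(model, X_pool, weight):
    """One DUAL-style acquisition step: score every unlabeled point with a
    convex combination of density-weighted sampling and uncertainty sampling,
    then return the index of the best candidate. Illustrative sketch only."""
    probs = model.predict_proba(X_pool)        # (n_pool, n_classes)
    us = 1.0 - probs.max(axis=1)               # uncertainty score per point
    dens = (X_pool @ X_pool.T).mean(axis=1)    # crude density proxy
    dw = dens * us                             # density-weighted uncertainty
    return int(np.argmax(weight * dw + (1.0 - weight) * us))

def update_weight(error_estimates):
    """Hypothetical weight update: lean on the density-weighted strategy while
    its estimated error keeps dropping, and shift toward plain uncertainty
    sampling once that gain flattens (a crude proxy for the paper's estimated
    future residual error reduction)."""
    if len(error_estimates) < 2:
        return 1.0                             # favor density early on
    gain = max(error_estimates[-2] - error_estimates[-1], 0.0)
    return gain / (gain + 1e-3)                # small gain -> near 0
```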

Cited by 131 publications (105 citation statements)
References 13 publications
“…When the core of the model is consolidated, items with highest uncertainty should provide a higher improvement in performance by effectively delimiting with more precision the decision frontier of the model. This phenomenon, which lies at the heart of well-known semi-supervised learning techniques like self-training (or bootstrapping), has also been noted by approaches combining density estimation methods when very few examples are available, and uncertainty sampling when the training dataset has grown [5,17].…”
Section: Relevant Work (mentioning)
Confidence: 82%
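One simple way to picture this two-phase behavior is a hard switch: query by density while the labeled set is tiny, then by uncertainty once the model has a usable decision boundary. A hedged sketch; the threshold and the score arrays are assumptions for illustration, not any cited paper's actual rule:

```python
def select_query(density_scores, uncertainty_scores, n_labeled, switch_at=20):
    """Phase-switch heuristic: density-driven selection while few labels
    exist, uncertainty-driven selection afterwards. `switch_at` is an
    assumed threshold, not taken from the cited papers."""
    scores = density_scores if n_labeled < switch_at else uncertainty_scores
    return max(range(len(scores)), key=scores.__getitem__)  # argmax index
```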
“…This combined semi-supervised AL has the benefit of ignoring regions that can be reliably "filled in" by a semi-supervised procedure, while also selecting those examples that may benefit this EM process. Donmez et al. [16] propose a modification of the density-weighted technique of Nguyen and Smeulders. This modification simply selects examples according to the convex combination of the density-weighted technique and traditional uncertainty sampling.…”
Section: Density-sensitive Active Learning (mentioning)
Confidence: 99%
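The convex combination described above is easy to state directly. A minimal sketch, assuming `dw` and `us` are precomputed per-point scores for the density-weighted technique and for uncertainty sampling (the arrays and the mixing value are made up for illustration):

```python
import numpy as np

def combined_score(dw, us, lam):
    """Convex combination of two selection criteria, lam in [0, 1]:
    lam = 1 recovers pure density-weighted selection,
    lam = 0 recovers plain uncertainty sampling."""
    assert 0.0 <= lam <= 1.0
    return lam * dw + (1.0 - lam) * us

# Example: choose the next query among five unlabeled points.
dw = np.array([0.12, 0.40, 0.33, 0.05, 0.28])  # assumed density-weighted scores
us = np.array([0.90, 0.10, 0.55, 0.80, 0.60])  # assumed uncertainty scores
query_idx = int(np.argmax(combined_score(dw, us, lam=0.7)))  # index 2 here
```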
“…The cold start problem has long been known to be a key difficulty in building effective classifiers quickly and cheaply via AL [13,16]. Since the quality of data selection directly depends on the understanding of the space provided by the "current" model, early stages of acquisitions can result in a vicious cycle of uninformative selections, leading to poor quality models and therefore to additional poor selections.…”
Section: Starting Cold (mentioning)
Confidence: 99%
“…Our work is motivated by Nguyen and Smeulders [12] and Donmez et al. [13]. Philosophically, our work is most aligned with Sheng et al. [14], where relabeling is used to obviate the effects of noise, and Vijayanarasimhan et al. [15], which identifies promising crowdsourcing annotation tasks given a specific data labeling budget.…”
Section: Related Work (mentioning)
Confidence: 99%