2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00018
Learning Loss for Active Learning

Abstract: The performance of deep neural networks improves with more annotated data. The problem is that the budget for annotation is limited. One solution to this is active learning, where a model asks a human to annotate data that it perceives as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks, but most of them are either designed specifically for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method t…
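The abstract's query loop follows a standard pattern: train on the labeled set, score the unlabeled pool by uncertainty, and ask a human to annotate the top-scoring samples. A minimal sketch of one round, with `uncertainty` and `oracle` as hypothetical placeholders (the paper's own criterion, a predicted loss, appears in the citation statements below):

```python
def active_learning_round(model, labeled, unlabeled, uncertainty, oracle, k):
    """One generic active-learning round: query the k samples the model
    is least certain about and move them into the labeled set.

    `uncertainty(model, x)` and `oracle(x)` are hypothetical callables
    standing in for the scoring rule and the human annotator.
    """
    # Rank the unlabeled pool from most to least uncertain.
    ranked = sorted(unlabeled, key=lambda x: uncertainty(model, x), reverse=True)
    queried, remaining = ranked[:k], ranked[k:]
    # Ask the human oracle for labels on the k most uncertain samples.
    labeled = labeled + [(x, oracle(x)) for x in queried]
    return labeled, remaining
```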

Cited by 504 publications (569 citation statements)
References 48 publications
“…If a loss is given to the model, the given loss would force it to increase $\hat{l}_i$ and decrease $\hat{l}_j$. In this way, the loss prediction model completely discards the overall scale changes [41]. To this end, the final loss function is computed as:…”
Section: B. Active Deep Densely Connected Convolutional Network
confidence: 99%
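In the cited method [41], that final ranking term compares samples pairwise: with a margin ξ, the loss is max(0, −sign(l_i − l_j)·(l̂_i − l̂_j) + ξ), so the predictor is trained on the ordering of losses rather than their values. A minimal PyTorch sketch of that term, assuming the batch-halving pairing scheme of [41] (the complete objective also adds the target network's own loss, which is omitted here):

```python
import torch

def loss_prediction_loss(pred_loss, true_loss, margin=1.0):
    """Pairwise ranking loss for a loss prediction module.

    Pairs sample i from the first half of the batch with sample j from
    the second half; whenever the true loss l_i exceeds l_j, the hinge
    pushes the predicted loss above the other by `margin`, so only the
    ordering matters and overall scale changes are discarded.
    """
    mid = pred_loss.size(0) // 2
    pred_i, pred_j = pred_loss[:mid], pred_loss[mid:2 * mid]
    # True losses only define the target ordering; keep them out of the graph.
    true_i, true_j = true_loss[:mid].detach(), true_loss[mid:2 * mid].detach()

    # sign = +1 where l_i > l_j, otherwise −1.
    sign = (true_i > true_j).float() * 2 - 1

    # max(0, margin − sign · (pred_i − pred_j)), averaged over pairs.
    return torch.clamp(margin - sign * (pred_i - pred_j), min=0).mean()
```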
“…The problem is that manual annotation of sentiment labels takes time and manpower. Meanwhile, the budget for annotation is limited [5]. Most of the existing emotion datasets contain no more than 1,000 images [2].…”
Section: B. Active Learning
confidence: 99%
“…The LP module is attached to the traditional CNN and uses representations from multiple levels of the CNN's layers. After each active learning cycle, all the affective samples in the unlabeled pool are evaluated by the LP module to obtain data-loss pairs $\{(x_i, \hat{l}_i) \mid x_i \in U\}$, and human oracles then manually annotate the samples with the $K$ highest losses [5]:…”
Section: B. Uncertainty Sampling for Affective Samples Selection
confidence: 99%
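A minimal PyTorch sketch of that selection step: score the unlabeled pool with the LP module and return the K highest-loss samples. The interface in which `model` returns its multi-level features alongside the logits is an assumption for illustration, not a fixed API:

```python
import torch

@torch.no_grad()
def query_k_highest(model, lp_module, unlabeled_loader, k):
    """Rank the unlabeled pool by predicted loss and return the indices
    of the K samples to send to the human oracles.

    Assumes `model(x)` returns (logits, features) and that `lp_module`
    maps those multi-level features to a scalar loss prediction.
    """
    scores = []
    for x in unlabeled_loader:                 # iterate over pool batches
        _, features = model(x)                 # assumed interface
        scores.append(lp_module(features).view(-1))
    scores = torch.cat(scores)                 # one score per pool sample
    return torch.topk(scores, k).indices       # K highest predicted losses
```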