2021
DOI: 10.48550/arxiv.2112.07963
Preprint

Towards General and Efficient Active Learning

Abstract: Active learning aims to select the most informative samples to exploit limited annotation budgets. Most existing work follows a cumbersome pipeline by repeating the time-consuming model training and batch data selection multiple times on each dataset separately. We challenge this status quo by proposing a novel general and efficient active learning (GEAL) method in this paper. Utilizing a publicly available model pre-trained on a large dataset, our method can conduct data selection processes on different datasets…
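The abstract only outlines the pipeline, but the general idea of selecting data in a single pass over features from a frozen, publicly available pre-trained model can be sketched roughly as follows. This is a minimal illustration using a generic diversity criterion (greedy k-center) with assumed function and parameter names; it is not the authors' actual selection algorithm.

```python
import numpy as np

def single_pass_selection(features: np.ndarray, budget: int) -> list[int]:
    """Greedy farthest-point (k-center) selection over frozen features.

    `features` (N, D) are embeddings from a pre-trained model; no task model
    is trained or retrained during selection, so the whole process is one pass.
    Illustrative diversity criterion only, not GEAL's actual one.
    """
    n = features.shape[0]
    selected = [int(np.random.default_rng(0).integers(n))]  # arbitrary seed point
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    while len(selected) < budget:
        nxt = int(dists.argmax())  # farthest point from the current selection
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return selected
```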

Cited by 3 publications (8 citation statements) · References 46 publications
“…However, DNNs require large-scale training data, while labeling such data is expensive and time-consuming. On the other hand, active learning (AL) selects the most suitable samples for annotation, which can boost the training of models using fewer annotations [Xie et al., 2021]. Thus, combining AL and DL can both reduce labeling cost and handle complex data.…”
Section: Introduction (mentioning)
confidence: 99%
“…To solve this problem, the methods of Refs. [18,19,20] have been proposed to compute uncertainty algorithmically, using only task model learning. These methods do not require learning to predict uncertainty, making them easy to use in practice.…”
Section: Introduction (mentioning)
confidence: 99%
“…Nevertheless, for object detection tasks, each image may contain multiple candidate regions of target instances. Therefore, simply combining or averaging the uncertainties of instances or pixels does not adequately determine the overall image uncertainty [9]. To enhance the effectiveness of the active learning strategy on the detector's learning performance, it is essential to focus on the local feature maps of various prospective instance regions and eliminate background region interference.…”
(mentioning)
confidence: 99%
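To make the aggregation issue concrete, here is a toy illustration (not drawn from the cited paper): with mean pooling, a single highly uncertain instance in an image full of confident detections barely changes the image-level score, whereas max pooling or region-focused scoring preserves it. The uncertainty values below are hypothetical.

```python
import numpy as np

# Hypothetical per-instance uncertainties for two images (e.g. entropies of
# detection class posteriors). Image B contains one very uncertain object.
image_a = np.array([0.30, 0.28, 0.32, 0.29])
image_b = np.array([0.05, 0.04, 0.06, 1.50])

for name, u in [("A", image_a), ("B", image_b)]:
    # Mean pooling dilutes the single informative instance in image B,
    # while max pooling keeps it visible at the image level.
    print(name, "mean:", round(float(u.mean()), 3), "max:", round(float(u.max()), 3))
```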
“…For example, MI-AOD [8] employs a pair of adversarial classifiers to align the representativeness and uncertainty of the target set, selecting images based on instance predictions. GEAL [9] uses feature point extraction for representative selection. CALD [28] employs a metric for data consistency in active learning.…”
(mentioning)
confidence: 99%