2021
DOI: 10.1007/s10489-020-02121-4
Dual generative adversarial active learning

Cited by 12 publications (5 citation statements)
References 34 publications
“…Dual Generative Adversarial Active Learning (DGAAL) is a novel active learning method that combines pool-based and synthesis-based approaches to reduce annotation costs while maintaining good model performance [21]. It uses two Generative Adversarial Networks (GANs) consisting of a generator and two discriminators.…”
Section: Related Work
confidence: 99%
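The excerpt above describes DGAAL only at the architecture level (one generator, two discriminators, combining pool-based and synthesis-based selection). As a rough structural sketch — not the authors' implementation; the linear generator, logistic discriminators, dimensions, and thresholds below are all illustrative assumptions — the interplay might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W):
    # Toy generator: linear map from noise space to sample space.
    return z @ W

def discriminator(x, w):
    # Toy discriminator: logistic score in (0, 1).
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# Hypothetical dimensions, for illustration only.
noise_dim, data_dim = 4, 8
W_gen = rng.normal(size=(noise_dim, data_dim))
w_real = rng.normal(size=data_dim)    # D1: real vs. generated
w_label = rng.normal(size=data_dim)   # D2: labeled vs. unlabeled

# Pool-based step: score the unlabeled pool with D2 and query the
# samples that look least "labeled-like" (most informative).
pool = rng.normal(size=(100, data_dim))
informativeness = 1.0 - discriminator(pool, w_label)
query_idx = np.argsort(informativeness)[-5:]  # 5 samples to annotate

# Synthesis-based step: generate candidates and keep only those that
# D1 judges realistic enough to be worth annotating.
z = rng.normal(size=(20, noise_dim))
fake = generator(z, W_gen)
realistic = fake[discriminator(fake, w_real) > 0.5]
```

This sketch omits all training (adversarial updates of the generator and both discriminators) and only shows how the two discriminators drive the two selection routes that the excerpt attributes to DGAAL.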
“…Moreover, promising research directions have been explored to extend DAL algorithms regarding the integration of different annotation granularities, abundant unlabeled data, and related supervision settings into the active learning pipeline, including multi-label [19], multi-view [20], multi-instance [10], multi-instance multi-label (M2AL) [21], multi-view multi-instance multi-label (M3AL) [22], and unsupervised [23], [14] AL schemes. Among them, more attention has been paid to two aspects: the automatic design of sample selection strategies [24] and the alleviation of various problems, namely data-related problems such as confidence and insufficient labeled samples, model-related problems such as generalization ability, and domain-specific problems such as domain shift, the cold-start problem, and class imbalance.…”
Section: Active Learning For Deep Architectures
confidence: 99%
“…Thus the superiority of BLPM is verified. We compared the performance of IEAAL with the current mainstream methods, including Random, Monte Carlo dropout (MC dropout) [52], Core-set [38], LLAL [19], VAAL [39], SRAAL [18], ARAL [25] and DGAAL [53]. Figure 7a shows the performance on CIFAR-100.…”
Section: Ablation Study
confidence: 99%