2021
DOI: 10.48550/arxiv.2111.12880
Preprint

Active Learning at the ImageNet Scale

Citations: Cited by 6 publications (10 citation statements)
References: 0 publications
“…Ensemble active learning. Active learning iterates between training a model and selecting new inputs to be labeled [13,14,15,16,17]. In contrast, we focus on data pruning: one-shot selection of a data subset sufficient to train to high accuracy from scratch.…”
Section: Pruning Metrics: Not All Training Examples Are Created Equal
Citation type: mentioning (confidence: 99%)
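To make the contrast drawn in the quote above concrete, here is a minimal toy sketch of the two selection regimes. The pool size, budget, and random "difficulty" scores are illustrative assumptions, not the cited papers' actual methods:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy pool of 1,000 examples with a stand-in per-example "difficulty"/"uncertainty" score.
    pool_size, budget, rounds = 1000, 100, 5
    difficulty = rng.random(pool_size)

    # Active learning (as in the cited works): iterate between (re)training and labeling.
    labeled = set()
    for r in range(rounds):
        # stand-in for retraining a model on the labeled set and rescoring the remaining pool
        scores = difficulty + 0.05 * rng.standard_normal(pool_size)
        candidates = [i for i in np.argsort(-scores) if i not in labeled]
        labeled.update(candidates[: budget // rounds])   # query labels for the top-scoring inputs

    # Data pruning (the focus of the citing paper): score once, keep a subset, train from scratch.
    keep_fraction = 0.8
    kept = np.argsort(-difficulty)[: int(keep_fraction * pool_size)]

    print(f"active learning labeled {len(labeled)} examples over {rounds} rounds")
    print(f"data pruning kept {len(kept)} of {pool_size} examples in one shot")

The distinction the quote makes is thus procedural: active learning repeatedly retrains and rescans the pool to decide what to label next, whereas data pruning commits to a single subset before training from scratch begins.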
“…In order to avoid cherry-picking classes while at the same time making sure that we are visualizing images for very different classes, we here show extreme images for classes 100, ..., 500, 600 while leaving out classes 0 and 400 (which would have been part of the visualization) since those classes almost exclusively consist of images containing people (0: tench, 400: academic gown). The extremal images are shown in Figures 9, 10, 11, 12, 13, 14, 15, 16, 17, 18. With larger pruning fractions, class balance decreases, both when pruning "easy" images (turquoise) and when pruning "hard" images (orange).…”
Section: Additional Scaling Experiments
Citation type: mentioning (confidence: 99%)
“…State-of-the-art AL algorithms have been combined with other advanced machine learning approaches to adapt to the properties of specific tasks. For example, Wei et al used semi-supervised learning to recognize unlabeled instances automatically [28]; Bengar et al and Emam et al integrated self-supervised learning in the AL framework [5,8]. However, the performance of the algorithm is strongly dependent on a randomly selected initial set.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…Inspired by the success of deep learning in the passive regime, active learning with neural networks has been extensively explored in recent years (Sener and Savarese, 2018; Ash et al, 2019; Citovsky et al, 2021; Ash et al, 2021; Kothawade et al, 2021; Emam et al, 2021; Ren et al, 2021). Great empirical performances are observed in these papers; however, rigorous label complexity guarantees have largely remained elusive (except in Karzand and Nowak (2020); Wang et al (2021), with limitations discussed before).…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…When the hypothesis class is a set of neural networks, the learner further benefits from the representation power of deep neural networks, which has driven the successes of passive learning in the past decade (Krizhevsky et al, 2012; LeCun et al, 2015). With these added benefits, deep active learning has become a popular research area, with empirical successes observed in many recent papers (Sener and Savarese, 2018; Ash et al, 2019; Citovsky et al, 2021; Ash et al, 2021; Kothawade et al, 2021; Emam et al, 2021; Ren et al, 2021). However, due to the difficulty of analyzing a set of neural networks, rigorous label complexity guarantees for deep active learning have remained largely elusive.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)