2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021
DOI: 10.1109/iccv48922.2021.01053
Low-Shot Validation: Active Importance Sampling for Estimating Classifier Performance on Rare Categories

Cited by 4 publications (2 citation statements)
References 14 publications
“…When we are interested in the performance on a rare category, the estimation of predictive metrics, e.g. F-scores (or F-measures) [115], becomes more challenging and hence requires more efficient approaches than naive sampling [95]. Consider that the input of the test classification model, denoted by X, has a fixed probability distribution across the population of samples.…”
Section: Problem Setting
confidence: 99%
“…(Bénédict et al. 2022) try to maximize a surrogate loss in place of the F1 score. There has been some work in active sampling to estimate the F1 score using optimal subsampling (Sawade, Landwehr, and Scheffer 2010) or iterative importance sampling (Poms et al. 2021); however, both the motivation and guarantees differ from the coreset guarantees.…”
Section: Introduction
confidence: 99%