2020 · DOI: 10.1007/978-3-030-58598-3_11
Solving Long-Tailed Recognition with Deep Realistic Taxonomic Classifier

Cited by 24 publications (16 citation statements) · References 41 publications
“…Long-tailed object detector with classification equilibrium (LOCE) [33] proposed to use the mean classification prediction score (i.e., running prediction probability) to monitor model training on different classes, and guide memory-augmented feature sampling for enhancing tail-class performance.…”

The excerpt also carries a flattened survey table of long-tailed recognition methods and venues, reconstructed below (the first and last entries are truncated in the source):

Method                        Ref    Venue     Year
(truncated in source)         [67]   CVPR      2016
Focal loss                    [68]   ICCV      2017
Range loss                    [21]   ICCV      2017
CRL                           [69]   ICCV      2017
MetaModelNet                  [70]   NeurIPS   2017
DSTL                          [71]   CVPR      2018
CB                            [16]   CVPR      2019
Bayesian estimate             [72]   CVPR      2019
FTL                           [73]   CVPR      2019
Unequal-training              [74]   CVPR      2019
OLTR                          [15]   CVPR      2019
DCL                           [75]   ICCV      2019
Meta-Weight-Net               [76]   NeurIPS   2019
LDAM                          [18]   NeurIPS   2019
Decoupling                    [32]   ICLR      2020
LST                           [77]   CVPR      2020
BBN                           [48]   CVPR      2020
BAGS                          [78]   CVPR      2020
Domain adaptation             [28]   CVPR      2020
Equalization loss (ESQL)      [19]   CVPR      2020
DBM                           [22]   CVPR      2020
M2m                           [79]   CVPR      2020
LEAP                          [80]   CVPR      2020
IEM                           [81]   CVPR      2020
SimCal                        [34]   ECCV      2020
PRS                           [82]   ECCV      2020
Distribution-balanced loss    [37]   ECCV      2020
OFA                           [83]   ECCV      2020
LFME                          [84]   ECCV      2020
Deep-RTC                      [85]   ECCV      2020
Balanced Meta-Softmax         [86]   NeurIPS   2020
UNO-IC                        [87]   NeurIPS   2020
De-confound-TDE               [88]   NeurIPS   2020
SSP                           [89]   NeurIPS   2020
Logit adjustment              [14]   ICLR      2021
RIDE                          [17]   ICLR      2021
KCL                           [13]   ICLR      2021
LTML                          [90]   CVPR      2021
Equalization loss v2          …      (truncated)
Section: Re-sampling
Mentioning confidence: 99%
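The LOCE mechanism quoted above tracks a running mean of per-class prediction scores and uses it to steer sampling toward under-performing (tail) classes. Below is a minimal PyTorch sketch of that idea; the class name `ClassWiseScoreMonitor`, the momentum constant, and the `sampling_weights` rule are illustrative assumptions, not LOCE's exact formulation from [33].

```python
import torch

class ClassWiseScoreMonitor:
    """Tracks a running mean of the probability the model assigns to
    each class's ground-truth samples; low-scoring classes (typically
    tail classes) can then be sampled more aggressively."""

    def __init__(self, num_classes: int, momentum: float = 0.99):
        self.momentum = momentum
        self.mean_score = torch.zeros(num_classes)

    @torch.no_grad()
    def update(self, logits: torch.Tensor, labels: torch.Tensor) -> None:
        # Probability assigned to the ground-truth class of each sample.
        probs = logits.softmax(dim=1)
        gt_probs = probs[torch.arange(labels.size(0)), labels]
        for c in labels.unique():
            batch_mean = gt_probs[labels == c].mean()
            self.mean_score[c] = (
                self.momentum * self.mean_score[c]
                + (1.0 - self.momentum) * batch_mean
            )

    def sampling_weights(self) -> torch.Tensor:
        # Weakly-performing classes get larger weights, which could
        # drive memory-augmented feature sampling (assumed rule).
        return (1.0 - self.mean_score).clamp(min=1e-3)
```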
“…Realistic taxonomic classifier (RTC) [85] proposed to address class imbalance with hierarchical classification. Specifically, RTC maps images into a class taxonomic tree structure, where the hierarchy is defined by a set of classification nodes and node relations.…”
Section: Classifier Design
Mentioning confidence: 99%
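The quoted passage describes Deep-RTC's core structure: a taxonomy tree in which each internal node carries a classifier over its children. A minimal PyTorch sketch of that node-per-classifier layout follows; the toy `TAXONOMY`, the class names, and the way leaf scores are accumulated are illustrative assumptions, not the paper's exact architecture (which also allows stopping at intermediate nodes).

```python
import torch
import torch.nn as nn

# Toy two-level taxonomy, assumed for illustration only.
TAXONOMY = {
    "root": ["animal", "vehicle"],
    "animal": ["cat", "dog"],
    "vehicle": ["car", "truck"],
}
LEAVES = ["cat", "dog", "car", "truck"]

class TaxonomicClassifier(nn.Module):
    """Each internal node holds a linear classifier over its children;
    a leaf's log-probability is the sum of the node-level decisions
    along the root-to-leaf path."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.node_heads = nn.ModuleDict({
            node: nn.Linear(feat_dim, len(children))
            for node, children in TAXONOMY.items()
        })

    def leaf_log_probs(self, feats: torch.Tensor) -> torch.Tensor:
        # Accumulate log-probs top-down; TAXONOMY is listed parent-first,
        # so each node's score exists before its children are visited.
        log_probs = {"root": feats.new_zeros(feats.size(0))}
        for node, children in TAXONOMY.items():
            local = self.node_heads[node](feats).log_softmax(dim=1)
            for i, child in enumerate(children):
                log_probs[child] = log_probs[node] + local[:, i]
        return torch.stack([log_probs[leaf] for leaf in LEAVES], dim=1)

# usage: TaxonomicClassifier(64).leaf_log_probs(torch.randn(8, 64)) -> (8, 4)
```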
“…Hierarchical classification typically employs a label hierarchy in the form of a tree [8,6,5,22,1,18] or directed acyclic graph [7,12] that explicitly injects prior knowledge of the label relationships into the model. The relationship between the labels can be either 'IS-A' or not depending on the associated classification problem [1,8,7,6,5,22,18,3,2]. One could either define the label hierarchy manually with domain knowledge [1,12,18] or derive the hierarchy automatically [8,6,5,7,22] from a well established semantic lexical database, such as WordNet [16].…”
Section: Related Work
Mentioning confidence: 99%
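The passage above notes that label hierarchies can be derived automatically from WordNet's IS-A (hypernym) links. A short sketch of that derivation using NLTK's WordNet interface follows; the helper name `hypernym_chain` is an assumption for illustration, and real pipelines merge such chains across all dataset labels into a single tree or DAG.

```python
# Requires NLTK with the WordNet corpus installed: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def hypernym_chain(synset_name: str) -> list[str]:
    """Walk IS-A (hypernym) links from a synset up to the WordNet root."""
    synset = wn.synset(synset_name)
    chain = [synset]
    while synset.hypernyms():
        synset = synset.hypernyms()[0]  # follow the first IS-A parent
        chain.append(synset)
    return [s.name() for s in chain]

# e.g. hypernym_chain("dog.n.01") passes through 'canine.n.02',
# 'carnivore.n.01', ... up to 'entity.n.01'.
```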