2023
DOI: 10.2139/ssrn.4364285
Preprint

Hierarchical Block Aggregation Network for Long-Tailed Visual Recognition

Cited by 4 publications (5 citation statements)
References 0 publications
“…1) Ensemble Learning: Ensemble learning is a commonly used strategy that averages (sometimes with weights) the outputs of multiple models trained with different data proportions [217], data distributions [218], model architectures [219], and so on. In healthcare applications, missing data/input variables, and even entire sensor modalities, are common and are difficult to handle when developing appropriate machine learning models.…”
Section: F. Learn With Multiple Models
confidence: 99%
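
The excerpt above describes ensemble learning as (optionally weighted) averaging of the outputs of multiple models. Below is a minimal sketch of that averaging step, assuming each member produces per-class probabilities; the member outputs and weights are hypothetical, not from any cited system:

```python
# Minimal sketch of (weighted) ensemble averaging over class probabilities.
# All inputs here are hypothetical; real members would be separately trained
# models (e.g., on different data proportions or architectures).
import numpy as np

def ensemble_predict(prob_outputs, weights=None):
    """Average per-model probability arrays of shape (n_samples, n_classes)."""
    probs = np.stack(prob_outputs)             # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_outputs))   # plain unweighted average
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalize so weights sum to 1
    return np.tensordot(w, probs, axes=1)      # weighted mean over models

# Hypothetical outputs from three ensemble members on two samples.
p1 = np.array([[0.7, 0.3], [0.2, 0.8]])
p2 = np.array([[0.6, 0.4], [0.3, 0.7]])
p3 = np.array([[0.9, 0.1], [0.1, 0.9]])
avg = ensemble_predict([p1, p2, p3], weights=[0.5, 0.3, 0.2])
pred = avg.argmax(axis=1)
```
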
“…Much research has been done on the class-imbalance problem [15][16][17][18][19][20][40], and different solutions have been proposed, including under-sampling and over-sampling [41,42], reconciliation of the loss function [15,17,43,44], and learning paradigms such as self-supervised learning [16,45], transfer learning [18], ensemble learning [46,47], meta-learning [48], and metric learning [49]. All of these methods operate in a single-domain scenario and use data splits for all participants from the same domain, whereas we extend the data-heterogeneity problem to multiple domains and imbalanced classes in a federated learning (FL) environment.…”
Section: Related Work 2.1 Class-Imbalance and Label Distribution
confidence: 99%
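
Of the solution families listed in this excerpt, loss-function reconciliation is the simplest to illustrate. The sketch below shows inverse-frequency reweighting of cross-entropy, one standard instance of that family; it is not the method of any specific cited reference, and all names and data are illustrative:

```python
# Sketch of class-imbalance handling via loss reweighting: scale each
# sample's cross-entropy by the inverse frequency of its true class.
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """'Balanced' weights: rare classes get proportionally larger weights."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * np.maximum(counts, 1.0))

def weighted_cross_entropy(logits, labels, class_weights):
    """Mean cross-entropy, with each sample scaled by its class weight."""
    shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels]
    return float((class_weights[labels] * nll).mean())

# Hypothetical long-tailed toy data: class 0 is head, class 2 is tail.
labels = np.array([0, 0, 0, 0, 1, 1, 2])
weights = inverse_frequency_weights(labels, n_classes=3)
logits = np.random.randn(len(labels), 3)
loss = weighted_cross_entropy(logits, labels, weights)
```
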
“…Results on ImageNet-LT (top-1 accuracy, %):

Method                Overall  Many  Medium  Few
[10]                  43.7     64.3  37.1    8.2
LDAM-DRW [1]          49.8     60.4  46.9    30.7
Decouple-cRT [4]      47.3     58.8  44.0    26.1
Balanced Softmax [9]  51.4     62.2  48.8    29.8
LADE [16]             51.9     62.3  49.3    31.2
DisAlign [17]         52.2     60.8  50.4    34.7
Seesaw [21]           50.4     67.1  45.2    21.4
MARC [3]              52.…”
Section: Experiments on ImageNet-LT
confidence: 99%
“…Specifically, prior to any calibration efforts, it was observed that classes with a higher volume of images exhibited larger margins and logits. Conversely, classes with fewer images displayed reduced margins and logits [3]. Moreover, we found that uncalibrated margins and logits substantially detract from image classification performance.…”
Section: Introduction
confidence: 96%
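
The frequency-dependent logits described in this excerpt are often compensated post hoc by subtracting a multiple of the log class prior, a generic technique commonly known as logit adjustment. The sketch below shows that idea under assumed class counts; it is not necessarily the calibration used in [3] or in the indexed paper:

```python
# Sketch of post-hoc logit adjustment: head classes, which tend to receive
# inflated logits, lose their advantage after subtracting the log prior.
import numpy as np

def adjust_logits(logits, class_counts, tau=1.0):
    """Subtract tau * log(class prior) from each class's logit."""
    priors = class_counts / class_counts.sum()
    return logits - tau * np.log(priors)

# Hypothetical counts for a long-tailed training set (head -> tail).
counts = np.array([1000.0, 100.0, 10.0])
raw = np.array([[2.0, 1.8, 1.5]])          # raw logits favor the head class
adjusted = adjust_logits(raw, counts)
pred = adjusted.argmax(axis=1)             # here the tail class wins instead
```
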