2022
DOI: 10.1002/int.23068

One‐stage self‐distillation guided knowledge transfer for long‐tailed visual recognition

Abstract: Deep learning has achieved remarkable progress for visual recognition on balanced data sets but still performs poorly on real‐world long‐tailed data distributions. Existing methods mainly decouple the problem into two‐stage training, that is, representation learning followed by classifier training, or into multistage training based on knowledge distillation, resulting in many training steps and extra computation cost. In this paper, we propose a conceptually simple yet effective One‐stage Long‐tailed…
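
The abstract contrasts a one‐stage approach with the usual two‐stage decoupled pipeline and multistage distillation pipelines, but the method itself is truncated above. The sketch below therefore only illustrates the general idea of self‐distillation carried out within a single training stage, under stated assumptions: an EMA copy of the network acts as the soft teacher, and a class‐weighted cross‐entropy term accounts for the long‐tailed label distribution. All names and hyperparameters (`ema_decay`, `temperature`, `alpha`) are illustrative and not taken from the paper.

```python
# Minimal sketch of one-stage self-distillation for long-tailed recognition.
# Assumptions (not from the truncated abstract): the "teacher" is an EMA copy
# of the student updated inside the same training loop, and the objective is
# class-weighted cross-entropy plus a KL distillation term.
import torch
import torch.nn.functional as F


def one_stage_self_distill_step(student, teacher, optimizer, images, labels,
                                class_weights, ema_decay=0.999,
                                temperature=2.0, alpha=0.5):
    """Single optimization step: supervised loss + self-distillation loss."""
    student.train()
    logits = student(images)

    # Soft targets come from the EMA teacher, produced in the same stage,
    # so no separate pre-trained teacher or extra training stage is needed.
    with torch.no_grad():
        teacher_logits = teacher(images)

    # Class-weighted cross-entropy counteracts the long-tailed label prior.
    ce = F.cross_entropy(logits, labels, weight=class_weights)

    # KL divergence between softened student and teacher distributions.
    kd = F.kl_div(
        F.log_softmax(logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    loss = (1 - alpha) * ce + alpha * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Update the teacher as an exponential moving average of the student.
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema_decay).add_(p_s, alpha=1 - ema_decay)

    return loss.item()


# Example wiring (hypothetical backbone and data loader):
# student = MyBackbone(num_classes=100)
# teacher = copy.deepcopy(student)
# for p in teacher.parameters():
#     p.requires_grad_(False)
```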

Cited by 1 publication
References 30 publications