2023
DOI: 10.1109/tcyb.2022.3173356

Drop Loss for Person Attribute Recognition With Imbalanced Noisy-Labeled Samples

Abstract: Person attribute recognition (PAR) aims to simultaneously predict multiple attributes of a person. Existing deep learning-based PAR methods have achieved impressive performance. Unfortunately, these methods usually ignore the fact that different attributes have an imbalance in the number of noisy-labeled samples in the PAR training datasets, thus leading to suboptimal performance. To address this problem of imbalanced noisy-labeled samples, we propose a novel and effective loss, called drop loss, for PAR. In…
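The abstract is truncated before the method details, but a common way to realize a "drop"-style loss for noisy labels is the generic small-loss criterion: compute a per-attribute binary cross-entropy and ignore the highest-loss (most likely mislabeled) samples in each batch. The PyTorch sketch below shows only that generic pattern, with a hypothetical `drop_rate` knob; it is not necessarily the paper's exact drop loss.

```python
import torch
import torch.nn.functional as F

def drop_style_bce(logits, targets, drop_rate=0.2):
    """Per-attribute BCE that ignores the highest-loss samples in the
    batch (the generic 'small-loss' trick for noisy labels); drop_rate
    is a hypothetical knob, not a parameter taken from the paper."""
    per_elem = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")        # shape: (batch, attrs)
    batch = per_elem.size(0)
    keep = max(1, round(batch * (1.0 - drop_rate)))
    # Keep the `keep` smallest losses in each attribute column.
    kept, _ = torch.topk(per_elem, keep, dim=0, largest=False)
    return kept.mean()

logits = torch.randn(32, 26, requires_grad=True)  # e.g., 26 person attributes
targets = (torch.rand(32, 26) < 0.3).float()
drop_style_bce(logits, targets).backward()
```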

Cited by 3 publications (3 citation statements)
References 57 publications

“…First, imbalanced class distributions exist in facial attribute datasets (e.g., the imbalance ratios between the minority and the majority classes on the CelebA dataset reach up to 1:43). Second, many facial attributes, especially subjective attributes, have ambiguous annotations in these datasets (Yan et al., 2022). Third, some facial attributes may not be provided… 9, where 0.2% of the labeled training data on CelebA are used.…”
Section: Comparison With State-of-the-Art Methods
confidence: 99%
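As a concrete illustration of the imbalance ratios quoted above, here is a minimal sketch that computes the per-attribute minority-to-majority ratio from a binary attribute matrix. The random `labels` array is hypothetical stand-in data; real CelebA annotations use ±1 labels, so you would map −1 to 0 first.

```python
import numpy as np

# Hypothetical stand-in for a binary attribute matrix: rows = images,
# columns = attributes (CelebA ships +/-1 labels; map -1 -> 0 first).
rng = np.random.default_rng(0)
pos_rates = rng.uniform(0.02, 0.5, size=40)      # varied attribute frequencies
labels = (rng.random((10_000, 40)) < pos_rates).astype(int)

pos = labels.mean(axis=0)                        # positive rate per attribute
ratio = np.minimum(pos, 1 - pos) / np.maximum(pos, 1 - pos)
worst = ratio.argmin()
print(f"most imbalanced attribute: #{worst}, about 1:{1 / ratio[worst]:.0f}")
```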
“…Kang et al. [12] incorporated a kNN filter into the undersampling method in order to exclude noise samples in the minority class. Yan et al. [5] used the semantic relationships among the attributes of the problem itself to aid in the identification of noise. Indeed, kNN is a good preprocessing method for imbalanced classification as long as the noise does not interfere significantly with the results.…”
Section: Noise Filtering in Undersampling
confidence: 99%
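A minimal sketch of kNN-based noise filtering ahead of undersampling, in the spirit of the approach described above; the agreement rule and threshold here are assumptions, not the exact method of Kang et al. [12]. A sample is kept only when enough of its k nearest neighbors share its label.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_noise_filter(X, y, k=5, agree_thresh=0.5):
    """Keep a sample only if at least `agree_thresh` of its k nearest
    neighbors share its label (a common kNN-editing rule; the exact
    rule and threshold of Kang et al. [12] may differ)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                 # idx[:, 0] is the sample itself
    agreement = (y[idx[:, 1:]] == y[:, None]).mean(axis=1)
    return agreement >= agree_thresh          # boolean keep-mask

# Synthetic demo: filter noise first, then undersample the majority class.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (rng.random(1000) < 0.1).astype(int)      # class 1 is the minority
X[y == 1] += 3.0                              # give the minority its own cluster
keep = knn_noise_filter(X, y)
X, y = X[keep], y[keep]
maj, mino = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
sel = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
X_bal, y_bal = X[sel], y[sel]
```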
“…It is well known that imbalanced problems can be addressed via two lines of research: one is oversampling [5, 6], whereby datasets are balanced via the random generation of samples for the minority class, and the other is undersampling, whereby datasets are balanced by sampling only part of the majority class, thus reducing the number of selected samples [7-9]. Although both methods have yielded significant research results, we focus on the undersampling method in this paper.…”
Section: Introduction
confidence: 99%
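To make the two lines of research concrete, here is a minimal sketch of random oversampling and random undersampling for a binary problem; the data are synthetic and class 1 is assumed to be the minority.

```python
import numpy as np

def random_oversample(X, y, rng):
    """Duplicate minority samples (with replacement) until classes match."""
    maj, mino = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    extra = rng.choice(mino, size=len(maj) - len(mino), replace=True)
    idx = np.concatenate([maj, mino, extra])
    return X[idx], y[idx]

def random_undersample(X, y, rng):
    """Keep only a random majority-class subset of minority-class size."""
    maj, mino = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    idx = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (rng.random(1000) < 0.05).astype(int)    # class 1 assumed minority
print(np.bincount(random_oversample(X, y, rng)[1]))   # balanced upward
print(np.bincount(random_undersample(X, y, rng)[1]))  # balanced downward
```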