2019 IEEE International Conference on Multimedia and Expo (ICME)
DOI: 10.1109/icme.2019.00113

Multi-Label Image Recognition with Joint Class-Aware Map Disentangling and Label Correlation Embedding

Cited by 34 publications (6 citation statements) · References 18 publications

Citation statements (ordered by relevance):
“…For example, CPSD improves more on ResNet101 in comparison with ResNet101-TF and Q2L-R101. We attribute this to the fact that the more powerful approaches are much harder to trap in model overfitting. Even so, our method still introduces a considerable improvement.…”
[Table fragment interleaved in this excerpt — Methods / mAP / CF1 / OF1: MS-CMA [You et al., 2020] 61.4 / 60.5 / 73.8; SRN [Zhu et al., 2017] 62.0 / 58.5 / 73.4; CPCL [Zhou et al., 2021a] 62.3 / 59.2 / 73.0; CADM [Chen et al., 2019b] 62 (truncated)]
Section: Comparison With State-of-the-art Methods (mentioning)
confidence: 99%
“…Note that in the literature, OF1 and CF1 are more commonly used than the other metrics to evaluate models for MLIC. We compare our method with SRN [Zhu et al., 2017], CADM [Chen et al., 2019f], ML-GCN [Chen et al., 2019e], KSSNet [Liu et al., 2018], MS-CMA [You et al., 2020], MCAR [Gao and Zhou, 2021], SSGRL [Chen et al., 2019d], C-Trans [Lanchantin et al., 2021], ADD-GCN [Ye et al., 2020], ASL [Ridnik et al., 2021], MlTr-l [Cheng et al., 2022], Swin-L [Liu et al., 2021b], CvT-w24 [Wu et al., 2021] and Q2L-CvT.…”
Section: B2 MS-COCO (mentioning)
confidence: 99%
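The excerpt above refers to CF1 and OF1 without spelling out how they are computed. As a rough aid, the sketch below shows the standard per-class F1 (CF1) and overall F1 (OF1) computation used in the multi-label image recognition literature; the helper name `cf1_of1` and the NumPy-based layout are this note's own illustration, not code from the cited papers.

```python
import numpy as np

def cf1_of1(y_true, y_pred):
    """Per-class F1 (CF1) and overall F1 (OF1) for multi-label predictions.

    y_true, y_pred: binary arrays of shape (num_samples, num_labels),
    where y_pred is already thresholded (e.g. probabilities > 0.5).
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    eps = 1e-12

    # Per-class counts: true positives, predicted positives, actual positives.
    tp = (y_true & y_pred).sum(axis=0)
    pred_pos = y_pred.sum(axis=0)
    true_pos = y_true.sum(axis=0)

    # CF1: average precision and recall over classes, then combine.
    cp = (tp / (pred_pos + eps)).mean()
    cr = (tp / (true_pos + eps)).mean()
    cf1 = 2 * cp * cr / (cp + cr + eps)

    # OF1: pool counts over all classes before computing precision/recall.
    op = tp.sum() / (pred_pos.sum() + eps)
    orr = tp.sum() / (true_pos.sum() + eps)
    of1 = 2 * op * orr / (op + orr + eps)
    return cf1, of1
```

In the evaluation protocols these papers typically follow, the same helper can be applied either to all predicted labels above a fixed threshold or to the top-3 labels per image, which is how the "CF1/OF1" and "top-3" variants of the metric are usually reported.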
“…The research directions on multi-label classification can be roughly categorized into loss functions [28, 39, 53], training schemes [5, 52, 63], and classification heads [30, 40]. Besides, label correlation modeling [8, 9, 55, 57] and the utilization of region features [15, 36, 50] have proven effective for multi-label classification. In light of the challenge of annotating all ground-truth labels for an image, multi-label learning in the presence of missing labels (MLML) has also attracted much research attention [11, 20, 56, 61].…”
Section: Related Work (mentioning)
confidence: 99%