2023
DOI: 10.1109/tkde.2022.3232114

Robust Label and Feature Space Co-Learning for Multi-Label Classification


Cited by 8 publications (5 citation statements)
References 42 publications
“…The above theorem reveals the equivalence between (19) and (20), under some circumstances in (21). A summary of conclusions of Theorem 1 is shown in Table 2.…”
Section: Generalized ESTF and Theoretical Analysis (mentioning)
confidence: 72%
“…For single-view MLC experiments, we compare the proposed MLPC in the CP, TT and TR formats with three types of methods: MLC, PL and baseline. For MLC, we compare MLPC with state-of-the-art methods, including LLSF [10], GRRO [49], RLFSCL [20] and WRAP [44]. Both LLSF and WRAP learn label-specific features, while GRRO considers feature relevance, feature redundancy and label correlation simultaneously.…”
Section: Compared Methods (mentioning)
confidence: 99%
“…Additionally, classifiers that can develop a global model for a class perform worse when multi-label datasets are classified [5]. To concentrate on the decision limits of the classifier(s) in each area, clustering has previously been used to extract the distribution of data completely or separately for each class [6, 7]. In other words, one classifier is trained for each cluster of data after the initial clustering of the data.…”
Section: Introduction (mentioning)
confidence: 99%
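The last excerpt describes a cluster-then-classify scheme: partition the data first, then fit one multi-label classifier per cluster so each model only has to capture local decision boundaries. The sketch below is a minimal illustration of that idea, not the cited papers' exact method; the synthetic dataset, the choice of KMeans with k = 3, and the one-vs-rest logistic regression base learner are all assumptions made for the example.

```python
# Minimal sketch of "cluster first, then train one classifier per cluster".
# All modelling choices (KMeans, k=3, one-vs-rest logistic regression,
# synthetic data) are assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Synthetic multi-label data: X (features), Y (binary label indicator matrix).
X, Y = make_multilabel_classification(n_samples=600, n_features=20,
                                      n_classes=5, random_state=0)

# Step 1: cluster the feature space to capture the local data distribution.
k = 3
clusterer = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Step 2: train one multi-label classifier per cluster, so each model
# only needs to fit the decision boundaries inside its own region.
local_models = {}
for c in range(k):
    idx = clusterer.labels_ == c
    model = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    local_models[c] = model.fit(X[idx], Y[idx])

# Prediction: route each new sample to the classifier of its cluster.
def predict(X_new):
    assignments = clusterer.predict(X_new)
    preds = np.zeros((X_new.shape[0], Y.shape[1]), dtype=int)
    for c in range(k):
        mask = assignments == c
        if mask.any():
            preds[mask] = local_models[c].predict(X_new[mask])
    return preds

print(predict(X[:5]))
```

One design note on this sketch: routing every test sample to exactly one local model keeps inference cheap, at the cost of hard cluster boundaries; a soft-assignment variant would weight the predictions of several local classifiers instead.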