2015 IEEE International Conference on Data Mining
DOI: 10.1109/icdm.2015.41

Leveraging Implicit Relative Labeling-Importance Information for Effective Multi-label Learning

Abstract: In multi-label learning, each training example is represented by a single instance while associated with multiple labels, and the task is to predict a set of relevant labels for the unseen instance. Existing approaches learn from multi-label data by assuming equal labeling-importance, i.e., all the associated labels are regarded as relevant while their relative importance for the training example is not differentiated. Nonetheless, this assumption fails to reflect the fact that the importance degree of each…

Cited by 61 publications (29 citation statements) · References 26 publications
“…There have been some existing multi-label learning algorithms based on LE. In [23], a label propagation procedure over the training instances is used to construct label distributions from the logical multi-label data. In [24], the label manifold is explored to transfer the logical labels into real-valued labels.…”
Section: Related Work
confidence: 99%
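The label-propagation route to label enhancement mentioned in the quote above can be sketched compactly. The following is a minimal NumPy illustration, not the exact procedure of [23]: it builds a kNN similarity graph over the training instances and iteratively smooths the logical label matrix into per-instance label distributions. All function and parameter names here are illustrative.

```python
import numpy as np

def enhance_labels(X, Y, k=10, alpha=0.5, n_iters=50):
    """Propagate logical labels Y (n x l, entries in {0,1}) over a kNN
    graph built from features X (n x d) to obtain label distributions."""
    n = X.shape[0]
    # Pairwise squared distances and an RBF similarity matrix.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma = np.median(np.sqrt(sq)) + 1e-12
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Sparsify: keep each row's k largest similarities, then symmetrize.
    drop = np.argsort(-W, axis=1)[:, k:]
    for i in range(n):
        W[i, drop[i]] = 0.0
    W = np.maximum(W, W.T)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Fixed-point iteration F <- alpha * S F + (1 - alpha) * Y.
    F = Y.astype(float).copy()
    for _ in range(n_iters):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    # Row-normalize each instance's scores into a label distribution.
    F = np.clip(F, 0.0, None)
    return F / (F.sum(1, keepdims=True) + 1e-12)
```

For alpha in (0, 1) this iteration converges to the closed-form solution (I − αS)⁻¹Y up to scaling, after which row normalization yields the final distributions.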
“…(1); 6: Update U(t) via the IRWLS procedure; 7: t ← t + 1; 8: until convergence is reached; 9: Return U, Θ, and b.
…labels, the numerical labels should be divided into two sets, i.e., the relevant and irrelevant sets. Following [10] and [23], an extra virtual label y0 is added to the original label set, giving the extended label set Ȳ = Y ∪ {y0} = {y0, y1, …, yl}. In this paper, the logical value of y0 is set to 0.…”
Section: B. The Alternating Solution for the Optimization
confidence: 99%
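The role of the virtual label y0 in the quote above is to act as a per-instance threshold: once numerical label values are recovered, every label scoring above y0 is deemed relevant and the rest irrelevant. A minimal sketch under that reading (column 0 standing in for y0; names are illustrative):

```python
import numpy as np

def split_by_virtual_label(scores):
    """scores: (n, l+1) numerical label values whose column 0 is the
    virtual label y0. Returns an (n, l) logical matrix: 1 where a label
    scores above its instance's y0 (relevant), 0 otherwise (irrelevant)."""
    y0 = scores[:, :1]                      # per-instance threshold
    return (scores[:, 1:] > y0).astype(int)

# Example: the two instances get different thresholds from their y0.
scores = np.array([[0.0, 0.6, -0.2, 0.3],
                   [0.5, 0.1,  0.9, 0.4]])
print(split_by_virtual_label(scores))      # [[1 0 1], [0 1 0]]
```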
“…One common strategy adopted by existing approaches is to manipulate the label space Y, e.g., by exploiting correlations between labels or reducing the label-space dimension, while keeping an identical feature representation x for every instance to perform the classification task. Although many algorithms [45], [22], [15] follow this strategy, it may capture only part of the essence of multi-label learning; that is, it can be suboptimal because the specific characteristics of each label cannot be distinguished from one another.…”
Section: Introduction
confidence: 99%
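One concrete instance of the label-space-manipulation strategy the quote criticizes is label-space dimension reduction. As a hedged illustration in the spirit of principal label space transformation (PLST), not any specific algorithm cited above, the label matrix can be compressed by SVD, regressors fit on the compressed codes, and predictions decoded by round-off; function and parameter names are illustrative.

```python
import numpy as np

def plst_encode(Y, m):
    """Compress an (n, l) logical label matrix onto its top-m principal
    directions; a regressor is then fit from features onto the codes Z."""
    mean = Y.mean(0)
    _, _, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    V = Vt[:m].T                 # (l, m) projection basis
    return (Y - mean) @ V, V, mean

def plst_decode(Z, V, mean, threshold=0.5):
    """Map predicted codes back to logical labels by rounding."""
    return ((Z @ V.T + mean) > threshold).astype(int)
```

The quote's point is precisely that such schemes treat every label through one shared feature representation x, so label-specific characteristics are never modeled.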
“…The first one features label distributions that come from the data itself, with applications including pre-release rating prediction for movies (Geng and Hou 2015) and emotion recognition (Zhou, Xue, and Geng 2015), among others. The second one is characterized by label distributions that originate from prior knowledge, with applications including age estimation (Geng, Yin, and Zhou 2013) and head pose estimation (Geng and Xia 2014), among others. The third one covers label distributions learned from data automatically, with applications including label-importance-aware multi-label learning (Li, Zhang, and Geng 2015), beauty sensing (Ren and Geng 2017), and video parsing (Geng and Ling 2017), among others. The secret of LDL's success across such a variety of fields is that the explicit introduction of label ambiguity via label distributions boosts the performance of real-world applications.…”
Section: Introduction
confidence: 99%