2020
DOI: 10.1109/access.2020.2987922

A Structure-Induced Framework for Multi-Label Feature Selection With Highly Incomplete Labels

Abstract: Feature selection has shown significant promise in improving the effectiveness of multi-label learning by constructing a reduced feature space. Previous studies typically assume that label assignment is complete or partially complete; however, missing labels and unlabeled data commonly co-occur in real applications due to the high cost of manual annotation and label ambiguity. We call this the ''highly incomplete labels'' problem. Such label incompleteness severely damages the inherent …

Cited by 4 publications (6 citation statements) · References 32 publications
“…The feature space is built using the k-highest similarity neighborhoods to maintain the intrinsic local correlation information. To ensure feature space representation validity in recovering label structures, the weight matrix đť‘Š of the feature space similarity is defined as in the following Equation (2) [2,22]:…”
Section: Phase 1: Instance-Level Feature Space Similarity Weighting
confidence: 99%
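The quoted statement describes building a k-nearest-neighbor similarity graph over the feature space and weighting it to preserve local correlation structure. The paper's Equation (2) is not reproduced here; a minimal sketch is given below, assuming a heat-kernel weighting restricted to the k highest-similarity neighbors (the function name, the sigma parameter, and the kernel choice are illustrative assumptions, not the paper's exact definition).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_similarity_weights(X, k=5, sigma=1.0):
    """Build a k-NN feature-space similarity weight matrix W.

    Hypothetical heat-kernel weighting: W[i, j] = exp(-||x_i - x_j||^2 / sigma^2)
    if x_j is among the k nearest neighbors of x_i, and 0 otherwise;
    the matrix is symmetrized at the end. The cited paper's Equation (2) may differ.
    """
    n = X.shape[0]
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1 because each point is its own neighbor
    dist, idx = nn.kneighbors(X)
    W = np.zeros((n, n))
    for i in range(n):
        for d, j in zip(dist[i, 1:], idx[i, 1:]):     # skip the self-neighbor in column 0
            W[i, j] = np.exp(-(d ** 2) / (sigma ** 2))
    return np.maximum(W, W.T)                          # symmetrize so W[i, j] == W[j, i]
```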
“…Existing methods for handling missing labels in multilabel learning are built based on the assumption that missing label information of an instance can be propagated from its k-nearest neighbors [3,13]. Some of these methods use the first-order label correlation exploitation strategy which ignores label correlations [2][3][4].…”
Section: Unified Graph-Based Missing Label Propagation (UG-MLP)
confidence: 99%
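The quoted statement refers to the common assumption that an instance's missing labels can be propagated from its k-nearest neighbors. A generic graph-propagation sketch is shown below; the update rule, the alpha parameter, and the clamping step are standard label-propagation choices assumed for illustration, not the specific method of the cited works.

```python
import numpy as np

def propagate_missing_labels(W, Y, mask, n_iters=20, alpha=0.8):
    """Fill missing label entries by iterative propagation over a k-NN graph.

    W    : (n, n) similarity weights (e.g. from a k-NN weighting as sketched above)
    Y    : (n, q) label matrix; values at missing entries are ignored
    mask : (n, q) boolean, True where a label is observed
    alpha: trade-off between neighbor information and the observed labels

    Generic propagation sketch, not the exact update rule of the cited papers.
    """
    # Row-normalize W so each instance averages over its neighbors.
    S = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    observed = np.where(mask, Y, 0.0)
    F = observed.copy()
    for _ in range(n_iters):
        F = alpha * (S @ F) + (1 - alpha) * observed
        F = np.where(mask, Y, F)          # clamp observed labels to their known values
    return F
```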
“…With the increasing availability of multi-label data related to multiple labels in an instance, a great quantity of feature selection methods for multi-label learning are developed to reduce dimensions and improve learning performance [14][15][16][17]. These methods commonly can be divided into three categories: filter [18][19][20], wrapper [21,22] and embedded [23] methods, where the filter method is independent of the specific learner, and it has less computation cost and stronger generalization ability.…”
Section: Introduction
confidence: 99%
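The quoted passage distinguishes filter, wrapper, and embedded feature selection, noting that filter methods are learner-independent and cheap. As a hedged illustration of that point only, the sketch below ranks features by average mutual information with each label; the function name and the top_k parameter are assumptions, and this is not the method proposed in the cited paper.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def filter_rank_features(X, Y, top_k=50):
    """Rank features by average mutual information with the individual labels.

    A simple filter-style criterion: it needs no downstream learner and is
    cheap to compute, which is why filter methods are noted for low cost and
    strong generalization. Illustrative only.
    """
    scores = np.zeros(X.shape[1])
    for j in range(Y.shape[1]):                       # score against each binary label
        scores += mutual_info_classif(X, Y[:, j], discrete_features=False)
    scores /= Y.shape[1]                              # average over labels
    order = np.argsort(scores)[::-1]                  # highest score first
    return order[:top_k], scores
```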