2018
DOI: 10.1016/j.asoc.2018.04.016
Local sets for multi-label instance selection

Cited by 25 publications (9 citation statements)
References 33 publications
“…The research conducted in [48] is also of interest for investigating the impact of instance selection on the underlying structure of a dataset by analyzing the distribution of sample types. However, instance selection has not been extensively researched in the MLL domain, and to date only a few methods have been made available [26][27][28]. As for LDL, we have not been able to find any studies to date.…”
Section: Prototype Selection and Label-specific Feature Learning
Citation type: mentioning (confidence: 98%)
“…A few methods are available to approach instance selection in the MLL domain [26][27][28], but, to the best of our knowledge, there are no studies concerning instance or prototype selection reported in LDL. With regard to feature selection, most LDL algorithms are built on a simple feature space where all features are used to predict all the labels.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Local sets for multi-label instance selection [2] is primarily motivated by the concept of local sets given by Brighton and Mellish [25]. The authors proposed two techniques for adapting local sets to multi-label data.…”
Section: ML-kNN
Citation type: mentioning (confidence: 99%)
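The local-set concept referenced in the quotation above is simple to state: the local set of an instance x is the set of training instances lying strictly closer to x than x's nearest enemy (its nearest neighbour with a different class), i.e. inside the largest hypersphere centred on x that contains no other class. A minimal single-label sketch follows; the cited paper's contribution is the adaptation of this notion to multi-label data, which this sketch does not reproduce, and the function name and toy data are illustrative assumptions.

```python
import numpy as np

def local_sets(X, y):
    """Local set of each instance (Brighton & Mellish): the instances
    lying strictly closer to x than x's nearest enemy."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))      # pairwise Euclidean distances
    sets = []
    for i in range(len(X)):
        enemies = np.flatnonzero(y != y[i])       # instances of a different class
        radius = dist[i, enemies].min() if enemies.size else np.inf
        members = np.flatnonzero(dist[i] < radius)
        sets.append(set(members) - {i})           # exclude x itself
    return sets

# Toy usage: two well-separated classes.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
y = np.array([0, 0, 0, 1, 1])
for i, ls in enumerate(local_sets(X, y)):
    print(i, sorted(ls))
```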
“…The performance of the selected sets is evaluated using some traditional and state-of-the-art techniques. These include MLkNN [4], BRkNN [3], RAkEL [29], HOMER [24], IBLR [30], LAMLKNN [6], HDLSSm [2] and HDLSBo [2]. Most of them are based on lazy learning, while others are based on space partitioning. With the exception of LAMLKNN, HDLSSm and HDLSBo, all approaches are available in Mulan.…”
Section: Experimental Setup
Citation type: mentioning (confidence: 99%)
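Several of the baselines named in this quotation (MLkNN, BRkNN, RAkEL, HOMER, IBLR) ship with the Java library Mulan. For readers working in Python, a roughly equivalent setup can be sketched with the MLkNN re-implementation from scikit-multilearn; this is a substitute for the quoted Mulan setup, not the cited study's configuration, and the synthetic data and the choice k=10 are assumptions for illustration.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.metrics import hamming_loss
from skmultilearn.adapt import MLkNN

# Synthetic multi-label data: Y is a binary label-indicator matrix.
X, Y = make_multilabel_classification(n_samples=200, n_features=10,
                                      n_classes=5, random_state=42)
X_train, Y_train, X_test, Y_test = X[:150], Y[:150], X[150:], Y[150:]

clf = MLkNN(k=10)                 # k nearest neighbours (assumed value)
clf.fit(X_train, Y_train)
pred = clf.predict(X_test)        # returns a sparse label-indicator matrix

print("Hamming loss:", hamming_loss(Y_test, pred.toarray()))
```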
“…For instance, Multi-Editing Condensing Nearest Neighbor [25] first applies the Multi-Editing technique to reduce the amount of noise and then applies Condensing, keeping mostly the relevant prototypes. The Decremental Reduction Optimization Procedure [26] orders the prototypes according to the distance to their nearest neighbours and then, starting from the furthest ones, removes prototypes as long as they do not affect the accuracy. The Iterative Case Filtering [27] bases its selection on the coverage and reachability premises to choose a prototype subset able to maximize the accuracy; it has recently been extended to deal with multi-label classification [28]. In addition, Evolutionary Algorithms have also been adapted to perform PS [29,30].…”
Section: Common Extensions To This Technique Are the Repeated-editing
Citation type: mentioning (confidence: 99%)
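The coverage and reachability premises that Iterative Case Filtering relies on can be made concrete: reachability of x is its local set, coverage of x is the set of instances whose local sets contain x, and ICF repeatedly drops any instance whose reachability is larger than its coverage. The sketch below covers the filtering loop only, assuming the local_sets helper from the earlier snippet is in scope; classic ICF is usually preceded by an editing (noise-removal) pass, which is omitted here, and the multi-label extension in [28] is not reproduced.

```python
import numpy as np

def icf_filter(X, y):
    """Iterative Case Filtering sketch: drop every instance whose
    reachability (its local set) exceeds its coverage (the instances
    whose local sets contain it); repeat until no instance is dropped."""
    keep = np.arange(len(X))
    while True:
        ls = local_sets(X[keep], y[keep])             # reachability sets
        reach = np.array([len(s) for s in ls])
        cover = np.array([sum(i in ls[j] for j in range(len(keep)))
                          for i in range(len(keep))])
        drop = reach > cover
        if not drop.any() or drop.all():              # stop when stable
            break
        keep = keep[~drop]
    return keep                                       # indices retained

# Usage on the toy data from the local_sets example:
print(icf_filter(X, y))
```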