2014
DOI: 10.1007/s10846-014-0144-4

Lazy Multi-label Learning Algorithms Based on Mutuality Strategies

Abstract: Lazy multi-label learning algorithms have become an important research topic within the multi-label community. These algorithms usually consider the set of standard k-Nearest Neighbors of a new instance to predict its labels (multi-label). The prediction is made by following a voting criterion within the multi-labels of the set of k-Nearest Neighbors of the new instance. This work proposes the use of two alternative strategies to identify the set of these examples: the Mutual and Not Mutual Nearest Neighbors rules…
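The k-NN-with-mutuality scheme sketched in the abstract can be illustrated compactly. Below is a minimal, hypothetical Python sketch, not the authors' implementation: a training example votes on the new instance's labels only if the new instance is, in turn, among that example's own k nearest neighbors (the Mutual Nearest Neighbors rule). The names mutual_knn_predict and threshold, and the fallback to the standard kNN set when no mutual neighbor exists, are assumptions for illustration.

```python
import numpy as np

def knn_indices(X, x, k):
    """Indices of the k rows of X closest to x (Euclidean distance)."""
    dists = np.linalg.norm(X - x, axis=1)
    return np.argsort(dists)[:k]

def mutual_knn_predict(X_train, Y_train, x_new, k=5, threshold=0.5):
    """Predict a binary label vector for x_new by voting over the
    mutual k nearest neighbors (illustrative sketch, not the paper's code).

    X_train: (n, d) feature matrix; Y_train: (n, q) binary label matrix.
    """
    candidates = knn_indices(X_train, x_new, k)
    # Mutuality check: x_new (index n in the augmented matrix) must fall
    # inside each candidate's own k-neighborhood; k + 1 accounts for the
    # candidate itself, which is always its own nearest neighbor.
    X_aug = np.vstack([X_train, x_new])
    n = len(X_train)
    mutual = [i for i in candidates
              if n in knn_indices(X_aug, X_train[i], k + 1)]
    if not mutual:
        # Assumed fallback: revert to the standard kNN set when the
        # mutual set is empty.
        mutual = list(candidates)
    votes = Y_train[mutual].mean(axis=0)  # per-label vote fraction
    return (votes >= threshold).astype(int)
```

Here a label is assigned when at least half of the mutual neighbors carry it. The complementary Not Mutual rule, also named in the abstract, would select candidates failing the mutuality check; its exact definition lies beyond the truncated abstract above.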

Cited by 16 publications (17 citation statements) · References 13 publications (19 reference statements)

“…Two methods that exemplify this category are Binary Relevance k Nearest Neighbor (BRkNN) [14] and Multi-label Mutual k Nearest Neighbor (ML-MUT) [1].…”
Section: Multi-label Learning Methods
Mentioning · confidence: 99%
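For contrast with the mutuality strategy sketched above, BRkNN, as named in this statement, applies one independent kNN vote per label. A hedged sketch of that idea, not the reference implementation:

```python
import numpy as np

def brknn_predict(X_train, Y_train, x_new, k=5):
    """Binary Relevance kNN sketch: each label column is decided by an
    independent majority vote among the standard k nearest neighbors.

    X_train: (n, d); Y_train: (n, q) binary labels; returns a (q,) 0/1 vector.
    """
    dists = np.linalg.norm(X_train - x_new, axis=1)
    nn = np.argsort(dists)[:k]
    return (Y_train[nn].mean(axis=0) > 0.5).astype(int)
```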
“…Hamming loss calculates the percentage of misclassified labels, that is, a sample associated with a wrong label or a true label that is not predicted (Cherman, Spolaôr, Valverde‐Rebaza, & Monard). It is defined as

$$\text{Hamming loss}(h, T) = \frac{1}{p} \sum_{i=1}^{p} \frac{|Y_i \,\Delta\, Z_i|}{|L|},$$

where $\Delta$ is the symmetric difference between two sets, $Y_i$ and $Z_i$ are the true and predicted label sets of instance $i$, $p$ is the number of instances, and $L$ is the label space. Hamming loss thus measures the fraction of labels whose relevance is not predicted correctly.…”
Section: Setup and Benchmarking
Mentioning · confidence: 99%
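The formula above translates directly into code. A minimal sketch, assuming each instance's true and predicted labels are Python sets (whose ^ operator is exactly the symmetric difference $\Delta$):

```python
def hamming_loss(Y_true, Z_pred, num_labels):
    """Mean fraction of labels whose relevance is predicted incorrectly.

    Y_true, Z_pred: lists of label sets; num_labels: size of the label space L.
    """
    p = len(Y_true)
    total = sum(len(Yi ^ Zi) for Yi, Zi in zip(Y_true, Z_pred))
    return total / (p * num_labels)

# Example: 2 instances over a label space of 4 labels.
Y = [{0, 1}, {2}]       # true label sets
Z = [{1}, {2, 3}]       # predicted label sets
print(hamming_loss(Y, Z, num_labels=4))  # (1 + 1) / (2 * 4) = 0.25
```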
“…Afterwards, any SL classifier can be applied to the constructed SL dataset(s), and the results are then transformed back into the ML representation. Such methods can be classified into five groups: binary relevance (BR) (Boutell et al., 2004), methods that combine labels, such as label powerset (LP) (Cherman, Monard, & Metz, 2011), pairwise methods, such as calibrated label ranking (Fürnkranz, Hüllermeier, Mencía, & Brinker, 2008), the select family (Chen, Yan, Zhang, Chen, & Yang, 2007), and ensemble methods, such as random-k-label-sets (RAKEL) (Tsoumakas & Vlahavas, 2007) and classifier chains (CC) (Read, Pfahringer, Holmes, & Frank, 2011). Commonly encountered problem transformation methods are explained in the following:…”
Section: Problem Transformation
Mentioning · confidence: 99%
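Two of the transformations named above reduce to simple dataset rewrites. A minimal sketch, assuming labels are given as one set per instance; the function names are illustrative, not taken from any of the cited papers:

```python
def binary_relevance(Y, num_labels):
    """BR: one binary target vector per label, trained independently."""
    return [[int(label in Yi) for Yi in Y] for label in range(num_labels)]

def label_powerset(Y):
    """LP: each distinct label set becomes one multiclass target."""
    classes, targets = {}, []
    for Yi in Y:
        key = frozenset(Yi)
        targets.append(classes.setdefault(key, len(classes)))
    return targets, classes

Y = [{0, 1}, {2}, {0, 1}]
print(binary_relevance(Y, num_labels=3))  # [[1, 0, 1], [1, 0, 1], [0, 1, 0]]
print(label_powerset(Y)[0])               # [0, 1, 0]
```

Any single-label classifier can then be trained on each BR column, or on the LP targets, and the predictions mapped back to label sets.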
“…We transform the diagnosis problem into a symptom-syndrome classification problem, which is the data mining classification problem in question. We first analyze classification algorithms based on statistical theory, Naive Bayes and weighted bipartite graphs [2][3][4][5][6][7], then introduce improved SVM algorithms [8][9] and ProSVM [10]. Finally, we combine the advantages of the four classification methods and construct an integrated model for diagnosis. In this paper, the proposed algorithms are used to design and verify a classification model for hypertension symptoms, providing technical support for the diagnosis of hypertension.…”
Section: Introduction
Mentioning · confidence: 99%
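The quoted passage combines four base classifiers into an integrated model but does not state the combination rule in this excerpt. A minimal, hypothetical sketch of one common choice, plain majority voting over the base predictions:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most frequent prediction among the base classifiers.

    predictions: one predicted class per base classifier for a single sample.
    """
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs of four base classifiers for one patient record.
print(majority_vote(["syndrome_A", "syndrome_A", "syndrome_B", "syndrome_A"]))
# -> syndrome_A
```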