2010 Ninth International Conference on Grid and Cloud Computing
DOI: 10.1109/gcc.2010.23
A New Classification Algorithm Using Mutual Nearest Neighbors

Cited by 28 publications (21 citation statements)
References 14 publications
“…Figure 16 compares the AUC results of iNNE and LOF with different proportions of data size used for training on the 3 data sets [32][33][34]. The result in Figure 16 shows that iNNE obtains a stable AUC on the 3 data sets regardless of the training data size.…”
Section: Performance on Benchmark Data Sets (mentioning)
confidence: 99%
See 1 more Smart Citation
“…Figure 16 compares the AUC results of iNNE with LOF with different proportions of data size for training on the 3 data sets. [32][33][34] The result in Figure 16 shows that iNNE obtains a stable AUC on the 3 data sets regardless of the training data size.…”
Section: Performance On Benchmark Data Setsmentioning
confidence: 99%
“…This is because k for k-NN-based algorithms usually needs to be adjusted for different data sizes [32][33][34].…”
Section: Performance on Benchmark Data Sets (mentioning)
confidence: 99%
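The sensitivity of k to data size mentioned in this statement is easy to see in a minimal k-NN majority-vote classifier. This is a generic sketch, not the mutual-nearest-neighbor algorithm of the cited paper; the training points, labels, and query are invented for illustration:

```python
import math
from collections import Counter

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest training points."""
    neighbors = sorted(train, key=lambda pl: math.dist(pl[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Toy data: two well-separated clusters of two points each.
train = [((0.0, 0.0), "a"), ((0.0, 1.0), "a"),
         ((5.0, 5.0), "b"), ((5.0, 6.0), "b")]
print(knn_predict(train, (0.2, 0.2), k=3))  # "a": two of the 3 nearest are "a"
```

With only two points per class, k = 3 is already the largest usable value; on a larger sample the same k would look far too local, which is why k must be re-tuned as the data size changes.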
“…When the null hypothesis is rejected by the Friedman test, at the 95% confidence level, we can proceed with a post-hoc test to detect which differences among the methods are significant [5]. We ran the Nemenyi multiple comparison test, comparing all algorithms, where the Nemenyi test points out whether there is a significant difference between the algorithms involved in the experimental evaluation, whenever their average ranks differ by at least the critical difference 6 . In all cases, the null hypothesis was rejected by the Friedman test and the Nemenyi test was used.…”
Section: Results (mentioning)
confidence: 99%
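The Friedman-then-Nemenyi procedure described in this statement can be sketched in a few lines of plain Python. The algorithm names and scores below are invented placeholders, not results from the paper; q_0.05 = 2.343 is the Nemenyi critical value for comparing three algorithms at the 95% level:

```python
import math

# Per-data-set scores for three hypothetical algorithms (higher is better).
scores = {
    "algo_a": [0.91, 0.88, 0.93, 0.90, 0.87],
    "algo_b": [0.85, 0.83, 0.89, 0.84, 0.82],
    "algo_c": [0.80, 0.84, 0.81, 0.79, 0.78],
}
names = list(scores)
k = len(names)                          # number of algorithms
n = len(next(iter(scores.values())))    # number of data sets

# Average rank of each algorithm across data sets (rank 1 = best).
rank_sum = {name: 0 for name in names}
for i in range(n):
    ordered = sorted(names, key=lambda m: -scores[m][i])
    for rank, name in enumerate(ordered, start=1):
        rank_sum[name] += rank
avg_rank = {name: rank_sum[name] / n for name in names}

# Friedman statistic (no tied ranks in this toy example).
chi2_f = (12 * n / (k * (k + 1))) * (
    sum(r * r for r in avg_rank.values()) - k * (k + 1) ** 2 / 4
)

# Nemenyi critical difference: two algorithms differ significantly
# when their average ranks differ by at least cd.
cd = 2.343 * math.sqrt(k * (k + 1) / (6 * n))
print(avg_rank, round(chi2_f, 2), round(cd, 2))
```

Here algo_a's average rank beats algo_c's by more than the critical difference, so the Nemenyi test would flag that pair as significantly different.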
“…In this work, we propose the use of the following two other strategies, which have already been used for single-label learning [6], but not for multi-label learning, to identify the set of these examples:…”
Section: Proposed Algorithms (mentioning)
confidence: 99%
“…However, the efficiency of these algorithms can be improved using resources such as special data structures (Liu et al., 2010). MMP: The Multi-class Multi-label Perceptron, proposed by Crammer et al. (2003), is an algorithm from the family of label rankers based on the perceptron (Haykin, 1998; Minsky and Papert, 1969). The MMP creates one perceptron per label, but the weight update for each perceptron is performed in such a way as to obtain a perfect ranking of all labels.…”
Section: Multi-label AdaBoost (unclassified)