2010
DOI: 10.1007/978-3-642-17316-5_11
Nearest Neighbour Distance Matrix Classification

Cited by 2 publications (4 citation statements)
References 8 publications
“…the proposed GA wrapper-based feature selection (GA+NNDM) using a random initial population. Detailed results of the NNDM classification experiments using Weka, 1SNNDM, kSNNDM and other algorithms can be found in the previous work [6]. The results suggest that applying GA-based wrapper feature selection can significantly improve the classification accuracy of the proposed NNDM.…”
Section: Experimental Design and Results
confidence: 85%
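The GA wrapper scheme described in this excerpt — evolving binary feature masks and scoring each mask by the wrapped classifier's accuracy — can be sketched as below. The toy dataset, leave-one-out 1-NN fitness, and all GA parameters here are illustrative assumptions, not the configuration used with NNDM in [6]:

```python
import random

random.seed(0)

# Toy data: features 0 and 1 are informative, features 2-5 are noise.
# This synthetic setup is an assumption for illustration only.
def make_data(n=60):
    X, y = [], []
    for i in range(n):
        label = i % 2
        informative = [label + random.gauss(0, 0.3), label - random.gauss(0, 0.3)]
        noise = [random.gauss(0, 1.0) for _ in range(4)]
        X.append(informative + noise)
        y.append(label)
    return X, y

def one_nn_accuracy(X, y, mask):
    """Wrapper fitness: leave-one-out accuracy of 1-NN on the selected features."""
    feats = [j for j, keep in enumerate(mask) if keep]
    if not feats:
        return 0.0
    correct = 0
    for i in range(len(X)):
        best, best_d = None, float("inf")
        for k in range(len(X)):
            if k == i:
                continue
            d = sum((X[i][j] - X[k][j]) ** 2 for j in feats)
            if d < best_d:
                best, best_d = k, d
        correct += (y[best] == y[i])
    return correct / len(X)

def ga_select(X, y, n_feats, pop_size=12, gens=15, p_mut=0.1):
    """Simple GA wrapper: bit-mask individuals, elitist selection,
    one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_feats)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda m: one_nn_accuracy(X, y, m), reverse=True)
        elite = scored[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_feats)
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda m: one_nn_accuracy(X, y, m))

X, y = make_data()
best = ga_select(X, y, n_feats=6)
```

The fitness call wraps the same classifier that will be used downstream, which is what distinguishes a wrapper from a filter method; any classifier could be substituted for the 1-NN here.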
“…It consists of two competing terms: the first pulls target neighbours closer, while the second pushes differently labelled examples away from the target instance. The basic implementation of NNDM is discussed in [6].…”
Section: The NNDM Algorithm
confidence: 99%
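The excerpt does not spell out the objective itself, but a loss of this general pull/push shape can be illustrated as follows. The margin, the weighting `mu`, and the hinge form are assumptions chosen for clarity, not the exact NNDM formulation (which is given in [6]):

```python
def sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def pull_push_objective(X, y, target_neighbours, margin=1.0, mu=0.5):
    """Illustrative two-term objective: a 'pull' term shrinks distances to
    same-class target neighbours; a 'push' hinge term penalises differently
    labelled examples that come within a margin of a target neighbour."""
    pull = push = 0.0
    for i, neighbours in target_neighbours.items():
        for j in neighbours:
            d_ij = sq_dist(X[i], X[j])
            pull += d_ij          # first term: pull target neighbours closer
            for k in range(len(X)):
                if y[k] != y[i]:  # second term: push differently labelled points away
                    push += max(0.0, margin + d_ij - sq_dist(X[i], X[k]))
    return (1 - mu) * pull + mu * push

# Two same-class points and one far-away impostor: the hinge is inactive.
X = [[0.0, 0.0], [0.1, 0.0], [2.0, 0.0]]
y = [0, 0, 1]
loss = pull_push_objective(X, y, target_neighbours={0: [1]})  # → 0.005
```

Because the terms compete, minimising the objective trades off compactness of each neighbourhood against separation from other classes; moving the impostor closer to the target instance activates the hinge and raises the loss.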
“…In order to create the benchmark pool, publicly available dataset repositories were examined, such as UCI [35], KEEL [36], UCR Time Series [37], the NIPS Feature Selection Challenge [38], and datasets previously used in multiclass-imbalance publications (IEEE, ACM, Springer, Science Direct, etc.). Sixteen datasets were chosen for multiclass imbalanced data across 5 domains; example sizes vary from 100 to 50,000, feature counts range from fewer than 10 to 100, and imbalance ratios range from 1:2 to 1:4559.…”
Section: Experimental Design and Results
confidence: 99%
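The imbalance ratios quoted (1:2 to 1:4559) are conventionally the majority-to-minority class-count ratio; whether the cited work uses this exact convention for the multiclass case is an assumption. A minimal helper under that assumption:

```python
from collections import Counter

def imbalance_ratio(labels):
    """Majority:minority class-count ratio (assumed convention);
    a ratio of 2.0 corresponds to '1:2' in the text above."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

ratio = imbalance_ratio([0] * 10 + [1] * 5 + [2] * 2)  # → 5.0
```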