2017
DOI: 10.14569/ijacsa.2017.080946

Data Distribution Aware Classification Algorithm based on K-Means

Abstract: …To demonstrate the improvement of the proposed method, several experiments were carried out using different real datasets. The results, achieved after extensive experiments, show that the proposed algorithm improves the classification accuracy of K-Means. The achieved performance was also compared against several recent classification studies based on different classification schemes.

Cited by 4 publications (6 citation statements) | References 12 publications
“…The only dataset where the proposed Z-KNN algorithm is not performing better than the competitors is the Wine dataset. From the results, which were observed in [17], it can be deduced that the low performance of the Z-KNN can be explained by the data distribution features of the Wine dataset that can be remedied by introducing the variance effect contribution to the similarity analysis.…”
Section: The Results
confidence: 99%
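The statement above suggests remedying Z-KNN's weakness on the Wine dataset by adding a variance contribution to the similarity analysis. One way to read that idea is a distance measure in which each feature is down-weighted by its variance, so that noisy, high-variance features dominate the similarity score less. The sketch below is a hypothetical illustration of that general idea, not the paper's exact formulation; the function name and weighting scheme are assumptions.

```python
import numpy as np

def variance_weighted_distance(x, y, feature_variance):
    """Euclidean distance where each feature is down-weighted by its variance.

    High-variance (noisy) features contribute less to the similarity score.
    This is an illustrative weighting, not the formulation from the paper.
    """
    weights = 1.0 / (feature_variance + 1e-12)  # guard against zero variance
    return np.sqrt(np.sum(weights * (x - y) ** 2))

# Toy example: feature 1 has far higher variance than feature 0, so its
# large raw difference (10.0) is scaled down before entering the distance.
x = np.array([1.0, 10.0])
y = np.array([2.0, 20.0])
variances = np.array([1.0, 100.0])
d = variance_weighted_distance(x, y, variances)  # close to sqrt(2)
```

Under this weighting, the two features contribute equally (1.0 each) despite the second feature's tenfold larger raw difference.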
“…On the other hand, especially when a dataset with a high number of instances and a high number of features per instance needs to be classified, the classical K-NN algorithm's classification accuracy becomes lower than that of its well-known competitors, such as K-Means classification [17].…”
Section: A. The Classical K-NN Algorithm
confidence: 99%
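The classical K-NN algorithm referenced above classifies a query point by a majority vote over its k nearest training points, which requires a distance computation against every training instance per query. A minimal sketch (the function name and toy data are assumptions for illustration):

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, query, k=3):
    """Classical K-NN: majority vote over the k nearest training points.

    Cost is O(n_train * n_features) per query, which is one reason
    performance can degrade on large, high-dimensional datasets.
    """
    dists = np.linalg.norm(train_X - query, axis=1)  # distance to every point
    nearest = np.argsort(dists)[:k]                  # indices of k closest
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated classes; the query sits in class "b" territory.
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array(["a", "a", "b", "b"])
label = knn_classify(X, y, np.array([4.8, 5.0]), k=3)  # -> "b"
```

Because every query scans the full training set, the per-query cost grows linearly with both the number of instances and the number of features, consistent with the scalability concern quoted above.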
“…The work in [5] proposes an improved K-Means algorithm to increase classification precision in cases where K-Means cannot adequately classify data under certain data distribution conditions. The proposal considers the effect of variance on the classification so that the data can be classified with greater accuracy.…”
Section: Related Work
confidence: 99%
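One plausible way to "consider the effect of variance" in a K-Means-style classifier, as described above, is to scale each point-to-centroid distance by that cluster's per-feature variance, so that spread-out clusters are not unfairly penalized. The sketch below is a hypothetical diagonal-Mahalanobis-style illustration of that idea, not the exact formulation of [5]; all names and data are assumptions.

```python
import numpy as np

def variance_aware_assign(X, centroids, cluster_var):
    """Assign each point to the centroid with the smallest variance-scaled
    squared distance (diagonal Mahalanobis-style). Illustrative only; not
    the exact method of [5]."""
    diffs = X[:, None, :] - centroids[None, :, :]          # (n, k, d)
    scaled = diffs ** 2 / (cluster_var[None, :, :] + 1e-12)
    return scaled.sum(axis=2).argmin(axis=1)               # label per point

# Cluster 0 is tight; cluster 1 is very spread out along feature 0.
X = np.array([[0.0, 0.0], [1.5, 0.0]])
centroids = np.array([[0.0, 0.0], [4.0, 0.0]])
cluster_var = np.array([[1.0, 1.0], [25.0, 1.0]])
labels = variance_aware_assign(X, centroids, cluster_var)
```

In this toy case the point at (1.5, 0) is closer to centroid 0 in plain Euclidean terms, yet the variance scaling assigns it to the spread-out cluster 1, illustrating how variance can change a K-Means assignment.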
“…The K-means machine learning algorithm [18] is used to group a dataset into a number of clusters that is known, assumed, or indicated in advance. In [5], K-means is described as a classic prototype-based partitioning clustering technique that attempts to group data into k user-specified clusters.…”
Section: A. K-Means Algorithm
confidence: 99%
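The prototype-based partitioning described above is standard Lloyd's K-means: alternate between assigning each point to its nearest centroid and recomputing each centroid as the mean of its assigned points, until the centroids stabilize. A minimal sketch (empty-cluster handling and smarter initialization are omitted for brevity; the toy data are assumptions):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's K-means: assign points to their nearest centroid,
    recompute centroids as cluster means, repeat until stable.
    Sketch only: no empty-cluster handling, no k-means++ initialization."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Distance from every point to every centroid, shape (n, k).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated groups of points; k-means recovers the split.
X = np.array([[0.0, 0.0], [0.2, 0.1], [9.0, 9.0], [9.1, 8.8]])
labels, centroids = kmeans(X, k=2)
```

Here the user supplies k up front, matching the statement that the number of groupings is "specified by the user"; the algorithm only decides which points fall into which of the k partitions.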