2019
DOI: 10.1016/j.compeleceng.2017.12.002

A GPU-accelerated parallel K-means algorithm

Cited by 32 publications (15 citation statements)
References 12 publications
“…The speed-up varies from 4× to 386×, but also, in this case, it is not possible to perform a direct comparison since there are not enough details about the dataset composition. Finally, in [25], the K-means algorithm was developed on GPU with the Cartesian distance. They adopt a modern GPU with 1536 CUDA cores obtaining a maximum speed-up of 88×, which is very similar to our results.…”
Section: Comparisons and Discussion
Confidence: 99%
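The statement above concerns K-means on GPU with the Cartesian (Euclidean) distance. The step such GPU implementations parallelize is the assignment of each point to its nearest centroid, which is independent per point. A minimal CPU sketch of that assignment step, for illustration only (not the authors' code; `assign_labels` is a hypothetical name):

```python
import math

def assign_labels(points, centroids):
    """Assign each point to its nearest centroid under the
    Euclidean (Cartesian) distance.

    Each point's assignment is independent of the others, which is
    why GPU kernels can map one thread per point.
    """
    labels = []
    for p in points:
        dists = [math.dist(p, c) for c in centroids]
        labels.append(dists.index(min(dists)))
    return labels

points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centroids = [(0.0, 0.0), (5.0, 5.0)]
print(assign_labels(points, centroids))  # → [0, 0, 1, 1]
```

On a GPU, the outer loop over points is replaced by a grid of threads; the speed-ups reported above come largely from this step, since it dominates the runtime for large datasets.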
“…
| Ref. | Maximum Image Size | Data Dimensionality | Technology | Speed-Up |
|------|--------------------|---------------------|------------|----------|
| [18] | 2,000,000 | 8 | GPU NVIDIA GTX 280 | 220 |
| [19] | 500,000 | 2 | GPU NVIDIA 9600 GT | 14 |
| [20] | 1,000,000 | 2 | GPU NVIDIA 8800 GTX | 60 |
| [21] | 15,052,800 | 3 | 4 × GPU NVIDIA GTX 750Ti | 60 |
| [22] | 1,000,000 | 32 | GPU NVIDIA GTX 280 | N. A. |
| [23] | 16,777,216 | 3 | GPU NVIDIA Tesla C2050 | 25 |
| [24] | 245,057 | 4 | GPU NVIDIA GeForce 210 | 386 |
| [25] | 500,000 | 16 | GPU NVIDIA Quadro K5000 | 88 |
| [26] | N. A. | N. A. | GPU NVIDIA GTX 1080 | 18.5 |
| [27] | 20,000 | 10 | 2 × AMD Opteron quad-core | 8 |
| [27] | 65,536 | 10 | GPU NVIDIA Tesla 2050 | 60 |
| [27] | 17,692 | 9 | Mitrion MVP FPGA Simulator | N. A. |
| Our work | 264,408 | 128 | GPU NVIDIA GTX 1060 | 126 |
…”
Section: Paper
Confidence: 99%
“…In connection with this, there is a growing need to parallelise big data clustering while preserving the main clustering structure and reducing computational costs. So, k‐means based parallel algorithms are used to cluster data in various applications [20, 25, 26].…”
Section: Related Work
Confidence: 99%
“…The procedure of data classification is simple and easy for clustering [23,44,45]. K-Means is an algorithm leading to a local optimum that satisfies the following conditions [46,47]. It is evident that K-Means needs the entire training sequence for every iteration, and the quantizer is not available until the procedure is completed. Thus, the data must be part of the training sequence, and the groups change with each iteration.…”
Section: B. K-Means Algorithm
Confidence: 99%
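The behaviour described above — a full pass over the training sequence on every iteration, with the groups changing between iterations until a local optimum is reached — is the standard Lloyd iteration. A minimal sketch under that reading (illustrative only; `kmeans` is a hypothetical name, not the cited authors' code):

```python
import math

def kmeans(points, centroids, iters=10):
    """Lloyd's K-means iteration.

    Note that every iteration scans the FULL training sequence, and
    the quantizer (the centroid set) is only usable once the
    procedure has finished.
    """
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's group.
        groups = [[] for _ in centroids]
        for p in points:
            d = [math.dist(p, c) for c in centroids]
            groups[d.index(min(d))].append(p)
        # Update step: each centroid moves to its group's mean, so the
        # groups can change from one iteration to the next.
        centroids = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else c
            for g, c in zip(groups, centroids)
        ]
    return centroids

pts = [(0.0,), (1.0,), (9.0,), (10.0,)]
print(kmeans(pts, [(0.0,), (5.0,)]))  # → [(0.5,), (9.5,)]
```

Because the result depends on the initial centroids, the procedure converges to a local (not necessarily global) optimum, which matches the characterisation in the statement above.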