2015 38th International Conference on Telecommunications and Signal Processing (TSP)
DOI: 10.1109/tsp.2015.7296368
Multi-GPU implementation of k-nearest neighbor algorithm

Cited by 6 publications (3 citation statements)
References 10 publications
“…And the performance of k-NN on different datasets under different scenarios has been discussed. A multi-GPU implementation of the k-nearest neighbor classification algorithm was proposed in (Masek et al, 2015) with big data in mind (the size and sources of big data are discussed in (Rehman et al, 2015)). This technique has been shown to be efficient and effective for large data.…”
Section: Related Work
confidence: 99%
“…Rocha et al proposed a compact data structure to represent sparse datasets for efficient GPU k-NN data representation and distance computation. Masek et al presented a multi-GPU implementation, scalable to four devices, that splits the training data uniformly across the devices. The main advantage is that no intercommunication among the GPUs is required, so using multiple GPUs introduces no additional overhead.…”
Section: Data Mining Tasks and Techniques
confidence: 99%
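The partitioning scheme described in the statement above can be illustrated with a minimal CPU sketch. This is not the authors' implementation: the shard count stands in for the number of GPUs, NumPy stands in for device kernels, and all function names are illustrative. The key property it demonstrates is that each shard computes its local top-k independently (no inter-shard communication), and only a small host-side merge of the per-shard candidates is needed.

```python
import numpy as np

def knn_single(train, labels, query, k):
    # Brute-force k-NN on one shard: squared Euclidean distances,
    # then the indices of the k smallest distances.
    d = ((train - query) ** 2).sum(axis=1)
    idx = np.argsort(d)[:k]
    return d[idx], labels[idx]

def knn_partitioned(train, labels, query, k, n_devices=4):
    # Split the training set uniformly across "devices" (here: shards).
    # Each shard returns its local top-k candidates independently,
    # so no communication between shards is needed during the search.
    cand_d, cand_l = [], []
    for t, l in zip(np.array_split(train, n_devices),
                    np.array_split(labels, n_devices)):
        d, lab = knn_single(t, l, query, k)
        cand_d.append(d)
        cand_l.append(lab)
    # Host-side merge: global top-k over the n_devices * k candidates.
    cand_d = np.concatenate(cand_d)
    cand_l = np.concatenate(cand_l)
    top = np.argsort(cand_d)[:k]
    # Majority vote over the k nearest labels.
    vals, counts = np.unique(cand_l[top], return_counts=True)
    return vals[np.argmax(counts)]
```

Any point among the global k nearest is necessarily among the k nearest within its own shard, so merging the per-shard candidates recovers exactly the global top-k; the merge cost grows only with the number of devices times k, not with the training-set size.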
“…GPU acceleration is a promising approach that has achieved significant results in the past few years. As mentioned in [11], GPU-based training can speed up the k-NN algorithm by up to 750× compared with a single-CPU version. With this acceleration, much larger data can be processed.…”
Section: Related Work
confidence: 99%