2017
DOI: 10.1007/s10766-017-0554-6

Parallel Implementation of a Machine Learning Algorithm on GPU

Abstract: The capability to understand data depends on the ability to produce an effective and fast classification of the information, within a time frame that preserves the value of the information itself and its potential. Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. A powerful tool is provided by self-organizing maps (SOM). The goal of learning in the self-organizing map is to cause different parts of the network to respond…
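The learning goal sketched in the abstract — making different parts of the network respond to different inputs — is achieved by repeatedly moving the best-matching neuron and its grid neighbors toward each sample. A minimal NumPy sketch of one such training step (not the authors' implementation; the map size, learning rate, and neighborhood schedule below are illustrative assumptions):

```python
import numpy as np

def som_train_step(weights, x, t, n_iters, lr0=0.5, sigma0=2.0):
    """One training step of a self-organizing map.

    weights : (rows, cols, dim) array of neuron weight vectors
    x       : (dim,) input sample
    t       : current iteration index
    """
    rows, cols, dim = weights.shape
    # Decay learning rate and neighborhood width over time
    lr = lr0 * np.exp(-t / n_iters)
    sigma = sigma0 * np.exp(-t / n_iters)

    # Best-matching unit (BMU): neuron closest to x in Euclidean distance
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))

    # Gaussian neighborhood on the map grid, centered on the BMU
    gy, gx = np.mgrid[0:rows, 0:cols]
    grid_d2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))

    # Pull every neuron toward x, scaled by neighborhood and learning rate
    weights += lr * h[:, :, None] * (x - weights)
    return weights
```

Because only neurons near the BMU receive a large update, distinct regions of the map gradually specialize on distinct regions of the input space.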

Cited by 5 publications (3 citation statements); references 36 publications.
“…It is the most indispensable and important technology in the field of AI. From a professional perspective, ML has highly developed perception and parallel information processing ability [23]. Some traditional and advanced ML algorithms are summarized as follows (Fig.…”
Section: Machine Learning
confidence: 99%
“…The algorithm in [19] used a two-list method to solve problems on a GPU using CUDA (Compute Unified Device Architecture), and [10] parallelized the Doolittle algorithm on multicore processors, achieving better performance than the serial version of the Doolittle algorithm. [20] also implemented a parallel version of the self-organizing maps (SOM) algorithm on a parallel architecture and achieved good performance against the CPU version.…”
Section: Some Parallel Scheduling Attempts
confidence: 99%
“…Parallel programming models for GPUs include CUDA (Compute Unified Device Architecture), OpenCL (Open Computing Language), DirectCompute, and many other approaches. Cuomo et al. [10] proposed a novel parallel implementation of self-organizing map (SOM) neural networks on CUDA-GPU architectures. This approach uses NVIDIA's cuBLAS library to accelerate the execution of standard linear algebra subroutines.…”
Section: Introduction
confidence: 99%
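The cuBLAS formulation mentioned in the last citation rests on the fact that the dominant cost in SOM training — finding each sample's best-matching unit — reduces to standard linear-algebra subroutines. A minimal NumPy analogue of that reduction (a sketch, not the paper's code; a GPU version would route the matrix product through a cuBLAS GEMM call instead of `@`):

```python
import numpy as np

def batch_bmu(weights, X):
    """Find the best-matching unit for each sample in a batch.

    weights : (n_neurons, dim) codebook, one row per SOM neuron
    X       : (n_samples, dim) batch of input vectors

    Expands ||w - x||^2 = ||w||^2 - 2 w.x + ||x||^2, so the bulk of
    the work is one matrix product -- the GEMM that a cuBLAS-based
    implementation offloads to the GPU.  The ||x||^2 term is constant
    per sample and does not affect the argmin, so it is omitted.
    """
    w_sq = np.sum(weights ** 2, axis=1)      # (n_neurons,)
    cross = X @ weights.T                    # (n_samples, n_neurons) GEMM
    scores = w_sq[None, :] - 2.0 * cross     # squared distance minus ||x||^2
    return np.argmin(scores, axis=1)         # BMU index per sample
```

Batching the BMU search this way is what makes the workload GPU-friendly: one large matrix multiply replaces many small per-sample distance loops.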