CUDA-accelerated genetic feedforward-ANN training for data mining
2010
DOI: 10.1088/1742-6596/256/1/012014

Cited by 4 publications (4 citation statements); references 1 publication.
“…The GPGPU APIs have significantly simplified the development of neural algorithms and ANNs for graphics hardware [10,16], and a variety of neurocomputing algorithms have been ported to GPUs [10,14,15,16,18,20,22,24]. Sierra-Canto et al. [24] used the CUDA platform to achieve 46 to 63 times faster learning of a feedforward ANN with the backpropagation algorithm, while Lopes and Ribeiro [14] reported a 10 to 40 times faster implementation of multiple backpropagation training of feedforward and multiple feedforward ANNs.…”
Section: GPU Computing
confidence: 99%
“…Ghuzva et al. [10] presented a coarse-grained implementation of the multilayer perceptron (MLP) on the CUDA platform that operated a set of MLPs in parallel 50 times faster than a sequential CPU-based implementation. The training of a feedforward neural network by genetic algorithms was implemented on CUDA by Patulea et al. [20] and was 10 times faster than a sequential version of the same algorithm. An application of a GPU-powered ANN to speech recognition is due to Scanzio et al. [22].…”
Section: GPU Computing
confidence: 99%
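The CUDA implementation by Patulea et al. is not reproduced here, but the technique the statement names — evolving the weights of a feedforward ANN with a genetic algorithm instead of backpropagation — can be sketched on the CPU. Everything below (the 2-2-1 XOR network, the elitist selection scheme, the mutation rate) is an illustrative assumption, not the cited authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: learn XOR with a 2-2-1 feedforward network.
# (Illustrative problem; the cited work targets data-mining datasets.)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID, N_OUT = 2, 2, 1
# Genome = all weights and biases flattened into one vector.
GENOME_LEN = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # 9

def forward(genome, x):
    """Evaluate the 2-2-1 network encoded by `genome` on inputs x."""
    i = 0
    W1 = genome[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = genome[i:i + N_HID]; i += N_HID
    W2 = genome[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = genome[i:i + N_OUT]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def fitness(genome):
    """Negative mean squared error (higher is better)."""
    pred = forward(genome, X).ravel()
    return -np.mean((pred - y) ** 2)

def evolve(pop_size=60, generations=300, sigma=0.4, elite=6):
    pop = rng.normal(0.0, 1.0, size=(pop_size, GENOME_LEN))
    for _ in range(generations):
        # Fitness of each individual is independent of the others --
        # this is the step a GPU implementation evaluates in parallel.
        scores = np.array([fitness(g) for g in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:elite]]          # elitism: keep the best as-is
        # Offspring: pick random elite parents and apply Gaussian mutation.
        idx = rng.integers(0, elite, size=pop_size - elite)
        children = parents[idx] + rng.normal(0.0, sigma,
                                             size=(pop_size - elite, GENOME_LEN))
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(g) for g in pop])]

best = evolve()
print(f"best fitness: {fitness(best):.4f}")
```

Because no gradient is computed, the only work per individual is a forward pass plus a scalar error, which is exactly the workload that maps onto one GPU thread (or block) per individual in the CUDA version.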
“…Training an EA-ANN on a GPU accelerator is well suited to the hardware: NVIDIA's compute unified device architecture (CUDA) provides a programming framework for massively parallel GPUs, whose thousands of independent floating-point units and high-bandwidth on-board memory make the device ideal for significantly speeding up intensive parallel applications (Patulea et al., 2014), provided memory-access rules are respected.…”
Section: Introduction
confidence: 99%
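The architectural point above — thousands of independent floating-point units fed by high-bandwidth memory — pays off because every individual's fitness in an EA-ANN population can be evaluated independently. A minimal NumPy sketch of that population-batched evaluation follows; all shapes and names are illustrative assumptions, and on a GPU the same batched-einsum pattern would become one parallel kernel launch:

```python
import numpy as np

rng = np.random.default_rng(1)

# A whole population of one-hidden-layer networks evaluated in one batched
# pass: the per-individual independence is what maps onto GPU cores.
POP, N_IN, N_HID, N_OUT, N_SAMPLES = 256, 8, 16, 1, 128

X = rng.normal(size=(N_SAMPLES, N_IN))   # shared training inputs
y = rng.normal(size=(N_SAMPLES, N_OUT))  # shared training targets

# Population weights stacked along a leading "individual" axis.
W1 = rng.normal(size=(POP, N_IN, N_HID))
W2 = rng.normal(size=(POP, N_HID, N_OUT))

def population_mse(W1, W2):
    """MSE of every individual in one batched pass (no Python loop)."""
    H = np.tanh(np.einsum('si,pij->psj', X, W1))  # (POP, N_SAMPLES, N_HID)
    P = np.einsum('psj,pjk->psk', H, W2)          # (POP, N_SAMPLES, N_OUT)
    return ((P - y) ** 2).mean(axis=(1, 2))       # one fitness per individual

mse = population_mse(W1, W2)
print(mse.shape)  # (256,)
```

The memory-access caveat in the quote corresponds to keeping these weight tensors laid out so that adjacent GPU threads read adjacent (coalesced) addresses.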