2012
DOI: 10.1007/978-3-642-33266-1_17

Neural PCA and Maximum Likelihood Hebbian Learning on the GPU

Abstract: This study introduces a novel fine-grained parallel implementation of a neural principal component analysis (neural PCA) variant and of the Maximum Likelihood Hebbian Learning (MLHL) network, designed for modern many-core graphics processing units (GPUs). The parallel implementation, as well as the computational experiments conducted to evaluate the speedup achieved on the GPU, are presented and discussed. The evaluation was performed on a well-known artificial data set, the 2D bars data set.
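To make the learning rule being parallelised concrete, here is a minimal sketch of one step of the negative-feedback neural PCA rule that this network family builds on. It is written in JAX rather than the CUDA kernels the paper describes; the function name pca_step, the learning rate eta, and the single-sample formulation are illustrative assumptions, not the authors' implementation.

import jax.numpy as jnp
from jax import jit, random

# Illustrative sketch (not the paper's CUDA code): one step of the
# negative-feedback neural PCA rule.
#   y = W x            (feed-forward activation)
#   e = x - W^T y      (residual after feedback)
#   dW = eta * y e^T   (Hebbian update; converges to the principal subspace)
@jit
def pca_step(W, x, eta=0.01):
    y = W @ x
    e = x - W.T @ y
    return W + eta * jnp.outer(y, e)

# Toy usage on random data. Under jit the step compiles to a fused GPU
# program: every element of dW is computed independently, which mirrors
# the fine-grained data parallelism the paper exploits.
key = random.PRNGKey(0)
W = 0.1 * random.normal(key, (2, 16))   # 2 output units, 16 inputs
for k in range(1000):
    x = random.normal(random.fold_in(key, k), (16,))
    W = pca_step(W, x)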

Cited by 4 publications (6 citation statements). References 23 publications.
“…AI has been successfully applied to many different fields, for example feature selection [18,19]. In this study, an extension of a neural PCA version [20,21] and further EPP extensions [22,23] are used to select the most relevant input features in the data set and to study their internal structure. Projection methods such as PCA [20,21], MLHL [22] and CMLHL [24][25][26] are applied to analyse the internal structure of the data and to find out which variables determine that structure and how they affect it.…”
Section: Exploratory Projection Pursuit
confidence: 99%
“…Interestingness is usually defined in terms of how far the distribution is from the Gaussian distribution [27]. One neural implementation of EPP is MLHL [22]. It identifies interestingness by maximizing the probability of the residuals under specific probability density functions that are non-Gaussian.…”
Section: A Neural Implementation Of Exploratory Projection Pursuit
confidence: 99%
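The MLHL rule referenced in this statement generalises the Gaussian residual model of neural PCA: assuming the residuals follow a non-Gaussian density of the form p(e) ∝ exp(-|e|^p), maximising their likelihood yields the weight update ΔW = η y (sign(e)|e|^(p-1))^T. A minimal JAX sketch under that assumption follows; the exponent name p and rate eta are illustrative, and this is a sketch of the published rule, not the paper's GPU kernels.

import jax.numpy as jnp
from jax import jit

# Sketch of one MLHL step, assuming residuals are modelled as
# p(e) ~ exp(-|e|^p). Maximising the residual likelihood gives
# dW = eta * y (sign(e)|e|^(p-1))^T; p = 2 recovers the Gaussian
# case and hence the plain neural PCA rule sketched above.
@jit
def mlhl_step(W, x, eta=0.01, p=1.5):
    y = W @ x                                  # feed-forward activation
    e = x - W.T @ y                            # residual after feedback
    g = jnp.sign(e) * jnp.abs(e) ** (p - 1.0)  # score of the assumed density (up to a constant)
    return W + eta * jnp.outer(y, g)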