2014 | DOI: 10.3844/jcssp.2014.2124.2134
Parallel Implementation of Expectation-Maximisation Algorithm for the Training of Gaussian Mixture Models

Abstract: Most machine learning algorithms must handle large data sets, which often imposes limits on processing time and memory. Expectation-Maximization (EM) is one such algorithm; it is used to train one of the most widely used parametric statistical models, the Gaussian Mixture Model (GMM). All steps of the algorithm are potentially parallelizable, since each iterates over the entire data set. In this study, we propose a parallel implementation of EM for training GMMs using CUDA. Experiment…
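To make the abstract's parallelisation claim concrete: in the E-step, the responsibility of each mixture component for each sample depends only on that sample, so one GPU thread can process one sample independently. The sketch below is a minimal illustration of that mapping for a 1-D, two-component GMM; the kernel name, data layout, and sizes are illustrative assumptions, not the paper's actual implementation.

#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

#define K 2  // number of mixture components (illustrative choice)

// E-step kernel: one thread per data sample. Each thread evaluates the
// weighted Gaussian density of every component at its sample and
// normalises them into responsibilities gamma[i*K + k]. No thread ever
// reads another thread's sample, which is the data parallelism that
// GPU implementations of EM exploit.
__global__ void estep(const float *x, int n, const float *weight,
                      const float *mean, const float *var, float *gamma)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    const float TWO_PI = 6.2831853f;
    float p[K];
    float total = 0.0f;
    for (int k = 0; k < K; ++k) {
        float d = x[i] - mean[k];
        p[k] = weight[k] * expf(-0.5f * d * d / var[k]) / sqrtf(TWO_PI * var[k]);
        total += p[k];
    }
    for (int k = 0; k < K; ++k)
        gamma[i * K + k] = p[k] / total;
}

int main()
{
    const int n = 8;                       // toy data set
    float h_x[n]   = {-2.1f, -1.9f, -2.0f, -1.8f, 1.9f, 2.1f, 2.0f, 1.8f};
    float h_w[K]   = {0.5f, 0.5f};         // mixing weights
    float h_mu[K]  = {-2.0f, 2.0f};        // component means
    float h_var[K] = {1.0f, 1.0f};         // component variances

    float *d_x, *d_w, *d_mu, *d_var, *d_g;
    cudaMalloc(&d_x,   n * sizeof(float));
    cudaMalloc(&d_w,   K * sizeof(float));
    cudaMalloc(&d_mu,  K * sizeof(float));
    cudaMalloc(&d_var, K * sizeof(float));
    cudaMalloc(&d_g,   n * K * sizeof(float));
    cudaMemcpy(d_x,   h_x,   n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_w,   h_w,   K * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_mu,  h_mu,  K * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_var, h_var, K * sizeof(float), cudaMemcpyHostToDevice);

    estep<<<(n + 255) / 256, 256>>>(d_x, n, d_w, d_mu, d_var, d_g);

    float h_g[n * K];
    cudaMemcpy(h_g, d_g, sizeof(h_g), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("x = %5.2f  gamma = (%.3f, %.3f)\n", h_x[i], h_g[i*K], h_g[i*K + 1]);

    cudaFree(d_x); cudaFree(d_w); cudaFree(d_mu); cudaFree(d_var); cudaFree(d_g);
    return 0;
}

The M-step updates (weights, means, variances) are weighted sums of these per-sample responsibilities, so on a GPU they are typically realised as parallel reductions over gamma; that is the other half of the iteration the abstract describes as parallelisable.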

Cited by 2 publications (1 citation statement) | References 18 publications
“…Several works used GPUs to speed up the EM-algorithm for parameter estimations [3,19,22]. They exploited the innate data parallelism due to the independent computation for different data samples.…”
Section: Related Work (mentioning)
confidence: 99%