2015
DOI: 10.1371/journal.pone.0139931

A Fast Incremental Gaussian Mixture Model

Abstract: This work builds upon previous efforts in online incremental learning, namely the Incremental Gaussian Mixture Network (IGMN). The IGMN is capable of learning from data streams in a single pass, improving its model after analyzing each data point and discarding it thereafter. Nevertheless, it scales poorly, due to its asymptotic time complexity of O(NKD³) for N data points, K Gaussian components, and D dimensions, rendering it inadequate for high-dimensional data. In this work, w…
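
A minimal sketch of the kind of update the abstract describes, assuming illustrative names and a simple likelihood-based novelty test; this is not the authors' implementation. The per-point covariance inversion for each of the K components is what contributes the D³ factor to the O(NKD³) complexity quoted above.

import numpy as np

class IncrementalGMM:
    """Single-pass incremental Gaussian mixture (illustrative sketch)."""

    def __init__(self, dim, novelty_tau=0.01, sigma_init=1.0):
        self.dim = dim
        self.tau = novelty_tau        # novelty threshold on component likelihood
        self.sigma_init = sigma_init  # isotropic spread of newly created components
        self.means, self.covs, self.counts = [], [], []

    def _pdf(self, x, mu, cov):
        d = x - mu
        inv = np.linalg.inv(cov)      # the O(D^3) step, done once per component
        norm = np.sqrt(((2 * np.pi) ** self.dim) * np.linalg.det(cov))
        return np.exp(-0.5 * d @ inv @ d) / norm

    def learn(self, x):
        """Improve the model with one data point, then discard it."""
        x = np.asarray(x, dtype=float)
        likes = np.array([self._pdf(x, m, c) for m, c in zip(self.means, self.covs)])
        if likes.size == 0 or likes.max() < self.tau:
            # Novelty: no component explains x well, so spawn a new one.
            self.means.append(x.copy())
            self.covs.append(np.eye(self.dim) * self.sigma_init)
            self.counts.append(1.0)
            return
        post = likes / likes.sum()    # responsibilities p(j | x)
        for j, p in enumerate(post):
            self.counts[j] += p
            w = p / self.counts[j]    # per-component learning rate
            self.means[j] = self.means[j] + w * (x - self.means[j])
            d = x - self.means[j]
            self.covs[j] = (1 - w) * self.covs[j] + w * np.outer(d, d)

For example, calling model = IncrementalGMM(dim=3) once and then model.learn(x) for each arriving point keeps memory bounded by the live components, matching the single-pass, discard-after-use behaviour described in the abstract.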

Cited by 56 publications (38 citation statements). References 21 publications.
“…In this paper, the RBF basis centres and widths are fixed throughout learning (see below). More generally, however, various strategies exist to construct and modify them during learning [18]. Overall, the output mapping (from x_t to π_t) is equivalent to a Radial Basis Function (RBF) classifier.…”
Section: Value Network (mentioning)
confidence: 99%
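
As a concrete reading of the statement above, the sketch below maps an input x_t through fixed RBF centres and widths and a linear softmax readout to class probabilities π_t, which is exactly an RBF classifier. All names (centres, widths, W, b) are illustrative assumptions, not from the cited paper.

import numpy as np

def rbf_features(x, centres, widths):
    # phi_k(x) = exp(-||x - c_k||^2 / (2 s_k^2)) for each fixed basis function k
    d2 = ((x - centres) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def rbf_classifier(x, centres, widths, W, b):
    # pi_t: softmax over a linear readout of the fixed RBF features
    phi = rbf_features(x, centres, widths)
    logits = W @ phi + b
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

With the centres and widths held fixed, only W and b would be trained, consistent with the quoted setup.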
“…The experimental results showed that ILVQ was superior to other incremental algorithms in both accuracy and compression ratio. Incremental learning algorithms based on Gaussian mixture networks have also been proposed to handle streaming-data classification with faster, more scalable methods [14, 15]. In those works, performance was evaluated in terms of classification accuracy, the number of components used, and learning time, but performance was not tracked over the course of the stream as time goes by.…”
Section: Related Work (mentioning)
confidence: 99%
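
The statement above points out that performance was not tracked as time goes by. One standard way to do this for stream learners is prequential (test-then-train) evaluation; the sketch below assumes a hypothetical model object exposing predict(x) and learn(x, y) methods.

def prequential_accuracy(model, stream):
    """Yield running accuracy after each labelled point (x, y)."""
    correct = 0
    for t, (x, y) in enumerate(stream, start=1):
        correct += int(model.predict(x) == y)  # test on the point first...
        model.learn(x, y)                      # ...then train on that same point
        yield correct / t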
“…Traditional classification techniques are usually built under the assumption that complete, static data are given. Many learning algorithms have been proposed to solve the data-stream classification problem directly, such as [7-15, 17, 18]. Some stream pattern classification techniques have also been applied to tackle real-world problems.…”
Section: Introduction (mentioning)
confidence: 99%
“…Hence, the proposed FlexClustS and FlexClustB clustering algorithms are scalable to very large image data sets. When reconstructing an image from the compact GMM representation, any mixture component without assignments is considered spurious and removed, as this has almost no negative impact on model quality [39].…”
Section: The FlexClust Algorithms (mentioning)
confidence: 99%
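
The pruning step described in the statement above can be sketched as follows; this is an illustrative reading, not the FlexClust implementation, and all variable names are assumptions. Components that receive no hard assignments are treated as spurious and dropped, and the surviving weights are renormalised.

import numpy as np

def prune_spurious(means, covs, weights, responsibilities):
    """means: (K, D); covs: (K, D, D); weights: (K,);
    responsibilities: (N, K) posterior p(component | point)."""
    hard = responsibilities.argmax(axis=1)           # hard assignment per point
    used = np.isin(np.arange(len(weights)), np.unique(hard))
    w = weights[used]
    return means[used], covs[used], w / w.sum()      # drop empty components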