International Conference on Fuzzy Systems 2010
DOI: 10.1109/fuzzy.2010.5584328

Semi-supervised incremental learning

Abstract: The paper introduces a hybrid evolving architecture for incremental learning. It consists of two components: a resource-allocating neural network (RAN) and a growing Gaussian mixture model (GGMM). The architecture is motivated on one hand by incrementality and on the other by the ability to handle unlabeled data alongside labeled data, given that the architecture is dedicated to classification problems. The empirical evaluation shows the efficiency of the proposed hybrid learning architecture…
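The growing-mixture side of such an architecture can be illustrated with a minimal sketch: components are created when a sample falls far from all existing ones, and unlabeled samples still refine component parameters while labels, when available, are attached to components. All names, the distance gate, and the update rules below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

class GrowingGMM:
    """Illustrative sketch of a growing Gaussian mixture for incremental,
    semi-supervised classification (assumed design, not the paper's)."""

    def __init__(self, novelty_threshold=3.0, init_var=1.0):
        self.tau = novelty_threshold   # normalized-distance gate for growing
        self.init_var = init_var
        self.means, self.vars, self.counts, self.labels = [], [], [], []

    def _nearest(self, x):
        # Index and normalized distance of the closest component.
        if not self.means:
            return None, np.inf
        d = [np.linalg.norm(x - m) / np.sqrt(v)
             for m, v in zip(self.means, self.vars)]
        j = int(np.argmin(d))
        return j, d[j]

    def update(self, x, y=None):
        """One-pass update: grow a new component if x is novel, otherwise
        adapt the winning component. Unlabeled samples (y=None) still
        refine the component's mean and variance."""
        x = np.asarray(x, dtype=float)
        j, dist = self._nearest(x)
        if j is None or dist > self.tau:
            self.means.append(x.copy())
            self.vars.append(self.init_var)
            self.counts.append(1)
            self.labels.append(y)          # may stay None until a label arrives
        else:
            self.counts[j] += 1
            lr = 1.0 / self.counts[j]      # decaying learning rate
            self.means[j] += lr * (x - self.means[j])
            resid = float(np.dot(x - self.means[j], x - self.means[j]))
            self.vars[j] += lr * (resid - self.vars[j])
            if y is not None and self.labels[j] is None:
                self.labels[j] = y         # propagate label to the component

    def predict(self, x):
        j, _ = self._nearest(np.asarray(x, dtype=float))
        return None if j is None else self.labels[j]
```

In this sketch a cluster formed from unlabeled points inherits the first label that lands on it, which is one simple way a clustering-based model can exploit unlabeled data for classification.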

Cited by 5 publications (6 citation statements) | References 19 publications (12 reference statements)
“…This corresponds to tracking drifting priors (an interested reader is referred to [Zhang and Zhou 2010]), and novelty detection (an interested reader is referred to [Markou and Singh 2003;Masud et al 2011]). Furthermore, semi-supervised drift handling techniques based on clustering (an interested reader is referred to [Aggarwal 2005;Bouchachia et al 2010]) are not discussed in this paper.…”
Section: Original Data (citation type: mentioning; confidence: 99%)
“…Hence a method that updates the model in a continual and evolutionary manner can be used in the present study. This problem has been addressed by a few researchers, such as Stauffer et al [9], Radford et al [10], Bouchachia et al [5] and Lee [2]. In this study we use some findings of Lee [2].…”
Section: Online Semi-supervised Learning (citation type: mentioning; confidence: 99%)
“…Though it appears similar to the RNB proposed in [13], our method uses a different method to update the learning parameters online (eqs. (4), (5), and (6)). Besides, our method uses online bagging [3] for random selection of inputs, whereas they use randomized threshold selection.…”
Section: Our Contributions (citation type: mentioning; confidence: 99%)
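The online bagging the citing paper refers to is typically the Oza-Russell scheme: each incoming example is shown to each base learner k ~ Poisson(1) times, approximating bootstrap resampling in a single pass. The sketch below assumes that scheme; the nearest-class-mean base learner is a placeholder of my own, not from any of the cited papers.

```python
import numpy as np

class NearestClassMean:
    """Placeholder incremental base learner: classify by nearest class mean."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        if y not in self.sums:
            self.sums[y], self.counts[y] = np.zeros_like(x), 0
        self.sums[y] += x
        self.counts[y] += 1

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.sums,
                   key=lambda y: np.linalg.norm(x - self.sums[y] / self.counts[y]))

class OnlineBagging:
    """Sketch of Oza-Russell online bagging: each example is presented to
    each base learner k ~ Poisson(1) times; prediction is a majority vote."""
    def __init__(self, make_learner, n_learners=10, seed=0):
        self.rng = np.random.default_rng(seed)
        self.learners = [make_learner() for _ in range(n_learners)]

    def update(self, x, y):
        for learner in self.learners:
            k = self.rng.poisson(1.0)   # how often this learner sees (x, y)
            for _ in range(k):
                learner.update(x, y)

    def predict(self, x):
        votes = [learner.predict(x) for learner in self.learners]
        values, counts = np.unique(votes, return_counts=True)
        return values[np.argmax(counts)]  # majority vote
```

Because Poisson(1) draws differ per learner, the ensemble members see different effective resamples of the stream, which is what gives online bagging its variance-reduction effect.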