Passive Concept Drift Handling via Momentum Based Robust Soft Learning Vector Quantization (2019)
DOI: 10.1007/978-3-030-19642-4_20

Cited by 10 publications (12 citation statements)
References 16 publications
“…(6) is not computed at the update step and, therefore, the RSLVQ is feasible for potentially infinite streams. The prototypes in each update step will be optimized with a momentum-based gradient technique designed for RSLVQ [21] given with…”
Section: Robust Soft Learning Vector Quantization (mentioning, confidence: 99%)
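For context, the momentum-based update the passage refers to can be sketched as follows. This is a minimal Python illustration assuming the standard RSLVQ gradient (Seo and Obermayer) with Gaussian components of equal width sigma; the function and parameter names (rslvq_momentum_step, lr, momentum) are assumptions of this sketch, not the cited implementation.

import numpy as np

def _assignment_probs(x, protos, labels, y, sigma):
    """Return P(j|x) over all prototypes and P_y(j|x) over class-y prototypes.

    Assumes at least one prototype per class (illustrative sketch only).
    """
    f = np.exp(-np.sum((protos - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    p_all = f / f.sum()
    mask = labels == y
    p_correct = np.where(mask, f, 0.0)
    p_correct /= p_correct.sum()
    return p_all, p_correct, mask

def rslvq_momentum_step(x, y, protos, labels, velocity,
                        sigma=1.0, lr=0.05, momentum=0.9):
    """One stream sample (x, y): update prototypes in place with momentum SGA."""
    p_all, p_correct, mask = _assignment_probs(x, protos, labels, y, sigma)
    diff = (x - protos) / sigma ** 2
    # Gradient of the RSLVQ objective: attract correct-class prototypes,
    # repel the others, weighted by the assignment probabilities.
    grad = np.where(mask[:, None],
                    (p_correct - p_all)[:, None] * diff,
                    -p_all[:, None] * diff)
    velocity[:] = momentum * velocity + lr * grad  # classic momentum buffer
    protos += velocity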
“…The σ is the width of the Gaussian kernel and, together with the number of prototypes, these are the only tuneable parameters in the RSLVQ. For a more comprehensive derivation of RSLVQ with momentum SGD see [3,21].…”
Section: Robust Soft Learning Vector Quantization (mentioning, confidence: 99%)
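As a toy numerical illustration of that role of σ (an assumption of this sketch, not taken from the cited papers): with two prototypes at squared distances 1.0 and 2.0 from a sample, shrinking σ sharpens the assignment probabilities, while growing σ flattens them toward uniform.

import numpy as np

# Squared distances ||x - w_j||^2 of a sample to two prototypes.
d2 = np.array([1.0, 2.0])
for sigma in (0.5, 1.0, 2.0):
    f = np.exp(-d2 / (2.0 * sigma ** 2))
    # Normalized assignment probabilities P(j|x); sharper for small sigma.
    print(f"sigma={sigma}:", np.round(f / f.sum(), 3))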
“…Also, LVQ algorithms, first introduced by [19], have received attention as potential stream classification algorithms [30]. The Robust Soft Learning Vector Quantization (RSLVQ) [29] is a promising probabilistic classifier which models class distributions as Gaussian mixture models learned via Stochastic Gradient Ascent (SGA); so far it has only been evaluated as a stream classifier in a previous version of this article [16]. Also, Generalized Learning Vector Quantization has not been considered as a stream classifier yet.…”
Section: Related Work (mentioning, confidence: 99%)
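A minimal sketch of the probabilistic classification rule described there, assuming equal mixture priors and a shared Gaussian width; rslvq_predict and its parameters are illustrative names for this sketch, not the authors' code.

import numpy as np

def rslvq_predict(x, protos, labels, sigma=1.0):
    """Predict the class of x as the argmax of the class posterior under a
    prototype-based Gaussian mixture with equal priors."""
    f = np.exp(-np.sum((protos - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    classes = np.unique(labels)
    # P(c|x) is proportional to the summed mixture mass of class-c prototypes.
    posterior = np.array([f[labels == c].sum() for c in classes])
    return classes[np.argmax(posterior)]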
“…Hence, we do not optimize hyperparameters in our experiments. Note that in [16] the hyperparameters of all classifiers were tuned, which led to different results.…”
Section: Setup (mentioning, confidence: 99%)