2015
DOI: 10.1016/j.engappai.2014.10.003
An on-line weighted ensemble of regressor models to handle concept drifts

Abstract: Many estimation, prediction, and learning applications have a dynamic nature. One of the most important challenges in machine learning is dealing with concept changes: underlying changes may make a model designed on old data inconsistent with new data, and algorithms usually specialize in one type of change. Another challenge is reusing previously acquired information in scenarios where changes may recur; this strategy improves learning accuracy and reduces processing time. Unfortunat…

Cited by 64 publications (9 citation statements)
References 40 publications
“…ANNs are primarily suited to systems with a complex, large-scale structure and unclear information [4]. ANNs are widely applied and are the most common ML algorithms [1]; they have also been suggested for several industrial applications involving soft sensing [44] and for predictive control systems [45]. Hesser, D.F.…”
Section: Artificial Neural Network (Ann)mentioning
confidence: 99%
“…In Soares and Araújo (2015), the authors proposed the On‐line Weighted Ensemble (OWE), a set of single-output regressor models for non‐stationary data streams, which uses a cost‐sensitive boosting strategy to assign smaller errors to the models that accurately predict the samples the ensemble predicts poorly.…”
Section: Related Workmentioning
confidence: 99%
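The quoted statement describes OWE's core idea: combine regressors with weights driven by each model's error on recent data. A minimal sketch of that idea, assuming (as a deliberate simplification of OWE's cost-sensitive boosting) that each model is weighted by the inverse of its exponentially decayed squared error; the `RunningMeanRegressor` base learner and all parameter names here are hypothetical, not the paper's:

```python
import numpy as np

class RunningMeanRegressor:
    """Toy base learner: tracks a running mean of the targets."""
    def __init__(self, lr):
        self.lr = lr
        self.value = 0.0

    def predict(self, x):
        return self.value

    def partial_fit(self, x, y):
        # Move the stored estimate toward the new target.
        self.value += self.lr * (y - self.value)

class OnlineWeightedEnsemble:
    """Sketch of an online weighted ensemble: each regressor is
    weighted by the inverse of its exponentially decayed squared
    error (a simplification of OWE's cost-sensitive weighting)."""
    def __init__(self, models, decay=0.9):
        self.models = models
        self.decay = decay
        self.errors = np.ones(len(models))  # running error per model

    def predict(self, x):
        w = 1.0 / (self.errors + 1e-9)     # low error -> high weight
        w /= w.sum()
        preds = np.array([m.predict(x) for m in self.models])
        return float(w @ preds)

    def update(self, x, y):
        # Update each model's running error, then train it online.
        for i, m in enumerate(self.models):
            e = (m.predict(x) - y) ** 2
            self.errors[i] = self.decay * self.errors[i] + (1 - self.decay) * e
            m.partial_fit(x, y)
```

Under concept drift, the decayed error lets weight shift toward whichever base learner adapts fastest to the current concept, which is the behavior the ensemble weighting is meant to capture.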
“…A total of M = 1000 data points are generated for each dataset. The fourth drifting dataset (Hyperplane) is generated similarly to [29]; it includes four different concepts and a total of M = 1000 data points, with nine input variables uniformly distributed over the interval [0, 1]. For these four artificial datasets, noise drawn from a zero-mean Gaussian distribution is added to the output and to each input variable.…”
Section: Data Descriptionmentioning
confidence: 99%
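The quoted description of the Hyperplane stream can be sketched as a small generator. This assumes the usual hyperplane form (the target is a weighted sum of the inputs, with a fresh random weight vector per concept segment); the function name, the weight range, and the noise level `noise_std` are illustrative assumptions, not values from the cited paper:

```python
import numpy as np

def make_hyperplane_stream(M=1000, d=9, n_concepts=4, noise_std=0.05, seed=0):
    """Drifting-hyperplane regression stream: M points, d inputs
    uniform on [0, 1], one random hyperplane per concept segment,
    zero-mean Gaussian noise on inputs and output."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(M, d))
    y = np.empty(M)
    seg = M // n_concepts
    for c in range(n_concepts):
        w = rng.uniform(-1.0, 1.0, size=d)  # concept-specific weights
        lo = c * seg
        hi = M if c == n_concepts - 1 else (c + 1) * seg
        y[lo:hi] = X[lo:hi] @ w             # abrupt drift at segment edges
    # Zero-mean Gaussian noise added to the output and each input.
    X += rng.normal(0.0, noise_std, size=X.shape)
    y += rng.normal(0.0, noise_std, size=M)
    return X, y
```

Each segment boundary introduces an abrupt concept change, which is what a drift-handling regressor such as OWE is evaluated against on this kind of benchmark.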