Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '03 2003
DOI: 10.1145/956755.956778

Mining concept-drifting data streams using ensemble classifiers

Cited by 473 publications (457 citation statements). References: 0 publications.
“…As a result, the discriminative power of a hypothesis should be determined by its performance on each instance in ℒ_ho(t), rather than its overall performance on the entire set ℒ_ho(t). This design takes the class imbalance issue into account and is different from other practices dealing with balanced data streams [14]. In the implementation, we use the logistic loss function, i.e., Δ(f(x), y) = log(1 + exp(−y f(x))).…”
Section: Methods
Mentioning confidence: 99%
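A minimal sketch of how such per-instance evaluation with the logistic loss might look, assuming a hold-out chunk (X_ho, y_ho) with labels in {−1, +1} and a real-valued scoring function f. The per-class averaging step is one illustrative way to respect class imbalance, not the cited authors' exact scheme.

```python
import numpy as np

def logistic_loss(scores, labels):
    # Per-instance logistic loss: log(1 + exp(-y * f(x))).
    return np.log1p(np.exp(-labels * scores))

def hypothesis_quality(f, X_ho, y_ho):
    """Judge a hypothesis by its loss on each hold-out instance rather than
    by a single aggregate over the whole hold-out set."""
    losses = logistic_loss(f(X_ho), y_ho)  # one loss per instance
    # Illustrative imbalance-aware summary: average within each class, then
    # average the class means so the minority class is not drowned out.
    class_means = [losses[y_ho == c].mean() for c in np.unique(y_ho)]
    return float(np.mean(class_means))
```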
“…To overcome this problem, Street proposed a method that retires old classifiers one at a time [37]. Wang assigned weights to classifiers proportional to their accuracy on the most recent data block [38]. Chu viewed the choice of weights as an optimization problem and used logistic regression to solve it [39].…”
Section: Challenges of Traffic Identification
Mentioning confidence: 99%
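A rough sketch combining the two ideas quoted above: retire one ensemble member at a time and weight members by their accuracy on the most recent data block. Class and parameter names are hypothetical, and the base learner choice is arbitrary.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

class BlockEnsemble:
    """Illustrative block-based ensemble: members are re-weighted by accuracy
    on the newest block, and the weakest member is retired when the pool is full."""

    def __init__(self, max_members=10, base=None):
        self.max_members = max_members
        self.base = base or DecisionTreeClassifier(max_depth=5)
        self.members, self.weights = [], []

    def update(self, X_block, y_block):
        # Re-weight existing members by their accuracy on the most recent block.
        self.weights = [m.score(X_block, y_block) for m in self.members]
        # Train a new member on the block (its training accuracy serves as an
        # optimistic initial weight in this sketch).
        new = clone(self.base).fit(X_block, y_block)
        self.members.append(new)
        self.weights.append(new.score(X_block, y_block))
        # Retire the lowest-weighted member once the pool exceeds its limit.
        if len(self.members) > self.max_members:
            worst = int(np.argmin(self.weights))
            del self.members[worst], self.weights[worst]

    def predict(self, X):
        # Weighted vote over member predictions; assumes every block contains
        # every class, so all members share the same label set.
        classes = self.members[0].classes_
        tally = np.zeros((len(X), len(classes)))
        for m, w in zip(self.members, self.weights):
            idx = np.searchsorted(classes, m.predict(X))
            tally[np.arange(len(X)), idx] += w
        return classes[np.argmax(tally, axis=1)]
```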
“…Vergara et al. used a static ensemble of multiple SVMs to cope with the problem of drift in chemical gas sensors [26]. Wang et al. also proposed a static ensemble of SVMs, similar to that of Vergara et al. but differing only in the weight assignment method [27]. Amini et al. used an ensemble of SVMs or MLPs on data from a single metal oxide gas sensor (SP3-AQ2, FIS Inc., Hyogo, Japan) operated at six different rectangular heating voltage pulses (temperature modulation) to identify analytes regardless of concentration [28].…”
Section: Related Work
Mentioning confidence: 99%
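For illustration only, a static ensemble in this spirit could fit one SVM per recorded batch and never update the pool afterwards, combining members by a (possibly weighted) vote. The batch layout and default equal weights below are assumptions, not the cited authors' configurations; a real system would derive the weights from validation data on each batch.

```python
import numpy as np
from sklearn.svm import SVC

def train_static_svm_ensemble(batches):
    """Fit one SVM per calibration batch; the pool itself is never updated."""
    return [SVC(kernel="rbf", gamma="scale").fit(X, y) for X, y in batches]

def ensemble_predict(models, X, weights=None):
    """Weighted majority vote across the fixed pool of SVMs."""
    weights = np.ones(len(models)) if weights is None else np.asarray(weights)
    preds = np.array([m.predict(X) for m in models])   # shape (n_models, n_samples)
    classes = np.unique(preds)
    tally = np.array([[weights[preds[:, j] == c].sum() for c in classes]
                      for j in range(preds.shape[1])])
    return classes[np.argmax(tally, axis=1)]
```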
“…Wang et al. [27] use the weight β_i = MSE_r − MSE_i for each classifier f_i, where MSE_i is the mean square error of classifier f_i on S_T and MSE_r is the mean square error of a classifier predicting randomly.…”
Section: Dynamic Classifier Ensemble and Predictions
Mentioning confidence: 99%
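A brief sketch of that weighting, following my reading of the KDD '03 formulation: MSE_i averages (1 − f_i,c(x))^2 over the evaluation chunk S_T, where f_i,c(x) is the probability classifier f_i assigns to x's true class c, and MSE_r = Σ_c p(c)(1 − p(c))^2 is the error of a classifier that predicts randomly according to the class distribution. Treat the exact expressions as an interpretation rather than a verbatim transcription of the paper.

```python
import numpy as np

def classifier_weight(proba_true_class, y):
    """beta_i = MSE_r - MSE_i for one classifier on the evaluation chunk S_T.

    proba_true_class[k] is the probability the classifier assigned to the
    *true* class of example k; y holds the true class labels of S_T.
    """
    # MSE_i: mean squared error of the probability given to the true class.
    mse_i = np.mean((1.0 - proba_true_class) ** 2)
    # MSE_r: error of a classifier predicting randomly according to p(c).
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    mse_r = np.sum(p * (1.0 - p) ** 2)
    # Classifiers no better than random get a non-positive weight
    # (such members are typically discarded).
    return mse_r - mse_i
```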