2012
DOI: 10.1007/s12530-012-9061-6
Drift detection using uncertainty distribution divergence

Abstract: Data generated from naturally occurring processes tends to be nonstationary; examples include seasonal and gradual changes in climate data and sudden changes in financial data. In machine learning, the degradation in classifier performance due to such changes in the data is known as concept drift, and there are many approaches to detecting and handling it. Most approaches to detecting concept drift, however, make the assumption that true classes for test examples will be available at no cost shortly after classification…

Cited by 36 publications (22 citation statements)
References 25 publications (24 reference statements)
“…This is because, unlike feature-based change detectors like HDDDM, the margin density (MD) approach implicitly includes the model in the drift detection process. Other unlabeled drift detection techniques developed in the literature (Dries and Rückert, 2009; Lindstrom et al., 2013; Dredze et al., 2010; Zliobaite, 2010), described in Section 2.2.3, also incorporate the notion of a margin. However, these techniques differ from the MD3 approach in the signal being tracked…”
Section: Experimental Results on Benchmark Concept Drift Datasets
Confidence: 99%
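To make the tracked signal concrete, here is a minimal sketch of a margin-density statistic for an unlabeled batch, assuming a trained linear SVM. The function names, the |f(x)| ≤ 1 margin band, and the fixed tolerance are illustrative assumptions for the example, not the MD3 authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def margin_density(clf, X):
    """Fraction of (unlabeled) samples that fall inside the SVM margin.

    For a linear SVM, a sample lies in the margin band when its
    absolute decision value |w.x + b| is at most 1. A shift in this
    fraction between batches is the signal an MD-style detector tracks.
    """
    scores = clf.decision_function(X)
    return float(np.mean(np.abs(scores) <= 1.0))

def md_drift_flag(clf, X_ref, X_new, tol=0.10):
    # Illustrative rule: flag drift when the margin density moves by
    # more than a fixed tolerance relative to a stationary reference batch.
    return abs(margin_density(clf, X_new) - margin_density(clf, X_ref)) > tol
```

The contrast with feature-based detectors like HDDDM is visible here: the statistic is computed through the deployed model's decision function, so only distribution changes that reach the model's margin region register as drift.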
“…False alarms lead to wasted human intervention effort and as such are undesirable. The model-dependent approaches (Dries and Rückert, 2009; Lindstrom et al., 2013; Dredze et al., 2010; Zliobaite, 2010) directly consider the classification process by tracking the posterior probability estimates of classifiers to detect drift. They can be used with probabilistic classifiers, which output the class probabilities P(Y|X) before thresholding them to generate the final class label…”
Section: Model Dependent Drift Detection Methodologies
Confidence: 99%
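As a rough illustration of tracking posterior estimates on unlabeled data, the sketch below compares two windows of P(Y|X) values with a rank-sum test. The classifier stand-in, window handling, and significance level are assumptions for the example, not any of the cited methods' exact procedures.

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.linear_model import LogisticRegression

def posterior_drift(clf, X_ref, X_new, alpha=0.01):
    """Flag drift when the classifier's posterior estimates on a new
    unlabeled window differ significantly from a reference window.

    No true labels are needed: only the P(Y|X) values a probabilistic
    classifier produces before thresholding into a class label.
    """
    p_ref = clf.predict_proba(X_ref)[:, 1]  # positive-class posteriors
    p_new = clf.predict_proba(X_new)[:, 1]
    _, p_value = ranksums(p_ref, p_new)     # Wilcoxon rank-sum test
    return p_value < alpha

# Example usage (logistic regression stands in for any probabilistic model):
# clf = LogisticRegression().fit(X_train, y_train)
# if posterior_drift(clf, X_reference, X_incoming): handle_drift()
```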
“…(2) uMD (using Margin Density) [7], CDBD (Confidence Distribution Batch Detection) [6]: using an SVM as the classifier, these methods perform drift detection on unlabeled data streams. When drift is detected, a new classifier is built with labeled data from within a drift warning interval…”
Section: Experimental Results Under the Limited Access to Class Labels
Confidence: 99%
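A schematic of the detect-then-retrain protocol this statement describes. All names here are hypothetical: `batches` yields an unlabeled batch plus a deferred labeling oracle, and `detect_drift` can be any unlabeled detector such as the margin-density or posterior sketches above. This is a sketch of the loop, not the benchmark code from the cited study.

```python
from sklearn.svm import SVC

def stream_with_retraining(initial_labeled, batches, detect_drift):
    """Classify unlabeled batches; when the detector fires, request
    labels for the current batch (the drift warning interval) and
    rebuild the classifier from them."""
    X0, y0 = initial_labeled
    clf = SVC(probability=True).fit(X0, y0)
    for X_batch, get_labels in batches:   # get_labels: on-demand label oracle
        yield clf.predict(X_batch)
        if detect_drift(clf, X_batch):
            clf = SVC(probability=True).fit(X_batch, get_labels())
```

The design point is that labels are requested only when a drift warning is raised, which is what makes these methods usable under limited access to class labels.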
“…The method in [5] analyzes a sequence of the posterior estimates derived from the classifier using univariate statistical tests such as the two-sample t-test and the Wilcoxon rank-sum test. CDBD (Confidence Distribution Batch Detection) [6] uses the confidence estimated by the classifier. It uses Kullback-Leibler divergence to compare the distributions of the confidence values…”
Section: Related Work
Confidence: 99%
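To ground the comparison step, here is a minimal sketch of a CDBD-style check: bin the classifier's confidence values on a reference batch and a new batch, then compute the Kullback-Leibler divergence between the two histograms. The bin count, epsilon smoothing, and use of `decision_function` as the confidence score are assumptions for the example; the paper itself defines the exact confidence measure and alarm threshold.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """Discrete KL divergence D(P || Q) between two normalized histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def confidence_kl(clf, X_ref, X_new, bins=10):
    # Fit bin edges on the reference confidences so the two histograms
    # are directly comparable bin-for-bin.
    c_ref = clf.decision_function(X_ref)
    c_new = clf.decision_function(X_new)
    edges = np.histogram_bin_edges(c_ref, bins=bins)
    h_ref, _ = np.histogram(c_ref, bins=edges)
    h_new, _ = np.histogram(c_new, bins=edges)
    return kl_divergence(h_new.astype(float), h_ref.astype(float))
```

A drift alarm would then compare this divergence against a threshold estimated from batches known to be stationary.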