2019
DOI: 10.1016/j.ins.2019.03.025
AMANDA: Semi-supervised density-based adaptive model for non-stationary data with extreme verification latency


Cited by 26 publications (10 citation statements)
References 21 publications
“…The accuracy of the proposed NEVE approaches was also compared with the DWM [26], Learn++.NSE [9], RCD [16], EFPT [55] and AMANDA [56] models. We used 3 different drift detectors for the RCD algorithm: DDM [14], EDDM [5] and ECDD [42].…”
Section: Results
confidence: 99%
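The statement above compares models under several drift detectors (DDM, EDDM, ECDD). As a minimal sketch of the idea behind DDM (not the exact published algorithm, and not part of the cited codebases): monitor the online error rate p and its deviation s = sqrt(p(1-p)/n), remember the best p + s seen so far, and signal a drift when the current p + s rises several deviations above that minimum.

```python
import math

class SimpleDDM:
    """Toy sketch of the Drift Detection Method (DDM) idea: signal a
    concept drift when the running error rate rises significantly above
    the best level observed so far. Names and thresholds are illustrative."""

    def __init__(self, drift_level=3.0):
        self.drift_level = drift_level
        self.reset()

    def reset(self):
        self.n = 0
        self.errors = 0
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        """error: 1 if the classifier misclassified this sample, else 0.
        Returns True when a drift is signalled (and the detector resets)."""
        self.n += 1
        self.errors += error
        p = self.errors / self.n                     # running error rate
        s = math.sqrt(p * (1 - p) / self.n)          # its std. deviation
        if self.n >= 30:                             # wait for a stable estimate
            if p + s < self.p_min + self.s_min:      # remember the best regime
                self.p_min, self.s_min = p, s
            if p + s > self.p_min + self.drift_level * self.s_min:
                self.reset()                         # drift: error rate jumped
                return True
        return False

# Stable stream (~10% error), then a sudden jump to 100% error.
detector = SimpleDDM()
stable = [detector.update(1 if i % 10 == 0 else 0) for i in range(100)]
shifted = [detector.update(1) for _ in range(30)]
```

During the stable phase no drift is signalled; once the error rate jumps, p + s crosses the stored minimum plus three deviations and the detector fires.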
“…A more successful and widely used approach, though, is to use a group of different classifiers (an ensemble) to cope with changes in the environment. Several different ensemble models have been proposed in the literature, including recent approaches like [56][57][58], and they may or may not weigh each of their members. Most models using weighted classifier ensembles determine the weights for each classifier using a set of heuristics related to classifier performance on the most recent data received [22].…”
Section: Introduction
confidence: 99%
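The weighting heuristic described above can be sketched in a few lines. This is a hypothetical illustration (not the AMANDA algorithm or any specific cited model): each member's weight is its accuracy over a sliding window of recent labelled samples, and the ensemble takes a weighted majority vote.

```python
from collections import deque

class Const:
    """Toy member that always predicts the same class (for the demo)."""
    def __init__(self, c):
        self.c = c
    def predict(self, x):
        return self.c

class WeightedVoteEnsemble:
    """Illustrative weighted-majority ensemble: weights come from each
    member's accuracy on the most recent labelled samples."""

    def __init__(self, members, window=50):
        self.members = members  # objects exposing a .predict(x) method
        self.recent = [deque(maxlen=window) for _ in members]  # 1 = correct

    def weights(self):
        # Accuracy over the window; unweighted (1.0) until labels arrive.
        return [sum(r) / len(r) if r else 1.0 for r in self.recent]

    def predict(self, x):
        votes = {}
        for member, w in zip(self.members, self.weights()):
            y = member.predict(x)
            votes[y] = votes.get(y, 0.0) + w
        return max(votes, key=votes.get)

    def observe(self, x, y_true):
        """Record each member's hit/miss once the true label arrives
        (in the verification-latency setting, labels may arrive late)."""
        for member, r in zip(self.members, self.recent):
            r.append(1 if member.predict(x) == y_true else 0)

# One member always right, one always wrong; weights separate them quickly.
ensemble = WeightedVoteEnsemble([Const(1), Const(0)])
for _ in range(10):
    ensemble.observe(None, 1)
```

After a few labelled observations, the consistently wrong member's weight drops to zero and the ensemble follows the accurate one.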
“…However, expanding the prediction horizon to N p > 1 should yield better dispatch strategies and requires more experimental tests. Finally, we highlight some directions for further research: 1) density approaches aiming to reduce the bias of the learning model induced by the similar customer calls contained in the stream [Ferreira et al 2018a]; 2) since some characteristics of this real-time dataset are non-stationary, learning models for concept drift can improve the results [Ferreira et al 2018b]; and 3) development of a better MPC objective function with other assumptions, such as the total time per month that a customer is without energy and the customer's location.…”
Section: Discussion
confidence: 99%
“…• Distributional shifts: a condition that degrades ML performance over time, since the training dataset may differ from the real inputs. This situation is also known as concept drift [13]. Examples of distributional shift include changes in class attributes such as dimension, physical characteristics, contrast, brightness, and other pixel-related variations in the image.…”
Section: B. Out-of-distribution Data
confidence: 99%
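A toy version of the distributional-shift check described above can be written for a single scalar feature such as mean image brightness. This is an illustrative sketch, not a method from the cited paper: it measures how many training-set standard deviations the incoming batch mean has drifted from the training mean.

```python
import statistics

def shift_score(train_values, new_values):
    """How many training-set standard deviations the new batch mean
    has moved from the training mean (a crude one-feature shift check)."""
    mu = statistics.mean(train_values)
    sigma = statistics.pstdev(train_values) or 1.0  # avoid division by zero
    return abs(statistics.mean(new_values) - mu) / sigma

# Hypothetical brightness values in [0, 1].
train = [0.50, 0.52, 0.48, 0.51, 0.49]
same = [0.49, 0.51, 0.50]   # drawn from the same regime
dark = [0.20, 0.22, 0.18]   # much darker images: a distributional shift
```

A batch from the training regime scores near zero, while the darker batch lands many deviations away and would be flagged (e.g. above a 3-sigma threshold).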