2018 7th Brazilian Conference on Intelligent Systems (BRACIS)
DOI: 10.1109/bracis.2018.00039

Density-Based Core Support Extraction for Non-stationary Environments with Extreme Verification Latency

Cited by 4 publications (2 citation statements)
References 29 publications
“…However, expanding the prediction horizon to N_p > 1 should yield better dispatch strategies, though it requires further experimental tests. Finally, we highlight some directions for further research: 1) density-based approaches to reduce the bias induced in the learning model by the similar customer calls contained in the stream [Ferreira et al. 2018a]; 2) since some characteristics of this real-time dataset are non-stationary, learning models designed for concept drift could improve the results [Ferreira et al. 2018b]; and 3) development of a better MPC objective function with additional assumptions, such as the total time per month that a customer is without energy and the customer's location.…”
Section: Discussion
confidence: 99%
“…The choice of k represents a trade-off between accuracy on the training set and the amount of memory required to store the samples used at runtime. Previous works showed that such density-based selection can discard up to 90% of the original dataset without a drastic performance drop [9]. Therefore, in this work, we chose k = 10% of the training set.…”
Section: Monitor Building
confidence: 99%
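
The density-based core-support selection referred to in the statement above can be illustrated with a minimal sketch. This is not the cited implementation: the k-NN density score, the function name core_support_extraction, and the toy data are assumptions made for illustration; only the idea of retaining the densest k% (here 10%) of the training samples comes from the statement.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def core_support_extraction(X, keep_fraction=0.10, n_neighbors=10):
    """Return indices of the densest `keep_fraction` of samples in X.

    Density is scored with a simple k-NN heuristic (an assumption here):
    the inverse of the mean distance to the `n_neighbors` nearest
    neighbours. Samples in dense regions score high and are kept as
    "core supports"; the rest of the training set can be discarded.
    """
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
    distances, _ = nn.kneighbors(X)
    # Column 0 is each point's zero distance to itself; drop it.
    mean_dist = distances[:, 1:].mean(axis=1)
    density = 1.0 / (mean_dist + 1e-12)

    n_keep = max(1, int(keep_fraction * len(X)))
    # Indices of the n_keep highest-density samples.
    return np.argsort(density)[-n_keep:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))          # toy 2-D snapshot of a stream
    idx = core_support_extraction(X, keep_fraction=0.10)
    print(f"Retained {len(idx)} of {len(X)} samples as core supports")
```

With keep_fraction = 0.10 the sketch mirrors the k = 10% choice quoted above: roughly 90% of the samples are discarded, trading some training-set accuracy for a much smaller memory footprint at runtime.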