2020 International Conference on Data Mining Workshops (ICDMW)
DOI: 10.1109/icdmw51313.2020.00121
Incremental Rebalancing Learning on Evolving Data Streams

Cited by 12 publications (13 citation statements) | References 10 publications
“…Instead, the methods we propose in this paper (i.e. SML+), [18], and RebalanceStream [19] are considered active approaches. Then, there are other models, like OOB and UOB [20] and WEOB1 and WEOB2 [21], that, similarly to C-SMOTE, propose only an online rebalance strategy, leaving concept drift management to the pipelined algorithm.…”
Section: Related Work
confidence: 99%
“…We pipelined these algorithms with the C-SMOTE meta-strategy and compared them against the stand-alone versions. Instead, as SML+ models, we tested the ARF RE [18], RB [19], OOB and UOB [20] techniques. Unfortunately, the implementations of the other algorithms cited in the Related Work section were unavailable or did not work.…”
Section: Algorithms
confidence: 99%
“…RebalanceStream (RB) [6] uses ADWIN [8] to detect concept drift in the stream by training four models m 1 , m 2 , m 3 , and m 4 in parallel: m 1 is trained with the original samples in input; m 2 uses the samples collected and rebalanced using SMOTE from the beginning to a change (when the last concept drift occurred); m 3 uses the samples collected from a warning (when the most recent concept drift started to occur) to a change; and m 4 uses the same data as m 3 but rebalanced using SMOTE. After a change, the best of the four models, selected according to the kappa statistic, is used to continue the execution.…”
Section: Related Work
confidence: 99%
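The four-model scheme quoted above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `naive_rebalance` is a stand-in for SMOTE (it duplicates minority samples instead of synthesising neighbours), `cohens_kappa` implements the kappa statistic used to pick the winning model, and all sample values and predictions are made up for the example.

```python
import random

def naive_rebalance(samples, seed=0):
    """Stand-in for SMOTE: duplicate minority-class samples at random
    until every class matches the majority count (RebalanceStream uses
    real SMOTE, which synthesises new neighbours instead)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in samples:
        by_class.setdefault(y, []).append((x, y))
    target = max(len(v) for v in by_class.values())
    balanced = []
    for items in by_class.values():
        balanced.extend(items)
        balanced.extend(rng.choice(items) for _ in range(target - len(items)))
    return balanced

def cohens_kappa(y_true, y_pred):
    """Agreement corrected for chance -- the kappa statistic used to
    pick the best of the four models after a drift is confirmed."""
    n = len(y_true)
    p_observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    labels = set(y_true) | set(y_pred)
    p_chance = sum((y_true.count(l) / n) * (y_pred.count(l) / n)
                   for l in labels)
    return 1.0 if p_chance == 1 else (p_observed - p_chance) / (1 - p_chance)

# Buffers that feed the four models (illustrative):
#   m1 <- raw stream            m2 <- naive_rebalance(since_change)
#   m3 <- since_warning         m4 <- naive_rebalance(since_warning)
since_change  = [(0, 0), (1, 0), (2, 0), (3, 1)]   # toy (x, y) samples
since_warning = since_change[-2:]                   # samples since warning
m2_train = naive_rebalance(since_change)            # balanced copy
m4_train = naive_rebalance(since_warning)

# After ADWIN signals a change, keep the model with the highest kappa
# on a held-out window (these predictions are invented for the demo).
truth = [0, 1, 0, 1]
preds = {"m1": [0, 0, 0, 0], "m2": [0, 1, 0, 1],
         "m3": [1, 0, 1, 0], "m4": [0, 1, 0, 0]}
best = max(preds, key=lambda name: cohens_kappa(truth, preds[name]))
```

Note that a model predicting only the majority class scores kappa near zero even with decent accuracy, which is why kappa rather than accuracy drives the selection on imbalanced streams.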
“…We pipeline these algorithms with the new C-SMOTE meta-strategy and compare against the stand-alone versions. We also compare the ARF RE [5] technique to ARF pipelined with C-SMOTE, and the RB [6] strategy to SWT pipelined with C-SMOTE. We additionally compare the OOB and UOB [7] results against the best technique pipelined with C-SMOTE.…”
Section: A. Experimental Settings
confidence: 99%