2014 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2014.6889926
Efficient class incremental learning for multi-label classification of evolving data streams

Cited by 12 publications (8 citation statements) | References 10 publications
“…The improved version, ADWIN2, is more efficient than the first one in both time and memory consumption [31]. Other research, such as [32,33], uses two adjustable windows to represent old and new samples. Xioufis et al. [9] used two windows per label to capture the positive and negative groups of samples.…”
Section: Proposed Methods 3.1 The Online Clustering Model (mentioning, confidence: 99%)
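
The two-windows-per-label idea quoted above can be pictured with a minimal sketch. The class name, method names, and window sizes below are illustrative assumptions, not the cited paper's actual data structure; only the routing logic it describes is shown:

```python
from collections import deque

class PerLabelWindows:
    """Sketch of the two-windows-per-label idea attributed to
    Xioufis et al. [9]: for every label, keep one bounded window of
    positive examples and one of negative examples."""

    def __init__(self, labels, pos_size=50, neg_size=200):
        # One (positive, negative) window pair per label.
        self.windows = {
            lbl: (deque(maxlen=pos_size), deque(maxlen=neg_size))
            for lbl in labels
        }

    def update(self, x, y_set):
        # Route the instance x into each label's positive or negative
        # window, depending on whether the label is present in y_set.
        for lbl, (pos, neg) in self.windows.items():
            (pos if lbl in y_set else neg).append(x)

# Usage: feed stream instances in one at a time.
model = PerLabelWindows(labels=["sports", "politics"])
model.update({"w1": 1.0}, {"sports"})
```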
“…That framework was also combined with ADWIN [31] to form a new algorithm, EaHTps, which can handle concept drift. The Pruned Set-based label combination module of EaHTps was improved by Shi et al. [32], in which new frequent label combinations are dynamically recognized to update the set of label combinations. Xioufis et al. [9] used BR to solve the MLC problem by transforming the multi-label task into several binary classification tasks.…”
Section: Multi-label Classification for Data Stream (mentioning, confidence: 99%)
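
The BR (Binary Relevance) transformation mentioned in this excerpt is a standard reduction, so a short sketch may help: one independent binary classifier per label. The scikit-learn SGDClassifier base learner and partial_fit updates are assumptions chosen here to fit the streaming setting, not the cited paper's exact setup:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class BinaryRelevance:
    """Binary Relevance: train one independent binary classifier
    per label, each seeing the same feature vector."""

    def __init__(self, n_labels):
        self.models = [SGDClassifier(loss="log_loss") for _ in range(n_labels)]

    def partial_fit(self, X, Y):
        # Y is an (n_samples, n_labels) 0/1 indicator matrix.
        for j, clf in enumerate(self.models):
            clf.partial_fit(X, Y[:, j], classes=[0, 1])

    def predict(self, X):
        # Stack each per-label prediction into an indicator matrix.
        return np.column_stack([clf.predict(X) for clf in self.models])

# Toy usage on a small batch drawn from a stream.
X = np.random.rand(8, 4)
Y = (np.random.rand(8, 3) > 0.5).astype(int)
br = BinaryRelevance(n_labels=3)
br.partial_fit(X, Y)
print(br.predict(X))
```

Each per-label model learns in isolation; label correlations are deliberately ignored, which is BR's main simplification and the motivation for label-combination methods such as Pruned Sets.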
“…Some approaches have been proposed for class-incremental learning (Da et al. 2014; Shi et al. 2014) and stream multi-label learning (Mu et al. 2017; Qu et al. 2009; Read et al. 2011; …). In these two problems, new labels are unobserved during the training stage but appear in the test stage.…”
Section: Related Work (mentioning, confidence: 99%)
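
The class-incremental setting described here, where labels unseen during training appear later in the stream, can be illustrated by growing a per-label model set on the fly. The growth rule and base learner below are hypothetical stand-ins, not the algorithms of the works cited above:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class GrowableBR:
    """Toy class-incremental multi-label learner: when a previously
    unseen label arrives on the stream, a fresh binary classifier is
    created for it on the spot."""

    def __init__(self):
        self.models = {}  # label -> binary classifier

    def learn_one(self, x, y_set):
        x = np.asarray(x, dtype=float).reshape(1, -1)
        # Grow the model on first sight of each new label.
        for lbl in y_set:
            self.models.setdefault(lbl, SGDClassifier(loss="log_loss"))
        # Update every known label's classifier with this instance.
        for lbl, clf in self.models.items():
            clf.partial_fit(x, [1 if lbl in y_set else 0], classes=[0, 1])

model = GrowableBR()
model.learn_one([0.2, 0.7], {"a"})
model.learn_one([0.9, 0.1], {"a", "b"})  # label "b" appears mid-stream
```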
“…Moreover, it is also able to simulate how this dependence and relationship evolve over time. As far as we know, the most commonly used approach in [115], [116], [121] is the MOA (Massive Online Analysis) framework, with which the needed datasets can be synthesized with different concept drifts and label dependencies.…”
Section: Synthetic Datasets (mentioning, confidence: 99%)
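
MOA itself is a Java framework; as a language-neutral illustration of the kind of stream it can synthesize, the following standalone sketch generates an abrupt concept drift and a simple label dependency. The function name and thresholds are arbitrary assumptions, and this does not use the MOA API:

```python
import numpy as np

def synthetic_multilabel_stream(n=1000, drift_at=500, seed=0):
    """Yield (features, label_set) pairs with an abrupt concept drift
    at position `drift_at` and a dependency of label B on label A."""
    rng = np.random.default_rng(seed)
    for t in range(n):
        x = rng.random(2)
        # Concept drift: the rule defining label A flips mid-stream.
        a = (x[0] > 0.5) if t < drift_at else (x[0] < 0.5)
        # Label dependency: B is far more likely when A holds.
        b = rng.random() < (0.8 if a else 0.1)
        labels = set()
        if a:
            labels.add("A")
        if b:
            labels.add("B")
        yield x, labels

# Usage: iterate the stream like any other data source.
for x, y in synthetic_multilabel_stream(n=3):
    print(x, y)
```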