Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.218
Continual Learning for Text Classification with Information Disentanglement Based Regularization

Abstract: Continual learning has become increasingly important as it enables NLP models to constantly learn and gain knowledge over time. Previous continual learning methods are mainly designed to preserve knowledge from previous tasks, without much emphasis on how to well generalize models to new tasks. In this work, we propose an information disentanglement based regularization method for continual learning on text classification. Our proposed method first disentangles text hidden spaces into representations that are …

Cited by 32 publications (45 citation statements) · References 28 publications
“…Sentence Embedding Alignment (Wang et al., 2019) stores sentences together with their representations and learns a simple linear mapping from the old representations to the new ones, given a batch of stored sentences. Huang et al. (2021) propose an information disentanglement based regularization approach that separates task-agnostic from task-specific representations and applies a distinct regularization to each. Model regularization-based approaches perform regularization directly in the weight space.…”
Section: Continual Learning Algorithms in NLP · Citation type: mentioning · Confidence: 99%
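To make the disentanglement idea concrete, here is a minimal PyTorch sketch of the general pattern described in the statement above, not the exact IDBR objective from Huang et al. (2021). It assumes a HuggingFace-style encoder whose output exposes last_hidden_state; the two projection heads, the frozen copy of the previous model, and the loss weights are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    class DisentangledEncoder(torch.nn.Module):
        """Splits a shared encoder output into a task-agnostic and a
        task-specific representation via two projection heads."""
        def __init__(self, encoder, hidden=768, rep=128):
            super().__init__()
            self.encoder = encoder  # e.g. a HuggingFace BERT-style encoder (assumption)
            self.agnostic_head = torch.nn.Linear(hidden, rep)
            self.specific_head = torch.nn.Linear(hidden, rep)

        def forward(self, input_ids, attention_mask):
            # Use the [CLS] vector as the sentence representation.
            h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
            return self.agnostic_head(h), self.specific_head(h)

    def disentangled_reg(model, old_model, batch, lam_a=1.0, lam_s=0.1):
        """Penalize drift from the frozen previous model, more strongly in
        the task-agnostic space than in the task-specific one (the weights
        here are illustrative, not the paper's values)."""
        a_new, s_new = model(batch["input_ids"], batch["attention_mask"])
        with torch.no_grad():
            a_old, s_old = old_model(batch["input_ids"], batch["attention_mask"])
        return lam_a * F.mse_loss(a_new, a_old) + lam_s * F.mse_loss(s_new, s_old)

When training on a new task, this penalty would be added to the task's classification loss, so the task-agnostic space stays stable while the task-specific space is freer to adapt.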
“…Model expansion-based approaches separate task-specific parameters from irrelevant ones and freeze shared parameters to prevent catastrophic forgetting. While not always studied explicitly in this setting, adapter-based approaches (Wang et al., 2021) can be applied to continual learning. These algorithms learn a single adapter per task without interfering with the pretrained weights or with other tasks; at the same time, knowledge captured in previous tasks can be effectively fused into new tasks (Pfeiffer et al., 2021).…”
Section: Continual Learning Algorithms in NLP · Citation type: mentioning · Confidence: 99%
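As an illustration of the one-adapter-per-task pattern, a minimal sketch in the spirit of bottleneck adapters, not the specific method of Wang et al. (2021) or Pfeiffer et al. (2021); the dimensions and naming are assumptions:

    import torch

    class Adapter(torch.nn.Module):
        """Bottleneck adapter: down-project, nonlinearity, up-project,
        plus a residual connection so the frozen features pass through."""
        def __init__(self, hidden=768, bottleneck=64):
            super().__init__()
            self.down = torch.nn.Linear(hidden, bottleneck)
            self.up = torch.nn.Linear(bottleneck, hidden)

        def forward(self, h):
            return h + self.up(torch.relu(self.down(h)))

    # One adapter per task; only the current task's adapter receives
    # gradients, so parameters used by earlier tasks are never overwritten.
    adapters = torch.nn.ModuleDict({f"task_{i}": Adapter() for i in range(3)})

    def encode_for_task(frozen_features, task_id):
        return adapters[f"task_{task_id}"](frozen_features)

Because the shared encoder stays frozen, adding a task only adds a small number of parameters, which is what makes this family attractive for continual learning.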
“…Generally, existing CL methods encompass memory and generative replay-based approaches (Robins, 1995; Lopez-Paz and Ranzato, 2017; Shin et al., 2017), regularization-based approaches (Kirkpatrick et al., 2017; Nguyen et al., 2018), and model expansion-based approaches (Shin et al., 2017). Recently, continual learning has drawn attention in the NLP field (Sun et al., 2020; Wang et al., 2019b; Huang et al., 2021).…”
Section: Related Work · Citation type: mentioning · Confidence: 99%
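For the regularization-based family cited above (Kirkpatrick et al., 2017), a minimal sketch of an EWC-style penalty in the weight space; the diagonal Fisher estimate, the loss_fn(model, batch) signature, and the strength lam are simplified assumptions:

    import torch

    def estimate_fisher(model, dataloader, loss_fn):
        """Diagonal Fisher approximation: average the squared gradients of
        the task loss over a dataloader, one entry per parameter."""
        fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        for batch in dataloader:
            model.zero_grad()
            loss_fn(model, batch).backward()  # hypothetical loss signature
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
        return {n: f / max(len(dataloader), 1) for n, f in fisher.items()}

    def ewc_penalty(model, old_params, fisher, lam=100.0):
        """Penalize movement of parameters that were important for previous
        tasks: lam * sum_i F_i * (theta_i - theta*_i)^2."""
        loss = 0.0
        for n, p in model.named_parameters():
            if n in fisher:
                loss = loss + (fisher[n] * (p - old_params[n]) ** 2).sum()
        return lam * loss

    # Training on the next task then minimizes:
    #   task_loss + ewc_penalty(model, old_params, fisher)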
“…Both random selection and k-center methods use heuristics to update the memory. EA-EMR [5] and IDBR [18] select informative samples by referencing the cluster centroids obtained via K-Means. iCaRL [19] chooses the samples that are nearest to the mean of the distribution.…”
Section: Sample Selection Schemes · Citation type: mentioning · Confidence: 99%
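To illustrate the two selection schemes attributed to these methods above, a minimal sketch (my naming; assumes precomputed sentence embeddings as a NumPy array and scikit-learn): pick one example per K-Means cluster, the one closest to its centroid, or, iCaRL-style, the examples closest to the overall mean.

    import numpy as np
    from sklearn.cluster import KMeans

    def select_memory_kmeans(embeddings, n_store):
        """Centroid-based selection (in the spirit of EA-EMR/IDBR): one
        representative example per cluster, nearest to the centroid."""
        km = KMeans(n_clusters=n_store, n_init=10).fit(embeddings)
        chosen = []
        for k in range(n_store):
            members = np.where(km.labels_ == k)[0]
            dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[k], axis=1)
            chosen.append(members[dists.argmin()])
        return np.array(chosen)

    def select_memory_nearest_mean(embeddings, n_store):
        """iCaRL-style selection: keep the samples nearest to the mean."""
        dists = np.linalg.norm(embeddings - embeddings.mean(axis=0), axis=1)
        return np.argsort(dists)[:n_store]

Both return indices into the original dataset, so the same routine can be used to populate a fixed-size episodic memory after each task.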