2023
DOI: 10.48550/arxiv.2301.12314
Preprint

Progressive Prompts: Continual Learning for Language Models

Abstract: We introduce Progressive Prompts, a simple and efficient approach for continual learning in language models. Our method allows forward transfer and resists catastrophic forgetting, without relying on data replay or a large number of task-specific parameters. Progressive Prompts learns a new soft prompt for each task and sequentially concatenates it with the previously learned prompts, while keeping the base model frozen. Experiments on standard continual learning benchmarks show that our approach outperforms state-of-the-art methods.
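To make the mechanism in the abstract concrete, here is a minimal PyTorch-style sketch of the idea, assuming a generic embedding-level interface to the frozen base model; the class name, `prompt_len`, and the initialization are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the Progressive Prompts idea (illustrative, not the
# authors' code): one trainable soft prompt per task, all earlier prompts
# and the base model frozen, prompts concatenated in task order.
import torch
import torch.nn as nn

class ProgressivePromptModel(nn.Module):
    def __init__(self, base_model, embed_dim, prompt_len=10):
        super().__init__()
        self.base_model = base_model
        for p in self.base_model.parameters():
            p.requires_grad = False          # base LM stays frozen throughout
        self.embed_dim = embed_dim
        self.prompt_len = prompt_len
        self.prompts = nn.ParameterList()    # one soft prompt per task

    def start_new_task(self):
        # Freeze every previously learned prompt; only the new one trains.
        for p in self.prompts:
            p.requires_grad = False
        self.prompts.append(
            nn.Parameter(torch.randn(self.prompt_len, self.embed_dim) * 0.02)
        )

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, embed_dim) token embeddings.
        # Prepend [P1; P2; ...; Pk] to the input, oldest prompt first.
        # (Assumes start_new_task() has been called at least once.)
        prompt = torch.cat(list(self.prompts), dim=0)
        prompt = prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return self.base_model(torch.cat([prompt, input_embeds], dim=1))
```

Training on task k then reduces to optimizing only the newest prompt with that task's loss, and inference always uses the full concatenation, which is what lets earlier prompts transfer forward without any replay buffer.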



Cited by 4 publications (8 citation statements)
References 16 publications
“…In Table 2, we compare with the previous state-of-the-art methods: MBPA++ [5], LAMOL [31], IDBR [10] and ProgressivePrompt [25] on full-setting datasets. The results show that C&F achieves the best performance on 4 different orders.…”
Section: Results
Mentioning (confidence: 99%)
“…IDBR [10] disentangles the text representation space into a task-generic space and a task-specific space. ProgressivePrompt [25] learns a new soft prompt for each task and sequentially concatenates it with the previously learned ones, while keeping the base model frozen.…”
Section: Methods For Comparison
Mentioning (confidence: 99%)
“…The model starts with simple prompts and progresses to more challenging ones as it becomes more proficient or as the task requires (Zheng et al. 2023). Progressive prompting allows the model to incrementally build upon its knowledge and skills, leading to more sophisticated output (Razdaibiedina et al. 2023).…”
Section: Progressive Prompting
Mentioning (confidence: 99%)
“…In rehearsal-based methods, data from previously learned tasks are stored in a rehearsal buffer and used in the current task in addition to the current training set [3,4,33]. A more recent prompt-based, rehearsal-free approach combines powerful pretrained backbones with learnable prompts that retain the knowledge acquired from the different tasks without modifying the weights of the main backbone, thus avoiding forgetting [52,53,41,37]. Our method draws inspiration from the solutions proposed in the latter methods.…”
Section: Incremental Learning
Mentioning (confidence: 99%)
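For contrast with the prompt-based, rehearsal-free family this last quote describes, a bare-bones version of the rehearsal buffer it mentions could look like the sketch below; the `RehearsalBuffer` name and the reservoir-sampling policy are assumptions for illustration, not taken from the cited works.

```python
# Toy rehearsal buffer (illustrative): keep a bounded uniform sample of
# past-task examples via reservoir sampling and replay them alongside the
# current task's batches.
import random

class RehearsalBuffer:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.buffer = []   # stored (input, label, task_id) examples
        self.seen = 0      # total examples offered so far

    def add(self, example):
        # Reservoir sampling: each offered example ends up in the buffer
        # with probability capacity / seen, keeping the sample uniform.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        # Draw past examples to interleave with the current training batch.
        return random.sample(self.buffer, min(k, len(self.buffer)))
```

Each training step would then mix `buffer.sample(k)` into the current task's batch, which mitigates forgetting at the cost of storing raw past data, exactly the storage that the prompt-based methods above avoid.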