2023
DOI: 10.1007/s10489-022-04441-z

Continual prune-and-select: class-incremental learning with specialized subnetworks

Abstract: We introduce a new continual (or lifelong) learning algorithm called LDA-CP&S that performs segmentation tasks without undergoing catastrophic forgetting. The method is applied to two different surface defect segmentation problems that are learned incrementally, i.e. providing data about one type of defect at a time, while still being capable of predicting every defect that was seen previously. Our method creates a defect-related subnetwork for each defect type via iterative pruning and trains a classifier bas…
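
The abstract above describes the core mechanism: a subnetwork is carved out for each new task (here, defect type) by iterative pruning, and a separate classifier later selects which subnetwork to use at inference. The following is a minimal PyTorch sketch of that general idea only; it uses one-shot magnitude pruning and illustrative names (TaskMaskedNet, build_mask, prune_fraction), and is not the paper's actual CP&S/LDA procedure.

    # Minimal sketch: one binary mask per task, obtained by magnitude pruning of the
    # weights trained for that task. In a full continual-learning implementation the
    # weights kept by a mask would then be frozen before the next task (omitted here).
    import torch
    import torch.nn as nn

    class TaskMaskedNet(nn.Module):
        def __init__(self, in_dim=64, hidden=128, out_dim=10):
            super().__init__()
            self.fc1 = nn.Linear(in_dim, hidden)
            self.fc2 = nn.Linear(hidden, out_dim)
            self.masks = {}  # task_id -> {param_name: binary mask}

        def build_mask(self, task_id, prune_fraction=0.8):
            """Keep only the largest-magnitude weights for this task; iterative
            pruning would repeat this prune/retrain step several times."""
            mask = {}
            for name, p in self.named_parameters():
                if p.dim() < 2:  # leave biases dense in this sketch
                    continue
                k = int(p.numel() * prune_fraction)
                threshold = p.detach().abs().flatten().kthvalue(k).values
                mask[name] = (p.detach().abs() > threshold).float()
            self.masks[task_id] = mask

        def forward(self, x, task_id):
            m = self.masks[task_id]
            h = torch.relu(nn.functional.linear(x, self.fc1.weight * m["fc1.weight"], self.fc1.bias))
            return nn.functional.linear(h, self.fc2.weight * m["fc2.weight"], self.fc2.bias)

    net = TaskMaskedNet()
    net.build_mask(task_id=0)   # after training on task 0 (training loop omitted)
    x = torch.randn(4, 64)
    logits = net(x, task_id=0)  # at test time a separate classifier would pick task_id
    print(logits.shape)         # torch.Size([4, 10])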


Cited by 7 publications (3 citation statements)
References: 83 publications
“…Introducing task-specific parameters. Research on continual neural pruning assigns some model capacity to each task by iteratively pruning and retraining (sub-)networks that are specialized to each task (Mallya and Lazebnik, 2018; Geng et al., 2021; Dekhovich et al., 2023; Kang et al., 2022; Hung et al., 2019; Gurbuz and Dovrolis, 2022; Jung et al., 2020; Wang et al., 2022). However, while such methods are effective in overcoming forgetting, the evolution and learning behavior of pruned subnetworks provides little interpretability regarding the role of individual parts of the network in solving CL problems.…”
Section: Related Work
confidence: 99%
“…Recent work has looked at improving the training of PINNs for such systems, including applications of the neural tangent kernel [33], but more work remains to be done. The closest work we are aware of for continual learning with PINNs is the backward-compatible PINNs in [34] and incremental PINNs (iPINNs) in [35]. Backward-compatible PINNs train N PINNs on a sequence of N time domains, and in each new domain enforce that the output from the current PINN satisfies the PINN loss function in the current domain and the output of the previous model on all previous domains.…”
Section: Introduction
confidence: 99%
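
To make the mechanism described in the statement above concrete, a schematic objective for stage n of backward-compatible PINNs can be written as follows, in our own notation (the domains $\Omega_k$, the network $u_{\theta_n}$, and the weight $\lambda$ are illustrative assumptions, not symbols taken from [34]):

$$
\mathcal{L}_n(\theta_n) \;=\; \mathcal{L}_{\mathrm{PINN}}\!\left(u_{\theta_n};\,\Omega_n\right) \;+\; \lambda \sum_{k=1}^{n-1} \left\lVert u_{\theta_n} - u_{\theta_{n-1}} \right\rVert_{L^2(\Omega_k)}^{2},
$$

i.e. the current network must satisfy the physics residual on the new time domain $\Omega_n$ while reproducing the previous model's output on all earlier domains $\Omega_1, \dots, \Omega_{n-1}$.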
“…With its need for finetuning on a series of datasets, this approach does not alleviate the ongoing concern of substantial computational costs for model execution. Contrary to this trend, a few research efforts (Dekhovich et al., 2023) have recently been made by applying pruning methods within a continual learning context. However, they still require re-training of the base pre-trained models on the new datasets, with a potential reduction of their generalizability originally set toward various tasks.…”
Section: Introduction
confidence: 99%