2021
DOI: 10.1109/access.2021.3126027
SPACE: Structured Compression and Sharing of Representational Space for Continual Learning

Cited by 18 publications (47 citation statements)
References 26 publications
“…Although the starting point is different, our gradient modification process is closely related to gradient-constraint methods such as OWM (Zeng et al., 2019) and GPM (Saha et al., 2021). OWM uses similar iteratively updated projectors derived from recursive least squares (RLS); it regards each layer as an independent linear classifier and uses the output of the previous layer to build the projection matrix.…”
Section: Related Work
Confidence: 99%
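The RLS-derived projector that the statement attributes to OWM can be sketched as follows; this is a minimal NumPy illustration of the recursive update idea (the step size `alpha` and variable names are illustrative, not the paper's exact settings):

```python
import numpy as np

def owm_update(P, x, alpha=1e-3):
    """RLS-style recursive update of the projector P after
    observing an input activation x from the current task."""
    Px = P @ x
    return P - np.outer(Px, Px) / (alpha + x @ Px)

def owm_project(P, grad):
    """Modify a weight gradient so the update barely disturbs
    responses to previously seen inputs: each row of grad is
    projected through P."""
    return grad @ P

rng = np.random.default_rng(0)
d = 8
P = np.eye(d)                      # start from the identity projector
x_old = rng.standard_normal(d)
x_old /= np.linalg.norm(x_old)
P = owm_update(P, x_old)           # absorb an input from an earlier task

# The updated projector almost annihilates the old input direction,
# so projected gradients leave old-task responses nearly unchanged.
print(np.linalg.norm(P @ x_old))
```

The printed norm is close to zero, which is the sense in which the projector "protects" directions used by previous tasks.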
“…This brings an additional advantage: our method can reuse the hyperparameters of the original single-task models. GPM (Saha et al., 2021) projects the gradient of each layer into a lower-dimensional residual space of previous tasks, whereas the parameter space of RGO is consistent across tasks. RGO maintains the network's fitting ability as the number of tasks increases.…”
Section: Related Work
Confidence: 99%
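The GPM-style projection this statement refers to can be sketched in a few lines; a minimal NumPy sketch, where the SVD-based basis construction and the rank `k` are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

def build_basis(reprs, k):
    """Orthonormal basis (top-k left singular vectors) spanning the
    representation space of previous tasks; columns of reprs are
    sample activations."""
    U, _, _ = np.linalg.svd(reprs, full_matrices=False)
    return U[:, :k]                     # shape (d, k)

def gpm_project(grad, M):
    """Project the gradient onto the residual space (orthogonal
    complement) of the stored subspace: g - M M^T g."""
    return grad - M @ (M.T @ grad)

rng = np.random.default_rng(1)
d, n, k = 16, 32, 4
M = build_basis(rng.standard_normal((d, n)), k)
g = rng.standard_normal(d)
g_proj = gpm_project(g, M)

# The projected gradient has no component inside the stored subspace,
# so a step along it does not interfere with previous tasks.
print(np.linalg.norm(M.T @ g_proj))
```

Because the columns of `M` are orthonormal, `M.T @ g_proj` is zero up to floating-point error, which is exactly the "residual space" constraint described in the quote.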