2022
DOI: 10.1016/j.neunet.2022.02.001
Deep Bayesian Unsupervised Lifelong Learning

Cited by 21 publications (10 citation statements)
References 22 publications
“…Architecture-based methods assign isolated parameters for each task. These methods can be further categorized as expanding the model [51,65,26,31,59], or dividing the model [35,52,58,17,11]. However, a major part of the work is limited to the task-incremental setting [52,35,34,17], while other work only considers specific convolutional-based architectures [61,44,11].…”
Section: Related Work (confidence: 99%)
“…The input of the DPMM module is the latent variable C of the biological information, where c_n is the n-th component in C. The prior distribution provided by the DPMM module can constrain c_n and infer the clustering label y_n of cell n. In this module, c_n is assumed to be generated from the Dirichlet process mixture model, and the loss function of this module is defined as the negative log-likelihood of c_n under the distribution determined by the DPMM [14]. The loss function l_DPMM is defined as the following:…”
Section: The DPMM Module (confidence: 99%)
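The statement above describes a loss equal to the negative log-likelihood of latent codes under a Dirichlet process mixture. A minimal sketch of such a loss, assuming a finite (truncated) Gaussian mixture with diagonal covariances as the working approximation of the DPMM (the function name `dpmm_nll` and all parameter names are illustrative, not from the cited paper):

```python
import numpy as np

def dpmm_nll(c, weights, means, variances):
    """Negative log-likelihood of latent codes c under a truncated
    Dirichlet-process Gaussian mixture (diagonal covariances).

    c         : (N, D) latent codes, one row per cell
    weights   : (K,)   mixture weights from the stick-breaking prior
    means     : (K, D) component means
    variances : (K, D) diagonal component variances
    """
    # log N(c_n | mu_k, diag(var_k)) for every (n, k) pair
    diff = c[:, None, :] - means[None, :, :]                       # (N, K, D)
    log_norm = -0.5 * (np.sum(diff**2 / variances[None], axis=2)
                       + np.sum(np.log(2 * np.pi * variances), axis=1)[None])
    # log sum_k pi_k N(c_n | mu_k, var_k), via log-sum-exp for stability
    log_mix = log_norm + np.log(weights)[None, :]                  # (N, K)
    m = log_mix.max(axis=1, keepdims=True)
    log_lik = m[:, 0] + np.log(np.exp(log_mix - m).sum(axis=1))    # (N,)
    return -log_lik.sum()
```

The per-sample cluster label y_n mentioned in the excerpt would then correspond to the component with the highest responsibility, i.e. `np.argmax(log_mix, axis=1)` in the notation above.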
“…However, their performance degrades for smaller memory sizes (Cha et al., 2021), and storing these exemplars can introduce security and privacy concerns (Shokri & Shmatikov, 2015). Architecture-driven CL methods either dynamically expand a network (Rusu et al., 2016; Li et al., 2019b; Zhao et al., 2022) or divide it into sub-networks to cater for the new tasks (Zhao et al., 2022; Wang et al., 2020; Ke et al., 2020; Rajasegaran et al., 2019a). Such approaches lack scalability, since the network capacity grows with tasks.…”
Section: Related Work (confidence: 99%)