2023
DOI: 10.1016/j.patcog.2023.109310
Knowledge aggregation networks for class incremental learning

Cited by 12 publications (2 citation statements) · References 15 publications
“…In order to achieve a balance between model stability and plasticity during the training of the dual-model, we have introduced a loss function that considers three aspects: classification loss to address class imbalance, distillation loss to preserve old knowledge, and SupCon loss to learn new knowledge [75].…”
Section: Designing the Loss Function
confidence: 99%
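The citing work describes a three-part loss balancing stability and plasticity: a classification loss that handles class imbalance, a distillation loss that preserves old knowledge, and a SupCon loss that learns new knowledge. A minimal NumPy sketch of that combination is below, assuming standard forms for each term (weighted cross-entropy, temperature-scaled KL distillation, supervised contrastive loss); the function names, weightings `lam_distill`/`lam_supcon`, and exact formulations are illustrative, not the paper's implementation.

```python
import numpy as np

def _softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def classification_loss(logits, labels, class_weights):
    # Weighted cross-entropy: per-class weights can up-weight rare
    # (old or new) classes to counter class imbalance.
    p = _softmax(logits)
    n = len(labels)
    nll = -np.log(p[np.arange(n), labels] + 1e-12)
    return float(np.mean(class_weights[labels] * nll))

def distillation_loss(new_logits, old_logits, T=2.0):
    # KL divergence between temperature-softened old- and new-model
    # predictions, preserving the old model's knowledge.
    p = _softmax(old_logits / T)
    q = _softmax(new_logits / T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)
    return float((T * T) * np.mean(kl))

def supcon_loss(features, labels, tau=0.1):
    # Supervised contrastive (SupCon) loss over L2-normalised features:
    # same-class pairs are pulled together, all others pushed apart.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / tau
    n = len(labels)
    total, pairs = 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        denom = np.sum(np.exp(sim[i, others]))
        for j in others:
            if labels[j] == labels[i]:
                total += -np.log(np.exp(sim[i, j]) / denom + 1e-12)
                pairs += 1
    return total / pairs if pairs else 0.0

def total_loss(logits, old_logits, features, labels, class_weights,
               lam_distill=1.0, lam_supcon=0.5):
    # Hypothetical weighting of the three terms described in the citation.
    return (classification_loss(logits, labels, class_weights)
            + lam_distill * distillation_loss(logits, old_logits)
            + lam_supcon * supcon_loss(features, labels))
```

Note the stability/plasticity split: the distillation term anchors the new model to the old one, while the classification and SupCon terms adapt it to the new classes.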
“…Similarly, F old demonstrates enhanced classification capabilities for images of old classes, given that Ω old is inherently designed to classify the old classes. To optimally harness the information from both F old and F new , we introduce the following dual-model adaptive feature fusion module [75]:…”
Section: Dual-model Adaptive Feature Fusion
confidence: 99%
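The second citation statement describes a dual-model adaptive feature fusion module that combines F_old (stronger on old classes) with F_new (stronger on new classes). One common realisation, sketched below under that assumption, is a small gating head that looks at both feature vectors and produces per-sample softmax weights; the gating parameters `gate_w`/`gate_b` and this particular gate design are illustrative, not the citing paper's exact module.

```python
import numpy as np

def adaptive_fusion(feat_old, feat_new, gate_w, gate_b):
    """Per-sample adaptive fusion of old- and new-model features.

    feat_old, feat_new: (N, D) features from F_old and F_new.
    gate_w: (2D, 2) gating weights; gate_b: (2,) gating bias (learned
    in practice; hypothetical here).
    """
    x = np.concatenate([feat_old, feat_new], axis=1)   # (N, 2D)
    logits = x @ gate_w + gate_b                       # (N, 2)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    a = e / e.sum(axis=1, keepdims=True)               # fusion weights sum to 1
    # Convex combination: samples resembling old classes can lean on
    # F_old, new-class samples on F_new.
    return a[:, :1] * feat_old + a[:, 1:] * feat_new
```

With an untrained (zero) gate the module degenerates to a plain average of the two feature vectors; training the gate lets the weighting adapt per sample.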