2019
DOI: 10.1007/978-3-030-20876-9_5

Exploring the Challenges Towards Lifelong Fact Learning

Abstract: So far life-long learning (LLL) has been studied in relatively small-scale and relatively artificial setups. Here, we introduce a new large-scale alternative. What makes the proposed setup more natural and closer to human-like visual systems is threefold: First, we focus on concepts (or facts, as we call them) of varying complexity, ranging from single objects to more complex structures such as objects performing actions, and objects interacting with other objects. Second, as in real-world settings, our setup h…
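The "facts of varying complexity" described in the abstract can be pictured with a minimal sketch in which a fact is a subject–predicate–object tuple whose later slots may be empty. The `Fact` class and the example facts below are hypothetical illustrations, not taken from the paper's dataset.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Fact:
    """A visual fact: <S> alone, <S, P>, or <S, P, O>."""
    subject: str                      # e.g. "dog"
    predicate: Optional[str] = None   # e.g. "riding" (object performing an action)
    obj: Optional[str] = None         # e.g. "surfboard" (interaction with another object)

# Facts of increasing structural complexity, mirroring the abstract's description.
facts = [
    Fact("dog"),                         # single object
    Fact("dog", "jumping"),              # object performing an action
    Fact("dog", "riding", "surfboard"),  # object interacting with another object
]

for f in facts:
    parts = [p for p in (f.subject, f.predicate, f.obj) if p is not None]
    print("<" + ", ".join(parts) + ">")
```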

Cited by 5 publications (4 citation statements) · References 31 publications
“…For instance in a class-incremental setting, an extra head could be added to the network each time a new category label appears. Alternatively, a projection into an embedding space could be used, as in [7], avoiding the need for a growing network architecture. These are directions for future work.…”
Section: Discussion (mentioning)
confidence: 99%
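The two alternatives mentioned in this excerpt can be sketched as follows: a growing multi-head classifier that appends one output head per new class, versus a fixed-size projection into a shared embedding space that is scored against class (or fact) embeddings. This is a generic PyTorch sketch under those assumptions, not the implementation of [7]; all class and parameter names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadIncremental(nn.Module):
    """Grows: one extra linear head is appended for every new class."""
    def __init__(self, feat_dim):
        super().__init__()
        self.feat_dim = feat_dim
        self.heads = nn.ModuleList()

    def add_class(self):
        self.heads.append(nn.Linear(self.feat_dim, 1))

    def forward(self, feats):  # feats: (B, feat_dim)
        return torch.cat([h(feats) for h in self.heads], dim=1)  # (B, num_classes)

class EmbeddingProjection(nn.Module):
    """Fixed size: project features into an embedding space and score them
    against per-class embeddings; no architectural growth is needed."""
    def __init__(self, feat_dim, emb_dim):
        super().__init__()
        self.proj = nn.Linear(feat_dim, emb_dim)

    def forward(self, feats, class_embs):  # class_embs: (C, emb_dim)
        z = F.normalize(self.proj(feats), dim=1)
        e = F.normalize(class_embs, dim=1)
        return z @ e.t()  # cosine-similarity scores, (B, C)
```

With the projection variant, adding a new class only requires a new row in `class_embs`, which is why it avoids a growing network architecture.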
“…They are chosen because they are more computationally efficient than SI (Zenke, Poole, and Ganguli 2017) and more memory efficient than IMM (Lee et al. 2017). Additionally, some experiments such as (Elhoseiny et al. 2018) show that MAS has better performance overall. Improved Memory-based parameter adaptation (MBPA++): sparse experience replay and local adaptation for LLL, as proposed in (d'Autume et al. 2019).…”
Section: LAMAL γ (mentioning)
confidence: 99%
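The MAS regularizer referred to in this excerpt penalizes changes to parameters in proportion to an importance weight estimated from the sensitivity of the squared output norm to each parameter. Below is a minimal sketch of that idea, assuming a PyTorch model and a data loader that yields batches of (unlabeled) inputs; function and variable names are illustrative, not the authors' code.

```python
import torch

def mas_importance(model, loader, device="cpu"):
    """Estimate MAS-style importance weights: average gradient magnitude of the
    squared L2 norm of the model outputs with respect to each parameter."""
    omega = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    n_batches = 0
    for x, *_ in loader:
        x = x.to(device)
        model.zero_grad()
        model(x).pow(2).sum().backward()  # squared L2 norm of the outputs
        for n, p in model.named_parameters():
            if p.grad is not None:
                omega[n] += p.grad.abs()
        n_batches += 1
    # Batch-wise approximation of the per-sample average gradient magnitude.
    return {n: w / max(n_batches, 1) for n, w in omega.items()}

def mas_penalty(model, omega, old_params, lam=1.0):
    """Quadratic penalty keeping important parameters close to their previous values."""
    loss = 0.0
    for n, p in model.named_parameters():
        if n in omega:
            loss = loss + (omega[n] * (p - old_params[n]).pow(2)).sum()
    return lam * loss
```

The penalty is added to the task loss when training on a new task, so parameters that mattered for earlier tasks are discouraged from drifting.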
“…Shmelkov et al [45] and Liu et al [26] investigated incremental object detection while [24,50] learned image generation. Elhoseiny et al [16] examined continual fact learning by utilizing a visual-semantic embedding. Other works [1,20,54] focused on a reinforcement learning task.…”
Section: Related Work (mentioning)
confidence: 99%