2019
DOI: 10.1126/scirobotics.aav3150

Beyond imitation: Zero-shot task transfer on robots by learning concepts as cognitive programs

Abstract: Humans can infer concepts from image pairs and apply those in the physical world in a completely different setting, enabling tasks like IKEA assembly from diagrams. If robots could represent and infer high-level concepts, it would significantly improve their ability to understand our intent and to transfer tasks between different environments. To that end, we introduce a computational framework that replicates aspects of human concept learning. Concepts are represented as programs on a novel computer architecture…
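The abstract's key claim, that a concept can be represented as a program an agent executes and can therefore be replayed in an entirely different scene, can be made concrete with a small sketch. The Python below is a toy illustration under stated assumptions, not the paper's actual architecture: the primitives (move_to, grasp, release), the Agent class, and the scene encoding are all hypothetical.

```python
# Toy sketch only: a "concept" as a short program of primitive operations,
# replayed by a simple interpreter. Primitive names and the scene format
# are illustrative assumptions, not the paper's actual instruction set.
from dataclasses import dataclass, field

@dataclass
class Agent:
    position: tuple = (0, 0)
    holding: str | None = None
    scene: dict = field(default_factory=dict)  # object name -> (x, y)

def move_to(agent, obj):
    agent.position = agent.scene[obj]  # look up the object in this scene

def grasp(agent, obj):
    agent.holding = obj

def release(agent):
    agent.holding = None

PRIMITIVES = {"move_to": move_to, "grasp": grasp, "release": release}

# The concept "place A on B" as a program over primitives. Because it
# refers to objects only by role, the same program runs in any scene
# that binds those roles to concrete positions.
PLACE_A_ON_B = [
    ("move_to", "A"), ("grasp", "A"),
    ("move_to", "B"), ("release",),
]

def run(program, agent):
    for op, *args in program:
        PRIMITIVES[op](agent, *args)

agent = Agent(scene={"A": (1, 2), "B": (4, 5)})  # one possible scene
run(PLACE_A_ON_B, agent)
print(agent.position)  # (4, 5): the program transferred to this scene
```

The point of the sketch is the separation the abstract describes: the concept (the program) is fixed, while the binding of its arguments to a particular scene varies, which is what allows transfer without retraining.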

Cited by 52 publications (50 citation statements)
References 57 publications (119 reference statements)
“…To avoid this effect, further research on learning the concepts themselves would be desirable. Some attempts have been initially applied in visual concepts, which mention the possibility of applying them to different applications, such as textual or auditory scenarios [83].…”
Section: Applications in Technological, Financial, and Other Successf…
mentioning
confidence: 99%
“…Grid cell outputs provide a periodic tiling of uniform space, which is advantageous for learning and navigating maps when other sensory cues are absent. Similarly, encoding snapshots from a graphical model for vision [37] as the input to this sequencer might enable the learning of visuo-spatial concepts and visual routines [38], and model the bi-directional influence the hippocampus has on the visual cortex. We believe these ideas are promising paths for future exploration.…”
mentioning
confidence: 99%
“…Problems in perception still need dynamic inference, which means that the reasoning components will need to go all the way down to sensory regions, so that perception and cognition can work together. In our opinion, hybrid models are more likely to be a combination of graphical models, graph-structured neural networks, causal inference, and probabilistic programs (Lázaro-Gredilla et al., 2019). Neural networks will help to accelerate inference and learning in many parts of these hybrid models.…”
Section: Do Hybrid Models Imply Neural Network for Perception and Sy…
mentioning
confidence: 99%