2016
DOI: 10.1080/0952813x.2015.1132268

An integrated system for interactive continuous learning of categorical knowledge

Abstract: This article presents an integrated robot system capable of interactive learning in dialogue with a human. Such a system needs to have several competencies and must be able to process different types of representations. In this article, we describe a collection of mechanisms that enable integration of heterogeneous competencies in a principled way. Central to our design is the creation of beliefs from visual and linguistic information, and the use of these beliefs for planning system behaviour to satisfy inter…

Cited by 11 publications (16 citation statements)
References 51 publications (28 reference statements)
“…Ideally, such interactive systems ought to be able to handle natural, spontaneous human dialogue. However, most work on interactive language grounding trains its systems on synthetic, hand-made dialogues or simulations, which lack both the variation and the kinds of dialogue phenomena that occur in everyday conversation; the resulting systems are not robust and cannot handle everyday conversation (Yu et al., 2016c; Skocaj et al., 2016; Yu et al., 2016a). In this paper, we try to change this by training an adaptive learning agent from human-human dialogues in a visual attribute learning task.…”
Section: Related Work
confidence: 99%
“…from images or videos annotated with descriptions or definite reference expressions, as in (Kennington and Schlangen, 2015; Socher et al., 2014), or from live interaction, as in, e.g., (Skocaj et al., 2016; Yu et al., 2015, 2016c; Das et al., 2017, 2016; de Vries et al., 2016; Thomason et al., 2015, 2016; Tellex et al., 2013). The latter, which we do here, is clearly more appropriate for multimodal systems or robots that are expected to continuously and incrementally learn from the environment and their users.…”
Section: Related Work
confidence: 99%
“…While there have been many efforts to capture the notion of common ground in general [8] and in human-robot interaction settings [6], the computational management of situated spatial dialogue is still under-developed [15,34] and requires creative solutions for reference handling [23], including attempts to incorporate the human's gaze in the system's interpretation procedure [1], and strategies for handling errors [33]. One major challenge concerns the fundamental difference between human concepts represented by natural language, especially in the domain of space [2], and formal systems suited for computational purposes, e.g., spatial reasoning-even if based on qualitative rather than metric relations [25].…”
Section: Spatial Dialogue
confidence: 99%