2011 IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/iros.2011.6094926
A system for interactive learning in dialogue with a tutor

Abstract: In this paper we present representations and mechanisms that facilitate continuous learning of visual concepts in dialogue with a tutor, and we show the implemented robot system. We present how beliefs about the world are created by processing visual and linguistic information, and show how they are used for planning system behaviour with the aim of satisfying the system's internal drive to extend its knowledge. The system facilitates different kinds of learning initiated by the human tutor or by the system itself…

Cited by 20 publications (12 citation statements). References 22 publications.
“…These models either use toy languages as input (e.g., Siskind, 1996), or child-directed utterances from the CHILDES database (MacWhinney, 2014) paired with artificially generated semantic information. Some models have investigated the acquisition of terminology for visual concepts from simple videos (Fleischman and Roy, 2005; Skocaj et al., 2011). Lazaridou et al. (2015) adapt the skip-gram word-embedding model (Mikolov et al., 2013) for learning word representations via a multi-task objective similar to ours, learning from a dataset where some words are individually aligned with corresponding images.…”
Section: Related Work
confidence: 99%
“…The Semantic Map generation can follow different approaches: by relying on hand-crafted ontologies and using traditional AI reasoning techniques [35,36], by exploiting the purely automatic interpretation of perceptual outcomes [37,38,39], or by relying on interactions in a human-robot collaboration setting [40,41]. • is-contain-able(C, t) denotes that the Contain-ability property holds for all the objects of the class C, e.g., is-contain-able(Cup, t);…”
Section: Semantic Map
confidence: 99%
“…A dialogue initiated by the robot is very similar to any other planned knowledge-acquiring action and is hence modelled similarly. The dialogue model is based on the work of Skočaj et al. [46] and Janíček [21], and will only be briefly outlined in the following. There are two types of dialogue actions: engagement, and questions triggered by the deliberative layer.…”
Section: Dialogue
confidence: 99%