2012 IEEE International Conference on Robotics and Biomimetics (ROBIO)
DOI: 10.1109/robio.2012.6491217
Multimodal human-robot interaction with Chatterbot system: Extending AIML towards supporting embodied interactions

Abstract: The research objective of this work is to realize multimodal human-robot interaction based on light-weight Chatterbot system. The dialogue system is integrated into SIGVerse system with immersive multimodal interfaces to achieve interaction in an embodied virtual environment. To validate the feasibility of the proposed design, the actual AIML implementations are described to illustrate (a) Gesture Inputs, (b) Emotional Expressions, (c) Robot Interactive Learning, and (d) Interactive Learning towards Symbol Gro…

Cited by 6 publications (2 citation statements)
References 11 publications (10 reference statements)
“…3) Verbal Communication: The task nodes' descriptions are written in human language and can be used by robot as verbal support (text-to-speech) to human, together with the dialogue engine developed in [3].…”
Section: Perception on Objects and Environment
Confidence: 99%
“…3 shows a "Clean Up" task, where the robot is conversing verbally with the human to obtain the information of the objects and the surroundings. Conversational intelligence and dialogue management system [5] are developed to understand the context (including the temporal and spatial information of the objects and environment) in order to deduce the full meaning from the partial verbal instruction giving by the human. In the Okonomiyaki collaborative cooking task (Fig.…”
Section: Hri Research Implementationsmentioning
confidence: 99%