2007
DOI: 10.1007/978-3-540-73281-5_108
Integrating Language, Vision and Action for Human Robot Dialog Systems

Cited by 33 publications (29 citation statements)
References 13 publications
“…The robot torso (Figure 1) being built as part of the project [10,11] consists of a pair of mechanical arms with grippers and an animatronic talking head. The input channels consist of speech recognition, object recognition, gesture recognition, and robot sensors; the outputs include synthesized speech, emotional expressions, head motions, and robot actions.…”
Section: The JAST Robot
confidence: 99%
“…Studies on social interaction in human-computer interfaces have included conversations with robots [11][12][13] and virtual agents [14][15][16][17]. An important cue for recognizing social interaction is nonverbal information.…”
Section: Introduction
confidence: 99%
“…An important cue for recognizing social interaction is nonverbal information. These studies employ multimodal information such as hand gestures, head nods, face direction, and gaze direction, as well as spoken language, to build teamwork in collaboration with a robot [11,12] or to create an opportunity to address the user [13]. Meanwhile, Maatman et al. [14] and Kopp et al. [15] studied the natural behavior of the agent while the user is speaking.…”
Section: Introduction
confidence: 99%
“…Schrempf et al. (2005) proposed a method to synchronize robot and human actions using a Dynamic Bayesian Network. Rickert et al. (2007) presented a collaborative robot that is equipped with speech recognition and visual object recognition and is able to follow the operator's hands. The robot uses this information to anticipate the next task.…”
Section: Human-Robot Collaboration in Manufacturing
confidence: 99%