Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL '03), 2003
DOI: 10.3115/1075096.1075166
Towards a model of face-to-face grounding

Abstract: We investigate the verbal and nonverbal means for grounding, and propose a design for embodied conversational agents that relies on both kinds of signals to establish common ground in human-computer interaction. We analyzed eye gaze, head nods and attentional focus in the context of a direction-giving task. The distribution of nonverbal behaviors differed depending on the type of dialogue move being grounded, and the overall pattern reflected a monitoring of lack of negative feedback. Based on these results, w…

Cited by 154 publications (127 citation statements). References 15 publications.
“…The system interpreted these cues as grounding actions using an incremental common ground model, and varied its generation strategies accordingly. Nakano et al [2003] found that their system elicited from its interlocutors many of the same qualitative dynamics they found in human-human conversations.…”
Section: Grounding With Multimodal Communicative Action
confidence: 91%
“…The system detects and adapts to the nonverbal grounding cues that followers spontaneously provide in human-human conversations. For example, Nakano et al [2003] found that when listeners could follow instructions, they would nod and continue to direct their attention to the map, but when something was unclear, listeners would gaze up toward the direction-giver and wait attentively for clarification. The system interpreted these cues as grounding actions using an incremental common ground model, and varied its generation strategies accordingly.…”
Section: Grounding With Multimodal Communicative Action
confidence: 99%
“…This requires further research into the integrated use in dialogue of verbal and non-verbal means (see, e.g., Nakano et al., 2003). From the point of view of applications, this is an area which has recently gained interest in the context of the construction of Embodied Conversational Agents, that is, computer-animated characters that can engage in a dialogue with a user or other computer-animated characters (e.g., Cassell et al., 2000; Prendinger and Ishizuka, 2004).…”
Section: Trends and Outlook
confidence: 99%
“…They can perform several gestures like greeting, counting with fingers, deictic and beat gestures, and facial expressions (happy, sad, surprised). Speech output is generated using Loquendo Text-to-Speech [9], and lips are adequately animated.…”
Section: Gaze-based Presentation
confidence: 99%