1999
DOI: 10.1080/088395199117360

The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents

Cited by 282 publications (153 citation statements)
References 13 publications
“…This modality can help to resolve the problem of determining to which of the visible agents a user directs a question. The role of gaze in dialogue and conversation has been studied by Cassell et al [3]. Nijholt and Hulstijn [8] discuss how such results can be incorporated into annotated templates that are used for the generation of system utterances in a dialogue system.…”
Section: Gaze Behavior Among Multiple Conversational Agents (mentioning)
confidence: 99%
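To make the idea of gaze-annotated templates concrete, the sketch below shows one possible representation in which an utterance template carries gaze directives alongside its text. The class names, fields, and the example annotation are illustrative assumptions, not the format used by Nijholt and Hulstijn [8].

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GazeAnnotation:
    # Gaze directive attached to a span of the utterance (word indices are inclusive).
    start_word: int
    end_word: int
    target: str  # e.g. "user", "other_agent", "away"

@dataclass
class AnnotatedTemplate:
    # Utterance template plus gaze-behavior annotations (hypothetical format).
    text: str
    gaze: List[GazeAnnotation]

# Example: glance away briefly at the start of the answer, then address the user.
template = AnnotatedTemplate(
    text="Well, the lecture starts at eight in the main hall",
    gaze=[
        GazeAnnotation(start_word=0, end_word=1, target="away"),
        GazeAnnotation(start_word=2, end_word=9, target="user"),
    ],
)
```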
“…If the aim is to build a full agent able to respond to emotions, including dialog-related attitudes and feelings makes the implementation harder. This is not just an algorithmic or hardware problem but also one of design: perhaps requiring multiple simultaneous threads of control (something not supported by today's standard architectures for dialog management) in order to allow reactive (shallow, emotion-based, conventional) responses to execute swiftly and somewhat autonomously from more deliberative, content-based response planning (Cassell and Thorisson, 1999).…”
Section: Prospects and Open Questions (mentioning)
confidence: 99%
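As a rough illustration of that architectural point, the sketch below runs a reactive layer and a deliberative layer on separate threads, so shallow feedback (nods, glances) is never blocked by slower content planning. The event names, queues, and timings are assumptions made for the sketch, not the architecture of any system cited here.

```python
import queue
import threading
import time

# Each layer gets its own copy of perception events so neither blocks the other.
reactive_in = queue.Queue()
deliberative_in = queue.Queue()
output = queue.Queue()  # behaviors to be rendered by the animation front end

def broadcast(event: str) -> None:
    """Deliver a perception event (e.g. 'user_paused') to both layers."""
    reactive_in.put(event)
    deliberative_in.put(event)

def reactive_layer() -> None:
    """Shallow, convention-based responses: emitted within milliseconds."""
    while True:
        event = reactive_in.get()
        if event == "user_paused":
            output.put(("nod", time.time()))           # back-channel feedback
        elif event == "user_started_speaking":
            output.put(("gaze_at_user", time.time()))  # signal attention

def deliberative_layer() -> None:
    """Content-based response planning: slower, runs on its own thread."""
    while True:
        event = deliberative_in.get()
        if event == "user_finished_turn":
            time.sleep(0.5)  # stand-in for interpretation and response planning
            output.put(("say", "Here is my considered answer ..."))

threading.Thread(target=reactive_layer, daemon=True).start()
threading.Thread(target=deliberative_layer, daemon=True).start()

# Demo: simulate a short exchange and print what the agent would do.
broadcast("user_started_speaking")
broadcast("user_paused")
broadcast("user_finished_turn")
time.sleep(1.0)
while not output.empty():
    print(output.get())
```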
“…Schmandt (Schmandt, 1994) built a system which gave driving directions and used the length and pitch slope of user utterances to control the pace of its delivery. Thorisson and Cassell's (1999) Ymir was a multi-modal animated system which detected the onset and offset of the user's voice, among other things, and used this to determine when to be listening/not-listening and taking-a-turn/yielding-the-turn; the version of the system which did this was ranked higher and considered more "helpful" by users. Ward and Tsukahara (1999) built a system which detected a prosodic feature cuing back-channel feedback (uh-huh etc.)…”
(mentioning)
confidence: 99%
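The voice onset/offset logic described above can be pictured as a small turn-taking state machine; the states, method names, and pause threshold below are illustrative assumptions rather than Ymir's actual implementation.

```python
class TurnTaking:
    """Minimal turn-taking sketch driven by detected voice onset/offset.

    The agent stays in 'listening' while the user speaks and switches to
    'taking_turn' once the user has been silent longer than a pause
    threshold. States and threshold are illustrative only.
    """

    PAUSE_THRESHOLD = 0.7  # seconds of silence before taking the turn (assumed value)

    def __init__(self):
        self.state = "listening"
        self.last_offset = None

    def on_voice_onset(self, t: float) -> None:
        # User started speaking: yield the turn and listen.
        self.state = "listening"
        self.last_offset = None

    def on_voice_offset(self, t: float) -> None:
        # User stopped speaking: remember when, but do not grab the turn yet.
        self.last_offset = t

    def tick(self, t: float) -> str:
        # Called periodically; take the turn after a sufficiently long pause.
        if (self.state == "listening" and self.last_offset is not None
                and t - self.last_offset > self.PAUSE_THRESHOLD):
            self.state = "taking_turn"
        return self.state

tt = TurnTaking()
tt.on_voice_onset(0.0)
tt.on_voice_offset(2.0)
print(tt.tick(2.5))  # still "listening": pause shorter than threshold
print(tt.tick(3.0))  # "taking_turn": user has been silent long enough
```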
“…This feedback helps to obtain a smoother conversation and exchange of information. Cassell et al [2] have compared different kinds of nonverbal feedback in a set of experiments. They distinguished:…”
Section: Importance Of Nonverbal Behavior (mentioning)
confidence: 99%
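The two kinds compared in the paper's title, envelope feedback (behaviors tied to the conversational process, such as nods and glances) and emotional feedback (displays of affect), can be captured by a simple classification; the behavior names below are illustrative, not the experimental conditions themselves.

```python
from enum import Enum, auto

class FeedbackKind(Enum):
    ENVELOPE = auto()   # tied to the conversational envelope: nods, glances, turn-taking cues
    EMOTIONAL = auto()  # displays of affect: smiles, frowns

# Illustrative mapping of nonverbal behaviors to the two kinds of feedback.
BEHAVIOR_KIND = {
    "nod": FeedbackKind.ENVELOPE,
    "glance_at_user": FeedbackKind.ENVELOPE,
    "gaze_away": FeedbackKind.ENVELOPE,
    "smile": FeedbackKind.EMOTIONAL,
    "frown": FeedbackKind.EMOTIONAL,
}

def feedback_kind(behavior: str) -> FeedbackKind:
    """Classify a nonverbal behavior as envelope or emotional feedback."""
    return BEHAVIOR_KIND[behavior]
```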
“…In summary: often a speaker looks away from the hearer when she starts an utterance and looks towards the hearer when ending it. Cassell et al [1,3] have investigated the relation between the propositional content of utterances and gaze behavior. This is interesting because it lets us take further steps towards generating the behavior of agents (syntactic, semantic and pragmatic content of utterances, intonation, facial expressions, gaze behavior, head and body movements) from a representation of (the history of) previous interactions, the representation of beliefs, desires and intentions, and the representation of some personality characteristics.…”
Section: Embodied Agents and Gaze Behavior (mentioning)
confidence: 99%
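The gaze regularity summarized in this excerpt (look away from the hearer when starting an utterance, look towards the hearer when ending it) translates into a small generation rule. The function below is an illustrative sketch with assumed boundary proportions, not the generator of the cited work.

```python
def gaze_for_utterance(words: list[str]) -> list[tuple[str, str]]:
    """Attach a gaze target to each word of an utterance, following the
    regularity cited above: look away while starting the utterance, look
    toward the hearer when ending it. Span boundaries are illustrative."""
    opening = max(1, len(words) // 4)   # assumed size of the "starting" span
    closing = max(1, len(words) // 4)   # assumed size of the "ending" span
    gaze = []
    for i, word in enumerate(words):
        if i < opening:
            gaze.append((word, "away"))
        elif i >= len(words) - closing:
            gaze.append((word, "toward_hearer"))
        else:
            gaze.append((word, "neutral"))
    return gaze

# Example: the speaker looks away on the first words and back at the hearer on the last.
print(gaze_for_utterance("the lecture starts at eight in the main hall".split()))
```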