2004
DOI: 10.1002/cav.5

Specifying and animating facial signals for discourse in embodied conversational agents

Abstract: People highlight the intended interpretation of their utterances within a larger discourse by a diverse set of non-verbal signals. These signals represent a key challenge for animated conversational agents because they are pervasive, variable, and need to be coordinated judiciously in an effective contribution to conversation. In this paper, we describe a freely available cross-platform real-time facial animation system, RUTH, that animates such high-level signals in synchrony with speech and lip movements. RUTH…

Cited by 41 publications (10 citation statements)
References 42 publications (20 reference statements)
“…Foster [22] has investigated the influence on facial displays of the intended user-model evaluation in the context of the COMIC multimodal dialogue system; this system is described in detail in Section 3.4. The studies used the RUTH talking head [23] to compare different methods of using data from a single-speaker corpus to select facial displays based on the intended user-model evaluation and other contextual factors. The results of these experiments demonstrate that participants are able to identify the intended user-model evaluation based on the motions of the talking head, and that they prefer outputs where the user model expressed in speech matches the facial displays.…”
Section: Evaluation Of Individual Aspects (mentioning)
confidence: 99%
“…the way segments combine, other properties may become important. For instance, the cyclicity of head nods and shakes of listeners with respect to their differences in communicative function was considered in the work of DeCarlo et al. 12 The timing with respect to other signals may also bear significance. Several authors (see below) have looked at the relation between head movements and speech, but the relation between head movements and facial expressions is also of interest.…”
Section: The Movements (mentioning)
confidence: 99%
“…We have reframed our ongoing activities so that we can find new synergies between research and teaching. For example, we are currently working to expand the repertoire of animated action in our freely available talking head RUTH (DeCarlo et al., 2004). In our next release, we expect to make different kinds of resources available than in the initial release.…”
Section: Looking Ahead (mentioning)
confidence: 99%