2004
DOI: 10.1007/978-3-540-24842-2_10

Data-Driven Tools for Designing Talking Heads Exploiting Emotional Attitudes

Abstract: Audio/visual speech, in the form of labial movement and facial expression data, was used to semi-automatically build a new Italian expressive and emotive talking head capable of believable, emotional behavior. The methodology, procedures, and specific software tools used for this purpose are described, together with some implementation examples.

Cited by 4 publications (3 citation statements, all classified as mentioning); references 8 publications (7 reference statements). Citing publications appeared in 2004 and 2012.
“…Two synthetic 3D face models were used in the study, one originating from Sweden [5] and one from Italy [6]. The Swedish face, a male, is made up of approximately 1,500 polygons, whereas the Italian face is a textured young female built using around 25,000 polygons.…”
Section: Methods (mentioning)
confidence: 99%
“…We are currently developing this new software version in order to easily integrate LUCIA in a website; there are many promising functionalities for web applications: a virtual guide for any website (which we are exploiting in the wikimemo.it project, the portal of Italian Language and Culture); a storyteller for e-book reading; a digital tutor for the hearing impaired; a personal assistant for smart-phones and mobile devices. The early results can be observed in [44]. Audio/visual emotional rendering was developed working on real emotional audio and visual databases, whose content was used to automatically train emotion-specific intonation and voice quality models to be included in FESTIVAL, our Italian TTS system [25,26,27,28], and also to define the specific emotional visual rendering to be implemented in LUCIA [29,30,31]. Fig.…”
Section: Data Acquisition Environment (mentioning)
confidence: 99%
“…Visual speech synthesis can be accomplished either through manipulation of video images ([4], [5]) or based on two- or three-dimensional models of the human face and/or speech organs that are under the control of a set of deformation parameters, as described, for example, by [6], [7], [8] and [9].…”
Section: Synthesis of Visible Speech (mentioning)
confidence: 99%