2001
DOI: 10.1109/79.924885

Automatic face cloning and animation using real-time facial feature tracking and speech acquisition

Cited by 36 publications (5 citation statements)
References 16 publications
“…(1) Organ characteristics: the forehead, eyebrows, eyes, nose, mouth, and chin. (2) The pitch characteristics of the organs. (3) Characteristics of the face shape. Our system builds six standard emotional face models, including anger, disgust, fear, sadness, surprise, and joy, as shown in Figure 4. …”
Section: A. Image-Based Facial Feature Detection and Labeling
confidence: 99%
“…Unlike the automatic face cloning and animation of [2], our algorithm detects facial feature points to estimate the user's emotional state, and the avatar's facial animation is driven by this sequence of emotional states. Note that affect detection has recently become a research hot spot.…”
Section: Introduction
confidence: 99%
“…In such methods, intensity values [11][12][13][14] or Gabor features [15] are usually used as part of the data term in curve evolution. Similarly, in [16][17][18] 3-D wire-frame models are constructed and iteratively deformed using intensity values. Spatial relations are taken into account by constraining the shape of facial features using subspace representations, such as principal component analysis (PCA), in the shape space, leading to so-called active shape models (ASM) [11][12][13].…”
Section: Previous Work
confidence: 99%
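
The statement above summarizes how active shape models constrain facial feature shapes by projecting them onto a PCA subspace of the shape space. The following is a minimal sketch of that idea, not taken from the cited papers; the landmark count, training data, and clamping factor are illustrative assumptions.

```python
import numpy as np

def fit_shape_model(shapes, n_modes=5):
    """Learn a PCA shape model from aligned training shapes.

    shapes: (n_samples, 2 * n_landmarks) array of flattened (x, y) landmarks,
            assumed to be already Procrustes-aligned.
    Returns the mean shape, the top principal modes, and their variances.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal modes of shape variation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                         # (n_modes, 2 * n_landmarks)
    variances = (s[:n_modes] ** 2) / (len(shapes) - 1)
    return mean, modes, variances

def constrain_shape(shape, mean, modes, variances, k=3.0):
    """Project an observed shape into the PCA subspace and clamp each
    coefficient to +/- k standard deviations, the usual ASM plausibility
    constraint."""
    b = modes @ (shape - mean)                   # shape parameters
    limit = k * np.sqrt(variances)
    b = np.clip(b, -limit, limit)
    return mean + modes.T @ b                    # constrained, plausible shape


# Toy usage with random data standing in for annotated face landmarks
# (68 landmarks is a common but here hypothetical choice).
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 2 * 68))
mean, modes, var = fit_shape_model(train)
noisy = train[0] + rng.normal(scale=0.5, size=train[0].shape)
plausible = constrain_shape(noisy, mean, modes, var)
```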
“…For tracking to be fully automatic, some studies have employed a generic facial motion model. Goto et al. [13] used separate simple tracking rules for the eyes, lips, and other facial features. Pighin et al. [23,24] proposed tracking animation-purposed facial motion based on a linear combination of 3D face model bases.…”
Section: Related Work
confidence: 99%
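
The last statement refers to driving facial motion as a linear combination of 3D face model bases. A minimal, blendshape-style sketch of that idea follows; the mesh, bases, and weights are made-up placeholders and do not reproduce the cited method.

```python
import numpy as np

def blend_face(neutral, bases, weights):
    """Linearly combine 3D face model bases, in the spirit of the approach
    attributed above to Pighin et al. (details assumed, not from the source).

    neutral: (n_vertices, 3) neutral face mesh.
    bases:   (n_bases, n_vertices, 3) per-vertex displacement bases,
             e.g. one per expression.
    weights: (n_bases,) blending coefficients, e.g. estimated by a tracker.
    """
    weights = np.asarray(weights, dtype=float)
    # Weighted sum of displacement bases added to the neutral mesh.
    return neutral + np.tensordot(weights, bases, axes=1)


# Toy example: three expression bases over a four-vertex "mesh".
neutral = np.zeros((4, 3))
bases = np.stack([np.full((4, 3), 0.1 * (i + 1)) for i in range(3)])
animated = blend_face(neutral, bases, [0.5, 0.2, 0.0])
```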