2010
DOI: 10.1007/s11390-010-9353-x
A New Classifier for Facial Expression Recognition: Fuzzy Buried Markov Model

Abstract: Zhan YZ, Cheng KY, Chen YB, et al. A new classifier for facial expression recognition: fuzzy buried Markov model. To overcome the disadvantage of classical recognition models, which cannot perform well when there is noise or there are lost frames in expression image sequences, a novel model called the fuzzy buried Markov model (FBMM) is presented in this paper. FBMM relaxes the conditional independence assumptions of the classical hidden Markov model (HMM) by adding specific cross-observation dependencies be…
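
The abstract is truncated, so the exact FBMM formulation is not reproduced here. As a rough, hypothetical sketch of what relaxing the HMM conditional independence assumption via cross-observation dependencies can look like (the notation below is assumed, not taken from the paper): a classical HMM factors the observation likelihood as

P(O \mid Q) = \prod_{t=1}^{T} p(o_t \mid q_t),

so each observation o_t depends only on the current hidden state q_t. A buried Markov model instead lets the emission at time t also condition on selected components of earlier observations, with the dependency pattern chosen per hidden state,

P(O \mid Q) = \prod_{t=1}^{T} p(o_t \mid q_t, z_{q_t}(o_{t-1})),

where z_{q_t}(\cdot) denotes the sparse cross-observation dependencies active in state q_t. The fuzzy weighting that distinguishes FBMM from a plain buried Markov model falls in the truncated portion of the abstract and is not sketched here.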

Citations: cited by 8 publications (1 citation statement)
References: 14 publications (16 reference statements)
“…In the literature [23][24][25], researchers have done extensive work on virtual expression technology in recent decades and achieved notable results; these studies point out that methods for driving virtual characters to produce human-like expressions fall roughly into three categories: text-based, speech-based, and facial-expression-capture-based methods. Text-based and speech-based driving methods are mainly used to drive the mouth shapes of virtual characters, with the disadvantage that they cannot generate sufficiently realistic facial expression animations [26][27][28]. The literature [29][30][31] argues that facial-expression-capture-based methods mainly use special devices, such as monocular and head-mounted cameras, to capture facial expression motion parameters that drive virtual character models, which can generate more realistic facial expression animations.…”
Section: Introduction (mentioning)
Confidence: 99%