2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
DOI: 10.1109/smc.2014.6974508
Facial expression recognition using anatomy based facial graph

Cited by 15 publications (6 citation statements)
References 18 publications
“…Furthermore, some current methods select landmarks with significant contributions to avoid redundant information [75,113]. Landmarks located on the external face contour and the nose are frequently discarded [40,76] (see Figs. 6a, b) because they are considered irrelevant to facial affects.…”
Section: Landmark-level Graphs (mentioning)
confidence: 99%
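To make the landmark-selection idea above concrete, here is a minimal Python sketch that drops the jaw-contour and nose points from a 68-point landmark array. The index ranges follow the common iBUG/dlib 68-point convention; the exact subsets kept by the cited methods are not specified in the excerpt, so this selection is an illustrative assumption rather than their actual procedure.

```python
import numpy as np

# Index ranges of the widely used 68-point annotation (iBUG/dlib convention):
# 0-16 jaw contour, 17-26 brows, 27-35 nose, 36-47 eyes, 48-67 mouth.
JAW = np.arange(0, 17)
NOSE = np.arange(27, 36)

def drop_contour_and_nose(landmarks_68):
    """Keep only landmarks usually considered relevant to facial affect
    (brows, eyes, mouth), discarding jaw-contour and nose points."""
    keep = np.setdiff1d(np.arange(68), np.concatenate([JAW, NOSE]))
    return landmarks_68[keep], keep

pts = np.zeros((68, 2))                    # placeholder for detected (x, y) landmarks
subset, kept_idx = drop_contour_and_nose(pts)
print(subset.shape)                        # (42, 2): 68 minus 17 contour and 9 nose points
```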
“…Studies of point-light displays in emotion perception also show that more complex representations seem to be redundant [117]. To this end, work like [72,74,76,115] manually reduced edges based on muscle anatomy and FACS. Another type of approach is exploiting triangulation algorithms [40], such as the Delaunay triangulation [36], to generate graph edges consistent with true facial muscle distribution and uniform for different subjects.…”
Section: Landmark-level Graphs (mentioning)
confidence: 99%
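As an illustration of the triangulation-based edge construction mentioned above, the sketch below derives undirected graph edges from a Delaunay triangulation of 2D landmark coordinates using scipy. The random points merely stand in for detected landmarks, and this is a generic sketch, not the specific pipeline of the cited works.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(landmarks):
    """Return the unique undirected edges of a Delaunay triangulation
    over an (N, 2) array of landmark coordinates."""
    tri = Delaunay(landmarks)
    edges = set()
    for a, b, c in tri.simplices:          # each simplex is a triangle of vertex indices
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))
    return sorted(edges)

rng = np.random.default_rng(0)
pts = rng.random((68, 2))                  # stand-in for detected facial landmarks
print(len(delaunay_edges(pts)), "edges")
```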
“…These included a wide variety of systems discussed in detail in Section III-C and comprise the papers used in later sections. Input-only publications focused on a method of input to a robotic system, such as facial recognition [16] or speech recognition [17]. Output-only papers focused on robots conveying some emotion and often evaluated the output, such as audio [18] or robotic gait [19].…”
Section: B. Step 2: Preliminary Sorting (mentioning)
confidence: 99%
“…However, with recent advancements in face analysis, improved facial landmark detection algorithms have been presented in several studies [11-14]. Towards facial landmark graph features, Lei et al. [15] presented a method that employed only 28 brow and lip landmarks, which contribute significantly to micro-expressions. Other studies [16-19] presented graph-based methods using AUs to define landmarks of interest.…”
Section: Introduction (mentioning)
confidence: 99%
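For a rough sense of how a brow-and-lip landmark subset of the kind described by Lei et al. [15] could be obtained, the sketch below runs dlib's 68-point shape predictor and keeps only the brow and lip indices of the standard convention. The model file name is an assumption (the predictor must be downloaded separately), and this selection yields 30 points rather than the reported 28, since the exact subset is not given in the excerpt.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Assumed model file name; the 68-point predictor is not bundled with dlib.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

BROWS = list(range(17, 27))   # both eyebrows in the 68-point convention
LIPS = list(range(48, 68))    # outer and inner lip contours

def brow_lip_landmarks(image):
    """Detect one face and return only its brow and lip landmark coordinates."""
    faces = detector(image, 1)
    if not faces:
        return None
    shape = predictor(image, faces[0])
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
    return pts[BROWS + LIPS]               # (30, 2) array of (x, y) points
```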