2018
DOI: 10.1007/s11760-018-1318-5
Low-rank sparse coding and region of interest pooling for dynamic 3D facial expression recognition

Cited by 14 publications (4 citation statements)
References 33 publications
“…Therefore, a 1-minute background sound with video was set up before the formal experiment to avoid the effect of visual factors. After the background sound, the main recording then plays after a 2-second transition, for 2 minutes (Zarbakhsh and Demirel, 2018).…”
Section: Stimuli
confidence: 99%
“…As an environment evaluation tool, FaceReader (Noldus, 2014), a software package based on facial expression recognition (FER), has been applied in psychological evaluations (Zarbakhsh and Demirel, 2018; Bartlett et al., 2005; Amor et al., 2014). Video cameras have been the predominant method of measuring facial expressions in this context (Oliver et al., 2000).…”
Section: Introduction
confidence: 99%
“…Psychologists generally believe that expressions are a quantitative form of changes in emotions. As a tool for evaluating emotions, the software FaceReader, based on facial expression recognition (FER), has been applied in psychological evaluation (Bartlett et al., 2005; Amor et al., 2014; Zarbakhsh and Demirel, 2018). The effectiveness of FER has been proven in many previous studies, and it can measure emotions with more than 87% efficacy (Terzis et al., 2010).…”
Section: Introduction
confidence: 99%
“…In equation (3), Φ denotes the feature quantity, and ∇Φ stands for the gradient of Φ. The matching terms corresponding to the local template are represented by E_LBF; the sampling components corresponding to edge pixels are represented by E_RGB, and the fusion template of the visual image region can be obtained by sparse linear segmentation (Zarbakhsh and Demirel, 2018). The function Data of the fusion template is expressed as:…”
Section: Methods
confidence: 99%