2015
DOI: 10.1016/j.patcog.2015.04.012
Multimodal learning for facial expression recognition

Cited by 94 publications (25 citation statements)
References 20 publications
“…In our algorithm, LBP features are extracted and processed before they are sent into the classifiers. In most papers, researchers extract features from images and normalize the data to the range 0–1, as in [23]. In our framework, the image data are likewise normalized to 0–1, which yields a better result.…”
Section: LBP Correction
confidence: 99%
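The quoted passage normalizes image data to the 0–1 range before classification. A minimal sketch of such min-max normalization, assuming 8-bit grayscale input (the exact normalization scheme used by the citing paper is not specified here), might be:

```python
import numpy as np

def normalize_01(image):
    """Min-max normalize pixel values into the [0, 1] range."""
    image = image.astype(np.float64)
    lo, hi = image.min(), image.max()
    if hi == lo:  # constant image: avoid division by zero
        return np.zeros_like(image)
    return (image - lo) / (hi - lo)

# Example: a tiny synthetic 8-bit grayscale patch
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
norm = normalize_01(img)  # values now lie in [0, 1]
```

Scaling all features into a common range prevents descriptors with large numeric spans from dominating distance-based or gradient-based classifiers.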
“…Different features describe different characteristics of the images; therefore, researchers merge several features together to exploit the strengths of each [9,23]. In our algorithm, LBP and HOG descriptors are applied to capture the texture and orientation information of these expressions.…”
Section: HOG Processing and Features Fusion
confidence: 99%
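The statement above fuses LBP (texture) and HOG (gradient orientation) descriptors. A simplified numpy-only sketch of feature-level fusion by concatenation (the real LBP/HOG variants, cell layouts, and parameters in the citing paper are assumptions here) could look like:

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Basic 3x3 local binary pattern histogram over the whole image."""
    img = img.astype(np.float64)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=np.int64)
    for bit, (dy, dx) in enumerate(shifts):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code += (neighbor >= center).astype(np.int64) << bit
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)

def orientation_histogram(img, bins=9):
    """HOG-like global histogram of gradient orientations, magnitude-weighted."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / max(hist.sum(), 1e-12)

def fused_descriptor(img):
    """Feature-level fusion: concatenate texture and orientation descriptors."""
    return np.concatenate([lbp_histogram(img), orientation_histogram(img)])
```

Concatenation is the simplest fusion strategy; each descriptor is normalized first so neither dominates the joint feature vector.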
“…In [17], video and audio modality inputs were employed to learn bimodal deep belief networks (DBN). In [18], multimodal deep neural networks (DNN) were proposed to study the correlation between texture and landmark modalities for facial expression recognition, wherein several stacked autoencoders (AE) were used. In [19], bimodal DNN were used to detect driver fatigue expressions.…”
Section: Introduction
confidence: 99%
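The multimodal DNN in [18] correlates texture and landmark modalities through stacked encoders. A toy forward-pass sketch of modality-specific encoding followed by joint fusion and a softmax over seven expression classes (all dimensions and weights are illustrative assumptions, not values from the paper; the trained autoencoder layers are replaced by random matrices) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions: 256-d texture features, 68 landmarks x 2 coords
d_tex, d_lmk, d_hid, d_joint, n_classes = 256, 136, 64, 32, 7

# Random weights stand in for pretrained stacked-autoencoder layers
W_tex = rng.normal(0, 0.01, (d_tex, d_hid))
W_lmk = rng.normal(0, 0.01, (d_lmk, d_hid))
W_joint = rng.normal(0, 0.01, (2 * d_hid, d_joint))
W_out = rng.normal(0, 0.01, (d_joint, n_classes))

def forward(texture, landmarks):
    """Encode each modality, concatenate, fuse jointly, and classify."""
    h_tex = relu(texture @ W_tex)      # texture-modality encoder
    h_lmk = relu(landmarks @ W_lmk)    # landmark-modality encoder
    h = relu(np.concatenate([h_tex, h_lmk], axis=-1) @ W_joint)  # fusion layer
    logits = h @ W_out
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax probabilities

probs = forward(rng.normal(size=(1, d_tex)), rng.normal(size=(1, d_lmk)))
```

The key design point is that each modality is first mapped into its own hidden space before a shared layer learns cross-modal correlations, rather than concatenating raw inputs directly.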