ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2019.8683737

Facial Micro-expression Spotting and Recognition Using Time Contrasted Feature with Visual Memory

Abstract: Facial micro-expressions are sudden, involuntary, minute muscle movements that reveal true emotions people try to conceal. Spotting and recognizing a micro-expression is a major challenge owing to its short duration and low intensity. Many works have pursued traditional and deep learning based approaches to this problem but compromised on learning low-level features and on accuracy because of the unavailability of datasets. This motivated us to propose a novel joint architecture of spatial and temporal network w…

Cited by 11 publications (5 citation statements)
References 16 publications

“…ELRCN [33], adopting a similar strategy to Kim et al., innovated by training the CNN and LSTM jointly to ensure the internal consistency of ME features and to decrease computational time. Nag et al. [34] introduced a joint network to extract discriminative temporal features, distinguishing MEs from rapid muscle movements. Wang et al. [35] introduced MESNet, which employs 2D convolution for spatial feature extraction and 1D convolution for modeling temporal relations.…”
Section: Deep Learning Methods
mentioning confidence: 99%
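As a rough illustration of the 2D-spatial / 1D-temporal split described for MESNet above, the following is a minimal PyTorch sketch; the class name, layer widths, pooling, and classification head are illustrative assumptions, not the published architecture.

import torch
import torch.nn as nn

# Hedged sketch: 2D convolutions extract per-frame spatial features, then a
# 1D convolution models relations along the time axis. All sizes are
# illustrative assumptions, not MESNet's actual configuration.
class SpatialTemporalSketch(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # -> (B*T, 16, 1, 1)
        )
        self.temporal = nn.Conv1d(16, 32, kernel_size=3, padding=1)
        self.head = nn.Linear(32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feat = self.spatial(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        feat = self.temporal(feat.transpose(1, 2))   # (b, 32, t)
        return self.head(feat.mean(dim=-1))          # clip-level logits

logits = SpatialTemporalSketch()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 2])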
“…Hong [14] used a sliding window to detect micro-expressions in samples with a fixed number of frames and treated micro-expression spotting as a binary classification task. Nag [15] proposed a joint architecture of temporal and spatial information to detect the onset and offset frames of micro-expressions. Verburg [16] fed the computed HOOF features into a recurrent neural network (RNN) for micro-expression localization, combining deep learning with traditional methods for micro-expression spotting.…”
Section: Related Work
mentioning confidence: 99%
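To make the sliding-window, binary-classification spotting strategy attributed to Hong [14] above more concrete, here is a small hedged sketch that scans a frame sequence with a fixed-length window and a per-window classifier; the function name spot_micro_expressions, the window length, stride, and threshold are assumptions for illustration, not values from [14].

from typing import Callable, List, Sequence, Tuple

def spot_micro_expressions(
    frames: Sequence,
    score_window: Callable[[Sequence], float],
    window_len: int = 16,
    stride: int = 4,
    threshold: float = 0.5,
) -> List[Tuple[int, int]]:
    """Return (start, end) frame intervals whose window score exceeds the threshold."""
    detections = []
    for start in range(0, max(len(frames) - window_len + 1, 1), stride):
        window = frames[start:start + window_len]
        if score_window(window) > threshold:      # binary decision: ME vs. no ME
            detections.append((start, start + len(window) - 1))
    return detections

# Toy usage with a dummy scorer that flags windows containing frame 40.
print(spot_micro_expressions(list(range(100)), lambda w: 1.0 if 40 in w else 0.0))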
“…The DTSCNN was the first work in MER to utilize a shallow two-stream neural network with optical-flow sequences as input. Nag et al. [29] proposed a unified architecture for micro-expression spotting and recognition, in which a spatial and temporal network extracts time-contrasted features from the feature maps to contrast out the subtle motions of micro-expressions. Wang et al. proposed the Transferring Long-term Convolutional Neural Network (TL-CNN) [30], which utilizes transfer learning from a macro-expression to a micro-expression database for MER.…”
Section: Features Evaluated On the Single Database
mentioning confidence: 99%
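As a toy illustration of the "time-contrasted feature" phrase above, the sketch below contrasts per-frame feature maps against a reference frame so that subtle motion stands out; choosing the first frame as the reference and using a plain subtraction are assumptions made for illustration, not the exact operation of [29].

import torch

def time_contrasted_features(feature_maps: torch.Tensor) -> torch.Tensor:
    # feature_maps: (time, channels, height, width) from any 2D CNN backbone.
    reference = feature_maps[:1]        # assumed reference: the first frame
    return feature_maps - reference     # residual differences encode subtle motion

contrast = time_contrasted_features(torch.randn(8, 16, 14, 14))
print(contrast.shape)  # torch.Size([8, 16, 14, 14])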