2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII)
DOI: 10.1109/acii.2019.8925529
Temporally Coherent Visual Representations for Dimensional Affect Recognition

Cited by 14 publications (9 citation statements)
References 23 publications
“…However, the current AU labelled video datasets are limited in terms of the total duration and the number of subjects. To address this significant challenge, leveraging the naturally available supervision cues, such as temporal coherency of facial actions in a video [62], [63], is an alternative approach worth considering to make the 3D face more expressive in a label-efficient manner.…”
Section: Summary and Discussion
confidence: 99%
“…Temporal Modelling in Emotion Recognition: Most existing methods model the temporal dynamics of continuous emotions using deterministic approaches such as Time-Delay Neural Networks [43], RNNs, LSTMs and GRUs [32,28,26,71,8,59], multi-head attention models [20], 3D Convolutions [76,35], 3D ConvLSTMs [19], and temporal-hourglass CNNs [10]. While these deterministic models are capable of effectively learning the temporal dynamics, they do not take the inherent stochastic nature of the continuous emotion labels into account.…”
Section: Related Work
confidence: 99%
“…The recent growth of video-based datasets has encouraged the inclusion of temporal modelling, which has been shown to improve models' training (Xie et al, 2016; Cootes et al, 1998). Relevant examples in Affective Computing include the works of Tellamekala et al (Tellamekala and Valstar, 2019) and Ma et al (Ma et al, 2019). In their work, Tellamekala et al enforce temporal coherency and smoothness on their feature representation by constraining the differences between adjacent frames, while Ma et al resort to LSTM RNNs with residual connections applied to multi-modal data. Furthermore, the use of attention has also been recently explored by Xiaohua et al (Xiaohua et al, 2019) and Li et al (Li et al, 2020).…”
Section: Related Work
confidence: 99%
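The excerpt above describes constraining the differences between adjacent-frame features to obtain temporally coherent representations. A minimal sketch of one way such a constraint can be expressed as a loss term is shown below; the function name and the mean-squared-difference form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def temporal_coherence_loss(features: np.ndarray) -> float:
    """Mean squared difference between embeddings of adjacent frames.

    `features` has shape (T, D): T frames, each with a D-dimensional
    embedding. Penalising frame-to-frame differences encourages slowly
    varying (temporally coherent) representations. This is a generic
    smoothness penalty, not the authors' exact objective.
    """
    diffs = features[1:] - features[:-1]   # (T-1, D) adjacent differences
    return float(np.mean(diffs ** 2))

# A constant sequence incurs zero penalty; a changing one is penalised.
static = np.ones((5, 8))
assert temporal_coherence_loss(static) == 0.0
```

In practice a term like this would be added, with a weighting coefficient, to the main supervised regression loss on the valence/arousal labels.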