Proceedings of the 2022 International Conference on Multimedia Retrieval 2022
DOI: 10.1145/3512527.3531385
Mobile Emotion Recognition via Multiple Physiological Signals using Convolution-augmented Transformer

Abstract: Recognising and monitoring emotional states play a crucial role in mental health and well-being management. Importantly, with the widespread adoption of smart mobile and wearable devices, it has become easier to collect long-term and granular emotion-related physiological data passively, continuously, and remotely. This creates new opportunities to help individuals manage their emotions and well-being in a less intrusive manner using off-the-shelf low-cost devices. Pervasive emotion recognition based on physio…

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1

Citation Types

0
1
0

Year Published

2022
2022
2024
2024

Publication Types

Select...
4
2
1

Relationship

0
7

Authors

Journals

citations
Cited by 12 publications
(5 citation statements)
references
References 42 publications
0
1
0
Order By: Relevance
“…Table VI shows the baseline classification results for arousal, valence, and four quadrants in comparison with the proposed method, as well as state-of-the-art methods implemented on the K-EmoCon dataset [17], [40]. Accuracy (Acc.)…”
Section: B. Baseline Classification Results
mentioning
confidence: 99%
“…They were first implemented for natural language processing (NLP), where they employ attention mechanisms to analyze sequences of words, and are also appropriate for other applications such as time-series forecasting, medical and physiological signal analysis, and human activity recognition [155]. Recent studies also use these architectures for recognizing emotions from physiological signals (see [156] and [155]). Yang et al. [156] combined CNN architectures with conformer blocks and tested them on PPG, EDA, and ST data from the K-EmoCon dataset.…”
Section: B. Deep Learning Approaches
mentioning
confidence: 99%
“…Recent studies also use these architectures for recognizing emotions from physiological signals (see [156] and [155]). Yang et al. [156] combined CNN architectures with conformer blocks and tested them on PPG, EDA, and ST data from the K-EmoCon dataset. They achieved 77.37% and 79.42% accuracy for detecting valence and arousal levels.…”
Section: B. Deep Learning Approaches
mentioning
confidence: 99%
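The statement above describes the architecture evaluated in the cited work: CNN feature extraction followed by convolution-augmented Transformer (conformer) blocks over PPG, EDA, and ST sequences. The snippet below is a minimal, hypothetical PyTorch sketch of one such conformer block, not the authors' released code; the module name ConformerBlock, the layer sizes (64-dimensional features, 4 attention heads, depthwise kernel of 7), and the mean-pooled two-class head in the usage example are illustrative assumptions.

```python
# Hypothetical sketch of a convolution-augmented Transformer (conformer) block
# for windowed physiological features (e.g. CNN outputs over PPG/EDA/ST).
# Not the paper's implementation; layer sizes are illustrative.
import torch
import torch.nn as nn


class ConformerBlock(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4, conv_kernel: int = 7, ff_mult: int = 4):
        super().__init__()
        # First feed-forward module (applied with a half-step residual)
        self.ff1 = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, dim * ff_mult), nn.SiLU(),
            nn.Linear(dim * ff_mult, dim),
        )
        # Multi-head self-attention over the time axis
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Convolution module: pointwise + GLU, depthwise conv, pointwise
        self.conv_norm = nn.LayerNorm(dim)
        self.pointwise1 = nn.Conv1d(dim, 2 * dim, kernel_size=1)
        self.glu = nn.GLU(dim=1)
        self.depthwise = nn.Conv1d(dim, dim, kernel_size=conv_kernel,
                                   padding=conv_kernel // 2, groups=dim)
        self.bn = nn.BatchNorm1d(dim)
        self.act = nn.SiLU()
        self.pointwise2 = nn.Conv1d(dim, dim, kernel_size=1)
        # Second feed-forward module and final layer norm
        self.ff2 = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, dim * ff_mult), nn.SiLU(),
            nn.Linear(dim * ff_mult, dim),
        )
        self.final_norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim), e.g. features of one physiological window
        x = x + 0.5 * self.ff1(x)
        h = self.attn_norm(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        # Convolution module operates on (batch, dim, time)
        c = self.conv_norm(x).transpose(1, 2)
        c = self.glu(self.pointwise1(c))
        c = self.pointwise2(self.act(self.bn(self.depthwise(c))))
        x = x + c.transpose(1, 2)
        x = x + 0.5 * self.ff2(x)
        return self.final_norm(x)


if __name__ == "__main__":
    # Toy usage: 8 windows, 128 time steps, 64-dim features, followed by a
    # mean-pooled head predicting high/low valence (or arousal).
    feats = torch.randn(8, 128, 64)
    out = ConformerBlock(dim=64)(feats)          # (8, 128, 64)
    logits = nn.Linear(64, 2)(out.mean(dim=1))   # (8, 2)
    print(out.shape, logits.shape)
```

The half-step feed-forward residuals and the depthwise-convolution module follow the general conformer design; a real pipeline would add positional information, dropout, and per-signal preprocessing before blocks like this one.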
“…Chowdhary et al. in 2021 used a ConvNet on pre-trained models for identifying emotions from facial expressions in the CK+ dataset [21]. Ahmed et al. and Zhang et al. have worked on emotion recognition from speech signals by analysing the linguistic parameters and the melogram pattern of the speech signals [32, 33]. Since the K-EmoCon dataset was published in 2020, only a few researchers have worked on it so far, none of whom used all the modalities of the dataset for emotion recognition. Gupta et al. [22], Zitouni et al. [23], Alskafi.…”
mentioning
confidence: 99%
“…Quan et al. used only the audio and visual signals of 16 participants for emotion recognition [18]. Yang et al. used a CNN on K-EmoCon for emotion recognition from physiological signals only, without using facial expressions or linguistic patterns at all [33]. Alhussein used a CNN on only the linguistic and speech signals of K-EmoCon to identify emotions along the arousal and valence dimensions, with a state-of-the-art F1-score of 82% [34].…”
mentioning
confidence: 99%