2019 International Conference on Multimodal Interaction 2019
DOI: 10.1145/3340555.3353716

CorrFeat: Correlation-based Feature Extraction Algorithm using Skin Conductance and Pupil Diameter for Emotion Recognition

Abstract: To recognize emotions using less obtrusive wearable sensors, we present a novel emotion recognition method that uses only pupil diameter (PD) and skin conductance (SC). Psychological studies show that these two signals are related to the attention level of humans exposed to visual stimuli. Based on this, we propose a feature extraction algorithm that extracts correlation-based features for participants watching the same video clip. To boost performance given limited data, we implement a learning system without …
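As a rough illustration of the kind of input such a method consumes, the sketch below computes simple window-level statistics from raw SC and PD traces. The sampling rates, window length, and chosen statistics are assumptions for illustration only, not the paper's feature set.

```python
# Hedged sketch (not the paper's implementation): simple window-level
# statistics from raw skin-conductance (SC) and pupil-diameter (PD) traces.
# Sampling rates, window length, and the chosen statistics are assumptions.
import numpy as np

def window_stats(signal, fs, win_s=5.0):
    """Split a 1-D signal into fixed windows and compute basic statistics."""
    win = int(win_s * fs)
    n = len(signal) // win
    frames = signal[: n * win].reshape(n, win)
    return np.column_stack([
        frames.mean(axis=1),                    # tonic level per window
        frames.std(axis=1),                     # variability per window
        np.diff(frames, axis=1).mean(axis=1),   # average slope per window
    ])

fs_sc, fs_pd = 4, 60                            # assumed sampling rates (Hz)
rng = np.random.default_rng(0)
sc_raw = rng.normal(size=fs_sc * 300)           # 5 minutes of SC samples
pd_raw = rng.normal(size=fs_pd * 300)           # 5 minutes of PD samples

sc_feats = window_stats(sc_raw, fs_sc)
pd_feats = window_stats(pd_raw, fs_pd)
print(sc_feats.shape, pd_feats.shape)           # same number of windows
```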

Cited by 10 publications (8 citation statements) · References 31 publications
“…In this stage, intra-modality features are fused using a correlation-based feature extraction method [82]. The purpose of correlation-based feature extraction is to extract features which (a) maximize the correlation coefficient between two modalities and (b) fuse the features between different instances.…”
Section: Methods (mentioning)
confidence: 99%
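The quoted description suggests a correlation-maximizing projection followed by fusion. Below is a minimal sketch of that idea using canonical correlation analysis (CCA) as a stand-in for the method in [82]; the array shapes and the use of scikit-learn's CCA are assumptions, not the authors' exact algorithm.

```python
# Minimal sketch of correlation-based fusion between two modalities, assuming
# CCA as the correlation-maximizing projection. Illustrative only; not the
# authors' published algorithm.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
sc_feats = rng.normal(size=(60, 12))    # skin-conductance features per window
pd_feats = rng.normal(size=(60, 9))     # pupil-diameter features per window

cca = CCA(n_components=4).fit(sc_feats, pd_feats)
sc_proj, pd_proj = cca.transform(sc_feats, pd_feats)

# (a) the projections maximize the cross-modality correlation coefficient;
# (b) concatenating them fuses the two modalities into one vector per window.
fused = np.hstack([sc_proj, pd_proj])
print(fused.shape)                      # (60, 8)
```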
“…Other physiological signals used include: electrocardiography (ECG) [19–21], electromyography (EMG) [22], electrodermal activity (EDA) [19, 20, 23], heart rate [24–26], respiration rate and depth [24, 27], and arterial pressure [24]. Eye-tracking [28–32] and pupil width [33–36] are also used to recognize emotions.…”
Section: Introduction (mentioning)
confidence: 99%
“…Theoretically, the end-to-end model should result in the best performance as the features are directly connected with the ground-truth labels [68], which means the deep representation is trained to best recognize these labels. However, according to previous studies [3], [69], if we train the network using fine-grained emotion labels and fully supervised learning methods, the end-to-end model will overfit because of the temporal resolution mismatch between physiological signals and fine-grained self-reports due to different interoception levels across individuals [70]. Thus, we compare these three types of methods to find out whether the end-to-end, deep feature extraction (deepfeat, section 3.2.1) still has the problem of overfitting for weakly supervised learning.…”
Section: Feature Extraction Layers (mentioning)
confidence: 95%
“…The pairwise correlation-based features (pcorrfeat) are extracted by maximizing the correlation coefficient for every two signals from users who watch the same video stimuli [69]. The idea is inspired by the hypothesis that the same stimuli will trigger relatively similar emotions across physiological responses among different users [77], [78].…”
Section: Pcorrfeat (mentioning)
confidence: 99%
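A hedged sketch of the pairwise idea described above: every pair of users who watched the same stimulus gets its own correlation-maximizing projection, and each user's projected components are concatenated per window. CCA is again used as an illustrative stand-in; the user count, feature dimensions, and concatenation scheme are assumptions, not the cited method.

```python
# Hedged sketch of pairwise correlation-based features (pcorrfeat-style):
# project each pair of users' responses to the same video so that their
# correlation is maximized, then concatenate each user's projections.
import numpy as np
from itertools import combinations
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_users, n_windows, n_dims = 4, 60, 10
responses = rng.normal(size=(n_users, n_windows, n_dims))  # per-user features

pair_feats = {u: [] for u in range(n_users)}
for a, b in combinations(range(n_users), 2):
    cca = CCA(n_components=2).fit(responses[a], responses[b])
    proj_a, proj_b = cca.transform(responses[a], responses[b])
    pair_feats[a].append(proj_a)
    pair_feats[b].append(proj_b)

# Each user's pairwise-correlated components, concatenated per window.
per_user = {u: np.hstack(feats) for u, feats in pair_feats.items()}
print(per_user[0].shape)   # (60, 2 * (n_users - 1)) = (60, 6)
```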