ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp43922.2022.9746128
Human Emotion Recognition Using Multi-Modal Biological Signals Based On Time Lag-Considered Correlation Maximization

Cited by 1 publication (1 citation statement)
References 28 publications
“…Tatarian et al (2022) present a multimodal interaction that focuses on the proxemics of interpersonal navigation, gaze mechanics, kinesics, and social conversation, examining the impact of multimodal behaviors on relative social IQ using both subjective and objective assessments in a seven-minute encounter with 105 participants. Moroto et al (2022) develop a recognition approach that accounts for the time lag between modalities in order to come closer to the actual mechanism by which emotions arise, with experimental results demonstrating the usefulness of considering the time lag between gaze and brain-activity data. He et al (2022) present a novel multimodal M2NN model that merges EEG and fNIRS inputs to improve the recognition speed and generalization capacity of motor imagery (MI), combining spatial-temporal feature extraction, multimodal feature fusion, and multi-task learning (MTL).…”
Section: Major Application Areas of Multimodal HRI
Confidence: 99%