2023
DOI: 10.1109/taffc.2021.3106254
Investigating Multisensory Integration in Emotion Recognition Through Bio-Inspired Computational Models

Cited by 8 publications (5 citation statements)
References 76 publications (78 reference statements)
“…However, these probability models are rarely applied to real data. On the other hand, researchers have achieved advanced performance in tasks like multimodal emotion recognition [14], while they do not provide a probabilistic interpretation of these models.…”
Section: Discussion
confidence: 99%
“…For instance, while the model proposed in this paper was derived and numerically simulated for only two sensory modalities, its framework can be extended to multiple sensory modalities. Furthermore, research suggests that the effectiveness of multisensory integration may be related to the synchrony of neural spike firing [1,4,14]. This paper only considered fully synchronous visual and auditory stimuli, and investigation on asynchronous multisensory stimuli is needed in the future.…”
Section: Discussion
confidence: 99%
“…The Enhancement model (Benssassi and Ye 2023) is inspired by the enhancement theory for multisensory recognition, where visual information significantly influences auditory cortex activity in the human brain (Molholm et al 2002;Jessen and Kotz 2013). In this model, the auditory features extraction layer receives inputs not only from the auditory input layer but also from the visual layer.…”
Section: Related Work
confidence: 99%
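The cross-modal wiring described in this last statement — an auditory feature-extraction layer driven by both the auditory input layer and the visual layer — can be illustrated with a minimal sketch. This is not the authors' implementation; the layer sizes, weights, and activation are all illustrative assumptions, chosen only to show the extra visual-to-auditory pathway.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions (illustrative, not from the cited model)
aud_dim, vis_dim, feat_dim = 8, 8, 4

# Standard within-modality pathway: auditory input -> auditory features
W_aa = 0.1 * rng.normal(size=(feat_dim, aud_dim))
# Enhancement pathway: visual layer also projects to the auditory feature layer
W_va = 0.1 * rng.normal(size=(feat_dim, vis_dim))

aud_input = rng.normal(size=aud_dim)
vis_input = rng.normal(size=vis_dim)

# The auditory feature layer sums drive from both modalities
aud_features = relu(W_aa @ aud_input + W_va @ vis_input)
```

With `W_va` set to zero this reduces to a purely unimodal auditory pathway, which is the contrast the enhancement theory draws: visual activity modulates auditory processing rather than being fused only at a later decision stage.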