2023
DOI: 10.1155/2023/9645611
Multimodal Emotion Recognition Based on Cascaded Multichannel and Hierarchical Fusion

Abstract: Humans express their emotions in a variety of ways, which inspires research on multimodal fusion-based emotion recognition that utilizes different modalities to achieve information complementation. However, extracting deep emotional features from different modalities and fusing them remains a challenging task. It is essential to exploit the advantages of different extraction and fusion approaches to capture the emotional information contained within and across modalities. In this paper, we present a novel multi…

Cited by 11 publications (4 citation statements); References 53 publications (58 reference statements)
“…Most companies depend on the reactions that clients have to their products and services. Intelligent systems make it possible to determine whether a customer is interested in a product or service based on their emotional response in a captured image or video [13]. People display elaborate emotional profiles on their faces, and these expressions are universal.…”
Section: Literature Survey
confidence: 99%
“…In the article [41], three corpora containing e-commerce data in different languages (Turkish, Arabic, and English) were created, and the performance of deep learning and machine learning methods was examined comparatively through sentiment analysis on them. The study [42] offered a unique multimodal emotion identification framework, named multimodal emotion identification based on cascaded multichannel and hierarchical fusion (CMC-HF). This framework makes use of visual, voice, and text data as multimodal inputs concurrently.…”
Section: Literature Review
confidence: 99%
“…In the case of multimodal emotion recognition from facial and speech features, deep neural networks [13,14,15] have been employed to extract relevant features from each modality. The networks learn to extract features relevant to emotion classification.…”
Section: Introduction
confidence: 99%
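The pattern the citing statements describe — separate networks extract features per modality, which are then fused for emotion classification — can be sketched minimally. This is an illustrative numpy sketch only, not the paper's CMC-HF model: the feature dimensions, random weights, and the four-emotion label set are all assumptions, and the linear head stands in for whatever classifier would be learned in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors, e.g. produced by separate
# deep networks for face images, speech, and text (dimensions made up).
face_feat = rng.standard_normal(128)
speech_feat = rng.standard_normal(64)
text_feat = rng.standard_normal(32)

# Feature-level fusion: concatenate modality features into one vector.
fused = np.concatenate([face_feat, speech_feat, text_feat])

# Linear classifier head over the fused vector. Weights are random here;
# in a real system they would be trained jointly with the extractors.
n_emotions = 4  # e.g. happy, sad, angry, neutral (assumed label set)
W = rng.standard_normal((n_emotions, fused.size))
logits = W @ fused

# Numerically stable softmax to turn logits into class probabilities.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

predicted = int(np.argmax(probs))
print(fused.shape, predicted)
```

Concatenation is the simplest fusion operator; hierarchical schemes like the one attributed to [42] instead combine modalities in stages, but the end state is the same: a single joint representation fed to a classifier.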