Proceedings of the 2023 ACM International Conference on Multimedia Retrieval
DOI: 10.1145/3591106.3592243

EMP: Emotion-guided Multi-modal Fusion and Contrastive Learning for Personality Traits Recognition

Cited by 5 publications (3 citation statements)
References 46 publications
“…Han et al. [13] created a multi-party, conversation-based personality dataset derived from CPED, consisting of 1195 data samples for personality recognition, and introduced a speaker-aware hierarchical Transformer (SH-Transformer). Wang et al. [4] employed an emotion-guided multi-modal fusion and contrastive learning framework to identify personality traits.…”
Section: Related Work
Confidence: 99%
“…In recent years, deep learning has become a mainstream method in personality-recognition research. Most studies are based on multi-modal methods [4,5], using different data modalities for analysis and modeling to achieve data understanding and information extraction.…”
Section: Introduction
Confidence: 99%
“…The Image-grounded Text Decoder is driven by the Language Modeling (LM) loss, and its goal is to generate textual descriptions of a given image. Reference [33] proposes an emotion-guided multi-modal fusion and contrastive learning framework whose goal is to extract deeper, more distinct features from the various modalities and to align features across modalities.…”
Section: Multi-modal Fusion
Confidence: 99%
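The cross-modal alignment mentioned in the statement above is commonly implemented with a contrastive (InfoNCE-style) objective that pulls matched feature pairs from different modalities together and pushes mismatched pairs apart. The sketch below is a generic illustration of that idea, not the cited papers' actual implementation; the function name, the modality pairing, and the toy embeddings are all assumptions for illustration.

```python
import numpy as np

def info_nce_loss(text_feats, audio_feats, temperature=0.1):
    """Generic InfoNCE-style contrastive loss aligning two modalities.

    Row i of each matrix is assumed to come from the same sample, so
    (text_feats[i], audio_feats[i]) is a positive pair and every other
    combination is a negative.
    """
    # L2-normalize rows so dot products become cosine similarities
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    a = audio_feats / np.linalg.norm(audio_feats, axis=1, keepdims=True)

    logits = t @ a.T / temperature   # (N, N) similarity matrix
    n = len(t)                       # positives sit on the diagonal

    # Row-wise cross-entropy: each text embedding should match its own
    # audio embedding more strongly than any other sample's
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(n), np.arange(n)].mean()

# Toy check: aligned features yield a lower loss than shuffled ones
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
aligned = info_nce_loss(feats, feats)
shuffled = info_nce_loss(feats, feats[::-1])
```

In a full system these embeddings would come from per-modality encoders, and the contrastive term would typically be combined with the main trait-classification loss.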