Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis 2022
DOI: 10.18653/v1/2022.wassa-1.1
On the Complementarity of Images and Text for the Expression of Emotions in Social Media

Abstract: Authors of posts in social media communicate their emotions and what causes them with text and images. While there is work on emotion and stimulus detection for each modality separately, it is yet unknown if the modalities contain complementary emotion information in social media. We aim at filling this research gap and contribute a novel, annotated corpus of English multimodal Reddit posts. On this resource, we develop models to automatically detect the relation between image and text, an emotion stimulus cat…

Cited by 3 publications (3 citation statements)
References 9 publications
“…) and detecting emotion stimuli in images (Dellagiacoma et al., 2011; Fan et al., 2018, i.a.), also multimodally (Khlyzova et al., 2022; Cevher et al., 2019). However, we are not aware of any work in computer vision that interprets situations and the interactions of events with the help of appraisal theories.…”
Section: Open Research Tasks
confidence: 98%
“…Yang et al. (2020) proposed a new multimodal sentiment analysis model based on a multi-view attention network, which uses continuously updated memory networks to perform sentiment analysis. Khlyzova et al. (2022) contributed a novel annotated English multimodal corpus and developed a model (TISM) for automatically detecting the relation between images and texts, as well as the emotion stimulus and emotion categories. For image–text relations, the information in the text can predict whether the image is needed for emotional understanding.…”
Section: Image and Text Emotional Analysis
confidence: 99%
“…which aims to be an effective way to detect four emotions, namely neutral, happy, sad, and surprised, from frontal facial expressions, also using the CNN algorithm. There is also emotion detection from social media, which can detect people's emotions from text [15], and emotion detection from people's pupil variation, i.e., the movement of their pupils [16].…”
Section: Introduction
confidence: 99%