2021
DOI: 10.1609/icwsm.v15i1.18080

Exercise? I thought you said 'Extra Fries’: Leveraging Sentence Demarcations and Multi-hop Attention for Meme Affect Analysis

Abstract: Today's Internet is awash in memes, which are humorous, satirical, or ironic images that make people laugh. According to a survey, 33% of social media users aged 13-35 send memes every day, and more than 50% send them every week. Some of these memes spread rapidly within a very short time frame, and their virality depends on the novelty of their (textual and visual) content. A few convey positive messages, such as funny or motivational quotes, while others are meant to mock or hurt someone's feelings…

Cited by 8 publications (11 citation statements)
References 42 publications (31 reference statements)
“…Their findings suggest that meme classification architectures exhibit adaptability across different affective computing tasks. Furthermore, Pramanick et al [9], who reported the best-performing sentiment classification solution to the Memotion 1.0 dataset [13], showed that the same architecture outperforms all, or all but one, competing solution when individually trained on eight affect dimensions.…”
Section: Meme Affective Classifiers
confidence: 99%
“…A typical approach to building a multimodal meme classifier is to generate unimodal representations of each modality before fusing these representations into a multimodal representation of the meme, such as in [3,10,12,9]. Furthermore, the literature presents a wide range of deep learning representations used for each visual and textual modality [6,7,13], with no clear evidence that any of the options would consistently outperform all others.…”
Section: Meme Affective Classifiers
confidence: 99%
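The citation statement above describes the typical multimodal pipeline: encode each modality on its own, then fuse the unimodal representations into one meme-level representation. A minimal sketch of that pattern is below; the encoders here are toy mean-pooling stand-ins (a real system would use a CNN/ViT and a text transformer), and all dimensions are illustrative assumptions, not the architecture of any cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(pixels):
    # Stand-in for a visual encoder: average over spatial positions,
    # yielding one vector per channel.
    return pixels.mean(axis=(0, 1))

def encode_text(token_embeddings):
    # Stand-in for a textual encoder: mean-pool the token embeddings
    # into a single sentence-level vector.
    return token_embeddings.mean(axis=0)

def fuse(img_vec, txt_vec):
    # Simple late fusion by concatenation; the fused vector would then
    # feed a classifier head for the affect labels.
    return np.concatenate([img_vec, txt_vec])

img = rng.random((32, 32, 3))   # toy 32x32 RGB meme image
txt = rng.random((10, 8))       # toy sequence of 10 token embeddings, dim 8
multimodal = fuse(encode_image(img), encode_text(txt))
print(multimodal.shape)  # (11,) = 3 image dims + 8 text dims
```

Concatenation is only one fusion choice; element-wise products, gated mechanisms, or cross-modal attention are common alternatives, which is consistent with the statement that no single option consistently outperforms the others.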