2021
DOI: 10.3390/e23040394
Interpretable Multi-Head Self-Attention Architecture for Sarcasm Detection in Social Media

Abstract: With the online presence of more than half the world population, social media plays a very important role in the lives of individuals and businesses alike. Social media enables businesses to advertise their products, build brand value, and reach out to their customers. To leverage these social media platforms, it is important for businesses to process customer feedback in the form of posts and tweets. Sentiment analysis is the process of identifying the emotion, either positive, negative, or neutral, ass…


Cited by 31 publications (15 citation statements) | References 28 publications
“…Moreover, an attention-based LSTM was exploited to identify the user's expression habits. In a subsequent study [48], the researchers proposed a multi-head self-attention-based GRU model to detect sarcasm while considering automatic, lexical, contextual, and handcrafted features. Feature embedding was performed by a pretrained model and enhanced by the multi-head self-attention layers to identify the keywords that contribute most to classification.…”
Section: DL-Based Approaches (mentioning, confidence: 99%)
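The statement above outlines the core mechanism: pretrained embeddings are refined by multi-head self-attention, and high-weight tokens can then be read off as the keywords driving the classification. A minimal PyTorch sketch of that step follows; it assumes nothing about the cited paper's actual code, and all layer names and sizes are illustrative.

```python
# Hypothetical sketch: pretrained embeddings enhanced by multi-head
# self-attention. The returned attention weights are what makes the
# model interpretable (high-weight tokens ~ keywords for the decision).
import torch
import torch.nn as nn

class AttentionEnhancedEmbedding(nn.Module):
    def __init__(self, embed_dim=300, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads,
                                          batch_first=True)

    def forward(self, embedded):
        # embedded: (batch, seq_len, embed_dim), e.g. looked up from a
        # frozen pretrained embedding table.
        # Self-attention: queries, keys, and values are all the embeddings.
        enhanced, weights = self.attn(embedded, embedded, embedded,
                                      need_weights=True)
        # weights: (batch, seq_len, seq_len), averaged over heads; a
        # column-wise mean scores how much attention each token receives.
        return enhanced, weights
```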
“…Then, the features of the three approaches were rebuilt and merged into a single feature vector for estimation. Reference [14] focuses mainly on recognizing sarcasm in textual conversations on social media platforms and websites. To this end, an interpretable DL technique utilizing gated recurrent units (GRUs) and multi-head self-attention modules was developed.…”
Section: Literature Review (mentioning, confidence: 99%)
“…Finally, a fully connected layer with sigmoid activation is used to obtain the final classification score. In the follow-up work [64], the authors combined their earlier approach with a multi-head attention mechanism to identify contextual sarcasm efficiently. They kept the fully connected, single-layer gated recurrent module for the classification task.…”
Section: Benchmark Analysis (mentioning, confidence: 99%)
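Putting the pieces from these statements together (pretrained embeddings, multi-head self-attention, a single-layer GRU, and a sigmoid-activated fully connected head), an end-to-end sketch might look like the following. This is a sketch under stated assumptions, not the authors' published code; every hyperparameter and the PyTorch framing are assumptions.

```python
# Minimal end-to-end sketch of the pipeline the citing papers describe:
# pretrained embeddings -> multi-head self-attention -> single-layer GRU
# -> fully connected layer with sigmoid activation for the sarcasm score.
import torch
import torch.nn as nn

class SarcasmClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, num_heads=4, hidden=128):
        super().__init__()
        # In the described setup the embedding table would be initialized
        # from a pretrained model rather than trained from scratch.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads,
                                          batch_first=True)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, token_ids):               # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        x, _ = self.attn(x, x, x)                # attention-based enhancement
        _, h = self.gru(x)                       # h: (1, batch, hidden)
        return torch.sigmoid(self.fc(h[-1]))     # sarcasm probability in [0, 1]
```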