2021
DOI: 10.1007/s12559-021-09821-0
CAT-BiGRU: Convolution and Attention with Bi-Directional Gated Recurrent Unit for Self-Deprecating Sarcasm Detection

Abstract: Sarcasm detection has been a well-studied problem among computational linguistics researchers. However, research on different categories of sarcasm has still not gained much attention. Self-Deprecating Sarcasm (SDS) is a special category of sarcasm in which users apply sarcasm to themselves; it is used extensively on social media platforms, mainly as an advertising tool for brand endorsement, product campaigns, and digital marketing, with the aim of increasing sales volume. In this paper, we …

Cited by 29 publications (15 citation statements) · References 47 publications
“…The attention mechanism is applied afterward to account for users' expression habits. Kamal and Abulaish [16] proposed the Convolution and Attention with Bi-directional GRU (CAT-BiGRU) model, which comprises an input layer, an embedding layer, a convolution layer, a Bi-directional GRU (BiGRU) layer, and two attention layers [17]. The convolution layer extracts SDS-related semantic and syntactic features from the embedding layer's output; the BiGRU layer retrieves contextual information from these features in both the preceding and succeeding directions; and the attention layers derive a comprehensive SDS-related context representation from the input text [18].…”
Section: Literature Review
confidence: 99%
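The layered design described in this statement (embeddings → convolution → BiGRU → attention pooling) can be sketched as a plain NumPy forward pass. The block below is illustrative only: all weight matrices, dimensions, and helper names (`conv1d`, `bigru`, `attention`) are assumptions for demonstration, not the authors' implementation or trained parameters.

```python
import numpy as np

# Illustrative forward pass through a CAT-BiGRU-style pipeline:
# convolution over embeddings -> bidirectional GRU -> attention pooling.
# Weights are random placeholders, not a trained model.
rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def conv1d(E, K):
    """Valid 1-D convolution over the token axis with ReLU.
    E: (T, d_emb) token embeddings; K: (n_filters, width, d_emb)."""
    n_f, w, _ = K.shape
    T_out = E.shape[0] - w + 1
    out = np.empty((T_out, n_f))
    for t in range(T_out):
        out[t] = np.tensordot(K, E[t:t + w], axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)


def init_gru(d_in, d_h):
    p = lambda *s: rng.normal(0.0, 0.1, s)
    return {"Wz": p(d_h, d_in), "Uz": p(d_h, d_h),
            "Wr": p(d_h, d_in), "Ur": p(d_h, d_h),
            "Wh": p(d_h, d_in), "Uh": p(d_h, d_h)}


def gru_step(x, h, p):
    # Standard GRU update: update gate z, reset gate r, candidate state.
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))
    return (1.0 - z) * h + z * h_tilde


def bigru(X, pf, pb, d_h):
    """Run a GRU over X in both directions and concatenate the states,
    so each position sees both preceding and succeeding context."""
    T = X.shape[0]
    hf, hb = np.zeros(d_h), np.zeros(d_h)
    fwd, bwd = [], [None] * T
    for t in range(T):
        hf = gru_step(X[t], hf, pf)
        fwd.append(hf)
    for t in reversed(range(T)):
        hb = gru_step(X[t], hb, pb)
        bwd[t] = hb
    return np.concatenate([np.stack(fwd), np.stack(bwd)], axis=1)  # (T, 2*d_h)


def attention(H, W, v):
    """Additive attention pooling over BiGRU states H: (T, 2*d_h)."""
    scores = np.tanh(H @ W) @ v          # one score per position, (T,)
    a = np.exp(scores - scores.max())
    a /= a.sum()                         # softmax attention weights
    return a @ H, a                      # context vector (2*d_h,), weights


# Toy input: 6 tokens, 8-dim embeddings, 4 conv filters of width 3,
# hidden size 3 per GRU direction.
T, d_emb, n_f, w, d_h = 6, 8, 4, 3, 3
E = rng.normal(size=(T, d_emb))          # stand-in for an embedding layer
feats = conv1d(E, rng.normal(0.0, 0.1, (n_f, w, d_emb)))        # (4, 4)
H = bigru(feats, init_gru(n_f, d_h), init_gru(n_f, d_h), d_h)   # (4, 6)
context, weights = attention(H, rng.normal(0.0, 0.1, (2 * d_h, 2 * d_h)),
                             rng.normal(0.0, 0.1, 2 * d_h))
print(context.shape, round(weights.sum(), 6))  # (6,) 1.0
```

Concatenating the forward and backward GRU states is what gives each position context "in both the preceding and succeeding directions", and the attention weights let the pooled context vector emphasize the positions most indicative of SDS.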
“…At the second level, the polarity between the sentiment semantic features and all the words in the sentence is captured to detect sarcasm by combining LSTM and CNN networks. Later, a more complex framework was proposed in [45]: a Self-Deprecating Sarcasm (SDS) detection framework that incorporates GloVe embeddings, a CNN to extract features, a bidirectional gated recurrent unit (BiGRU) to extract context information useful for SDS classification, and two attention layers to assign higher weights to identified SDS-related sarcastic words.…”
Section: DL-based Approaches
confidence: 99%
“…Du et al [ 21 ] designed a dual-channel structure to effectively analyze the semantics of the target text and its sentimental context. Kamal and Abulaish [ 22 ] proposed a convolution and attention model with a bi-directional gated recurrent unit to detect self-deprecating sarcasm, where syntactic and semantic features and a comprehensive context representation were captured. Lou et al [ 12 ] employed a graph convolutional network based on affective information and syntactical information from sentences in order to capture long-range incongruity patterns and inconsistent expressions in the context for sarcasm detection.…”
Section: Related Work
confidence: 99%