2021
DOI: 10.7717/peerj-cs.645

Detecting sarcasm in multi-domain datasets using convolutional neural networks and long short term memory network model

Abstract: Sarcasm is a common phenomenon across social networking sites because people express negative thoughts, hatred and opinions using positive vocabulary, which makes sarcasm detection a challenging task. Although various studies have investigated sarcasm detection on baseline datasets, this work is the first to detect sarcasm in a multi-domain dataset constructed by combining Twitter and News Headlines datasets. This study proposes a hybrid approach where the convolutional neural net…
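The abstract describes a hybrid model in which convolutional layers feed a recurrent layer. The sketch below shows one common way such a CNN-LSTM sarcasm classifier can be wired up in Keras; the vocabulary size, sequence length and layer widths are illustrative placeholders, not the hyperparameters reported in the paper.

```python
# Minimal CNN-LSTM sketch for binary sarcasm classification (assumed Keras/TensorFlow).
# All sizes below are assumptions for illustration, not the paper's settings.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000   # assumed tokenizer vocabulary size
MAX_LEN = 50          # assumed padded tweet/headline length

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 100),
    layers.Conv1D(64, kernel_size=3, activation="relu"),  # local n-gram features
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                      # sequential context over CNN features
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                # sarcastic vs. non-sarcastic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```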

Citations: cited by 41 publications (23 citation statements)
References: 43 publications
“…Similarly, the study in [ 53 ] used an extra tree classifier (ETC) for the same task. In addition, the study in [ 54 ] used the CNN-LSTM model for sarcasm detection, and the study in [ 55 ] has performed sentiment analysis using the stacked Bi-LSTM model. For a fair comparison, these models were deployed using the COVID-19 vaccination tweets dataset that was collected in this study.…”
Section: Results
confidence: 99%
“…The performance of the proposed model is analysed with the help of a confusion matrix, using accuracy, precision, recall and F-measure ( Rupapara et al, 2021 ; Jamil et al, 2021 ; Rustam et al, 2021 ). These parameters can be measured with the following formulae, where TP represents true positives, TN true negatives, FP false positives and FN false negatives.…”
Section: Experimental Results Evaluation and Discussion
confidence: 99%
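The formulae referenced in this excerpt are truncated in the report, but the four metrics it names have standard definitions in terms of the confusion-matrix counts. A small sketch of those definitions, with purely illustrative counts, is given below.

```python
# Standard accuracy / precision / recall / F-measure from confusion-matrix counts.
# The example counts are illustrative only, not results from the cited papers.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_measure": f_measure}

print(classification_metrics(tp=420, tn=380, fp=60, fn=40))
```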
“…To substantiate the performance of the proposed voting classifier, it is also compared with deep learning models. We have used three deep learning models for experiments including LSTM [ 57 ], CNN [ 58 ] and BiLSTM [ 59 ] for comparison purposes. Layered architecture and hyperparameter values are presented in Fig 7 .…”
Section: Results
confidence: 99%
“…To substantiate the performance of the proposed voting classifier, it is also compared with deep learning models. We have used three deep learning models for experiments including LSTM [57], CNN [58] and BiLSTM [59]. Classification results of deep learning models on balanced and imbalanced datasets are presented in Table 13. It can be observed that LSTM achieves the highest result with a 0.70 value of accuracy, precision, recall, and F1 score on imbalanced data, while CNN has shown the lowest results.…”
Section: Performance Comparison With Deep Neural Network
confidence: 99%
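These two excerpts compare a voting classifier against LSTM, CNN and BiLSTM baselines; the cited works' exact layered architectures and hyperparameters (their Fig 7) are not reproduced in this report. As a rough illustration only, baselines of this kind can be assembled in Keras along the following lines, with all sizes assumed.

```python
# Illustrative LSTM / CNN / BiLSTM baselines for binary text classification
# (assumed Keras/TensorFlow); layer sizes are assumptions, not the cited settings.
from tensorflow.keras import layers, models

def build_model(kind: str, vocab_size: int = 20_000, max_len: int = 50):
    m = models.Sequential([
        layers.Input(shape=(max_len,)),
        layers.Embedding(vocab_size, 100),
    ])
    if kind == "lstm":
        m.add(layers.LSTM(64))
    elif kind == "bilstm":
        m.add(layers.Bidirectional(layers.LSTM(64)))
    elif kind == "cnn":
        m.add(layers.Conv1D(64, 3, activation="relu"))
        m.add(layers.GlobalMaxPooling1D())
    m.add(layers.Dense(1, activation="sigmoid"))
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return m

lstm, cnn, bilstm = build_model("lstm"), build_model("cnn"), build_model("bilstm")
```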