Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1093
Reasoning with Sarcasm by Reading In-Between

Abstract: Sarcasm is a sophisticated speech act which commonly manifests on social communities such as Twitter and Reddit. The prevalence of sarcasm on the social web is highly disruptive to opinion mining systems due not only to its tendency toward polarity flipping but also to its use of figurative language. Sarcasm commonly manifests with a contrastive theme either between positive-negative sentiments or between literal-figurative scenarios. In this paper, we revisit the notion of modeling contrast in order to reason with sar…

Cited by 125 publications (122 citation statements)
References 40 publications
“…As a baseline, we implement the SIARN (Single-Dimension Intra-Attention Network) model proposed by Tay et al. (2018), since it achieves the best published results on both our datasets. SIARN only looks at the tweet being classified, that is, SIARN(t, e_t) = m(t).…”
Section: Contextual Sarcasm Detection Models (mentioning; confidence: 99%)
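
To make the tweet-only baseline concrete, the following is a minimal sketch of a single-dimension intra-attention classifier of the kind SIARN describes: each word pair in the tweet is scored with a single scalar, each word's salience is its strongest pairwise score, and the attended vector is combined with an LSTM encoding of the same tweet. Layer sizes, the class name, and the scoring details are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TweetOnlySarcasmBaseline(nn.Module):
    """Illustrative single-dimension intra-attention baseline (tweet-only)."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=100, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Scores each word pair [w_i; w_j] with one scalar ("single-dimension").
        self.pair_score = nn.Linear(2 * embed_dim, 1)
        # Compositional encoder over the tweet itself.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(embed_dim + hidden_dim, num_classes)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        w = self.embed(token_ids)                  # (batch, seq_len, embed_dim)
        b, n, d = w.shape
        # Build all word pairs (i, j) and score each pair.
        wi = w.unsqueeze(2).expand(b, n, n, d)
        wj = w.unsqueeze(1).expand(b, n, n, d)
        s = self.pair_score(torch.cat([wi, wj], dim=-1)).squeeze(-1)  # (b, n, n)
        # A word's salience is its strongest pairwise interaction.
        a = s.max(dim=2).values                    # (batch, seq_len)
        alpha = F.softmax(a, dim=1).unsqueeze(-1)
        v_attn = (alpha * w).sum(dim=1)            # intra-attentive representation
        _, (h_n, _) = self.lstm(w)
        v_seq = h_n[-1]                            # compositional representation
        return self.classifier(torch.cat([v_attn, v_seq], dim=-1))
```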
“…They observe mainly three approaches to the sarcasm detection problem: semi-supervised extraction of sarcastic patterns (Ptáček et al., 2014; Bouazizi and Ohtsuki, 2015; Riloff et al., 2013; Joshi et al., 2015), use of hashtag-based supervision (Abercrombie and Hovy, 2016), and use of contextual information for sarcasm detection (Hazarika et al., 2018; Wallace et al., 2014; Rajadesingan et al., 2015). Recently, Tay et al. (2018) presented an attention-based neural model to explicitly model contrast and incongruity. Kolchinski and Potts (2018) presented two methods for representing authors in the context of textual sarcasm detection; they show that augmenting a bidirectional RNN with these representations improves performance in sarcasm detection.…”
Section: Related Work (mentioning; confidence: 99%)
“…The winning solution of SemEval 2016 (Deriu et al., 2016) utilized ensembles of convolutional neural networks (CNNs). Recurrent models such as the bidirectional long short-term memory (BiLSTM) network (Hochreiter and Schmidhuber, 1997; Graves et al., 2013) are popular and standard strong baselines for many opinion mining tasks, including sentiment analysis (Tay et al., 2017) and sarcasm detection (Tay et al., 2018c). Neural models such as the BiLSTM are capable of modeling semantic compositionality and produce a feature vector which can be used for classification.…”
Section: Related Work (mentioning; confidence: 99%)
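
As a point of reference, the BiLSTM baseline described in this snippet typically reduces to the pattern below: encode the sentence bidirectionally, take the concatenated final forward and backward states as the feature vector, and classify. The class name and dimensions are illustrative assumptions, not any specific cited system.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Sketch of a standard BiLSTM sentence classifier baseline."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=150, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embed(token_ids)
        _, (h_n, _) = self.bilstm(x)                 # h_n: (2, batch, hidden_dim)
        # Concatenate final forward/backward states as the sentence feature vector.
        feature = torch.cat([h_n[0], h_n[1]], dim=-1)
        return self.out(feature)
```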
“…In our work, we adapt this to model the similarities between (1) lexicon-context and (2) contrasting polarities, which borrows inspiration from Riloff et al. (2013). Since our matching problem is derived from the same sequence (identified by a lexicon prior), this work can be interpreted as a form of self-attention (Vaswani et al., 2017), which draws relations to the intra-attentive model for sarcasm detection (Tay et al., 2018c).…”
Section: Related Work (mentioning; confidence: 99%)
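
The self-attention being referred to here matches every token of a sequence against every other token of the same sequence, so contrasting or strongly related word pairs receive high weights. A minimal scaled dot-product sketch follows; the function name, projection matrices, and dimensions are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a single sequence (illustrative).

    x: (seq_len, d_model) token representations of one sentence
    w_q, w_k, w_v: (d_model, d_k) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / k.shape[-1] ** 0.5      # pairwise affinities within the sequence
    weights = F.softmax(scores, dim=-1)        # each token attends over all tokens
    return weights @ v, weights

# Usage sketch: inspect which word pairs of one sentence attend to each other.
d_model, d_k, seq_len = 16, 8, 5
x = torch.randn(seq_len, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
attended, weights = self_attention(x, w_q, w_k, w_v)
print(weights.shape)  # (5, 5): one row of attention weights per token
```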