Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1050

Magnets for Sarcasm: Making Sarcasm Detection Timely, Contextual and Very Personal

Abstract: Sarcasm is a pervasive phenomenon in social media, permitting the concise communication of meaning, affect and attitude. Concision requires wit to produce and wit to understand, which demands from each party knowledge of norms, context and a speaker's mindset. Insight into a speaker's psychological profile at the time of production is a valuable source of context for sarcasm detection. Using a neural architecture, we show significant gains in detection accuracy when knowledge of the speaker's mood at the time …

Cited by 114 publications (78 citation statements). References 23 publications.
“…Last but not least, we empirically show that explicitly modeling the turns helps and provides better results than just concatenating the current turn and prior turn (and/or succeeding turn). This experimental result supports the conceptual claim that both we and Ghosh and Veale (2017) make that it is important to keep the C_TURN and the P_TURN (S_TURN) separate (e.g., modeled by different LSTMs), as the model is designed to recognize a possible inherent incongruity between them. This incongruity might become diffuse if the inputs are combined too soon (i.e., using one LSTM on combined current turn and context).…”
Section: Related Work (supporting)
confidence: 80%
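The separate-encoder design argued for in this citation statement can be sketched schematically. The toy code below is not the cited authors' model: it stands in mean-pooling projections for the paper's per-turn LSTMs and uses random weights, but it illustrates the key structural choice of encoding the prior turn and the current turn with distinct parameters and combining them only at the classifier, so incongruity between them is not diffused by early merging.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 8  # toy embedding size; real models use pretrained word vectors

def encode(turn_vectors, W):
    """Encode one turn with its own weight matrix (a stand-in for a
    dedicated LSTM): mean-pool the word vectors, then project."""
    pooled = turn_vectors.mean(axis=0)
    return np.tanh(W @ pooled)

# Separate parameters per turn, mirroring "modeled by different LSTMs".
W_prior = rng.standard_normal((EMB, EMB))
W_current = rng.standard_normal((EMB, EMB))
w_out = rng.standard_normal(2 * EMB)

def score_sarcasm(prior_turn, current_turn):
    """Concatenate the two turn encodings so the classifier can pick up
    incongruity between them, rather than merging the raw inputs."""
    h = np.concatenate([encode(prior_turn, W_prior),
                        encode(current_turn, W_current)])
    return 1.0 / (1.0 + np.exp(-w_out @ h))  # sigmoid probability

prior = rng.standard_normal((5, EMB))    # 5 "words" of context
current = rng.standard_normal((3, EMB))  # 3 "words" of reply
p = score_sarcasm(prior, current)
```

Merging `prior` and `current` into one sequence before encoding would instead force a single encoder to represent both turns, which is exactly the design the quoted passage argues against.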
“…Independently, Ghosh and Veale (2017) have proposed a similar architecture based on Bi-LSTMs to detect sarcasm in Twitter. Unlike Ghosh and Veale (2017), our prior work used attention-based LSTMs that allowed us to investigate whether we can identify what part of the conversation context triggered the sarcastic reply, and showed results both on discussion forum data and Twitter. This paper substantially extends our prior work introduced in Ghosh, Fabbri, and Muresan (2017).…”
Section: Related Work (mentioning)
confidence: 99%
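The attention mechanism this statement refers to, used to identify which part of the conversation context triggered the sarcastic reply, can be illustrated with a minimal dot-product attention sketch. The names and dimensions below are illustrative assumptions, not the cited paper's actual architecture: the attention distribution over context positions is what one would inspect to find the "trigger".

```python
import numpy as np

rng = np.random.default_rng(1)
EMB = 8  # toy hidden size

def attention_weights(context_states, query):
    """Dot-product attention: score each context position against a
    summary of the reply, then softmax into a distribution that can be
    read off to see which context words most influenced the decision."""
    scores = context_states @ query
    scores -= scores.max()            # subtract max for numerical stability
    w = np.exp(scores)
    return w / w.sum()

context_states = rng.standard_normal((6, EMB))  # per-token encoder states
reply_query = rng.standard_normal(EMB)          # summary vector of the reply

alpha = attention_weights(context_states, reply_query)
trigger = int(alpha.argmax())  # most-attended context position
```

Because `alpha` sums to one, it doubles as an interpretability signal: high-weight positions are candidate triggers of the sarcastic reply.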
“…), as such we omit comparisons for the sake of brevity and focus on comparisons with recent neural models instead. Moreover, since our work focuses only on document-level sarcasm detection, we do not compare against models that use external information such as user profiles, context, personality information (Ghosh and Veale, 2017) or emoji-based distant supervision (Felbo et al., 2017). For our model, we report results on both multi-dimensional and single-dimensional intra-attention.…”
Section: Compared Methods (mentioning)
confidence: 99%
“…Ghosh and Veale (2016) proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that achieves state-of-the-art performance. While our work focuses on document-only sarcasm detection, several notable works have proposed models that exploit personality information (Ghosh and Veale, 2017) and user context (Amir et al., 2016). Novel methods for sarcasm detection such as gaze / cognitive features (Mishra et al., 2016, 2017) have also been explored.…”
Section: Deep Learning For Sarcasm Detection (mentioning)
confidence: 99%