2015
DOI: 10.1007/978-3-319-26190-4_6

Twitter Sarcasm Detection Exploiting a Context-Based Model

Cited by 62 publications (35 citation statements)
References 12 publications
“…Manual: [Riloff et al 2013; Maynard and Greenwood 2014; Ptácek et al 2014; Mishra and Bhattacharyya 2016; Abercrombie and Hovy 2016] Hashtag-based: [González-Ibánez et al 2011; Reyes et al 2013; Barbieri et al 2014a; Ghosh et al 2015b; Bharti et al 2015; Liebrecht et al 2013; Bouazizi and Ohtsuki 2015a; Wang et al 2015; Barbieri et al 2014b; Bamman and Smith 2015; Fersini et al 2015; Khattri et al 2015; Rajadesingan et al 2015] [Lukin and Walker 2013; Reyes and Rosso 2014; Buschmeier et al 2014; Liu et al 2014; Filatova 2012] Other datasets: [Tepperman et al 2006; Kreuz and Caucci 2007; Veale and Hao 2010; Rakov and Rosenberg 2013; Ghosh et al 2015a; Joshi et al 2016a; Abercrombie and Hovy 2016]…”
Section: Text Form Related Work Tweets (mentioning)
confidence: 99%
“…Bamman and Smith [2015] use binary logistic regression. Wang et al [2015] use SVM-HMM in order to incorporate sequence nature of output labels in a conversation. Liu et al [2014] compare several classification approaches including bagging, boosting, etc.…”
Section: Learning Algorithms (mentioning)
confidence: 99%
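
For orientation, the learning algorithms named in the excerpt above are standard supervised classifiers. A minimal sketch of a binary logistic regression sarcasm classifier in the spirit of Bamman and Smith (2015) follows; the TF-IDF bag-of-words features and toy data are illustrative assumptions, not the authors' actual feature set.

# Minimal sketch (assumed setup): binary logistic regression over TF-IDF unigrams.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples standing in for a labeled tweet corpus (1 = sarcastic).
tweets = [
    "Oh great, another Monday. Just what I needed.",
    "Had a wonderful time at the beach today!",
]
labels = [1, 0]

# TF-IDF unigram features feed a standard binary logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)
print(model.predict(["Wow, I just love waiting in line for hours."]))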
“…For both corpora, adding "pre" and "post" messages does not seem to significantly affect the F1 scores, even though using the "post" message as context seems to improve results for the sarcastic class (Oraby et al 2017). Unlike the above approaches that model the utterance and context together, Wang et al (2015) and Joshi et al (2016a) use a sequence labeling approach and show that conversation helps in sarcasm detection. Inspired by this idea of modeling the current turn and context separately, in our prior work (Ghosh, Fabbri, and Muresan 2017), which this paper substantially extends, we proposed a deep learning architecture based on LSTMs, where one LSTM reads the context (prior turn) and one LSTM reads the current turn, and showed that this type of architecture outperforms a simple LSTM that reads only the current turn.…”
Section: Related Work (mentioning)
confidence: 99%
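
The dual-LSTM architecture described in the excerpt above (one LSTM reading the prior turn, one reading the current turn, as in Ghosh, Fabbri, and Muresan 2017) can be sketched roughly as follows in Keras; the vocabulary size, sequence length, and hidden dimensions are placeholder assumptions, not the published configuration.

from tensorflow.keras.layers import Input, Embedding, LSTM, Concatenate, Dense
from tensorflow.keras.models import Model

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 20000, 50, 100  # placeholder hyperparameters

context_in = Input(shape=(MAX_LEN,), name="prior_turn")
current_in = Input(shape=(MAX_LEN,), name="current_turn")

embed = Embedding(VOCAB_SIZE, EMBED_DIM)         # shared word embeddings
context_vec = LSTM(64)(embed(context_in))        # encodes the conversation context
current_vec = LSTM(64)(embed(current_in))        # encodes the current turn

merged = Concatenate()([context_vec, current_vec])
output = Dense(1, activation="sigmoid")(merged)  # sarcastic vs. non-sarcastic

model = Model([context_in, current_in], output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

Concatenating the two encodings before the output layer is what lets the classifier weigh the prior turn against the current turn, in contrast to a single LSTM that reads only the current tweet.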
“…To build the conversation context, for each sarcastic and non-sarcastic tweet we used the "reply to status" parameter in the tweet to determine whether it was in reply to a previous tweet: if so, we downloaded the last tweet (i.e., the "local conversation context") to which the original tweet was replying (Bamman and Smith 2015). In addition, we also collected the entire threaded conversation when available (Wang et al 2015). Although we collected over 200K tweets in the first step, only around 13% of them were a reply to another tweet, and thus our final Twitter conversations set contains 25,991 instances (12,215 instances for the sarcastic class and 13,776 instances for the non-sarcastic class).…”
Section: Data (mentioning)
confidence: 99%
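
As a rough illustration of the context-collection step quoted above, the sketch below follows a tweet's in-reply-to pointer to recover the local and threaded conversation context through the Twitter v1.1 API via tweepy; the credentials, depth limit, and helper names are assumptions for illustration, not the authors' exact pipeline.

import tweepy

# Assumed credentials; real keys and rate-limit handling are needed in practice.
auth = tweepy.OAuth1UserHandler("API_KEY", "API_SECRET", "TOKEN", "TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def local_context(tweet):
    """Return the tweet this one replies to (the 'local conversation context'), or None."""
    parent_id = tweet.in_reply_to_status_id
    if parent_id is None:
        return None
    try:
        return api.get_status(parent_id, tweet_mode="extended")
    except tweepy.TweepyException:
        return None  # parent deleted, protected, or otherwise unavailable

def threaded_context(tweet, max_depth=10):
    """Walk the reply chain upward to recover the threaded conversation (hypothetical helper)."""
    chain, current = [], tweet
    while len(chain) < max_depth:
        parent = local_context(current)
        if parent is None:
            break
        chain.append(parent)
        current = parent
    return list(reversed(chain))  # oldest tweet first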
“…The work closest to ours is by Wang et al (2015). They use a labeled dataset of 1500 tweets, the labels for which are obtained automatically.…”
Section: Performance On Features Reported In Prior Work (mentioning)
confidence: 99%