Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d16-1104

Are Word Embedding-based Features Useful for Sarcasm Detection?

Abstract: This paper makes a simple increment to the state-of-the-art in sarcasm detection research. Existing approaches are unable to capture subtle forms of context incongruity which lie at the heart of sarcasm. We explore if prior work can be enhanced using semantic similarity/discordance between word embeddings. We augment word embedding-based features to four feature sets reported in the past. We also experiment with four types of word embeddings. We observe an improvement in sarcasm detection, irrespective of the word…

Cited by 149 publications (89 citation statements)
References 17 publications
“…Word embeddings have proved to be useful for various tasks, such as Part of Speech Tagging (Collobert and Weston, 2008), Named Entity Recognition, Sentence Classification (Kim, 2014), Sentiment Analysis (Liu et al, 2015), and Sarcasm Detection (Joshi et al, 2016). Medical domain-specific pre-trained word embeddings were released by different groups, such as Pyysalo et al (2013) and Brokos et al (2016).…”
Section: Related Work
confidence: 99%
“…The researchers combined lexical with sentiment, syntactic, and semantic Word2Vec cluster features for irony detection in English tweets using an SVM and obtained a top F1-score of 68%. Similarly, Joshi et al (2016) used an SVM classifier and expanded their set of lexical and sentiment features with different word embedding features. They showed that incorporating Word2Vec and dependency weight-based word embeddings proved the most beneficial for irony detection, yielding F-scores of up to 81%.…”
Section: Computational Approaches To Irony
confidence: 99%
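The approach described above — feeding hand-crafted lexical/sentiment features augmented with word-embedding features into an SVM — can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the toy data, the 3 lexical features, the 5-dimensional embeddings, and the mean-pooling of word vectors are all assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def tweet_features(lexical_feats, word_vectors):
    """Concatenate hand-crafted lexical features with the mean word embedding.

    lexical_feats: 1-D array of lexical/sentiment features (assumed shape).
    word_vectors:  (n_words, dim) array of per-word embeddings (toy stand-in
                   for Word2Vec/GloVe vectors).
    """
    return np.concatenate([lexical_feats, word_vectors.mean(axis=0)])

# Fake corpus: 20 "tweets", each with 3 lexical features and
# 7 words of 5-dimensional embeddings.
X = np.stack([
    tweet_features(rng.normal(size=3), rng.normal(size=(7, 5)))
    for _ in range(20)
])
y = rng.integers(0, 2, size=20)  # 1 = sarcastic/ironic, 0 = not

# Linear SVM over the combined feature vector, as in the cited work.
clf = SVC(kernel="linear").fit(X, y)
preds = clf.predict(X)
```

In a real system the random arrays would be replaced by pre-trained embeddings and genuine lexical/sentiment features, and evaluation would of course use a held-out set rather than the training data.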
“…Due to this neutrality, lexicon-based methods are unable to capture the incongruity present. Therefore, the maximum and minimum GloVe (Pennington et al, 2014) cosine similarity between any two words in a tweet are used as features in our system (Joshi et al, 2016).…”
Section: Feature Extraction
confidence: 99%