Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1372
Humor Detection: A Transformer Gets the Last Laugh

Abstract: Much previous work has been done in attempting to identify humor in text. In this paper we extend that capability by proposing a new task: assessing whether or not a joke is humorous. We present a novel way of approaching this problem by building a model that learns to identify humorous jokes based on ratings gleaned from Reddit pages, consisting of almost 16,000 labeled instances. Using these ratings to determine the level of humor, we then employ a Transformer architecture for its advantages in learning from…
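To make the abstract's approach concrete, below is a minimal sketch of the kind of pipeline it describes: fine-tuning a pre-trained Transformer as a binary joke classifier. This is not the authors' released code; the model name, toy data, labels, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): fine-tune a
# pre-trained Transformer to classify jokes as funny vs. not funny.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = not humorous, 1 = humorous
)

# Hypothetical toy data standing in for the ~16,000 Reddit-rated jokes.
jokes = ["Why did the chicken cross the road?", "A man walks into a bar..."]
labels = [0, 1]  # in the paper's setup, labels come from Reddit ratings

enc = tokenizer(jokes, truncation=True, padding=True, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"],
                        torch.tensor(labels))
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for input_ids, attention_mask, y in loader:
    out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
    out.loss.backward()  # cross-entropy over the two classes
    optimizer.step()
    optimizer.zero_grad()
```

After fine-tuning, the classifier's argmax over the two logits gives the funny/not-funny prediction for a new joke.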

Cited by 79 publications (52 citation statements) · References 13 publications
“…Transformer architectures trained on language modeling have been recently adapted to downstream tasks demonstrating state-of-the-art performance (Weller and Seppi, 2019; Gupta and Durrett, 2019; Maronikolakis et al., 2020). In this paper, we adapt and subsequently combine transformers with external linguistic information for complaint prediction.…”
Section: Transformer-based Models (mentioning)
confidence: 99%
“…Recently released models such as BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2020) exploit pre-training and bidirectional transformers to enable efficient solutions obtaining state-of-the-art performance. Pre-trained embeddings significantly outperform the previous state of the art in similar problems such as humor detection (Weller and Seppi, 2019) and subjectivity detection (Pant et al., 2020).…”
Section: Introduction (mentioning)
confidence: 98%
“…Early work on humor recognition (Mihalcea and Strapparava, 2005) proposed heuristic-based, humor-specific stylistic features, for example alliteration, antonymy, and adult slang. More recent work (Yang et al., 2015; Chen and Soo, 2018; Weller and Seppi, 2019) regarded the problem as a text classification task and adopted statistical machine learning methods and neural networks to train models on humor datasets. However, only a few of the deep learning methods have tried to establish a connection between humor recognition and humor theories.…”
Section: Introduction (mentioning)
confidence: 99%
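The contrast this excerpt draws, heuristic stylistic cues versus learned classifiers, can be illustrated with a toy feature extractor. The sketch below is a rough reconstruction of one such cue (alliteration); the scoring rule and the letter-based approximation are my assumptions, not Mihalcea and Strapparava's implementation, which worked from phonetic rather than orthographic forms.

```python
# Rough sketch of a heuristic humor feature in the style the excerpt
# mentions: a simple alliteration score (assumed formulation).
from collections import Counter

def alliteration_score(sentence: str) -> float:
    """Fraction of words whose initial letter repeats within the sentence.

    A crude orthographic proxy for alliteration; the cited work used
    phonetic features rather than raw initial letters.
    """
    words = [w.lower() for w in sentence.split() if w[0].isalpha()]
    if not words:
        return 0.0
    initials = Counter(w[0] for w in words)
    repeated = sum(c for c in initials.values() if c > 1)
    return repeated / len(words)

print(alliteration_score("Peter Piper picked a peck of pickled peppers"))  # 0.75
```

Features like this would feed a conventional classifier, whereas the transformer-based approaches cited above learn such cues implicitly from the text.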