Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.141
MelBERT: Metaphor Detection via Contextualized Late Interaction using Metaphorical Identification Theories

Abstract: Automated metaphor detection is the challenging task of identifying metaphorical uses of words in a sentence. To tackle this problem, we adopt pre-trained contextualized models, e.g., BERT and RoBERTa. To this end, we propose a novel metaphor detection model, namely metaphor-aware late interaction over BERT (MelBERT). Our model not only leverages contextualized word representations but also benefits from linguistic metaphor identification theories to detect whether a target word is used metaphorically. Our empi…

Cited by 30 publications (25 citation statements); references 35 publications.
“…Second, we consider adding features dedicated to more effectively leveraging the information embedded in the target sentence, regarding the MWE and its neighboring words as separate objects. We import some ideas from prior work on metaphor detection (Mao et al., 2019; Choi et al., 2021), exploiting the conceptual relationship between metaphors and idiomatic expressions.…”
Section: Features Based on Inner-Sentence Context (mentioning, confidence: 99%)
“…In practice, Choi et al. (2021) introduce two metaphor identification theories (the Metaphor Identification Procedure (MIP; Pragglejaz Group, 2007; Steen, 2010) and Selectional Preference Violation (SPV; Wilks, 1975)) into their model to better capture metaphors, which we expect might also be helpful for the procedure of identifying idiomatic expressions. The basic ideas of MIP and SPV are that a metaphor can be identified when we discover a difference between its literal and contextual meaning, and that it can also be detected when its semantics is distinguishable from that of its context.…”
Section: Related Work (mentioning, confidence: 99%)
“…The basic ideas of MIP and SPV are that a metaphor can be identified when we discover a difference between its literal and contextual meaning, and that it can also be detected when its semantics is distinguishable from that of its context. To realize these concepts, for MIP, Choi et al. (2021) employ a target word's contextualized and isolated representations, while for SPV they utilize the contextualized representations of the target word and of the sentence containing it. We adopt some of their ideas and customize them for our purpose, i.e., modeling features for idiomaticity detection.…”
Section: Related Work (mentioning, confidence: 99%)
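The MIP and SPV interactions described in the citation statements above can be sketched in a few lines. This is a minimal illustration, not Choi et al.'s implementation: the embedding vectors, the linear head `W`, `b`, and the function names are hypothetical placeholders standing in for MelBERT's RoBERTa-based encoders and classification layers.

```python
import numpy as np

def mip_vector(h_context: np.ndarray, h_isolated: np.ndarray) -> np.ndarray:
    """MIP: contrast the target word's contextualized embedding (encoded
    within the full sentence) with its isolated embedding (encoded alone)."""
    return np.concatenate([h_context, h_isolated])

def spv_vector(h_sentence: np.ndarray, h_context: np.ndarray) -> np.ndarray:
    """SPV: contrast the sentence-level embedding with the target word's
    contextualized embedding to surface semantic incongruity."""
    return np.concatenate([h_sentence, h_context])

def metaphor_score(h_context, h_isolated, h_sentence, W, b) -> float:
    """Late interaction: combine the two theory-motivated vectors, then a
    single linear + sigmoid head yields P(target word is metaphorical)."""
    z = np.concatenate([mip_vector(h_context, h_isolated),
                        spv_vector(h_sentence, h_context)])
    logit = float(W @ z + b)
    return 1.0 / (1.0 + np.exp(-logit))
```

The point of the sketch is only the wiring: each theory contributes its own contrast vector, and the classifier sees both, which is the "late interaction" the paper's title refers to.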
“…Sun et al [21] proposed a detailed process on how to further pre-train new texts and fine-tune for classification task, achieving a new record accuracy. Models such as FinBERT [16], ClinicalBERT [1], BioBERT [15], SCIBERT [2], and E-BERT [23] that were further pre-trained on huge domain corpora (e.g.billions of news articles, clinical texts or PMC Full-text and abstracts) were referred as Domain-adaptive Pretrained (DAPT) BERT and models further pre-trained on task-specific data are referred as Task-adaptive Pre-trained (TAPT) BERT by Gururangan et al [9] such as MelBERT [4] (Methaphor Detection BERT). Although DAPT models usually achieve better performance (1-8% higher), TAPT models also demonstrated competitive and sometimes even higher performance (2% higher) according to Gururangan et al [9].…”
Section: Related Workmentioning
confidence: 99%
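Both DAPT and TAPT, as described by Gururangan et al. [9], simply continue the masked-language-modeling objective on additional text (domain corpora vs. the task's own data) before fine-tuning. As a hedged sketch of the core MLM corruption step those procedures share, the standard BERT-style 80/10/10 masking rule might look like this (the token ids, `MASK_ID`, and `VOCAB_SIZE` are hypothetical placeholders, not taken from any specific tokenizer):

```python
import random

MASK_ID = 103       # hypothetical [MASK] token id
VOCAB_SIZE = 30522  # hypothetical vocabulary size

def mask_for_mlm(token_ids, mask_prob=0.15, rng=None):
    """BERT-style MLM corruption: select ~15% of positions; of those,
    80% become [MASK], 10% a random token, 10% stay unchanged. Returns
    the corrupted sequence and per-position labels, where -100 marks
    positions the model is not asked to predict."""
    rng = rng or random.Random(0)
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels.append(tok)  # predict the original token here
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK_ID
            elif r < 0.9:
                inputs[i] = rng.randrange(VOCAB_SIZE)
            # else: keep the original token, but still predict it
        else:
            labels.append(-100)
    return inputs, labels
```

Running this corruption over domain text gives DAPT-style continued pre-training; running it over the task's unlabeled training sentences gives the TAPT setting that MelBERT is grouped under above.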