Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.602
Don’t Invite BERT to Drink a Bottle: Modeling the Interpretation of Metonymies Using BERT and Distributional Representations

Abstract: In this work, we carry out two experiments in order to assess the ability of BERT to capture the meaning shift associated with metonymic expressions. We test the model on a new dataset that is representative of the most common types of metonymy. We compare BERT with the Structured Distributional Model (SDM), a model for the representation of words in context which is based on the notion of Generalized Event Knowledge. The results reveal that, while BERT's ability to deal with metonymy is quite limited, SDM is go…

Cited by 4 publications (3 citation statements) · References 12 publications
“…Various studies examine the Transformer-based language model BERT's (Devlin et al., 2019) ability to capture tropes like metonyms (Pedinotti and Lenci, 2020), idioms (Kurfalı and Östling, 2020), and multiple types of figurative language (Shwartz and Dagan, 2019). Kurfalı and Östling (2020) detect idioms based on the dissimilarity of BERT's representations of a PIE and its context, assuming that contextual discrepancies indicate figurative usage.…”
Section: Tropes in Transformer (mentioning)
confidence: 99%
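To make the dissimilarity approach cited above concrete, here is a minimal sketch (not Kurfalı and Östling's released code) that scores a potentially idiomatic expression (PIE) by the cosine dissimilarity between mean-pooled BERT representations of the expression and of its surrounding context. The sentence, the example expression, the `bert-base-uncased` checkpoint, and the pooling choice are all illustrative assumptions.

```python
# Sketch: score a potentially idiomatic expression (PIE) by the
# dissimilarity between BERT's representation of the expression and
# that of its surrounding context. Illustrative only -- not the
# actual implementation of Kurfalı and Östling (2020).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "The negotiations collapsed, so the board decided to pull the plug."
pie = "pull the plug"  # hypothetical example PIE

enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state.squeeze(0)  # (seq_len, hidden_dim)

# Locate the PIE's wordpiece positions by matching its token ids (naive).
pie_ids = tokenizer(pie, add_special_tokens=False)["input_ids"]
ids = enc["input_ids"][0].tolist()
start = next(i for i in range(len(ids)) if ids[i:i + len(pie_ids)] == pie_ids)
span = set(range(start, start + len(pie_ids)))

# Mean-pool the PIE tokens and the remaining context tokens ([CLS]/[SEP] excluded).
pie_vec = hidden[sorted(span)].mean(dim=0)
ctx_idx = [i for i in range(1, len(ids) - 1) if i not in span]
ctx_vec = hidden[ctx_idx].mean(dim=0)

# Under the cited assumption, higher dissimilarity suggests figurative usage.
dissimilarity = 1.0 - torch.cosine_similarity(pie_vec, ctx_vec, dim=0).item()
print(f"PIE-context dissimilarity: {dissimilarity:.3f}")
```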
“…Kurfalı and Östling (2020) detect idioms based on the dissimilarity of BERT's representations of a PIE and its context, assuming that contextual discrepancies indicate figurative usage. Pedinotti and Lenci (2020) measure whether BERT detects meaning shift for metonymic expressions but find cloze probabilities more indicative than vector similarities. Shwartz and Dagan (2019) find that BERT is better at detecting figurative meaning shift than at predicting implicit meaning, e.g.…”
Section: Tropes in Transformer (mentioning)
confidence: 99%
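The contrast this statement draws between cloze probabilities and vector similarities can be illustrated with a minimal cloze-probability probe in the spirit of Pedinotti and Lenci (2020) — not their actual evaluation code. The metonymic sentence (drinking "the bottle" for its content, echoing the paper's title), the candidate words, and the `bert-base-uncased` checkpoint are assumptions for the sketch.

```python
# Sketch: cloze probability of candidate words in a masked position,
# the kind of signal Pedinotti and Lenci (2020) found more indicative
# of metonymic meaning shift than vector similarities.
# Illustrative only; sentence and candidates are invented examples.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# "Drink the bottle" is a logical metonymy: the intended object is the content.
sentence = f"The man drank the {tokenizer.mask_token} ."
candidates = ["bottle", "beer", "glass"]  # hypothetical probe words

enc = tokenizer(sentence, return_tensors="pt")
mask_pos = (enc["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**enc).logits[0, mask_pos]
probs = torch.softmax(logits, dim=-1)

# Compare how strongly the model expects the literal vs. shifted fillers.
for word in candidates:
    wid = tokenizer.convert_tokens_to_ids(word)
    print(f"P({word} | context) = {probs[wid].item():.4f}")
```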
“…2020; Pedinotti and Lenci, 2020). As a consequence, they usually fall far behind the performance of the same PLM fine-tuned with (i) sense annotations (Hadiwinoto et al., 2019; Blevins and Zettlemoyer, 2020) or (ii) external (e.g., WordNet) knowledge (Levine et al., 2020).…”
Section: Introduction (mentioning)
confidence: 99%