Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1378
End-to-End Sequential Metaphor Identification Inspired by Linguistic Theories

Abstract: End-to-end training with Deep Neural Networks (DNN) is a currently popular method for metaphor identification. However, standard sequence tagging models do not explicitly take advantage of linguistic theories of metaphor identification. We experiment with two DNN models which are inspired by two human metaphor identification procedures. By testing on three public datasets, we find that our models achieve state-of-the-art performance in end-to-end metaphor identification.

Cited by 84 publications (131 citation statements)
References 34 publications (53 reference statements)
“…1 Mao et al (2019) and Dankers et al (2019) recently presented improved approaches to modelling metaphors by relying on (psycho)linguistically motivated theories of human metaphor processing. Mao et al (2019) proposed two adaptations of the model of Gao et al (2018): Firstly, concatenating the hidden states of the Bi-LSTM to a context representation capturing surrounding words within the current sentence, to model selectional preferences. Secondly, including word embeddings both at the input and classification layer, to explicitly model the discrepancy between a word's literal and its contextualised meaning.…”
Section: Deep Learning for Metaphor Identification
confidence: 99%
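The second adaptation described above — comparing a word's context-independent embedding with its contextualised one — can be sketched informally. The following is a minimal NumPy illustration of this MIP-style signal only; the function names and the cosine-based comparison are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mip_signal(literal_emb: np.ndarray, contextual_emb: np.ndarray) -> float:
    """MIP-style metaphoricity signal: a large gap between a word's
    context-independent (literal) embedding and its contextualised
    representation suggests the contextual meaning departs from the
    basic one, which the procedure treats as evidence of metaphor."""
    return 1.0 - cosine(literal_emb, contextual_emb)

# Toy vectors: identical meanings give signal 0, orthogonal ones give 1.
v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 0.0])
print(round(mip_signal(v, v), 6))  # 0.0
print(round(mip_signal(v, w), 6))  # 1.0
```

In the actual model, the literal embedding would come from a static lookup (e.g. GloVe) and the contextualised one from the Bi-LSTM hidden state; the classifier then sees both rather than an explicit similarity score.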
“…Succeeding research has moved on to corpus-based techniques, such as the use of distributional and vector space models (Shutova, 2011), and more recently, deep learning methods (Rei et al, 2017). Current metaphor identification approaches cast the problem in the sequence labelling paradigm and apply convolutional (Wu et al, 2018), recurrent (Gao et al, 2018; Mao et al, 2019; Dankers et al, 2019) and transformer-based neural models (Dankers et al, 2019).…”
Section: Introduction
confidence: 99%
“…Several existing models have exploited this difference (e.g. Mao et al, 2019;Gao et al, 2018). Usually, the target domain is something intangible, whilst the source domain relates more closely to our real-world experience.…”
Section: Concreteness and Context
confidence: 99%
“…Document embeddings were employed in an attempt to exploit wider context to improve metaphor detection in addition to other word representations including GloVe, ELMo and skip-thought (Kiros et al, 2015). Mao et al (2018, 2019) explored the idea of selectional preferences violation (Wilks, 1978) in a neural architecture to identify metaphoric words. Mao's proposed approaches emphasised the importance of the context to identify metaphoricity by employing context-dependent and context-independent word embeddings.…”
Section: Related Work
confidence: 99%
“…Mao's proposed approaches emphasised the importance of the context to identify metaphoricity by employing context-dependent and context-independent word embeddings. Mao et al (2019) also proposed employing multi-head attention to compare the targeted word representation with its context. An interesting approach was introduced by Dankers et al (2019) to model the interplay between metaphor identification and emotion regression.…”
Section: Related Work
confidence: 99%
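The attention mechanism described in the statement above — comparing a target word's representation with its sentence context — can be sketched as single-head dot-product attention. This is a simplified NumPy illustration, not Mao et al.'s multi-head architecture; the function names and dimensions are assumptions:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attended_context(target: np.ndarray, context: np.ndarray) -> np.ndarray:
    """Dot-product attention with the target word as query: pools the
    sentence into a single context vector, which can then be compared
    against the target representation (a selectional-preference-style check)."""
    weights = softmax(context @ target)  # (n_words,) attention weights
    return weights @ context             # (dim,) pooled context vector

# Toy example: a sentence of 3 context words, embedding dimension 4.
rng = np.random.default_rng(0)
ctx = rng.normal(size=(3, 4))
tgt = rng.normal(size=4)
pooled = attended_context(tgt, ctx)
print(pooled.shape)  # (4,)
```

A multi-head version would run several such attention maps over learned projections of the inputs and concatenate the pooled vectors before classification.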