Proceedings of the Second Workshop on Figurative Language Processing 2020
DOI: 10.18653/v1/2020.figlang-1.31
Being neighbourly: Neural metaphor identification in discourse

Abstract: Existing approaches to metaphor processing typically rely on local features, such as immediate lexico-syntactic contexts or information within a given sentence. However, a large body of corpus-linguistic research suggests that situational information and broader discourse properties influence metaphor production and comprehension. In this paper, we present the first neural metaphor processing architecture that models a broader discourse through the use of attention mechanisms. Our models advance the state of t…

Cited by 14 publications (30 citation statements); references 20 publications.
“…Impressive results were presented in the 2018 Metaphor Detection Shared Task (Leong et al., 2018), with most of the groups using neural models with other linguistic elements such as POS tags, WordNet features, concreteness scores and more (Wu et al., 2018; Swarnkar and Singh, 2018; Pramanick et al., 2018; Bizzoni and Ghanimifard, 2018), as well as in the more recent 2020 Shared Task, with the majority of groups using some variation of BERT in addition to the other features (Gao and Zhang, 2002; Kuo and Carpuat, 2020; Torres Rivera et al., 2020; Kumar and Sharma, 2020; Hall Maudslay et al., 2020; Stemle and Onysko, 2020; Liu et al., 2020; Brooks and Youssef, 2020; Alnafesah et al., 2020; Wan et al., 2020; Dankers et al., 2020).…”
Section: Metaphor Detection (mentioning)
confidence: 99%
“…yet not directly comparable to ours, since they used different train-test separations and evaluation; see Dankers et al. (2020).…”
(mentioning)
confidence: 90%
“…Recently, Transformer-based pre-trained language models became the most popular architecture in the metaphor detection shared task. Multitask learning (Dankers et al., 2019; Rohanian et al., 2020) and discourse context (Dankers et al., 2020) have been exploited as well. Discussion: Grammatical relation-level and token-level metaphor detection consider different aspects of information.…”
Section: Related Work (mentioning)
confidence: 99%
“…Impressive results were presented in the 2018 Metaphor Detection Shared Task (Leong et al., 2018), with most of the groups using neural models with other linguistic elements such as POS tags, WordNet features, concreteness scores and more (Swarnkar and Singh, 2018; Pramanick et al., 2018; Bizzoni and Ghanimifard, 2018), as well as in the more recent 2020 Shared Task, with the majority of groups using some variation of BERT in addition to the other features (Gao and Zhang, 2002; Kuo and Carpuat, 2020; Torres Rivera et al., 2020; Kumar and Sharma, 2020; Hall Maudslay et al., 2020; Stemle and Onysko, 2020; Brooks and Youssef, 2020; Chen et al., 2020; Alnafesah et al., 2020; Wan et al., 2020; Dankers et al., 2020).…”
Section: Metaphor Detection (mentioning)
confidence: 99%