2020
DOI: 10.1609/aaai.v34i05.6326

Multi-Task Learning for Metaphor Detection with Graph Convolutional Neural Networks and Word Sense Disambiguation

Abstract: The current deep learning works on metaphor detection have only considered this task independently, ignoring the useful knowledge from the related tasks and knowledge resources. In this work, we introduce two novel mechanisms to improve the performance of the deep learning models for metaphor detection. The first mechanism employs graph convolutional neural networks (GCN) with dependency parse trees to directly connect the words of interest with their important context words for metaphor detection. The GCN net…
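
For intuition about the first mechanism described in the (truncated) abstract, a minimal sketch of a graph-convolution layer that propagates information along dependency-parse edges is shown below. The layer sizes, normalization, and the way the adjacency matrix is built are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn

class DependencyGCNLayer(nn.Module):
    """One GCN layer over a dependency parse (illustrative sketch only)."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # h:   (batch, seq_len, dim)      contextual word representations
        # adj: (batch, seq_len, seq_len)  dependency adjacency matrix
        #      (1 where two words share a dependency edge, plus self-loops)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)  # node degrees
        msg = torch.bmm(adj, h) / deg                        # average neighbor states
        return torch.relu(self.linear(msg))

# Usage sketch: connect each word of interest with its syntactic neighbors
# before a metaphor classifier sees its representation.
batch, seq_len, dim = 2, 10, 128
h = torch.randn(batch, seq_len, dim)
adj = torch.eye(seq_len).repeat(batch, 1, 1)  # self-loops only; real edges come from a parser
out = DependencyGCNLayer(dim)(h, adj)
print(out.shape)  # torch.Size([2, 10, 128])
```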

Cited by 29 publications (29 citation statements)
References 24 publications
“…To maintain consistency with the results by Gao et al (2018) and Le et al (2020), we present our results both on their prechosen sets, as well as on randomly chosen splits (rand-CV).…”
Section: MOH-X (mentioning)
confidence: 93%
“…We noticed that in general, higher F1-scores are gained for splits where the training set and evaluation set contain instances of the same verbs. Previously reported results:

Model             P     R     F1
Gao et al (2018)  79.1  73.5  75.6
Le et al (2020)   79.7  80.5  79.6
Mao et al (2019)  77.…”
Section: MOH-X (mentioning)
confidence: 99%
“…Gao et al (2018) and Mao et al (2019) respectively utilized GloVe (Pennington et al, 2014) and ELMo (Peters et al, 2018) embeddings. Le et al (2020) proposed to construct a graph CNN guided by dependency trees of sentences for metaphor detection and to construct a multi-task learning framework for the WSD task, utilizing the knowledge from WSD to improve metaphor detection. Instead of treating metaphor detection as a sequence tagging task, Su et al (2020) proposed a novel reading comprehension paradigm based on a pre-trained language model, using features from POS tags and local texts.…”
Section: Metaphor Detection (mentioning)
confidence: 99%
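
The multi-task setup described in the excerpt above (a shared encoder feeding separate heads for metaphor detection and word sense disambiguation) could be sketched roughly as follows. The encoder choice, head sizes, and loss weighting are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    """Rough multi-task sketch: one shared sentence encoder, two task heads.
    Dimensions and hyperparameters are illustrative assumptions."""
    def __init__(self, dim=128, num_senses=500):
        super().__init__()
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.metaphor_head = nn.Linear(2 * dim, 2)       # literal vs. metaphorical
        self.wsd_head = nn.Linear(2 * dim, num_senses)   # sense inventory size

    def forward(self, embeddings):
        states, _ = self.encoder(embeddings)              # (batch, seq, 2*dim)
        return self.metaphor_head(states), self.wsd_head(states)

def joint_loss(met_logits, met_labels, wsd_logits, wsd_labels, alpha=0.5):
    # Weighted sum of the two task losses; alpha is an assumed hyperparameter,
    # letting WSD supervision act as auxiliary knowledge for metaphor detection.
    ce = nn.CrossEntropyLoss()
    return ce(met_logits.flatten(0, 1), met_labels.flatten()) + \
           alpha * ce(wsd_logits.flatten(0, 1), wsd_labels.flatten())
```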
“…The best results posted for the 2020 shared task are the new state-of-the-art for both VUA and TOEFL corpora. While a number of recent systems were evaluated on VUA data (Le et al, 2020; Dankers et al, 2019; Mao et al, 2019; Gao et al, 2018), their results are not directly comparable to the shared task, since they evaluated on all parts of speech, including function words. See Dankers et al (2020) for a discussion.…”
Section: Performance w.r.t. 2018 Shared Task (mentioning)
confidence: 99%