Proceedings of the Second Workshop on Figurative Language Processing 2020
DOI: 10.18653/v1/2020.figlang-1.3
A Report on the 2020 VUA and TOEFL Metaphor Detection Shared Task

Abstract: In this paper, we report on the shared task on metaphor identification on the VU Amsterdam Metaphor Corpus and on a subset of the TOEFL Native Language Identification Corpus. The shared task was conducted as a part of the ACL 2020 Workshop on Processing Figurative Language.

Cited by 34 publications (27 citation statements). References 50 publications.
“…An effective approach to transfer learning used frequently in recent years is the pre-training of language models, such as BERT [4], on large amounts of unsupervised data. The knowledge from this pre-training phase is then transferred to the subsequent fine-tuning phase on task- and domain-specific data, which has shown significant improvements on several benchmark datasets, e.g., [12] and [1]. Subsequently, several language- and domain-specific adaptations of language models have been developed for non-English data or to further improve the performance of the original models on specific tasks.…”
Section: Related Research
Citation type: mentioning
Confidence: 99%
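The two-phase recipe described in the excerpt above can be illustrated with a deliberately tiny, dependency-free sketch: a 1-D logistic "model" stands in for a large language model, weights learned on plentiful generic data serve as the initialization for a short second training phase on a small task-specific set. All data and names here are invented for illustration; this is not the setup of any system in the shared task.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(w, b, data, lr=0.5, epochs=200):
    """One training phase: SGD on logistic loss over (x, y) pairs."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            grad = p - y          # dLoss/dz for the logistic loss
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def accuracy(w, b, data):
    return sum((sigmoid(w * x + b) > 0.5) == bool(y) for x, y in data) / len(data)

# Phase 1: "pre-training" on plentiful generic data (boundary at x = 0).
pretrain_data = [(x / 10, int(x > 0)) for x in range(-50, 50)]
w, b = train(0.0, 0.0, pretrain_data)

# Phase 2: "fine-tuning" on a small task-specific set with a shifted
# decision boundary (x > 2.0); the pre-trained weights are the start point.
finetune_data = [(x / 10, int(x > 20)) for x in range(-50, 50, 10)]
w_ft, b_ft = train(w, b, finetune_data, epochs=50)

print(accuracy(w_ft, b_ft, finetune_data))
```

The point of the sketch is structural: the fine-tuning phase does not start from scratch but continues optimization from parameters shaped by the larger pre-training corpus, which is what makes the small task-specific dataset sufficient.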
“…First, it relies heavily on a number of models (metaphor classification, FrameNet frame tagging, and COMET symbol extraction), each of which introduces error. Specifically, metaphor detection remains a difficult task (with state-of-the-art results < .77 F1 (Leong et al, 2020)), making the identification of initial metaphors difficult. FrameNet tagging is also error prone, with the micro-F1 score for frame tagging being 70.9.…”
Section: Source/Target Pairs From FrameNet Tagging
Citation type: mentioning
Confidence: 99%
“…Such pervasiveness of metaphor in language and thought -- as well as the ambiguity it creates -- makes metaphor a challenge for various NLP applications, such as machine translation, information retrieval and extraction, question answering, opinion mining, etc. The NLP community's interest in computational metaphor research has expressed itself in the series of dedicated workshops in 2013-2016 [10][11][12][13] and the two metaphor detection shared tasks in 2018 and 2020 [14,15].…”
Section: The Task of Computational Metaphor Identification
Citation type: mentioning
Confidence: 99%
“…Besides, topic models, along with other types of features, were suggested for use by the participants of the First and Second shared tasks on metaphor detection [14,15].…”
Section: Topic Modelling in Metaphor Identification: Previous Work
Citation type: mentioning
Confidence: 99%