Proceedings of the Workshop on Figurative Language Processing 2018
DOI: 10.18653/v1/w18-0908
An LSTM-CRF Based Approach to Token-Level Metaphor Detection

Abstract: Automatic processing of figurative language is gaining popularity in the NLP community owing to its ubiquitous nature and increasing volume. In the era of Web 2.0, automatic analysis of metaphors is important because of their extensive usage. Metaphors are a part of figurative language that compares different concepts, often on a cognitive level. Many approaches have been proposed for the automatic detection of metaphors, including sequential models and neural networks. In this paper, we propose a method for detection of meta…

Citations: cited by 15 publications (8 citation statements)
References: 20 publications (29 reference statements)
“…Many neural models with various features and architectures were introduced in the 2018 VUA Metaphor Detection Shared Task. They include LSTM-based models and CRFs augmented by linguistic features, such as WordNet, POS tags, concreteness scores, unigrams, lemmas, verb clusters, and sentence-length manipulation (Swarnkar and Singh, 2018; Pramanick et al., 2018; Mosolova et al., 2018; Bizzoni and Ghanimifard, 2018; Wu et al., 2018). Researchers also studied different word embeddings, such as embeddings trained from corpora representing different levels of language mastery (Stemle and Onysko, 2018) and binarized vectors that reflect the General Inquirer dictionary category of a word (Mykowiecka et al., 2018).…”
Section: Related Work (mentioning; confidence: 99%)
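As a concrete illustration of the feature-augmented inputs these systems describe, the sketch below concatenates a pre-trained word embedding with a one-hot POS tag and a scalar concreteness score to form a token representation for a sequence labeller. The tag set, dimensions, and values are illustrative assumptions, not taken from any particular shared-task system.

```python
# Sketch: augmenting a word embedding with linguistic features
# (one-hot POS tag + concreteness score). All names and values
# here are hypothetical placeholders.
import numpy as np

POS_TAGS = ["NOUN", "VERB", "ADJ", "ADV", "OTHER"]

def pos_one_hot(tag: str) -> np.ndarray:
    vec = np.zeros(len(POS_TAGS), dtype=np.float32)
    vec[POS_TAGS.index(tag if tag in POS_TAGS else "OTHER")] = 1.0
    return vec

def token_features(embedding: np.ndarray, pos: str, concreteness: float) -> np.ndarray:
    """Concatenate word embedding, POS one-hot, and concreteness score."""
    return np.concatenate([embedding, pos_one_hot(pos), [concreteness]])

# Toy example: a 4-dimensional "embedding" for one token
emb = np.array([0.1, -0.3, 0.7, 0.2], dtype=np.float32)
feats = token_features(emb, "NOUN", concreteness=3.95)
print(feats.shape)  # (4 + 5 + 1,) -> (10,)
```

The augmented vectors would then be fed, one per token, into the LSTM-based labellers the quote describes.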
“…Baseline 2 is "all-16" in Beigman Klebanov et al. (2018). MAP (Pramanick et al., 2018) used a hybrid architecture of Bi-directional LSTM and Conditional Random Fields (CRF) for metaphor detection, relying on features such as token, lemma and POS, and using word2vec embeddings trained on English Wikipedia. Specifically, the authors considered contextual information within a sentence for generating predictions.…”
Footnote 6: https://github.com/bot-zen/naacl_flp_st
(mentioning; confidence: 99%)
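The quoted description maps onto a standard BiLSTM-CRF token labeller. Below is a minimal sketch of that kind of architecture in PyTorch; it assumes the third-party pytorch-crf package for the CRF layer, and the dimensions, vocabulary, and toy data are placeholders rather than the authors' actual settings.

```python
# Minimal BiLSTM-CRF sketch for token-level metaphor labelling.
# Assumes `pip install pytorch-crf`; hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchcrf import CRF

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 100,
                 hidden_dim: int = 128, num_tags: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # could be initialised from word2vec
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(hidden_dim, num_tags)       # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)      # negative log-likelihood

    def predict(self, tokens, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions, mask=mask)      # best tag sequence per sentence

# Toy usage: one sentence of 5 tokens, binary literal(0)/metaphor(1) tags
model = BiLSTMCRF(vocab_size=1000)
tokens = torch.randint(0, 1000, (1, 5))
tags = torch.tensor([[0, 0, 1, 0, 0]])
mask = torch.ones(1, 5, dtype=torch.bool)
print(model.loss(tokens, tags, mask))   # training objective
print(model.predict(tokens, mask))      # e.g. [[0, 0, 1, 0, 0]] after training
```

The CRF layer on top of the BiLSTM is what lets the model score whole label sequences rather than each token independently, which is the contextual behaviour the quote attributes to MAP.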
“…Most neural models treat metaphor identification as a sequence labelling task, outputting a sequence of metaphoricity labels for a sequence of input words (usually a sentence) (Bizzoni and Ghanimifard, 2018; Dankers et al., 2019; Gao et al., 2018; Gong et al., 2020; Mao et al., 2019; Mykowiecka et al., 2018; Pramanick et al., 2018; Su et al., 2020; Wu et al., 2018). The first sequence labelling systems typically represented an input sentence as a sequence of pre-trained word embeddings and produced a task- and context-specific sentence representation through a bidirectional long short-term memory network (BiLSTM) (Dankers et al., 2019; Gao et al., 2018; Mykowiecka et al., 2018; Pramanick et al., 2018). Bizzoni and Ghanimifard (2018) experimented with separating long sentences into smaller chunks, which led to a 6% increase in F-score when using a BiLSTM architecture.…”
Section: Neural Architectures (mentioning; confidence: 99%)
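The chunking manipulation mentioned at the end of the quote can be sketched in a few lines: split each long sentence into fixed-size windows, label each window independently, and concatenate the predictions. The chunk size and the dummy labeller below are illustrative assumptions; the cited work's exact scheme may differ.

```python
# Sketch: labelling a long sentence chunk by chunk so the sequence
# labeller never sees more than `max_len` tokens at once.
from typing import Callable, List

def label_with_chunking(tokens: List[str],
                        label_fn: Callable[[List[str]], List[int]],
                        max_len: int = 10) -> List[int]:
    labels: List[int] = []
    for start in range(0, len(tokens), max_len):
        labels.extend(label_fn(tokens[start:start + max_len]))
    return labels

# Toy usage with a dummy labeller that marks every token literal (0)
sentence = "the economy is a fragile machine that devours optimism".split()
print(label_with_chunking(sentence, lambda chunk: [0] * len(chunk), max_len=4))
```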