Proceedings of the Workshop on Figurative Language Processing 2018
DOI: 10.18653/v1/w18-0911

Bigrams and BiLSTMs: Two Neural Networks for Sequential Metaphor Detection

Abstract: We present and compare two alternative deep neural architectures for word-level metaphor detection on text: a bi-LSTM model and a new structure based on recursive feedforward concatenation of the input. We discuss different versions of these models and the effect that input manipulations (specifically, reducing the length of sentences and introducing concreteness scores for words) have on their performance.
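To make the abstract's second architecture concrete, here is a minimal PyTorch sketch of one plausible reading of the bigram model: each token's embedding is concatenated with its left neighbour's and passed through feedforward layers to produce a per-token metaphor probability. The exact recursive concatenation scheme, the class name, and all layer sizes are illustrative assumptions, not the paper's specification.

import torch
import torch.nn as nn

class BigramFeedforwardTagger(nn.Module):
    """Per-token classifier over concatenated bigram embeddings.
    A sketch under assumed dimensions, not the paper's exact model."""
    def __init__(self, emb_dim=300, hidden_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, embs):                      # embs: (batch, seq_len, emb_dim)
        # Pair each token with its left neighbour; zeros pad the first token.
        left = torch.cat([torch.zeros_like(embs[:, :1]), embs[:, :-1]], dim=1)
        bigrams = torch.cat([left, embs], dim=-1)  # (batch, seq_len, 2*emb_dim)
        return self.mlp(bigrams).squeeze(-1)       # per-token probability

The bi-LSTM counterpart replaces the bigram concatenation with a recurrent encoder; a sketch of that variant appears after the system-description excerpt below.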

Cited by 27 publications (27 citation statements)
References 24 publications (32 reference statements)
“…Many neural models with various features and architectures were introduced in the 2018 VUA Metaphor Detection Shared Task. They include LSTM-based models and CRFs augmented by linguistic features, such as WordNet, POS tags, concreteness score, unigrams, lemmas, verb clusters, and sentence-length manipulation (Swarnkar and Singh, 2018; Pramanick et al., 2018; Mosolova et al., 2018; Bizzoni and Ghanimifard, 2018; Wu et al., 2018). Researchers also studied different word embeddings, such as embeddings trained from corpora representing different levels of language mastery (Stemle and Onysko, 2018) and binarized vectors that reflect the General Inquirer dictionary category of a word (Mykowiecka et al., 2018).…”
Section: Related Work
confidence: 99%
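The concreteness scores this excerpt mentions are typically injected by appending a per-word rating to each token's embedding before the network sees it. Here is a minimal sketch of that idea; the dictionary, the 1-5 rating scale (as in the Brysbaert et al. norms), and the fallback value are assumptions, not the cited systems' exact setup.

import torch

def append_concreteness(embeddings, tokens, concreteness, default=2.5):
    # embeddings: (seq_len, emb_dim) tensor of word vectors.
    # tokens: list of seq_len surface forms.
    # concreteness: dict of word -> rating, e.g. on a 1-5 scale; `default`
    # is an assumed fallback for words missing from the norms.
    scores = torch.tensor([[concreteness.get(t.lower(), default)]
                           for t in tokens])
    return torch.cat([embeddings, scores], dim=-1)  # (seq_len, emb_dim + 1)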
“…In terms of computational metaphor identification, feature engineering has been widely discussed (Leong et al., 2018). Unigrams, imageability, concreteness, abstractness, word embeddings and semantic classes are features commonly employed by supervised machine learning (Turney et al., 2011; Assaf et al., 2013; Tsvetkov et al., 2014; Klebanov et al., 2016), deep learning (Rei et al., 2017; Gutierrez et al., 2017; Bizzoni and Ghanimifard, 2018) and unsupervised learning (Mao et al., 2018) approaches.…”
Section: Related Work
confidence: 99%
“…OCOTA (Bizzoni and Ghanimifard, 2018) experimented with a deep neural network composed of a Bi-LSTM preceded and followed by fully connected layers, as well as a simpler model that has a sequence of fully connected neural networks. The authors also experimented with word embeddings trained on various data, with explicit features based on concreteness, and with preprocessing that addresses variability in sentence length.…”
Section: System Descriptions
confidence: 99%
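A minimal sketch of the sandwich architecture this excerpt describes: fully connected layers before and after a Bi-LSTM, with a per-token sigmoid output. All layer sizes here are assumptions for illustration; the paper's reported dimensions may differ.

import torch.nn as nn

class DenseBiLSTMDense(nn.Module):
    """Fully connected projection -> Bi-LSTM -> fully connected classifier,
    applied per token. Layer sizes are assumed, not taken from the paper."""
    def __init__(self, emb_dim=300, proj_dim=128, hidden_dim=64):
        super().__init__()
        self.pre = nn.Sequential(nn.Linear(emb_dim, proj_dim), nn.ReLU())
        self.bilstm = nn.LSTM(proj_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.post = nn.Sequential(nn.Linear(2 * hidden_dim, 32), nn.ReLU(),
                                  nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):                # x: (batch, seq_len, emb_dim)
        h, _ = self.bilstm(self.pre(x))  # h: (batch, seq_len, 2*hidden_dim)
        return self.post(h).squeeze(-1)  # per-token metaphor probability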