Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and Their Applications 2017
DOI: 10.18653/v1/w17-1903
Improving Verb Metaphor Detection by Propagating Abstractness to Words, Phrases and Individual Senses

Abstract: Abstract words refer to things that cannot be seen, heard, felt, smelled, or tasted, as opposed to concrete words. Among other applications, the degree of abstractness has been shown to be useful information for metaphor detection. Our contributions to this topic are as follows: i) we compare supervised techniques to learn and extend abstractness ratings for huge vocabularies; ii) we learn and investigate norms for multi-word units by propagating abstractness to verb-noun pairs, which leads to better metaphor detection; iii) w…
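The abstract describes supervised techniques that extend human abstractness ratings to a huge vocabulary. A minimal sketch of one such technique is a regression from word embeddings to ratings, trained on a rated seed lexicon and applied to any word with an embedding. Everything below (the embeddings, the ratings, the ridge regularizer) is an invented stand-in for illustration, not the paper's actual data or model:

```python
import numpy as np

# Hypothetical sketch: propagate abstractness ratings to unseen words by
# fitting a ridge regression from word embeddings to seed ratings.
# All vectors and targets here are synthetic toy data.
rng = np.random.default_rng(0)
dim = 8
seed_vecs = rng.normal(size=(50, dim))          # toy embeddings of rated words
true_w = rng.normal(size=dim)
seed_ratings = 1 / (1 + np.exp(-seed_vecs @ true_w))  # synthetic ratings in [0, 1]

# closed-form ridge solution: w = (X^T X + lam * I)^-1 X^T y
lam = 0.1
X, y = seed_vecs, seed_ratings
w = np.linalg.solve(X.T @ X + lam * np.eye(dim), X.T @ y)

def predict_abstractness(vec):
    """Predicted abstractness for any embedded word, clipped to [0, 1]."""
    return float(np.clip(vec @ w, 0.0, 1.0))

score = predict_abstractness(rng.normal(size=dim))
```

In practice one would plug in real pre-trained embeddings and a human-annotated norm as the seed lexicon; the closed-form ridge step stays the same.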

Cited by 38 publications (49 citation statements)
References 28 publications (24 reference statements)
“…It assigns the metaphor label if the word is annotated metaphorically more frequently than literally in the training set, and the literal label otherwise. We also compare our model with (2) a neural similarity network with skip-gram word embeddings (Rei et al., 2017), (3) a balanced logistic regression classifier on the target verb lemma that uses a set of features based on multi-sense abstractness ratings (Köper and im Walde, 2017), and (4) a CNN-LSTM ensemble model with a weighted-softmax classifier which incorporates pre-trained word2vec, POS tags, and word cluster features (Wu et al., 2018). We experiment with both a sequence labeling model (SEQ) and a classification model (CLS) for the verb classification task, and the sequence labeling model (SEQ) for the sequence labeling task.…”
Section: Comparison Systems
confidence: 99%
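The lemma-majority baseline quoted above is simple enough to sketch directly: label a verb "metaphor" if its lemma was annotated metaphorically more often than literally in training, and "literal" otherwise. The training pairs below are invented toy data, not the actual annotated corpus:

```python
from collections import Counter, defaultdict

# Toy stand-in for annotated training data: (verb lemma, gold label).
train = [
    ("devour", "metaphor"), ("devour", "metaphor"), ("devour", "literal"),
    ("eat", "literal"), ("eat", "literal"),
]

# Count how often each lemma was labeled metaphorically vs. literally.
counts = defaultdict(Counter)
for lemma, label in train:
    counts[lemma][label] += 1

def predict(lemma):
    """Majority baseline: metaphor only if it strictly outnumbers literal."""
    c = counts[lemma]
    return "metaphor" if c["metaphor"] > c["literal"] else "literal"
```

Unseen lemmas fall through to the literal label, since both counts are zero and the strict comparison fails.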
“…While the verbal arguments provide strong cues, providing the full sentential context supports more accurate prediction, as seen in Table 1. Even in the few cases when the full sentence is used (Köper and im Walde, 2017; Turney et al., 2011; Jang et al., 2016), existing models have used unigram-based features with limited expressivity. We investigate two common task formulations: (1) given a target verb in a sentence, classifying whether it is metaphorical or not, and (2) … The experts started examining the Soviet Union with a microscope to study perceived changes.…”
Section: Introduction
confidence: 99%
“…Köper and im Walde (2017) try detecting all metaphoric verbs in the Amsterdam corpus using this single feature. Bizzoni et al. (2017) show how a network trained for metaphor detection on pairs of word embeddings can "side-learn" noun abstractness.…”
Section: Input Manipulation
confidence: 99%
“…Interestingly, a similar approach, a combination of fully connected networks and pre-trained word embeddings, has also been used as a preprocessing step for metaphor detection, in order to learn word and sense abstractness scores to be used as features in a metaphor identification pipeline (Köper and im Walde, 2017).…”
Section: Introduction
confidence: 99%
“…According to Köper and im Walde (2017), "abstract words refer to things that can not be seen, heard, felt, smelled, or tasted as opposed to concrete words." The abstractness of a word is measured by placing it on a scale ranging between abstract and concrete, known as abstractness ratings.…”
Section: Abstractness Ratings
confidence: 99%
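The scale described above can be made concrete with a tiny example: each word carries a rating, and min-max normalization places the vocabulary on a common [0, 1] axis where 0 is most abstract and 1 is most concrete. The words, the 1-7 range, and all rating values below are invented for illustration, not taken from any published norm:

```python
# Hypothetical ratings on a 1-7 abstract-to-concrete scale (invented values).
ratings = {"freedom": 1.2, "idea": 2.0, "table": 6.1, "apple": 6.8}

# Min-max normalize so 0.0 = most abstract word, 1.0 = most concrete word.
lo, hi = min(ratings.values()), max(ratings.values())
normalized = {w: (r - lo) / (hi - lo) for w, r in ratings.items()}
```

A metaphor-detection feature can then compare such scores across a verb and its arguments on one shared scale, regardless of the original rating range.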