Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) 2016
DOI: 10.18653/v1/p16-2017

Semantic classifications for detection of verb metaphors

Abstract: We investigate the effectiveness of semantic generalizations/classifications for capturing the regularities of the behavior of verbs in terms of their metaphoricity. Starting from orthographic word unigrams, we experiment with various ways of defining semantic classes for verbs (grammatical, resource-based, distributional) and measure the effectiveness of these classes for classifying all verbs in a running text as metaphor or non-metaphor.
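The feature scheme the abstract describes — word unigrams backed off to lemmas and then to coarser semantic classes — can be sketched roughly as follows. The tiny lemma table, class labels, and feature names here are invented for illustration and are not the authors' implementation or resources.

```python
# Illustrative sketch (stdlib only): represent a verb token by its unigram,
# its lemma, and a coarse semantic class, as in resource-based generalization.
# The lemma table and class mapping below are toy assumptions for the example.

LEMMAS = {"devoured": "devour", "ran": "run", "flows": "flow"}
SEMANTIC_CLASS = {"devour": "INGEST", "run": "MOTION", "flow": "MOTION"}

def verb_features(token):
    """Map a verb token to unigram, lemma, and semantic-class features."""
    lemma = LEMMAS.get(token, token)
    return {
        "unigram=" + token: 1,
        "lemma=" + lemma: 1,
        "class=" + SEMANTIC_CLASS.get(lemma, "UNK"): 1,
    }

print(verb_features("devoured"))
```

Each level of back-off trades lexical specificity for coverage: an unseen verb still fires its lemma or class feature even when its surface form never occurred in training.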

Cited by 47 publications (43 citation statements). References 15 publications.
“…We follow the approach proposed by Klebanov et al. (2016) to use the lemmatizing strategy. The first module in our model is a lemmatizer.…”
Section: CNN-LSTM Model with CRF or Softmax Inference
confidence: 99%
“…Existing computational approaches to detect metaphors are mainly based on lexicons (Mohler et al., 2013; Dodge et al., 2015) and supervised methods (Turney et al., 2011; Heintz et al., 2013; Klebanov et al., 2014, 2015, 2016). Lexicon-based methods are free from data annotation, but they are unable to detect novel metaphorical usages and capture the contextual information.…”
Section: Introduction
confidence: 99%
“…The lemmatized form of the verb has improved generalization in other systems (Beigman Klebanov et al., 2016). We use the default parameters of the XGBoost package: a maximum tree depth of 3, 100 trees, and η = 0.1.…”
confidence: 99%
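The XGBoost configuration quoted in the citation above can be written out as a parameter dictionary; this is only a restatement of the cited settings (η corresponds to XGBoost's learning rate), not the citing authors' training code.

```python
# XGBoost settings as stated in the citation above.
xgb_params = {
    "max_depth": 3,        # maximum tree depth of 3
    "n_estimators": 100,   # 100 trees
    "learning_rate": 0.1,  # eta = 0.1
}
print(xgb_params)
```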
“…In their baseline paper, Beigman Klebanov et al. (2016) demonstrate the positive influence of features derived from the WordNet dictionary.…”
Section: Adding General Inquirer Data
confidence: 99%