Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers (2017)
DOI: 10.18653/v1/E17-2010

Detecting negation scope is easy, except when it isn't

Abstract: Several corpora have been annotated with negation scope (the set of words whose meaning is negated by a cue like the word "not"), leading to the development of classifiers that detect negation scope with high accuracy. We show that for nearly all of these corpora, this high accuracy can be attributed to a single fact: they frequently annotate negation scope as a single span of text delimited by punctuation. For negation scopes not of this form, detection accuracy is low and undersampling the easy training example…
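To make the abstract's key observation concrete, here is a minimal Python sketch, assuming a toy tokenized input: a baseline that predicts a cue's scope as the maximal punctuation-free span around the cue, an "easy example" test built on it, and an undersampling routine of the kind the abstract alludes to. Everything here (function names, the punctuation set, the keep rate) is illustrative, not the paper's implementation.

```python
import random

# A minimal sketch of the "single span delimited by punctuation" pattern the
# abstract describes. This is an illustration, NOT the authors' classifier;
# the punctuation set, function names, and keep rate are all assumptions.

PUNCT = {",", ";", ":", ".", "!", "?", "(", ")", '"', "'"}

def punctuation_scope(tokens, cue_index):
    """Predict the scope of the cue at tokens[cue_index] as the maximal
    punctuation-free span of tokens containing the cue."""
    start = cue_index
    while start > 0 and tokens[start - 1] not in PUNCT:
        start -= 1
    end = cue_index
    while end + 1 < len(tokens) and tokens[end + 1] not in PUNCT:
        end += 1
    return list(range(start, end + 1))

def is_easy(tokens, cue_index, gold_scope):
    """'Easy' in the abstract's sense: the gold scope is exactly the
    punctuation-delimited span, so the heuristic already recovers it."""
    return punctuation_scope(tokens, cue_index) == sorted(gold_scope)

def undersample(examples, keep_easy=0.2, seed=0):
    """Keep every hard example but only a fraction of the easy ones.
    The 0.2 keep rate is a placeholder, not a value from the paper."""
    rng = random.Random(seed)
    return [(toks, cue, scope) for toks, cue, scope in examples
            if not is_easy(toks, cue, scope) or rng.random() < keep_easy]

tokens = ["He", "did", "not", "leave", "the", "house", ",", "however", "."]
print(punctuation_scope(tokens, cue_index=2))  # [0, 1, 2, 3, 4, 5]
```

On corpora where most gold scopes match this punctuation-delimited pattern, even a heuristic like this scores highly, which is the paper's point about where the reported accuracy comes from.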

Citation types: 3 supporting, 41 mentioning, 1 contrasting
Cited by 28 publications (45 citation statements). References 6 publications.
“…Modeling syntax in addition to surface word order is helpful, as shown by an ensemble of BiLSTM and D-LSTM models outperforming either model alone. Our results also show that cross-lingual word embeddings are not really necessary, suggesting that the model mainly relies on PoS, syntax, and punctuation boundaries, with the latter result reinforcing previous findings (Fancellu et al., 2017)…”
Section: Introduction (supporting)
confidence: 88%
“…Modeling syntax is useful, though not on its own. The ensembles that incorporate syntax outperform other models on both F1 and PCS in both the monolingual and cross-lingual settings, showing that syntax is indeed beneficial; note that they outperform the state-of-the-art BiLSTM of Fancellu et al. (2016, 2017). The D-LSTM outperforms the GCN in the monolingual settings, but the latter performs better in terms of full scope spans detected when training in English and testing in Chinese.…”
Section: Results (mentioning)
confidence: 93%
“…• Following Fancellu et al. (2017), we provide a thorough comparison of our proposed model with other state-of-the-art models and analyze their behaviour in the absence of potential "linear clues", the presence of which might result in highly accurate predictions even for syntax-unaware token representations.…”
Section: Introduction (mentioning)
confidence: 99%