Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2016
DOI: 10.18653/v1/p16-1047
Neural Networks For Negation Scope Detection

Abstract: Automatic negation scope detection is a task that has been tackled using different classifiers and heuristics. Most systems, however, are 1) highly engineered, 2) English-specific, and 3) only tested on the same genre they were trained on. We start by addressing 1) and 2) using a neural network architecture. Results obtained on data from the *SEM2012 shared task on negation scope detection show that even a simple feed-forward neural network using word-embedding features alone performs on par with earlier classi…
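The abstract's core claim, that word-embedding features fed to a plain feed-forward network can drive token-level scope classification, can be sketched as follows. The vocabulary, dimensions, window size, and random weights are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny setup: 5 word types, 4-dimensional embeddings.
vocab = {"<pad>": 0, "i": 1, "do": 2, "not": 3, "agree": 4}
emb = rng.normal(size=(len(vocab), 4))   # embedding table
W1 = rng.normal(size=(3 * 4, 8))         # hidden layer over a 3-word window
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))             # binary output: in-scope vs. out-of-scope
b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def in_scope_prob(window_ids):
    """P(token is inside the negation scope) for a 3-word window
    centred on the token, using embedding features alone."""
    x = emb[window_ids].reshape(-1)      # concatenate the window's embeddings
    h = np.tanh(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)[0]

# Score each token of "i do not agree", padding at the sentence edges.
ids = [vocab["<pad>"]] + [vocab[w] for w in "i do not agree".split()] + [vocab["<pad>"]]
probs = [in_scope_prob(ids[i - 1:i + 2]) for i in range(1, len(ids) - 1)]
```

In a trained system the weights would of course be learned from the *SEM annotations; the sketch only shows the shape of the architecture.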

Cited by 73 publications (101 citation statements); references 18 publications.
“…The system of Packard et al (2014) also relies on cue and scope predictions from the so-called UiO1 system, however, and these predictions are only provided in the form of pre-computed system output for the *SEM shared task data; the underlying UiO1 system is not itself available. In the system comparison reported by Fancellu et al (2016), the results of the *SEM shared task competition systems are based on predicted cues, while the results of Packard et al (2014) and Fancellu et al (2016) are for gold cues, making them not comparable.…”
Section: End-to-end Results (mentioning, confidence: 99%)
“…At the same time, we see that the combined system of Packard et al (2014) achieves the highest absolute scores, and we return to this point when discussing end-to-end results below. Finally, note that Fancellu et al (2016) report scope results on the *SEM evaluation data (gold cues only) for a suite of different classifiers based on a bi-directional LSTM, with the best configuration obtaining a scope-level F-score of 77.77. In sum, we observe two things: (i) our scope classifier achieves competitive performance, and (ii) despite the large differences in terms of types of approaches and architectures for the various scope systems considered here, there are not large differences in terms of performance.…”
Section: Scope Resolution (mentioning, confidence: 99%)
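For context on the scope-level F-score cited above, a strict exact-match variant of the metric can be sketched as follows. This is a simplified illustration: the official *SEM scorer applies additional rules (e.g. around cues and partial matches), so the function below is an assumption about the strictest reading, not the shared-task implementation:

```python
def scope_f1(gold, pred):
    """Strict scope-level P/R/F1: a predicted scope counts as correct
    only if its set of token indices exactly matches a gold scope."""
    gold_set = {frozenset(s) for s in gold}
    pred_set = {frozenset(s) for s in pred}
    tp = len(gold_set & pred_set)
    p = tp / len(pred_set) if pred_set else 0.0
    r = tp / len(gold_set) if gold_set else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Two gold scopes; the system recovers one exactly and one only partially.
gold = [{2, 3, 4}, {7, 8}]
pred = [{2, 3, 4}, {7}]
p, r, f = scope_f1(gold, pred)   # the partial scope counts as a miss
```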
“…In contrast, the recent success of neural network models for negation scope detection (Fancellu et al, 2016) suggested investigating whether a character-based recurrent model can perform on par with or better than this previous work. After describing our model in Section 2, we show in Section 3.3 that a character-level representation with no feature engineering is able to achieve similar recall to models that use word-alignment information, as well as other features, to tackle the problem of data sparsity.…”
Section: Introduction (mentioning, confidence: 94%)
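A character-level input representation of the kind described above can be sketched minimally. The fixed word length, the padding scheme, and building the character vocabulary from the sentence itself are illustrative assumptions; a real system would fix the vocabulary over the training data and reserve an unknown-character id:

```python
def char_encode(sentence, max_len=8):
    """Map each word to a fixed-length sequence of character ids,
    truncating long words and padding short ones with 0."""
    chars = sorted(set("".join(sentence)))
    char2id = {c: i + 1 for i, c in enumerate(chars)}   # 0 is reserved for padding
    return [
        [char2id[c] for c in word[:max_len]] + [0] * max(0, max_len - len(word))
        for word in sentence
    ]

# Character-level features need no language-specific engineering,
# which is the appeal for non-English data.
encoded = char_encode(["no", "entiendo", "nada"])
```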
“…Recently, neural network models have been proposed to overcome some of the limitations of rule-based techniques. Feed-forward and bidirectional Long Short-Term Memory (BiLSTM) networks for generic negation scope detection were proposed in [16]. In [17], gated recurrent units (GRUs) are used to represent the clinical relations and their context, along with an attention mechanism.…”
Section: Introduction (mentioning, confidence: 99%)
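The GRU mentioned in the last statement can be illustrated with a single NumPy cell implementing the standard update-gate/reset-gate equations. The sizes, random weights, and inputs are arbitrary; this is a sketch of the generic unit, not of the cited paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h = 4, 6   # hypothetical input and hidden sizes

# Parameters for one GRU cell: update gate z, reset gate r, candidate state.
Wz, Uz, bz = rng.normal(size=(d_in, d_h)), rng.normal(size=(d_h, d_h)), np.zeros(d_h)
Wr, Ur, br = rng.normal(size=(d_in, d_h)), rng.normal(size=(d_h, d_h)), np.zeros(d_h)
Wh, Uh, bh = rng.normal(size=(d_in, d_h)), rng.normal(size=(d_h, d_h)), np.zeros(d_h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h):
    z = sigmoid(x @ Wz + h @ Uz + bz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)              # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
    return (1 - z) * h + z * h_cand                # interpolate old and candidate

# Run the cell over a short sequence of random "token" vectors.
h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):
    h = gru_step(x, h)
```

Because each step interpolates between the previous state and a tanh-bounded candidate, the hidden state stays in [-1, 1], which is one reason gated units train stably on long sequences.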