BioNLP 2017
DOI: 10.18653/v1/w17-2320

Unsupervised Domain Adaptation for Clinical Negation Detection

Abstract: Detecting negated concepts in clinical texts is an important part of NLP information extraction systems. However, generalizability of negation systems is lacking, as cross-domain experiments suffer dramatic performance losses. We examine the performance of multiple unsupervised domain adaptation algorithms on clinical negation detection, finding only modest gains that fall well short of in-domain performance.
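To make the setting concrete: unsupervised domain adaptation assumes labeled data in a source domain but only unlabeled data in the target domain. One widely used strategy in this family is self-training, which pseudo-labels target examples the current model is confident about and refits. The sketch below is purely illustrative — the nearest-centroid classifier, distance threshold, and synthetic data are assumptions for demonstration, not the features or algorithms evaluated in the paper:

```python
import numpy as np

def fit_centroids(X, y):
    """Compute one mean vector per class label."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Nearest-centroid prediction; also return distance to the winning centroid."""
    labels = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[dists.argmin(axis=0)], dists.min(axis=0)

def self_train(X_src, y_src, X_tgt, threshold=2.0, rounds=3):
    """Unsupervised adaptation by self-training: pseudo-label target-domain
    examples the current model is confident about, then refit on the union."""
    X, y = X_src, y_src
    for _ in range(rounds):
        centroids = fit_centroids(X, y)
        pseudo, dist = predict(centroids, X_tgt)
        keep = dist < threshold  # "confident" = close to a centroid (illustrative)
        X = np.vstack([X_src, X_tgt[keep]])
        y = np.concatenate([y_src, pseudo[keep]])
    return fit_centroids(X, y)
```

If the domain shift is small relative to class separation, the pseudo-labels are mostly correct and the centroids drift toward the target distribution; under larger shifts the pseudo-labels degrade, which is consistent with the modest gains the abstract reports.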

Cited by 10 publications (10 citation statements)
References 16 publications
“…For the in-domain but out-of-sample case, a domain fine-tuned rule-based system seems to transfer well (Sykes et al, 2020). For all other cases, transfer is challenging for both rule-based and machine-learning models (Wu et al, 2014; Miller et al, 2017; Sykes et al, 2020), with machine-learning models benefiting from the addition of in-domain data to the training set. Lin et al (2020) demonstrate that a pretrained BERT model can improve the results of domain transfer for negation detection, but the results are still lower for out-of-domain datasets than in-domain datasets if we compare to the results of earlier models in Miller et al (2017).…”
Section: Related Work
“…Despite the amount of progress on negation detection for clinical texts, however, there is still ample evidence that while fitting systems on a particular dataset is straightforward, generalising negation detection across datasets is challenging (Wu et al, 2014). This is true both for out-of-domain evaluation, such as training on a dataset of medical articles with evaluation on a dataset of clinical text (Wu et al, 2014; Miller et al, 2017), and for out-of-sample evaluation, where the training and test datasets are from the same domain but may differ in annotation style or distribution of named entities (Sykes et al, 2020). For the in-domain but out-of-sample case, a domain fine-tuned rule-based system seems to transfer well (Sykes et al, 2020).…”
Section: Related Work
“…The interpretable vignettes also revealed that classification of prostate cancer death was problematic when negation appeared in the text. Our bag-of-words feature representation would not be expected to handle negation, so the application of methods to detect negation in clinical text data (37,38) would likely boost performance. Off-the-shelf classifiers achieved good performance on the CAP dataset.…”
Section: Discussion
“…Our method is therefore capable of learning from the large historic EMR, even if these datasets were not annotated for this purpose. This is important because FP classifiers trained on one dataset do not perform as well as those trained on in-domain data, 5,21 with a similar finding on a veterinary disease classification task. 22 Cheng et al 5 showed that a classifier trained to detect negation cues and scope on out-of-domain data in the form of human clinical notes performed similarly to the rule-based NegEx 10 algorithm.…”
Section: Introduction
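The rule-based NegEx algorithm mentioned above marks concepts as negated when they fall within a short token window after a negation trigger phrase, with conjunctions terminating the scope. A minimal sketch of that idea — the trigger list, termination terms, and window size below are a tiny illustrative subset, not NegEx's actual lexicon or rules:

```python
# Tiny illustrative subset of NegEx-style pre-negation triggers;
# the real algorithm uses a much larger curated lexicon.
PRE_NEGATION_TRIGGERS = ["no", "denies", "without", "ruled out", "negative for"]
TERMINATION_TERMS = {"but", "however", "although"}
SCOPE_WINDOW = 5  # tokens after a trigger treated as in scope (illustrative)

def negated_indices(tokens):
    """Return the set of token indices falling inside a negation scope."""
    negated = set()
    i = 0
    while i < len(tokens):
        matched = 0
        # Longest-match against the trigger list at position i.
        for trigger in sorted(PRE_NEGATION_TRIGGERS, key=len, reverse=True):
            parts = trigger.split()
            if [w.lower() for w in tokens[i:i + len(parts)]] == parts:
                matched = len(parts)
                break
        if matched:
            for j in range(i + matched, min(i + matched + SCOPE_WINDOW, len(tokens))):
                if tokens[j].lower() in TERMINATION_TERMS:
                    break  # conjunctions end the negation scope
                negated.add(j)
            i += matched
        else:
            i += 1
    return negated

tokens = "Patient denies chest pain but reports fever".split()
print([tokens[j] for j in sorted(negated_indices(tokens))])  # ['chest', 'pain']
```

Because the scope is a fixed window over surface tokens, such rules transfer across institutions only insofar as the trigger vocabulary and note style match — one reason the cross-domain losses discussed in the citation statements above affect rule-based systems as well as learned ones.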