The widespread adoption of electronic health records (EHRs) has enabled a broad range of applications leveraging EHR data. However, the meaningful use of EHR data largely depends on our ability to efficiently extract and consolidate the information embedded in clinical text, for which natural language processing (NLP) techniques are essential. Semantic textual similarity (STS), which measures the semantic similarity between text snippets, plays a significant role in many NLP applications. In the general NLP domain, STS shared tasks have made available a large collection of text snippet pairs with manual annotations in various domains. In the clinical domain, STS can enable us to detect and eliminate redundant information, which may reduce cognitive burden and improve the clinical decision-making process. This paper describes our efforts to assemble a resource for STS in the medical domain, MedSTS. It consists of a total of 174,629 sentence pairs gathered from a clinical corpus at Mayo Clinic. A subset of MedSTS (MedSTS_ann) containing 1,068 sentence pairs was annotated by two medical experts with semantic similarity scores of 0-5 (low to high similarity). We further analyzed the medical concepts in the MedSTS corpus and tested four STS systems on the MedSTS_ann corpus. In the future, we will organize a shared task by releasing the MedSTS_ann corpus to motivate the community to tackle real-world clinical problems.
Keywords: Electronic health records, semantic textual similarity, natural language processing, clinical semantic textual similarity resource
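To make the 0-5 STS scale concrete, the sketch below shows a deliberately minimal similarity baseline: a bag-of-words cosine similarity mapped onto the annotation scale. This is an illustrative assumption, not the method used to score MedSTS_ann, which relied on expert judgment.

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Token-overlap cosine similarity between two sentences, in [0, 1]."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def to_sts_scale(sim: float) -> float:
    """Map a [0, 1] similarity onto the 0-5 STS annotation scale."""
    return round(5.0 * sim, 2)
```

Identical sentences score 5.0 and fully disjoint ones score 0.0; real STS systems replace the bag-of-words vectors with learned sentence representations.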
Data is foundational to high-quality artificial intelligence (AI). Given that a substantial amount of clinically relevant information is embedded in unstructured data, natural language processing (NLP) plays an essential role in extracting valuable information that can benefit decision making, administrative reporting, and research. Here, we share several desiderata pertaining to the development and use of NLP systems, derived from two decades of experience implementing clinical NLP at the Mayo Clinic, to inform the healthcare AI community. Using a framework we developed as an example implementation, the desiderata emphasize the importance of a user-friendly platform, efficient collection of domain expert inputs, seamless integration with clinical data, and a highly scalable computing infrastructure.
Background: Semantic textual similarity is a common task in the general English domain that assesses the degree to which the underlying semantics of 2 text segments are equivalent to each other. Clinical Semantic Textual Similarity (ClinicalSTS) is the semantic textual similarity task in the clinical domain, which attempts to measure the degree of semantic equivalence between 2 snippets of clinical text. Because of the frequent use of templates in electronic health record systems, a large amount of redundant text exists in clinical notes, making ClinicalSTS crucial for the secondary use of clinical text in downstream clinical natural language processing applications, such as clinical text summarization, clinical semantics extraction, and clinical information retrieval. Objective: Our objective was to release ClinicalSTS data sets and to motivate the natural language processing and biomedical informatics communities to tackle semantic textual similarity tasks in the clinical domain. Methods: We organized the first BioCreative/OHNLP ClinicalSTS shared task in 2018 by making available a real-world ClinicalSTS data set. We continued the shared task in 2019 in collaboration with National NLP Clinical Challenges (n2c2) and the Open Health Natural Language Processing (OHNLP) consortium and organized the 2019 n2c2/OHNLP ClinicalSTS track. We released a larger ClinicalSTS data set comprising 2054 clinical sentence pairs, including 1068 pairs from the 2018 shared task and 986 new pairs from 2 electronic health record systems, GE and Epic. We released 80% (1642/2054) of the data to participating teams to develop and fine-tune semantic textual similarity systems and used the remaining 20% (412/2054) as blind testing to evaluate their systems. The workshop was held in conjunction with the American Medical Informatics Association 2019 Annual Symposium.
Results: Of the 78 international teams that signed on to the n2c2/OHNLP ClinicalSTS shared task, 33 produced a total of 87 valid system submissions. The top 3 systems were generated by IBM Research, the National Center for Biotechnology Information, and the University of Florida, with Pearson correlations of r=.9010, r=.8967, and r=.8864, respectively. Most top-performing systems used state-of-the-art neural language models, such as BERT and XLNet, and state-of-the-art deep learning training schemas, such as pretraining followed by fine-tuning and multitask learning. Overall, the participating systems performed better on the Epic sentence pairs than on the GE sentence pairs, despite a much larger portion of the training data being GE sentence pairs. Conclusions: The 2019 n2c2/OHNLP ClinicalSTS shared task focused on computing semantic similarity for clinical text sentences generated from clinical notes in the real world. It attracted a large number of international teams. The ClinicalSTS shared task could continue to serve as a venue for researchers in the natural language processing and medical informatics communities to develop and improve semantic textual similarity techniques for clinical text.
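Systems in such shared tasks are ranked by the Pearson correlation between predicted and gold-standard similarity scores. A minimal sketch of that metric (not the organizers' official scoring script):

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between system scores and gold-standard scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value of r = 1.0 indicates perfectly linearly correlated predictions; the top systems above approached r = .90 on the blind test set.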
Update: This article was updated on December 6, 2019, because of a previous error. On page 1936, in Table VII, “Performance of the Bearing Surface Algorithm,” the row that had read “Bearing surface predicted by algorithm” now reads “Bearing surface predicted by algorithm*.” An erratum has been published: J Bone Joint Surg Am. 2020 Jan 2;102(1):e4.
Update: This article was updated on March 31, 2020, because of a previous error. On page 1934, in Table IV (“THA Bearing Surface-Related Keywords in Operative Notes”), the row that had read “Femoral stem; stem; HFx-stem; femoral component; femoral component/stem; permanent prosthesis; stem fem cemented” now reads “Femoral head; ball; delta head; delta ceramic head; ion treated; BIOLOX delta; ceramic head; ceramic femoral head; ceramic offset head; ceramic (size) head; alumina ceramic head; alumina prosthetic head; alumna ceramic head; BIOLOX ceramic head; BIOLOX delta head; BIOLOX femoral head; BIOLOX delta ceramic head.” An erratum has been published: J Bone Joint Surg Am. 2020 May 6;102(9):e43.
Background: Manual chart review is labor-intensive and requires specialized knowledge possessed by highly trained medical professionals. Natural language processing (NLP) tools are distinctive in their ability to extract critical information from raw text in electronic health records (EHRs). As a proof of concept for the potential application of this technology, we examined the ability of NLP to correctly identify common elements described by surgeons in operative notes for total hip arthroplasty (THA). Methods: We evaluated primary THAs that had been performed at a single academic institution from 2000 to 2015. A training sample of operative reports was randomly selected to develop prototype NLP algorithms, and additional operative reports were randomly selected as the test sample.
Three separate algorithms were created with rules aimed at capturing (1) the operative approach, (2) the fixation method, and (3) the bearing surface category. The algorithms were applied to operative notes to evaluate the language used by 29 different surgeons at our center and were applied to EHR data from outside facilities to determine external validity. Accuracy statistics were calculated with use of manual chart review as the gold standard. Results: The operative approach algorithm demonstrated an accuracy of 99.2% (95% confidence interval [CI], 97.1% to 99.9%). The fixation technique algorithm demonstrated an accuracy of 90.7% (95% CI, 86.8% to 93.8%). The bearing surface algorithm demonstrated an accuracy of 95.8% (95% CI, 92.7% to 97.8%). Additionally, the NLP algorithms applied to operative reports from other institutions yielded comparable performance, demonstrating external validity. Conclusions: NLP-enabled algorithms are a promising alternative to the current gold standard of manual chart review for identifying common data elements from orthopaedic operative notes. The present study provides a proof of concept for use of NLP techniques in clinical research studies and registry-development endeavors to reliably extract data of interest in an expeditious and cost-effective manner.
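The accuracy figures above are reported with 95% confidence intervals. As a rough illustration of how such an interval can be computed for a proportion (the paper does not state which interval method it used), here is the Wilson score interval:

```python
import math

def accuracy_with_wilson_ci(correct: int, n: int, z: float = 1.96):
    """Point accuracy plus a 95% Wilson score confidence interval.

    correct: number of chart-review-concordant classifications
    n: total number of notes evaluated
    """
    p = correct / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, center - half, center + half
```

For example, 992 correct calls out of 1000 notes yields an accuracy of 99.2% with an interval of roughly 98.4% to 99.6%; the exact bounds in the paper may differ if a different interval (e.g., Clopper-Pearson) was used.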
Background and Objective: Silent cerebrovascular disease (SCD), comprising silent brain infarction (SBI) and white matter disease (WMD), is commonly found incidentally on neuroimaging scans obtained in routine clinical care. However, its prognostic significance is not known. We aimed to estimate the incidence of, and risk increase in, future stroke in patients with incidentally discovered SCD. Methods: Patients in the Kaiser Permanente Southern California (KPSC) health system aged ≥ 50 years, without prior ischemic stroke, transient ischemic attack, or dementia/Alzheimer’s disease, receiving a head CT or MRI between 2009 and 2019 were included. SBI and WMD were identified by natural language processing (NLP) from the neuroimaging reports. Results: Among 262,875 individuals receiving neuroimaging, NLP identified 13,154 (5.0%) with SBI and 78,330 (29.8%) with WMD. The incidence of future stroke was 32.5 (95% CI 31.1, 33.9) per 1,000 patient-years for patients with SBI; 19.3 (95% CI 18.9, 19.8) for patients with WMD; and 6.8 (95% CI 6.7, 7.0) for patients without SCD. The crude HR associated with SBI was 3.40 (95% CI 3.25 to 3.56), and for WMD it was 2.63 (95% CI 2.54 to 2.71). With MRI-discovered SBI, the adjusted HR was 2.95 (95% CI 2.53 to 3.44) for those < age 65 and 2.15 (95% CI 1.91 to 2.41) for those ≥ age 65. With CT scan, the adjusted HR was 2.48 (95% CI 2.19 to 2.81) for those < age 65 and 1.81 (95% CI 1.71 to 1.91) for those ≥ age 65. The adjusted HR associated with a finding of WMD was 1.76 (95% CI 1.69 to 1.82) and was not modified by age or imaging modality. Discussion: Incidentally discovered SBI and WMD are common and associated with an increased risk of subsequent symptomatic stroke, representing an important opportunity for stroke prevention.
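As a sketch of the crude rate arithmetic behind results like these (not the study's adjusted Cox analysis), an incidence rate per 1,000 patient-years and a crude rate ratio between two groups can be computed as:

```python
def incidence_per_1000_py(events: int, person_years: float) -> float:
    """Crude incidence rate per 1,000 patient-years of follow-up."""
    return 1000.0 * events / person_years

def crude_rate_ratio(events_exp: int, py_exp: float,
                     events_ref: int, py_ref: float) -> float:
    """Ratio of the exposed group's crude rate to the reference group's rate."""
    return (events_exp / py_exp) / (events_ref / py_ref)
```

All counts here are illustrative; a crude rate ratio approximates the crude hazard ratio only under constant hazards and ignores censoring and covariate adjustment.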
Background: Silent brain infarction (SBI) is defined as the presence of 1 or more brain lesions, presumed to be due to vascular occlusion, found by neuroimaging (magnetic resonance imaging or computed tomography) in patients without clinical manifestations of stroke. It is more common than stroke and can be detected in 20% of healthy elderly people. Early detection of SBI may mitigate the risk of stroke by offering preventative treatment plans. Natural language processing (NLP) techniques offer an opportunity to systematically identify SBI cases from electronic health records (EHRs) by extracting, normalizing, and classifying SBI-related incidental findings interpreted by radiologists from neuroimaging reports. Objective: This study aimed to develop NLP systems to determine individuals with incidentally discovered SBIs from neuroimaging reports at 2 sites: Mayo Clinic and Tufts Medical Center. Methods: Both rule-based and machine learning approaches were adopted in developing the NLP system. The rule-based system was implemented using the open source NLP pipeline MedTagger, developed by Mayo Clinic. Features for the rule-based system, including significant words and patterns related to SBI, were generated using pointwise mutual information. The machine learning models included convolutional neural networks (CNNs), random forests, support vector machines, and logistic regression. The performance of the NLP algorithm was compared with a manually created gold standard. The gold standard dataset includes 1000 radiology reports randomly retrieved from the 2 study sites (Mayo and Tufts), corresponding to patients with no prior or current diagnosis of stroke or dementia. Of the 1000 reports, 400 were randomly sampled and double-read to determine interannotator agreement. The gold standard dataset was split equally into 3 subsets for training, development, and testing.
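The rule-based system's features were selected with pointwise mutual information (PMI). A minimal sketch of PMI from co-occurrence counts (the counting scheme here is an assumption for illustration):

```python
import math

def pmi(pair_count: int, x_count: int, y_count: int, total: int) -> float:
    """Pointwise mutual information, in bits, between a feature x
    (e.g., a word or pattern in a report) and a label y (e.g., SBI-positive).

    pair_count: documents containing x that are labeled y
    x_count / y_count: documents containing x / labeled y
    total: total number of documents
    """
    p_xy = pair_count / total
    p_x = x_count / total
    p_y = y_count / total
    return math.log2(p_xy / (p_x * p_y))
```

PMI is 0 when the feature and label are independent and grows as the feature co-occurs with the label more often than chance, so high-PMI words and patterns are promising rule candidates.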
Results: Among the 400 reports selected to determine interannotator agreement, 5 were removed because of invalid scan types. The interannotator agreements on the Mayo and Tufts neuroimaging reports were 0.87 and 0.91, respectively. The rule-based system yielded the best performance in predicting SBI, with an accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 0.991, 0.925, 1.000, 1.000, and 0.990, respectively. The CNN achieved the best score in predicting white matter disease (WMD), with an accuracy, sensitivity, specificity, PPV, and NPV of 0.994 for each. Conclusions: We adopted a standardized data abstraction and modeling process to develop NLP techniques (rule-based and machine learning) to detect incidental SBIs and WMDs from annotated neuroimaging reports. Validation statistics suggested high feasibility of detecting SBIs and WMDs from EHRs using NLP.
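The evaluation metrics reported above all derive from a 2x2 confusion matrix. A small sketch, using illustrative counts rather than the study's data:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Accuracy, sensitivity, specificity, PPV, and NPV from a
    2x2 confusion matrix (true/false positives and negatives)."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on positive cases
        "specificity": tn / (tn + fp),   # recall on negative cases
        "ppv": tp / (tp + fp),           # positive predictive value (precision)
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

Note that with rare findings such as SBI, accuracy is dominated by the many true negatives, which is why sensitivity and PPV are reported alongside it.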