Background
The aim of this study was to build electronic algorithms using a combination of structured data and natural language processing (NLP) of text notes for potential safety surveillance of nine post-operative complications.
Methods
Post-operative complications from six medical centers in the Southeastern United States were obtained from the Veterans Affairs Surgical Quality Improvement Program (VASQIP) registry. Development and test datasets were constructed using stratification by facility and date of procedure for patients with and without complications. Algorithms were developed from VASQIP outcome definitions using NLP-coded concepts, regular expressions, and structured data. The VASQIP nurse reviewer served as the reference standard for evaluating sensitivity and specificity. The algorithms were designed on the development dataset and evaluated on the test dataset.
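A minimal sketch of the kind of rule this approach describes, combining regex evidence from note text with structured laboratory data. The pattern, creatinine threshold, and function names here are illustrative assumptions, not the study's actual algorithms:

```python
import re

# Hypothetical sketch: the pattern, threshold, and names below are
# assumptions for illustration, not the published VASQIP algorithms.
ARF_PATTERN = re.compile(
    r"\b(acute (renal|kidney) (failure|injury)|ARF|AKI)\b", re.IGNORECASE
)

def flag_acute_renal_failure(note_text, baseline_creatinine, peak_creatinine):
    """Flag a possible post-operative acute renal failure event."""
    # NLP/regex evidence from the free-text note
    text_evidence = bool(ARF_PATTERN.search(note_text))
    # Structured-data evidence: creatinine rise (illustrative threshold only)
    lab_evidence = (peak_creatinine - baseline_creatinine) >= 2.0
    return text_evidence or lab_evidence

flag_acute_renal_failure("Assessment: acute renal failure post-op.", 1.0, 1.2)  # True
```

In practice such rules would be tuned against the development dataset before evaluation on the held-out test dataset.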
Results
Sensitivity and specificity in the test set were 85% and 92% for acute renal failure, 80% and 93% for sepsis, 56% and 94% for deep vein thrombosis, 80% and 97% for pulmonary embolism, 88% and 89% for acute myocardial infarction, 88% and 92% for cardiac arrest, 80% and 90% for pneumonia, 95% and 80% for urinary tract infection, and 80% and 93% for wound infection, respectively. A third of the complications occurred outside the hospital setting.
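The metrics reported above follow the standard confusion-matrix definitions. A short sketch with made-up counts (not data from the study) shows how one pair of values could arise:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: 85 of 100 true cases flagged,
# 920 of 1000 non-cases correctly cleared.
sens, spec = sensitivity_specificity(tp=85, fn=15, tn=920, fp=80)
# sens = 0.85, spec = 0.92
```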
Conclusions
Computer algorithms applied to data extracted from the electronic health record achieved respectable sensitivity and specificity across a large sample of patients seen at six different medical centers. This study demonstrates the utility of combining natural language processing with structured data for mining the information contained within the electronic health record.
Early prediction of patient outcomes is key to unlocking the potential for targeted preventive care. This protocol describes a practical workflow for developing deep learning risk models for early prediction of various clinical and operational outcomes using structured electronic health record (EHR) data, discussing the prediction of acute kidney injury (AKI) as an exemplar. The protocol consists of 34 steps grouped into the following stages: formal problem definition, data pre-processing, architecture selection, calibration and uncertainty estimation, and generalisability evaluation. Additionally, we demonstrate the application of this protocol to three other endpoints (mortality, length of stay, and 30-day hospital readmission) for both continuous predictions (e.g., triggered every 6 h) and static predictions (e.g., triggered at 24 h post admission). The performance on these additional endpoints exceeded most comparable literature benchmarks. This protocol is accompanied by an open-source codebase that illustrates key considerations for EHR modeling and may be customised to alternate data formats and prediction tasks.
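The distinction between continuous and static prediction triggers can be sketched in a few lines. This is an illustrative assumption about the setup, not code from the protocol's accompanying codebase:

```python
from datetime import datetime, timedelta

# Illustrative sketch only (not the protocol's open-source codebase):
# continuous predictions fire at a fixed cadence during the stay, while
# a static prediction fires once at a fixed time after admission.

def continuous_triggers(admit, discharge, every_hours=6):
    """Prediction times every `every_hours` hours until discharge."""
    times, t = [], admit + timedelta(hours=every_hours)
    while t <= discharge:
        times.append(t)
        t += timedelta(hours=every_hours)
    return times

def static_trigger(admit, at_hours=24):
    """Single prediction time at `at_hours` hours post admission."""
    return admit + timedelta(hours=at_hours)

admit = datetime(2020, 1, 1, 8, 0)
discharge = datetime(2020, 1, 2, 20, 0)  # a 36-hour stay
len(continuous_triggers(admit, discharge))  # 6 triggers (14:00, 20:00, ...)
```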
Background
We developed an accurate, stakeholder-informed, automated natural language processing (NLP) system to measure the quality of heart failure (HF) inpatient care, and explored the potential for adoption of this system within an integrated health care system.
Objective
To accurately automate a United States Department of Veterans Affairs (VA) quality measure for inpatients with HF.
Methods
We automated the HF quality measure Congestive Heart Failure Inpatient Measure 19 (CHI19) that identifies whether a given patient has left ventricular ejection fraction (LVEF) <40%, and if so, whether an angiotensin-converting enzyme inhibitor or angiotensin-receptor blocker was prescribed at discharge if there were no contraindications. We used documents from 1083 unique inpatients from eight VA medical centers to develop a reference standard (RS) to train (n=314) and test (n=769) the Congestive Heart Failure Information Extraction Framework (CHIEF). We also conducted semi-structured interviews (n=15) for stakeholder feedback on implementation of the CHIEF.
Results
The CHIEF classified each hospitalization in the test set with a sensitivity (SN) of 98.9% and positive predictive value of 98.7%, compared with an RS and SN of 98.5% for available External Peer Review Program assessments. Of the 1083 patients available for the NLP system, the CHIEF evaluated and classified 100% of cases. Stakeholders identified potential implementation facilitators and clinical uses of the CHIEF.
Conclusions
The CHIEF provided complete data for all patients in the cohort and could potentially improve the efficiency, timeliness, and utility of HF quality measurements.
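A hedged sketch of the kind of pattern an LVEF extractor might apply to note text. The regex and threshold below are assumptions for illustration; they are not the CHIEF implementation:

```python
import re

# Hypothetical pattern (an assumption, not CHIEF's actual rules):
# find an LVEF mention followed within a few characters by a percentage.
LVEF_PATTERN = re.compile(
    r"(?:LVEF|ejection fraction)\D{0,20}?(\d{1,2})\s*%", re.IGNORECASE
)

def lvef_below_40(note_text):
    """Return True/False for LVEF < 40%, or None if no LVEF is mentioned."""
    m = LVEF_PATTERN.search(note_text)
    if not m:
        return None  # no LVEF mention found in the note
    return int(m.group(1)) < 40

lvef_below_40("Echo today: LVEF estimated at 30%.")  # True
```

A real system would also need to handle ranges ("EF 35-40%"), negation, and qualitative descriptions, which is part of what makes NLP-based quality measurement non-trivial.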