Objective: Large clinical databases are increasingly used for research and quality improvement, but uncertainty remains about how computational and manual approaches can be combined to assess and improve the quality of extracted data. The General Medicine Inpatient Initiative (GEMINI) database extracts and standardizes a broad range of data from clinical and administrative hospital data systems, including information about attending physicians, room transfers, laboratory tests, diagnostic imaging reports, and outcomes such as in-hospital death. We describe the computational data quality assessment and manual data validation techniques used for GEMINI.

Methods: The GEMINI database currently contains 245,559 General Internal Medicine patient admissions at 7 hospital sites in Ontario, Canada, from 2010 to 2017. We performed 7 computational data quality checks, followed by manual validation of 23,419 selected data points on a sample of 7,488 patients across participating hospitals. After iteratively re-extracting data as needed based on the computational checks, we manually validated GEMINI data against the data available in each hospital's electronic medical record (i.e., the data clinicians would see when providing care), which we considered the gold standard. We calculated the accuracy, sensitivity, specificity, and positive and negative predictive values of GEMINI data.

Results: Computational checks identified multiple data quality issues: for example, the inclusion of cancelled radiology tests, a time shift of transfusion data, and the mistaken processing of the chemical symbol for sodium ("Na") as a missing value. Manual validation identified one important data quality issue that computational checks did not detect: the dates and times of blood transfusion data at one site were unreliable, resulting in low sensitivity (66%) and positive predictive value (75%) for blood transfusion data at that site. Apart from this single issue, GEMINI data were highly reliable across all data tables, with high overall accuracy (98%–100%), sensitivity (95%–100%), specificity (99%–100%), positive predictive value (93%–100%), and negative predictive value (99%–100%) compared to the gold standard.

Discussion and Conclusion: Iterative assessment and improvement of data quality, based primarily on computational checks, permitted highly reliable extraction of multisite clinical and administrative data. Computational checks identified nearly all data quality issues in this initiative, but one critical issue was identified only during manual validation. Combining computational checks with manual validation may be the optimal approach for assessing and improving the quality of large multisite clinical databases.
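The validation metrics reported above follow from a standard confusion-matrix calculation on paired gold-standard and extracted data points. The sketch below is illustrative only (it is not the study's actual code; the function name and input format are assumptions):

```python
# Illustrative sketch, not GEMINI's actual code: computing accuracy,
# sensitivity, specificity, PPV, and NPV from paired observations.
# Each pair is (present_in_gold_standard, present_in_extracted_data).

def validation_metrics(pairs):
    """Return confusion-matrix-based metrics for (gold, extracted) boolean pairs."""
    tp = sum(1 for g, e in pairs if g and e)        # true positives
    tn = sum(1 for g, e in pairs if not g and not e)  # true negatives
    fp = sum(1 for g, e in pairs if not g and e)    # false positives
    fn = sum(1 for g, e in pairs if g and not e)    # false negatives
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn) if tp + fn else None,
        "specificity": tn / (tn + fp) if tn + fp else None,
        "ppv": tp / (tp + fp) if tp + fp else None,
        "npv": tn / (tn + fn) if tn + fn else None,
    }
```

For instance, a data table where 95 of 100 truly present values were extracted and 1 of 100 truly absent values was spuriously extracted would yield sensitivity 0.95 and specificity 0.99, in the range reported for GEMINI.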