Objective
This research aims to evaluate the impact of eligibility criteria on recruitment and observable clinical outcomes of COVID-19 clinical trials using electronic health record (EHR) data.

Materials and Methods
On June 18, 2020, we identified frequently used eligibility criteria from all interventional COVID-19 trials registered in ClinicalTrials.gov (n = 288), including age, pregnancy, oxygen saturation, alanine/aspartate aminotransferase, platelets, and estimated glomerular filtration rate. We applied these criteria to the EHR data of COVID-19 patients at Columbia University Irving Medical Center (CUIMC) (March 2020–June 2020) and evaluated their impact on patient accrual and on the occurrence of a composite endpoint of mechanical ventilation, tracheostomy, and in-hospital death.

Results
The analysis included 3251 patients diagnosed with COVID-19 in the CUIMC EHR. The median follow-up period was 10 days (interquartile range 4–28 days). The composite endpoint occurred in 18.1% (n = 587) of the COVID-19 cohort during follow-up. In a hypothetical trial with common eligibility criteria, 33.6% (690/2051) of patients with evaluable data were eligible, and 22.2% (153/690) of those eligible had the composite event.

Discussion
By adjusting the thresholds of common eligibility criteria to the characteristics of COVID-19 patients, more composite events could be observed from fewer patients.

Conclusions
This research demonstrates the potential of using the EHR data of COVID-19 patients to inform the selection of eligibility criteria and their thresholds, supporting data-driven optimization of participant selection toward improved statistical power of COVID-19 trials.
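The screening step described above, in which common eligibility criteria are applied to EHR-derived patient records, can be sketched as a simple filter. This is a minimal illustrative sketch, not the study's actual implementation: the field names and all thresholds below are hypothetical assumptions, chosen only to show the mechanics of criterion-based filtering.

```python
# Hypothetical sketch: filtering an EHR-derived cohort by common eligibility
# criteria. Field names and every threshold are illustrative assumptions,
# not the values used in the CUIMC study.

def is_eligible(patient):
    """Return True if the patient record passes all illustrative criteria."""
    return (
        patient["age"] >= 18                 # adult patients only (illustrative)
        and not patient["pregnant"]          # pregnancy exclusion
        and patient["spo2"] >= 0.94          # oxygen saturation floor (illustrative)
        and patient["alt"] <= 200            # ALT ceiling, U/L (illustrative)
        and patient["platelets"] >= 50_000   # platelets per microliter (illustrative)
        and patient["egfr"] >= 30            # eGFR, mL/min/1.73 m^2 (illustrative)
    )

# Toy cohort of two synthetic patient records.
cohort = [
    {"age": 45, "pregnant": False, "spo2": 0.95, "alt": 80,
     "platelets": 210_000, "egfr": 75},
    {"age": 70, "pregnant": False, "spo2": 0.95, "alt": 35,
     "platelets": 180_000, "egfr": 25},
]
eligible = [p for p in cohort if is_eligible(p)]
```

Tightening or relaxing any single threshold changes both the size of the eligible subset and, because criteria correlate with disease severity, the event rate observed within it.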
We present Chia, a novel, large annotated corpus of patient eligibility criteria extracted from 1,000 interventional, Phase IV clinical trials registered in ClinicalTrials.gov. The dataset includes 12,409 annotated eligibility criteria, represented by 41,487 distinct entities of 15 entity types and 25,017 relationships of 12 relationship types. Each criterion is represented as a directed acyclic graph, which can be readily transformed into Boolean logic to form a database query. Chia can serve as a shared benchmark for developing and testing future machine learning, rule-based, or hybrid methods for information extraction from free-text clinical trial eligibility criteria.

Background & Summary
Clinical trial eligibility criteria specify rules for screening clinical trial participants and play a central role in clinical research: they are interpreted, implemented, and adapted by multiple stakeholders at various phases of the clinical research life cycle 1. After being defined by investigators, eligibility criteria are used and interpreted by clinical research coordinators for screening and recruitment. Then they are used by query analysts and research volunteers for patient screening. Later, they are summarized in meta-analyses for developing clinical practice guidelines and, eventually, interpreted by physicians to screen patients for evidence-based care. Eligibility criteria therefore affect recruitment, results dissemination, and evidence synthesis. Despite their importance, recent studies highlight the often negative impact these criteria have on the generalizability of a given trial's findings in the real world 2,3. When eligibility criteria lack population representativeness, the enrolled participants cannot unbiasedly represent those who will be treated according to the results of that study 4. Because eligibility criteria are written in free text, answering this representativeness question at scale is laborious 5.
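The DAG-to-Boolean transformation described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not Chia's actual schema: the node types, labels, and edge representation below are invented for the example.

```python
# Hypothetical sketch: a single eligibility criterion represented as a small
# directed acyclic graph, then flattened into a Boolean expression that could
# back a database query. The schema here is illustrative, not Chia's.

criterion_dag = {
    "nodes": {
        "c1": ("Condition", "type 2 diabetes"),
        "m1": ("Measurement", "HbA1c > 7%"),
        "d1": ("Drug", "metformin"),
    },
    # Each edge: (source node, Boolean operator, target node).
    "edges": [
        ("c1", "AND", "m1"),
        ("c1", "AND", "d1"),
    ],
}

def to_boolean(dag):
    """Flatten the DAG's edges into a conjunctive Boolean expression string."""
    nodes = dag["nodes"]
    terms = [
        f"({nodes[src][1]} {op} {nodes[dst][1]})"
        for src, op, dst in dag["edges"]
    ]
    return " AND ".join(terms)

query = to_boolean(criterion_dag)
```

A real implementation would map each entity to coded concepts (e.g., standard vocabulary identifiers) before emitting SQL, but the flattening step itself is this simple because the graph is acyclic.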
A related challenge is assessing the comparability of trial populations, especially for multi-site studies: e.g., given two clinical trials investigating the same scientific question, can we tell whether they are studying comparable cohorts? The manual labor required from domain experts for such appraisal is prohibitive. Another challenge is patient recruitment, i.e., finding eligible patients for a clinical trial, which remains the leading cause of early trial termination 6,7. Unsuccessful recruitment wastes financial investment and research opportunities, in addition to the inconvenience and frustration of patients when a trial is terminated early or cancelled. Computable representations of eligibility criteria promise to overcome these challenges and to improve study feasibility and recruitment success 8. The biomedical informatics research community has produced various knowledge representations for clinical trial eligibility criteria 9, though nearly all of them predate the current state-of-the-art in machine learning, and some even predate contemporary electro...