Objective: Background incidence rates are routinely used in safety studies to evaluate an association between an exposure and an outcome. Systematic research on the sensitivity of rates to the choice of study parameters is lacking. Materials and Methods: We used 12 data sources to systematically examine the influence of age, race, sex, database, time-at-risk, season and year, prior observation, and clean window on incidence rates, using 15 adverse events of special interest for COVID-19 vaccines as an example. For binary comparisons we calculated incidence rate ratios and performed random-effects meta-analysis. Results: We observed wide variation in background rates that goes well beyond the age and database effects previously observed. While rates varied by up to a factor of 1,000 across age groups, even after adjusting for age and sex the study showed residual bias due to the other parameters. Rates were highly influenced by the choice of anchoring (e.g., health visit, vaccination, or arbitrary date) for the time-at-risk start. Anchoring on a healthcare encounter yielded higher incidence compared with a random date, especially for a short time-at-risk. Incidence rates were also highly influenced by the choice of database (varying by up to a factor of 100), clean window, and time-at-risk duration, and less so by secular or seasonal trends. Conclusion: Comparing background to observed rates requires appropriate adjustment and careful choice of the time-at-risk start and duration. Results should be interpreted in the context of study parameter choices.
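To illustrate the binary comparison described above, here is a minimal sketch of an incidence rate ratio with a large-sample 95% confidence interval on the log scale. The event counts and person-time values are hypothetical, not figures from the study:

```python
import math

def incidence_rate_ratio(events_a, person_time_a, events_b, person_time_b):
    """Incidence rate ratio with a 95% CI computed on the log scale."""
    irr = (events_a / person_time_a) / (events_b / person_time_b)
    # Large-sample standard error of log(IRR) for Poisson counts
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(irr) - 1.96 * se_log)
    hi = math.exp(math.log(irr) + 1.96 * se_log)
    return irr, lo, hi

# Hypothetical: 30 events over 10,000 person-years vs. 15 over 12,000
irr, lo, hi = incidence_rate_ratio(30, 10_000, 15, 12_000)
```

Per-source ratios like this one would then feed a random-effects meta-analysis to obtain a pooled estimate across databases.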
Objective
More than one third of appropriately treated patients with epilepsy have continued seizures despite two or more medication trials, meeting criteria for drug‐resistant epilepsy (DRE). Accurate and reliable identification of patients with DRE in observational data would enable large‐scale, real‐world comparative effectiveness research and improve access to specialized epilepsy care. In the present study, we aim to develop and compare the performance of computable phenotypes for DRE using the Observational Medical Outcomes Partnership (OMOP) Common Data Model.
Methods
We randomly sampled 600 patients from our academic medical center's electronic health record (EHR)‐derived OMOP database meeting previously validated criteria for epilepsy (January 2015–August 2021). Two reviewers manually classified patients as having DRE, drug‐responsive epilepsy, undefined drug responsiveness, or no epilepsy as of the last EHR encounter in the study period based on consensus definitions. Demographic characteristics and codes for diagnoses, antiseizure medications (ASMs), and procedures were tested for association with DRE. Algorithms combining permutations of these factors were applied to calculate sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for DRE. The F1 score was used to compare overall performance.
Results
Among 412 patients with source record‐confirmed epilepsy, 62 (15.0%) had DRE, 163 (39.6%) had drug‐responsive epilepsy, 124 (30.0%) had undefined drug responsiveness, and 63 (15.3%) had insufficient records. The best performing phenotype for DRE in terms of the F1 score was the presence of ≥1 intractable epilepsy code and ≥2 unique non‐gabapentinoid ASM exposures, each with a ≥90‐day drug era (sensitivity = .661, specificity = .937, PPV = .594, NPV = .952, F1 score = .626). Several phenotypes achieved higher sensitivity at the expense of specificity, and vice versa.
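The F1 score used to rank phenotypes above is the harmonic mean of sensitivity (recall) and PPV (precision). A quick check against the reported values for the best performing phenotype:

```python
def f1_score(sensitivity, ppv):
    """F1 is the harmonic mean of sensitivity (recall) and PPV (precision)."""
    return 2 * sensitivity * ppv / (sensitivity + ppv)

# Values reported for the best performing DRE phenotype
f1 = f1_score(0.661, 0.594)  # ≈ 0.626, matching the reported F1
```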
Significance
OMOP algorithms can identify DRE in EHR‐derived data with varying tradeoffs between sensitivity and specificity. These computable phenotypes can be applied across the largest international network of standardized clinical databases for further validation, reproducible observational research, and improving access to appropriate care.
Objective Chart review, the current gold standard for phenotype evaluation, cannot support observational research at scale: it is expensive, time-consuming, and variable. We aimed to evaluate the ability of structured data to support efficient patient status ascertainment and to develop a standardized and scalable alternative to chart review. Methods We developed the Knowledge-Enhanced Electronic Patient Profile Review (KEEPER) system, which extracts the patient structured data elements relevant to a given phenotype and presents them in a standardized fashion that follows clinical reasoning principles. We evaluated its performance against manual chart review for four conditions (type I diabetes, acute appendicitis, end stage renal disease, and chronic obstructive lung disease) using a randomized two-period, two-sequence crossover design. Inter-method agreement, inter-rater agreement, accuracy, and review duration were measured. Results Ascertaining patient status with KEEPER was twice as fast as manual chart review. 88.1% of patients were classified concordantly using the full chart and KEEPER, although agreement varied by condition. Two clinicians agreed on patient status classification in 91.2% of cases when using KEEPER, compared to 76.3% when using the full chart. Patient classification aligned with the gold standard in 88.1% and 86.9% of cases, respectively. Conclusion This proof-of-concept study demonstrated that structured data can be used for efficient patient ascertainment if they are limited to a relevant subset and organized according to clinical reasoning principles. A system implementing these principles can achieve comparable accuracy and higher inter-rater reliability than chart review in a fraction of the time.
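The inter-rater and inter-method agreement figures above are percent agreement: the fraction of cases on which two classifications coincide. A minimal sketch, using toy labels rather than study data:

```python
def percent_agreement(labels_a, labels_b):
    """Fraction of cases on which two reviewers (or methods) assign the same status."""
    if len(labels_a) != len(labels_b):
        raise ValueError("Both reviewers must classify the same cases")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Toy example: two reviewers classify four cases; they disagree on one
reviewer_1 = ["case", "non-case", "case", "case"]
reviewer_2 = ["case", "non-case", "non-case", "case"]
agreement = percent_agreement(reviewer_1, reviewer_2)  # 3 of 4 match
```

Percent agreement does not correct for chance agreement; a chance-corrected statistic such as Cohen's kappa is often reported alongside it.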