The proposed framework enhances the interpretation of findings from the external validation of prediction models.
The use of individual participant data (IPD) from multiple studies is an increasingly popular approach when developing a multivariable risk prediction model. The corresponding datasets, however, typically differ in important aspects, such as baseline risk. This has driven the adoption of meta-analytical approaches for appropriately dealing with heterogeneity between study populations. Although these approaches provide an averaged prediction model across all studies, little guidance exists on how to apply this model to, or validate it in, new individuals or study populations outside the derivation data. We consider several approaches to developing a multivariable logistic regression model from an IPD meta-analysis (IPD-MA) with potential between-study heterogeneity. We also propose strategies for choosing a valid model intercept when the model is to be validated or applied in new individuals or study populations. These strategies can be implemented by the IPD-MA developers or by future model validators. Finally, we show how model generalizability can be evaluated using internal-external cross-validation when external validation data are lacking, and we extend our framework to count and time-to-event data. In an empirical evaluation, our results show how stratified estimation allows study-specific model intercepts, which can then inform the intercept to be used when applying the model in practice, even in a population not represented by the included studies. In summary, our framework allows the development (through stratified estimation), implementation in new individuals (through focused intercept choice), and evaluation (through internal-external validation) of a single, integrated prediction model from an IPD-MA, in order to achieve improved model performance and generalizability.
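The stratified estimation described above — study-specific intercepts with common predictor effects — can be sketched as follows. This is a minimal illustration on synthetic data; the study count, intercepts, slope, and the simple averaged-intercept rule for a new population are all illustrative assumptions, not values or methods taken from the abstract.

```python
# Sketch: stratified-intercept logistic regression from an IPD meta-analysis.
# All data below are synthetic: 3 studies with heterogeneous baseline risk
# (different intercepts) but a common predictor effect (shared slope).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_per_study, n_studies = 200, 3
study = np.repeat(np.arange(n_studies), n_per_study)
x = rng.normal(size=study.size)                  # one shared predictor
alpha = np.array([-1.5, -0.5, 0.5])              # heterogeneous intercepts
p = 1 / (1 + np.exp(-(alpha[study] + 0.8 * x)))  # common slope 0.8
y = rng.binomial(1, p)

# Stratified estimation: one dummy intercept per study, common slope,
# fitted essentially unpenalized (very large C) with no global intercept.
X = np.column_stack([np.eye(n_studies)[study], x])
model = LogisticRegression(fit_intercept=False, C=1e6).fit(X, y)
study_intercepts = model.coef_[0][:n_studies]
slope = model.coef_[0][n_studies]

# Applying the model to a new population requires choosing an intercept;
# averaging the study-specific intercepts is one simple (assumed) default.
new_intercept = study_intercepts.mean()
```

A leave-one-study-out loop over the same fit would give the internal-external cross-validation mentioned in the abstract: fit on all studies but one, then check calibration and discrimination in the held-out study.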
Context: The evidence that measurement of the common carotid intima-media thickness (CIMT) improves risk scores in predicting the absolute risk of cardiovascular events is inconsistent.
Objective: To determine whether common CIMT has added value in the 10-year risk prediction of first-time myocardial infarctions or strokes, above that of the Framingham Risk Score.
Data Sources: Relevant studies were identified through literature searches of databases.
Background: A fundamental aspect of epidemiological studies concerns the estimation of factor-outcome associations to identify risk factors, prognostic factors and potential causal factors. Because reliable estimates of these associations are important, there is growing interest in methods for combining the results from multiple studies in individual participant data meta-analyses (IPD-MA). When there is substantial heterogeneity across studies, various random-effects meta-analysis models are possible, employing either a one-stage or a two-stage method. These are generally thought to produce similar results, but empirical comparisons are few.
Objective: We describe and compare several one-stage and two-stage random-effects IPD-MA methods for estimating factor-outcome associations from multiple risk-factor or predictor finding studies with a binary outcome. One-stage methods use the IPD of each study and meta-analyse using the exact binomial distribution, whereas two-stage methods reduce the evidence to the aggregate level (e.g. odds ratios) and then meta-analyse assuming approximate normality. We compare the methods on an empirical dataset for unadjusted and adjusted risk-factor estimates.
Results: Though often similar, on occasion the one-stage and two-stage methods provide different parameter estimates and different conclusions. For example, the effect of erythema and its statistical significance differed between the one-stage (OR = 1.35) and univariate two-stage (OR = 1.55) analyses. Estimation issues can also arise: two-stage models suffer unstable estimates when zero cell counts occur, and one-stage models do not always converge.
Conclusion: When planning an IPD-MA, the choice and implementation (e.g. univariate or multivariate) of a one-stage or two-stage method should be prespecified in the protocol, as they occasionally lead to different conclusions about which factors are associated with the outcome. Though both approaches can suffer from estimation challenges, we recommend employing the one-stage method, as it uses a more exact statistical approach and accounts for parameter correlation.
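The two-stage approach described above — reduce each study to an aggregate effect estimate, then meta-analyse assuming approximate normality — can be sketched with a standard DerSimonian-Laird random-effects pooling of log odds ratios. The 2×2 counts below are illustrative assumptions, not data from the study, and the Woolf variance formula is the usual large-sample approximation this approach relies on.

```python
# Sketch: two-stage random-effects meta-analysis of log odds ratios
# (DerSimonian-Laird), contrasted with one-stage exact-binomial IPD models.
# The 2x2 counts per study are synthetic, for illustration only.
import numpy as np

a = np.array([15, 20, 8])    # exposed, event
b = np.array([85, 80, 92])   # exposed, no event
c = np.array([10, 12, 5])    # unexposed, event
d = np.array([90, 88, 95])   # unexposed, no event

# Stage 1: per-study log odds ratios with Woolf (large-sample) variances.
# This is where zero cell counts make two-stage estimates unstable.
log_or = np.log((a * d) / (b * c))
var = 1 / a + 1 / b + 1 / c + 1 / d

# Stage 2: DerSimonian-Laird between-study variance tau^2 ...
w = 1 / var
pooled_fixed = np.sum(w * log_or) / np.sum(w)
Q = np.sum(w * (log_or - pooled_fixed) ** 2)
k = len(log_or)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# ... then the random-effects pooled estimate under approximate normality.
w_star = 1 / (var + tau2)
pooled_log_or = np.sum(w_star * log_or) / np.sum(w_star)
pooled_or = np.exp(pooled_log_or)
```

A one-stage analysis would instead fit a single (mixed-effects) logistic regression to the pooled IPD using the exact binomial likelihood, which avoids the normality approximation and handles sparse cells more gracefully.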
Early health technology assessment is increasingly being used to support health economic evidence development during the early stages of clinical research. Such early models can inform research and development about the design and management of new medical technologies, mitigating the risks, perceived by industry and the public sector, associated with market access and reimbursement. Over the past 25 years it has been suggested that health economic evaluation in the early stages may benefit the development and diffusion of medical products. Early health technology assessment has been proposed in the context of iterative economic evaluation alongside phase I and II clinical research to inform clinical trial design, market access, and pricing. It has also been proposed at an even earlier stage, for managing technology portfolios. This scoping review suggests a generally accepted definition of early health technology assessment as "all methods used to inform industry and other stakeholders about the potential value of new medical products in development, including methods to quantify and manage uncertainty". The review also aimed to identify recently published empirical studies employing an early-stage assessment of a medical product. With most included studies carried out to support a market launch, the dominant methodology was early health economic modeling. Further methodological development is required, in particular by combining systems engineering and health economics to manage uncertainty in medical product portfolios.