This position paper summarizes relevant theory and current practice regarding the analysis of longitudinal clinical trials intended to support regulatory approval of medicinal products, and it reviews published research regarding methods for handling missing data. It is one strand of the PhRMA initiative to improve the efficiency of late-stage clinical research and gives recommendations from a cross-industry team. We concentrate specifically on continuous response measures analyzed using a linear model, when the goal is to estimate and test treatment differences at a given time point. Traditionally, the primary analysis of such trials handled missing data by simple imputation using the last, or baseline, observation carried forward method (LOCF, BOCF), followed by analysis of (co)variance at the chosen time point. However, the general statistical and scientific community has moved away from these simple methods in favor of joint analysis of data from all time points based on a multivariate model (e.g., of a mixed-effects type). One such newer method, a likelihood-based mixed-effects model repeated measures (MMRM) approach, has received considerable attention in the clinical trials literature. We discuss specific concerns raised by regulatory agencies with regard to MMRM and review published evidence comparing LOCF and MMRM in terms of validity, bias, power, and type I error. Our main conclusion is that the mixed model approach is more efficient and reliable as a method of primary analysis, and should be preferred to the inherently biased and statistically invalid simple imputation approaches. We also summarize other methods of handling missing data that are useful as sensitivity analyses for assessing the potential effect of data missing not at random.
This study compares two methods for handling missing data in longitudinal trials: one using the last-observation-carried-forward (LOCF) method and one based on a multivariate or mixed model for repeated measurements (MMRM). Using data sets simulated to match six actual trials, I imposed several drop-out mechanisms, and compared the methods in terms of bias in the treatment difference and power of the treatment comparison. With equal drop-out in Active and Placebo arms, LOCF generally underestimated the treatment effect; but with unequal drop-out, bias could be much larger and in either direction. In contrast, bias with the MMRM method was much smaller; and whereas MMRM rarely caused a difference in power of greater than 20%, LOCF caused a difference in power of greater than 20% in nearly half the simulations. Use of the LOCF method is therefore likely to seriously misrepresent the results of a trial, and so is not a good choice for primary analysis. In contrast, the MMRM method is unlikely to result in serious misinterpretation, unless the drop-out mechanism is missing not at random (MNAR) and there is substantially unequal drop-out. Moreover, MMRM is clearly more reliable and better grounded statistically. Neither method is capable of dealing on its own with trials involving MNAR drop-out mechanisms, for which sensitivity analysis is needed using more complex methods.
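The LOCF imputation criticized in both abstracts is mechanically simple, which is part of its appeal despite its statistical flaws. A minimal sketch in Python, using a hypothetical long-format data set with invented subject, visit, and value columns (not data from any of the trials discussed):

```python
import pandas as pd

# Hypothetical long-format trial data: one row per subject-visit,
# with NaN where a measurement is missing after drop-out.
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "visit":   [1, 2, 3, 1, 2, 3],
    "value":   [10.0, 12.0, None, 9.0, None, None],
})

# LOCF: within each subject, carry the last observed value forward,
# then analyze only the final visit as if it were fully observed.
df["value_locf"] = df.groupby("subject")["value"].ffill()
print(df["value_locf"].tolist())  # [10.0, 12.0, 12.0, 9.0, 9.0, 9.0]
```

MMRM, by contrast, models all visits jointly under a multivariate likelihood and requires no such single-value imputation, which is why it avoids the biases the simulations document.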
In this paper, prediction provides the basis for unifying the procedures of covariance adjustment and standardization. Analysis of covariance is a method of forming predictions from a linear model; it is used when qualitative effects are to be studied and the effects of continuous variables are to be adjusted for. An essential feature is the division into effects of interest and effects for which adjustment is required. Covariates may also be qualitative: as such, they are used implicitly in experimental designs with blocks, where treatment effects are adjusted for the effect of blocks. The technique of standardization is well known in epidemiology and demography as a method of adjusting explicitly for qualitative effects. The same division of effects applies when an analysis that uses generalized linear models is summarized. Two distinct types of prediction, which give identical results in classical linear models, are available: prediction may be conditional on a fixed value of a covariate, or marginal over a distribution of values, such as the distribution in the set of data being analysed. Prediction methods are illustrated by the analysis of a table of proportions by use of a logit model.
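The distinction drawn above between conditional and marginal prediction vanishes in classical linear models but matters under a logit link. A small numerical sketch (the coefficients and covariate values are invented for illustration, not taken from the paper's example):

```python
import numpy as np

# Hypothetical fitted logit model: P(y=1 | x) = expit(b0 + b1*x).
def logit_prob(x, b0=-1.0, b1=2.0):
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

x = np.array([0.0, 0.0, 1.0, 1.0, 1.0])  # observed covariate values

# Conditional prediction: evaluate at one fixed covariate value (the mean).
conditional = logit_prob(x.mean())

# Marginal prediction: average predictions over the covariate distribution.
marginal = logit_prob(x).mean()

# Because the inverse link is nonlinear, the two generally differ.
print(conditional, marginal)
```

In a linear model the two quantities would coincide exactly, which is the sense in which prediction unifies covariance adjustment (conditional) and standardization (marginal).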
Knowledge of the number and distribution of species is fundamental to biodiversity conservation efforts, but this information is lacking for the majority of species on earth. Consequently, subsets of taxa are often used as proxies for biodiversity; but this assumes that different taxa display congruent distribution patterns. Here we use a global meta-analysis to show that studies of cross-taxon congruence rarely give consistent results. Instead, species richness congruence is highest at extreme spatial scales and close to the equator, while congruence in species composition is highest at large extents and grain sizes. Studies display highest variance in cross-taxon congruence when conducted in areas with dissimilar areal extents (for species richness) or latitudes (for species composition). These results undermine the assumption that a subset of taxa can be representative of biodiversity. Therefore, researchers whose goal is to prioritize locations or actions for conservation should use data from a range of taxa.
Recurrent events in clinical trials have typically been analysed using either a multiple time-to-event method or a direct approach based on the distribution of the number of events. An area of application for these methods is exacerbation data from respiratory clinical trials. The different approaches to the analysis and the issues involved are illustrated for a large trial (n = 1465) in chronic obstructive pulmonary disease (COPD). For exacerbation rates, clinical interest centres on a direct comparison of rates for each treatment, which favours the distribution-based analysis rather than a time-to-event approach. Poisson regression has often been employed and has recently been recommended as the appropriate method of analysis for COPD exacerbations, but the key assumptions often appear unreasonable for this analysis. By contrast, use of a negative binomial model, which corresponds to assuming a separate Poisson parameter for each subject, offers a more appealing approach. Non-parametric methods avoid some of the assumptions required by these models, but do not provide appropriate estimates of treatment effects because of the discrete and bounded nature of the data.
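The correspondence the abstract draws, a negative binomial count distribution arising from a separate Poisson rate per subject, can be illustrated by simulation; the rate and shape parameters below are invented, not estimates from the COPD trial:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each subject gets their own Poisson rate, drawn from a gamma
# distribution with mean 2 events per period (hypothetical values).
n = 100_000
shape = 1.5
rates = rng.gamma(shape=shape, scale=2.0 / shape, size=n)

# Conditional on the rate, each subject's event count is Poisson;
# marginally, the counts follow a negative binomial distribution.
counts = rng.poisson(rates)

# Overdispersion: variance exceeds the mean, which a plain Poisson
# model (variance = mean) cannot accommodate.
print(counts.mean(), counts.var())
```

This variance inflation is exactly why the key Poisson assumptions "often appear unreasonable" for exacerbation counts, where patients differ substantially in their underlying event rates.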
Developing advanced applications for the emerging national-scale 'Computational Grid' infrastructures is still a difficult task. Although Grid services are available that assist the application developers in authentication, remote access to computers, resource management, and infrastructure discovery, they provide a challenge because these services may not be compatible with the commodity distributed-computing technologies and frameworks used previously. The Commodity Grid project is working to overcome this difficulty by creating what we call Commodity Grid Toolkits (CoG Kits) that define mappings and interfaces between Grid and particular commodity frameworks. In this paper, we explain why CoG Kits are important, describe the design and implementation of a Java CoG Kit, and use examples to illustrate how CoG Kits can enable new approaches to application development based on the integrated use of commodity and Grid technologies.
Surrogate concepts are used in all sub-disciplines of environmental science. However, controversy remains regarding the extent to which surrogates are useful for resolving environmental problems. Here, we argue that conflicts about the utility of surrogates (and the related concepts of indicators and proxies) often reflect context-specific differences in trade-offs between measurement accuracy and practical constraints. By examining different approaches for selecting and applying surrogates, we identify five trade-offs that correspond to key points of contention in the application of surrogates. We then present an 8-step Adaptive Surrogacy Framework that incorporates cross-disciplinary perspectives from a wide spectrum of the environmental sciences, aiming to unify surrogate concepts across disciplines and applications. Our synthesis of the science of surrogates is intended as a first step towards fully leveraging knowledge accumulated across disciplines, thus consolidating lessons learned so that they may be accessible to all those operating in different fields, yet facing similar hurdles.