When analyzing spatially referenced event data, the criteria for declaring rates "reliable" are still a matter of dispute. What these varying criteria have in common, however, is that they are rarely satisfied for crude estimates in small area analysis settings, prompting the use of spatial models to improve reliability. While reasonable, recent work has quantified the extent to which popular models from the spatial statistics literature can overwhelm the information contained in the data, leading to oversmoothing. Here, we begin by providing a definition of a "reliable" estimate for event rates that can be applied to both crude and model-based estimates and allows for discrete and continuous statements of reliability. We then construct a spatial Bayesian framework that allows users to infuse prior information into their models to improve reliability while also guarding against oversmoothing. We apply our approach to county-level birth data from Pennsylvania, highlighting the effect of oversmoothing in spatial models and showing how our approach can help users focus their attention on areas where sufficient data exist to drive inferential decisions. We conclude with a brief discussion of how this definition of reliability can be used in the design of small area studies.
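The abstract above does not spell out its proposed definition of reliability, but one conventional criterion it alludes to is worth making concrete: for a Poisson event count, the relative standard error (RSE) of a crude rate is 1/sqrt(events), and statistical agencies often flag rates with RSE above roughly 30% as unreliable. The sketch below illustrates that conventional check only; the function name, cutoff, and Poisson assumption are illustrative and are not the paper's definition.

```python
import math

def crude_rate_reliability(events: int, population: float, rse_cutoff: float = 0.30):
    """Crude event rate with a conventional relative-standard-error check.

    Assuming the event count is Poisson, Var(rate) = events / population**2,
    so RSE = SE / rate = 1 / sqrt(events). The 30% cutoff is one common
    agency convention, not the definition proposed in the paper.
    """
    if events == 0:
        # No events: the rate is zero and its RSE is undefined/infinite.
        return 0.0, float("inf"), False
    rate = events / population
    rse = 1.0 / math.sqrt(events)
    return rate, rse, rse < rse_cutoff

# A hypothetical small county with 9 births in a population of 1,200:
rate, rse, reliable = crude_rate_reliability(9, 1200)
# RSE = 1/3, above the 0.30 cutoff, so the crude rate is flagged unreliable.
```

Under this crude criterion, reliability depends only on the event count, which is exactly why small areas so rarely satisfy it and why model-based smoothing is tempting.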
Background: Cluster randomized trials, which randomize groups of individuals to an intervention, are common in health services research when one wants to evaluate improvement in a subject's outcome by intervening at an organizational level. For many such trials, sample size calculation is performed under the assumption of equal cluster size. For a variety of reasons, many trials that set out to recruit equal clusters end up with unequal clusters. This leads to a misalignment between the method used for sample size calculation and the data analysis, which may affect trial power. Various weighted analysis methods for analyzing cluster means have been suggested to overcome the problem introduced by unbalanced clusters; however, the performance of such methods has not been evaluated extensively. Methods: We examine the use of the general linear model for the analysis of cluster randomized trials that assume equal cluster sizes during the planning stage but end up with unequal clusters. We demonstrate the performance of three approaches using different weights for analyzing the cluster means: (1) the standard (unweighted) analysis of cluster means, (2) weighting by cluster size, and (3) minimum variance weights. Several distributions are used to generate cluster sizes to cover a wide range of patterns of imbalance. The variability in cluster size is measured by the coefficient of variation (CV). By means of a simulation study, we assess the impact of each of the three analysis methods on the type I error and power of the study and how each is affected by the variability in cluster size. Results: Analyses that assume equal cluster sizes provide a reasonable approximation when cluster sizes vary minimally (CV < 0.30). In the analysis weighted by cluster size, type I errors were inflated, and the inflation worsened as the variation in cluster size increased.
However, a minimum variance weighted analysis best maintains the target power and significance level under all degrees of imbalance considered. Conclusion: The unweighted analysis works well as an approximate method when the variation in cluster size is minimal; however, minimum variance weights perform much better across the full range of variation in cluster size and are recommended.
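The three weighting schemes compared in the abstract can be sketched as follows. This is a minimal illustration, not the authors' simulation code: the function name is hypothetical, and the minimum variance weights are written in their standard form, w_i proportional to n_i / (1 + (n_i - 1) * ICC), i.e., the inverse of the cluster-mean variance up to a constant, which assumes the intracluster correlation coefficient (ICC) is supplied or estimated.

```python
import numpy as np

def weighted_cluster_mean(cluster_means, cluster_sizes, icc, scheme="unweighted"):
    """Combine cluster means under one of three weighting schemes.

    'unweighted'   : w_i = 1        (standard analysis of cluster means)
    'cluster_size' : w_i = n_i      (weighting by cluster size)
    'min_variance' : w_i = n_i / (1 + (n_i - 1) * icc)
                     (inverse of Var(cluster mean) up to a constant)
    """
    m = np.asarray(cluster_means, dtype=float)
    n = np.asarray(cluster_sizes, dtype=float)
    if scheme == "unweighted":
        w = np.ones_like(n)
    elif scheme == "cluster_size":
        w = n
    elif scheme == "min_variance":
        w = n / (1.0 + (n - 1.0) * icc)
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return float(np.sum(w * m) / np.sum(w))
```

When all clusters have the same size, the three schemes coincide; they diverge as imbalance grows. The CV used in the abstract to quantify that imbalance is simply the standard deviation of the cluster sizes divided by their mean.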