Covariance matrix estimation arises in multivariate problems including multivariate normal sampling models and regression models where random effects are jointly modeled, e.g. random-intercept, random-slope models. A Bayesian analysis of these problems requires a prior on the covariance matrix. Here we compare an inverse Wishart, scaled inverse Wishart, hierarchical inverse Wishart, and a separation strategy as possible priors for the covariance matrix. We evaluate these priors through a simulation study and application to a real data set. Generally, all priors work well, with the exception of the inverse Wishart when the true variance is small relative to the prior mean. In this case, the posterior for the variance is biased toward larger values and the correlation is biased toward zero. This bias persists even for large sample sizes, so caution is warranted when using the inverse Wishart prior.
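The inverse Wishart bias described above can be seen directly from the conjugate update: for data y_i ~ N(0, Σ) with prior IW(ν₀, Ψ₀), the posterior is IW(ν₀ + n, Ψ₀ + S), where S is the scatter matrix. A minimal sketch (assumed illustrative values, not the paper's simulation design) shows the posterior mean of the variance pulled well above a small true variance:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 2, 50
true_var = 0.01                      # true variance, small relative to the prior mean
Sigma_true = true_var * np.eye(p)

# Prior: IW(nu0, Psi0) with prior mean Psi0 / (nu0 - p - 1) = I
nu0, Psi0 = p + 2, np.eye(p)

y = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
S = y.T @ y                          # scatter matrix

# Conjugate update: posterior is IW(nu0 + n, Psi0 + S)
nu_n, Psi_n = nu0 + n, Psi0 + S
post_mean = Psi_n / (nu_n - p - 1)   # analytic posterior mean of Sigma

# Even at n = 50, the prior scale Psi0 dominates the small scatter S,
# so the posterior variance sits far above the truth.
print(true_var, post_mean[0, 0])
```

Because Ψ₀ enters the posterior scale additively, a prior scale that is large relative to the data scatter inflates the variance estimate no matter how many observations accumulate slowly relative to it.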
Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident hospitalizations, incident cases, incident deaths, and cumulative deaths due to COVID-19 at national, state, and county levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
Especially when facing reliability data with limited information (e.g., a small number of failures), there are strong motivations for using Bayesian inference methods. These include the option to use information from physics-of-failure or previous experience with a failure mode in a particular material to specify an informative prior distribution. Another advantage is the ability to make statistical inferences without having to rely on specious (when the number of failures is small) asymptotic theory needed to justify non-Bayesian methods. Users of non-Bayesian methods are faced with multiple methods of constructing uncertainty intervals (Wald, likelihood, and various bootstrap methods) that can give substantially different answers when there is little information in the data. For Bayesian inference, there is only one method of constructing equal-tail credible intervals—but it is necessary to provide a prior distribution to fully specify the model. Much work has been done to find default prior distributions that will provide inference methods with good (and in some cases exact) frequentist coverage properties. This paper reviews some of this work and provides, evaluates, and illustrates principled extensions and adaptations of these methods to the practical realities of reliability data (e.g., non-trivial censoring).
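The single recipe for an equal-tail credible interval amounts to reading off posterior quantiles. A hedged sketch, using a hypothetical conjugate example rather than anything from the paper: exponential lifetimes with r observed failures over total time on test T, and a gamma prior on the failure rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 3 failures over 5000 unit-hours of (censored) testing.
# Likelihood is lambda^r * exp(-lambda * T), so a Gamma(a, b) prior on the
# rate is conjugate: the posterior is Gamma(a + r, b + T).
a, b = 1.0, 100.0       # assumed prior shape and rate, for illustration only
r, T = 3, 5000.0

draws = rng.gamma(shape=a + r, scale=1.0 / (b + T), size=100_000)

# Equal-tail 95% credible interval: the 2.5% and 97.5% posterior quantiles.
lo, hi = np.quantile(draws, [0.025, 0.975])
print(f"95% credible interval for the failure rate: ({lo:.2e}, {hi:.2e})")
```

Unlike the Wald, likelihood, and bootstrap constructions, this procedure does not change when the number of failures is small; what changes is how much the answer depends on the prior.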
Management policies for influenza outbreaks balance the expected morbidity and mortality costs versus the cost of intervention policies. We present a methodology for dynamic determination of optimal policies in a completely observed stochastic compartmental model with parameter uncertainty. Our approach is simulation-based and searches the full set of sequential control strategies. For each time point, it generates a policy map describing the optimal intervention to implement as a function of outbreak state and Bayesian parameter posteriors. As a running example, we study a stochastic SIR model with isolation and vaccination as two possible interventions. Numerical simulations based on a classic influenza outbreak are used to explore the impact of various cost structures on management policies. Comparisons demonstrate the realized cost savings of choosing interventions based on the computed dynamic policy over simpler decision rules.
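A simulation-based policy search of this kind rests on cheap forward simulation of the stochastic compartmental model. A minimal sketch of one ingredient, a chain-binomial stochastic SIR step and a morbidity-cost rollout; the parameter values are assumptions loosely in the range of the classic boarding-school influenza outbreak (n = 763), not the paper's calibrated values, and no intervention is applied here.

```python
import numpy as np

rng = np.random.default_rng(2)

def sir_step(s, i, beta, gamma, n, rng):
    """One day of a stochastic (chain-binomial) SIR model."""
    p_inf = 1.0 - np.exp(-beta * i / n)   # per-susceptible infection probability
    p_rec = 1.0 - np.exp(-gamma)          # per-infective recovery probability
    new_inf = rng.binomial(s, p_inf)
    new_rec = rng.binomial(i, p_rec)
    return s - new_inf, i + new_inf - new_rec

def simulate_cost(beta, gamma, n=763, i0=1, days=60, c_case=1.0, rng=rng):
    """Morbidity cost of one uncontrolled outbreak realization."""
    s, i = n - i0, i0
    cases = i0
    for _ in range(days):
        s_new, i = sir_step(s, i, beta, gamma, n, rng)
        cases += s - s_new                # new infections this day
        s = s_new
    return c_case * cases

# Monte Carlo estimate of the expected cost with no intervention.
costs = [simulate_cost(beta=1.7, gamma=0.5) for _ in range(200)]
print(np.mean(costs))
```

In the full method, rollouts like this are run for each candidate intervention at each state, with parameters drawn from their Bayesian posteriors, and the policy map records the cheapest action per state.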
Demographic studies of many bird species are challenging because their nests are cryptic, resulting in few nests being found. To maximize statistical power, methods are needed that minimize disturbance while yielding as much information per nest as possible. One way to meet these objectives is to use miniature thermal data loggers to precisely date nest fates. Our objectives, therefore, were to (1) examine the possible effect of thermal data loggers on nest success through hatching by grass- and shrub-nesting songbirds that differed in their parasite egg-accepting and -rejecting behavior, (2) examine the effect of using daily temperature data versus less frequent nest-visit data on statistical power, bias, and precision when estimating the daily survival rate (DSR) for nests, and (3) compare these two approaches using a simulation study and field data. We monitored the survival of nests located in agricultural landscapes and used a binomial logistic regression with main effects for data loggers and parasite-accepting or -rejecting status and their interaction. We also compared maximum likelihood-derived DSR for differences in estimated rates, precision, and sample sizes with both data collected in the field and simulated with varying sample sizes and visit frequencies. We found no evidence that thermal data loggers had any effect on hatching rates, either for all species or for parasite egg-accepting and -rejecting species separately. Both our simulation and analysis of real nest data indicated that use of data loggers increased the statistical power from each nest studied by increasing effective sample sizes and precision of DSR estimates compared to in-person visits. We also found a negative bias in DSR estimates with longer visit intervals, which use of data loggers removed. Both the simulated- and field-data analyses suggest that future studies of nest survival can be improved by automated nest monitoring, which removes a source of bias and frees more time to find additional nests.
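The advantage of daily logger data is that each nest's exposure is known exactly, so a one-parameter daily survival rate has a closed-form maximum-likelihood estimate. A hedged sketch with hypothetical numbers (the nest counts, exposure days, and 24-day nesting cycle below are illustrative assumptions, not the study's data):

```python
def dsr_mayfield(failures, exposure_days):
    """Mayfield-style daily survival rate: the closed-form MLE is
    1 - (failures / nest-days of exposure)."""
    return 1.0 - failures / exposure_days

# Hypothetical monitoring data: 5 failures over 900 nest-days of exposure,
# with thermal loggers dating each failure to the day (exact exposure).
dsr = dsr_mayfield(failures=5, exposure_days=900.0)

# Overall nest success over an assumed 24-day laying-to-hatch cycle.
nest_success = dsr ** 24
print(round(dsr, 4), round(nest_success, 3))
```

With infrequent visits, exposure days must instead be approximated (e.g., failure assumed at the interval midpoint), which is the mechanism behind the interval-length bias the study reports.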