Empirical Bayesian analysis is a well-known approach that incorporates an estimator into a Bayesian analysis. In this article, we offer an alternative approach with several useful properties. Our solution is based on the framework introduced by Yekutieli (2012) to account for the variability introduced by selecting parameters. Specifically, we assume that the unknown parameter is contained within a ball centered at an estimator, where the radius is given a prior distribution. We refer to our method as the auxiliary parameter constrained Bayesian hierarchical model (C-BHM). This general framework is particularly appealing because traditional empirical Bayesian analysis and parametric Bayesian analysis can be written as special cases; hence, the C-BHM represents a unifying framework within Bayesian statistics. Several technical results are provided. Furthermore, we show analytically that the C-BHM can outperform both empirical Bayesian and fully Bayesian analyses in terms of the Bayes factor. We use the C-BHM to extend the Fay-Herriot model, which is often used in the survey sampling setting. To demonstrate the usefulness of our method, we provide simulations and an application to data obtained from the U.S. Census Bureau's Small Area Income and Poverty Estimates (SAIPE) program.
Consider the setting where there are B (≥ 1) candidate statistical models, and one is interested in model selection. Two common approaches to this problem are to select a single model or to combine the candidate models through model averaging. Instead, we select a subset of the combined parameter space associated with the models. Specifically, a model averaging perspective is used to enlarge the parameter space, and a model selection criterion is used to select a subset of this expanded parameter space. We account for the variability of the criterion by adapting Yekutieli (2012)'s method, which treats model selection as a truncation problem, to Bayesian model averaging (BMA). We truncate the joint support of the data and the parameter space to include only small values of the covariance penalized error (CPE) criterion, a general expression that contains several information criteria as special cases. Simulation results show that, as long as the truncated set does not have near-zero probability, we tend to obtain lower mean squared error than BMA. Additional theoretical results provide the foundation for these observations. We apply our approach to a dataset consisting of American Community Survey (ACS) period estimates to illustrate that this perspective can lead to improvements over a single model.