2014
DOI: 10.1080/00273171.2014.928492

Bayesian Model Averaging for Propensity Score Analysis

Abstract: This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A …
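As a rough illustration of the approximate approach described in the abstract, the sketch below runs Bayesian model averaging over candidate logistic propensity score models with the BMA package and then averages fitted propensity scores across models using their posterior model probabilities. The data, variable names, and coefficient values are simulated placeholders, and the averaging step is one plausible reading of the approximate approach, not the authors' own code.

## Minimal sketch, assuming simulated data: approximate BMA for a
## propensity score equation using the BMA package.
library(BMA)

set.seed(123)
n <- 500
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rbinom(n, 1, 0.5))
z <- rbinom(n, 1, plogis(-0.5 + 0.8 * X$x1 + 0.5 * X$x3))   # treatment indicator

## BMA over candidate logistic propensity score models
fit <- bic.glm(X, z, glm.family = binomial())
summary(fit)   # posterior model probabilities and P(coefficient != 0)

## One way to form model-averaged propensity scores: refit each retained
## model, take its fitted scores, and weight them by the posterior model
## probability reported by bic.glm.
d    <- data.frame(z = z, X)
ps_k <- sapply(seq_along(fit$postprob), function(k) {
  v <- names(X)[fit$which[k, ]]                       # variables in model k
  f <- if (length(v) > 0) reformulate(v, "z") else z ~ 1
  fitted(glm(f, data = d, family = binomial()))
})
ps_bma <- as.vector(ps_k %*% fit$postprob)             # model-averaged scores
head(ps_bma)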

Cited by 25 publications (21 citation statements)
References: 37 publications
“…We used the Bayesian model averaging package in R to select the clinical variables that produced the model with the minimal Bayesian information criterion [18]. We then used the rpart package in R to create decision trees for each selected clinical variable on in-hospital mortality. For the decision tree analysis, we upsampled in-hospital mortality to adjust for the class imbalance of mortality.…”
Section: Descriptive and Statistical Analyses
confidence: 99%
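The workflow quoted above (BMA-based variable selection by minimal BIC, followed by per-variable rpart trees with upsampling of deaths) could look roughly like the sketch below. The clinical variables, data, and effect sizes are simulated stand-ins for illustration, not the cited study's data.

## Illustrative sketch of the quoted workflow, assuming simulated data.
library(BMA)
library(rpart)

set.seed(42)
n   <- 400
dat <- data.frame(age        = rnorm(n, 65, 10),
                  sbp        = rnorm(n, 120, 15),
                  lactate    = rexp(n, 1),
                  creatinine = rnorm(n, 1, 0.3))
mortality <- rbinom(n, 1, plogis(-5 + 0.03 * dat$age + 0.6 * dat$lactate))

## 1. BMA over logistic models; keep the variables in the minimal-BIC model
fit      <- bic.glm(dat, mortality, glm.family = binomial())
best     <- which.min(fit$bic)                 # model with the smallest BIC
selected <- names(dat)[fit$which[best, ]]      # its included clinical variables

## 2. Upsample deaths so the classes are balanced before growing trees
idx    <- c(which(mortality == 0),
            sample(which(mortality == 1), sum(mortality == 0), replace = TRUE))
dat_up <- data.frame(mortality = factor(mortality), dat)[idx, ]

## 3. One classification tree per selected variable for in-hospital mortality
trees <- lapply(selected, function(v)
  rpart(reformulate(v, response = "mortality"), data = dat_up, method = "class"))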
“…Following the works of Kaplan and Chen and Zigler, we now consider a more complex structure for the treatment model. More specifically, we assume $Z_i \sim \mathrm{Bern}(e_i)$, where $\mathrm{logit}(e_i) = \alpha_0 + \alpha_1 X_{1i} + \alpha_2 X_{2i} + \alpha_3 X_{3i} + \alpha_4 |X_{4i}| + \alpha_5 \exp(X_{5i}) + \alpha_6 X_{6i} + \alpha_7 X_{6i}^2$, where $X_1 \sim N(1,1)$, $X_2 \sim \mathrm{Poisson}(2)$, $X_3 \sim \mathrm{Bernoulli}(0.5)$, $X_4 \sim N(0,1)$, $X_5 \sim N(1,1)$, $X_6 \sim N(0,1)$, and $|X_4|$ represents the absolute value of $X_4$.…”
Section: Simulation Studies
confidence: 99%
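A minimal simulation of the treatment-assignment model reconstructed above; the alpha coefficients are placeholder values, since the excerpt does not report the ones actually used.

## Sketch of the treatment model, assuming placeholder alpha_0..alpha_7.
set.seed(1)
n  <- 1000
X1 <- rnorm(n, 1, 1); X2 <- rpois(n, 2);    X3 <- rbinom(n, 1, 0.5)
X4 <- rnorm(n, 0, 1); X5 <- rnorm(n, 1, 1); X6 <- rnorm(n, 0, 1)

alpha <- c(0.1, 0.3, -0.2, 0.4, 0.25, -0.1, 0.3, -0.15)   # placeholder values
eta   <- alpha[1] + alpha[2] * X1 + alpha[3] * X2 + alpha[4] * X3 +
         alpha[5] * abs(X4) + alpha[6] * exp(X5) + alpha[7] * X6 + alpha[8] * X6^2
Z <- rbinom(n, 1, plogis(eta))    # Z_i ~ Bern(e_i) with logit(e_i) = eta_i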
“…Second, it was shown that the point estimator does not possess good small-sample properties. Kaplan and Chen also examined the differences in the causal estimate when incorporating noninformative versus informative priors in a model averaging stage. Zigler et al. developed a Bayesian strategy that augments the outcome model with additional individual covariates.…”
Section: Introduction
confidence: 99%
“…Instead of selecting only the best model and accepting that it properly defines the data-generating process, ensemble approaches analyze all the possible models that can be formed from the available set of variables and combine their results through a multiplicity of techniques, for example bootstrap aggregation (bagging), boosting, support vector machines, neural networks, genetic algorithms, and Bayesian model averaging [21,22]. Such combined models consistently outperform the single designated best model, producing more accurate predictions across a wide range of domains and techniques [23,24].…”
Section: Goodchild and Li
confidence: 99%
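As a toy illustration of the model-combination idea in the excerpt above, the sketch below enumerates every logistic model that can be formed from three covariates, converts BIC values into approximate posterior model weights, and averages the models' predicted probabilities. The data and variable names are invented for illustration.

## Toy illustration of combining all candidate models rather than picking one.
set.seed(7)
n <- 200
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
d$y <- rbinom(n, 1, plogis(0.5 * d$x1 - 0.7 * d$x2))

vars   <- c("x1", "x2", "x3")
models <- unlist(lapply(1:3, function(k) combn(vars, k, simplify = FALSE)),
                 recursive = FALSE)                 # all 7 non-empty subsets
fits <- lapply(models, function(v)
  glm(reformulate(v, "y"), data = d, family = binomial()))

bic <- sapply(fits, BIC)
w   <- exp(-0.5 * (bic - min(bic)))
w   <- w / sum(w)                                   # approximate P(model | data)

## Model-averaged predicted probabilities across the candidate models
p_avg <- Reduce(`+`, Map(function(f, wk) wk * fitted(f), fits, w))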