2019
DOI: 10.1214/18-ba1139
Post-Processing Posteriors Over Precision Matrices to Produce Sparse Graph Estimates

Cited by 24 publications (24 citation statements)
References 21 publications
“…The decision analysis approach to variable selection has historical roots 35 with modern development 36,37 . Related approaches, often termed posterior summarization , have been developed for multiple linear regression and logistic regression, 36 nonlinear regressions, 38 time‐varying parameter models, 39 functional regression, 40 graphical models, 41 and seemingly unrelated regressions 42 . However, existing methods face several important limitations.…”
Section: Methods (confidence: 99%)
“…To determine the penalty parameter, we follow Friedman, Hastie, and Tibshirani (2019) and use $\rho_{ij} = \varpi / |\hat{s}_{ij}|^{\kappa}$, where $|\hat{s}_{ij}|$ denotes the absolute size of the ( i , j )th element of $\hat{\mathbf{S}}^{-1}$ and $\varpi$ is a scalar penalty parameter, whereas $\kappa \geq 1$ controls the penalty on small precision parameters. Equation () nests the specification stipulated in Bashir et al (2019) if we set $\kappa = 1$, $\hat{s}_{ij}$ to an initial estimate of the ( i , j )th element of the precision matrix, and cross‐validate $\varpi$.…”
Section: Achieving Sparsity in VAR Models (confidence: 99%)
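The adaptive penalty in the excerpt above can be sketched in a few lines. This is a minimal illustration, not the cited authors' code: the names `S_hat_inv`, `varpi`, and `kappa`, and all numeric values, are assumptions chosen for the example.

```python
import numpy as np

# Element-wise penalty rho_ij = varpi / |s_hat_ij|**kappa.
# S_hat_inv is a hypothetical stand-in for an initial estimate
# of the precision matrix (positive definite by construction).
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
S_hat_inv = A @ A.T + 4.0 * np.eye(4)

varpi = 0.5   # scalar penalty parameter (assumed value)
kappa = 1.0   # kappa >= 1; larger kappa penalizes small elements more
rho = varpi / np.abs(S_hat_inv) ** kappa
```

Because `rho` is inversely proportional to the magnitude of each precision element, small elements receive large penalties and are pushed toward exact zero, which is the mechanism the excerpt describes.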
“…As a potential remedy, we propose postprocessing the estimates of the precision matrix $\Sigma^{-1}$ (i.e., the inverse of $\Sigma$). Friedman, Hastie, and Tibshirani (2008) and, more recently, Bashir, Carvalho, Hahn, and Jones (2019) propose methods to ex‐post sparsify precision matrices using the graphical lasso. We follow this literature and specify a loss function similar to Equation () that aims to strike a balance between model fit and parsimony.…”
Section: Achieving Sparsity in VAR Models (confidence: 99%)
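The post-processing step described above can be sketched with scikit-learn's graphical lasso. This is an illustrative sketch only: the synthetic data, the matrix `Sigma_hat` (a stand-in for a posterior mean covariance), and the value `alpha=0.2` are assumptions, not the cited specification.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

# Hypothetical stand-in for a posterior mean covariance matrix;
# in the cited approach this summary would come from posterior draws.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
X[:, 1] += X[:, 0]            # induce one genuine dependence
Sigma_hat = np.cov(X, rowvar=False)

# graphical_lasso trades off fit against an L1 penalty on the
# precision matrix, returning a sparse estimate of Sigma^{-1}.
cov_sparse, prec_sparse = graphical_lasso(Sigma_hat, alpha=0.2)
```

In the excerpt's setting, the scalar penalty (there, $\varpi$) would be tuned, e.g. by cross-validation, rather than fixed as here.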
“…The posterior projection method has been suggested for various settings, including [12], [17], [26] and [8], and its general properties were investigated by [27]. The idea of transforming posterior samples was also used for inference on covariance or precision matrices in [23] and [1].…”
Section: Consider the Multivariate Linear Regression Model (confidence: 99%)