2022
DOI: 10.48550/arxiv.2206.08731
Preprint
Robust Information Criterion for Model Selection in Sparse High-Dimensional Linear Regression Models

Abstract: Model selection in linear regression models is a major challenge when dealing with high-dimensional data where the number of available measurements (sample size) is much smaller than the dimension of the parameter space. Traditional methods for model selection such as Akaike information criterion, Bayesian information criterion (BIC) and minimum description length are heavily prone to overfitting in the high-dimensional setting. In this regard, extended BIC (EBIC), which is an extended version of the original …
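To make the overfitting issue concrete, the sketch below scores candidate supports of a Gaussian linear model with a Chen–Chen-style extended BIC, which adds a 2γ·log C(p, k) term to the ordinary BIC penalty. The formula, the toy data, and the exhaustive search are illustrative assumptions; this is not the EBIC-Robust variant proposed in the paper.

```python
import numpy as np
from itertools import combinations
from math import comb, log

def ebic(y, X, support, gamma=1.0):
    """Extended BIC for a Gaussian linear model (illustrative form):
    EBIC = n*log(RSS/n) + k*log(n) + 2*gamma*log(C(p, k)),
    where k = |support| and p is the total number of candidate predictors.
    """
    n, p = X.shape
    k = len(support)
    if k == 0:
        rss = float(y @ y)
    else:
        Xs = X[:, list(support)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        r = y - Xs @ beta
        rss = float(r @ r)
    return n * log(rss / n) + k * log(n) + 2.0 * gamma * log(comb(p, k))

# Toy data: n = 40 samples, p = 10 predictors, true support {0, 3}.
rng = np.random.default_rng(0)
n, p = 40, 10
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(n)

# Exhaustive search over supports of size 1 and 2 (feasible only for small p).
candidates = [s for k in range(1, 3) for s in combinations(range(p), k)]
best = min(candidates, key=lambda s: ebic(y, X, s))
print(best)
```

With γ = 0 the criterion reduces to the classical RSS-based BIC; γ > 0 strengthens the penalty by accounting for the number of models of each size, which is what keeps the criterion from overfitting when p is large relative to n.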

Cited by 3 publications (9 citation statements)
References 26 publications
“…Hence, the oracle provides the upper bound on the maximum achievable PCMS for any given setting. The tuning parameters chosen are α = 0.01 for GRRT (as mentioned in [31]) and ζ = 1 (EBIC R ) [7], [8].…”
Section: Simulation Results
confidence: 99%
“…EBIC R is derived under the Bayesian framework of model selection, which starts by deriving the maximum a-posteriori (MAP) criterion and ends with the final EBIC R after suitable modifications and reasonable assumptions. We follow similar steps as in [7], [8], but incorporate the multi-measurement and block structure into the derivation. Let us denote the prior pdf of the parameter vector θ I as p(θ I |H I ), the marginal of y as p(y|H I ) and the prior probability of the model with support I as Pr(H I ).…”
Section: Proposed Methods
confidence: 99%
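Using only the quantities named in the citation above (the prior pdf p(θ_I|H_I), the marginal p(y|H_I), and the model prior Pr(H_I)), the generic MAP starting point of such a derivation can be sketched as follows; the subsequent modifications that yield EBIC R are specific to [7], [8] and are not reproduced here:

```latex
\hat{I}_{\mathrm{MAP}}
  = \arg\max_{I} \Pr(H_I \mid \mathbf{y})
  = \arg\max_{I} \; p(\mathbf{y} \mid H_I)\,\Pr(H_I),
\qquad
p(\mathbf{y} \mid H_I)
  = \int p(\mathbf{y} \mid \boldsymbol{\theta}_I, H_I)\,
         p(\boldsymbol{\theta}_I \mid H_I)\,
    \mathrm{d}\boldsymbol{\theta}_I .
```

Approximating the marginal integral (e.g. by Laplace's method) and choosing the priors is where the various BIC-type criteria diverge from one another.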
“…Since criteria such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) tend to overfit when selecting models from high-dimensional data, EBIC (extended BIC), EBIC-Robust, and EFIC (formed by combining the extended Fisher information criterion with EBIC) have been proposed instead. The results showed that the EBIC-Robust criterion performed better in model selection than the EFIC and EBIC criteria [ 13 ].…”
Section: Introduction
confidence: 99%