2018
DOI: 10.1080/07350015.2017.1383263
Model Averaging for Prediction With Fragmentary Data

Abstract: Proof of Lemma 1. Define $C_n(w) = \|y_{S_1} - \hat{\mu}_{S_1}(w)\|^2 + 2\,\mathrm{tr}\{P(w)\Sigma_1\}$, where $\Sigma_1 = \mathrm{Var}(\varepsilon_{S_1})$, and $w^{*} = \arg\min_{w \in \mathcal{H}_K} C_n(w)$. Under Conditions (C1), (C2), and (C4), by directly applying the proof techniques of Wan et al. (2010) to extend Theorem 2.1* of Andrews (1991) to model
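The snippet above defines a Mallows-type averaging criterion and its minimizer over the weight set $\mathcal{H}_K$. As a rough, hypothetical sketch (not the authors' code), the Python fragment below minimizes a criterion of this form, assuming $\mathcal{H}_K$ is the usual simplex of non-negative weights summing to one and that errors are homoskedastic, so $\mathrm{tr}\{P(w)\Sigma_1\}$ reduces to $\sigma^2 \sum_k w_k\,\mathrm{tr}(P_k)$; all function and variable names are illustrative.

```python
# Minimal sketch (not the paper's implementation): minimize a Mallows-type
# model-averaging criterion C_n(w) over the weight simplex.
# Assumes homoskedastic errors, so tr{P(w) Sigma_1} becomes
# sigma2 * sum_k w_k * tr(P_k); names (fits, dfs, sigma2) are hypothetical.
import numpy as np
from scipy.optimize import minimize

def average_weights(y, fits, dfs, sigma2):
    """y: (n,) responses on the block S_1;
    fits: (n, K) fitted values from the K candidate models;
    dfs: (K,) effective degrees of freedom tr(P_k) of each candidate;
    sigma2: error-variance estimate."""
    K = fits.shape[1]

    def criterion(w):
        resid = y - fits @ w                              # y_{S_1} - mu_hat_{S_1}(w)
        return resid @ resid + 2.0 * sigma2 * (dfs @ w)   # + 2 tr{P(w) Sigma_1}

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)  # weights sum to 1
    bounds = [(0.0, 1.0)] * K                                  # non-negative weights
    w0 = np.full(K, 1.0 / K)                                   # start from equal weights
    res = minimize(criterion, w0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x
```

Since the criterion is quadratic in w, a quadratic-programming solver could replace the generic SLSQP routine; the sketch only illustrates the shape of the optimization.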

Cited by 30 publications (32 citation statements). References 28 publications.
“…In the existing literature, incomplete-case data have also been utilized to construct candidate models, but with covariates in ∆_0 ∪ ∆_m (Xiang et al., 2014; Fang et al., 2017). Those estimators differ significantly from our candidate estimators by using the covariates in ∆_0 repeatedly, whereas µ(x; D_m), 1 ≤ m ≤ M + 1, are estimated from M + 1 incomplete-case data blocks with distinct covariates.…”
Section: Candidate Estimators (mentioning)
confidence: 99%
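As a loose schematic of the contrast drawn in the quote above (not the cited papers' estimators), the hypothetical sketch below fits one least-squares candidate per incomplete-case data block, each using only the covariates observed in that block rather than reusing a common covariate set; the names are illustrative only.

```python
# Illustrative sketch (hypothetical, not the cited paper's code):
# one candidate estimator per data block D_m, each fit with that
# block's own covariate set.
import numpy as np

def fit_block_candidates(blocks):
    """blocks: list of (X_m, y_m) pairs, one per data block D_m, where X_m
    contains only the covariates observed in that block."""
    candidates = []
    for X_m, y_m in blocks:
        Xc = np.column_stack([np.ones(len(y_m)), X_m])   # add intercept
        beta, *_ = np.linalg.lstsq(Xc, y_m, rcond=None)  # OLS on this block only
        candidates.append(beta)
    return candidates
```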
“…More specifically, when p and M are fixed, Condition (C4) simply sets an upper bound on the rate of n_M in terms of ζ_n, which can be satisfied when n_M is a constant multiple of n_0. More detailed discussion of this condition can be found in Wan et al. (2010), Ando and Li (2014), and Fang et al. (2017).…”
Section: Asymptotic Optimality (mentioning)
confidence: 99%
“…To address this issue, many model selection or model averaging methods have been developed to improve prediction accuracy in the presence of missing data. For example, Ibrahim et al. (2008) developed a novel model selection criterion for the missing data problem based on the EM algorithm; Schomaker et al. (2010) presented two approaches to handling missing data in model averaging; Dardanoni et al. (2011) adopted a model-averaging approach to tackle the bias-precision trade-off in the presence of missing covariate values in linear regression models; Zhang (2013) proposed a Mallows model averaging approach to handle covariates missing completely at random; Fang et al. (2017) presented a model averaging approach in the context of fragmentary data. However, the aforementioned works were developed for the classical setting in which the number of predictors is fixed and smaller than the sample size.…”
Section: Introduction (mentioning)
confidence: 99%
“…Other frequentist model averaging strategies include adaptive regression through mixing (Yang, 2001), jackknife model averaging (Hansen and Racine, 2012), heteroscedasticity-robust model averaging (Liu and Okui, 2013), model averaging marginal regression (Chen et al., 2018; Li et al., 2015), and the plug-in method (Liu, 2015). Model averaging has also been extended to other contexts such as structural break models (Hansen, 2009), mixed-effects models (Zhang et al., 2014), factor-augmented regression models (Cheng and Hansen, 2015), quantile regression models (Lu and Su, 2015), generalized linear models (Zhang et al., 2016), and missing data models (Fang et al., 2019; Zhang, 2013).…”
Section: Introduction (mentioning)
confidence: 99%