2011
DOI: 10.1198/jasa.2011.tm10221
Best Predictive Small Area Estimation

Cited by 71 publications (88 citation statements: 1 supporting, 87 mentioning, 0 contrasting).
References 13 publications.
“…For example, in case that is completely unknown, it can be shown that essentially the same derivation of Jiang et al. [25] goes through, and the resulting measure of lack-of-fit, ( ), is the minimizer of ( − )′Γ′Γ( − ) − 2tr(Γ′Σ), assuming that Σ is known. It is also possible to extend the idea of predictive model selection to generalized linear mixed models (GLMMs; e.g., Jiang [9]).…”
Section: Predictive Model Selection
Mentioning; confidence: 96%
“…Here, again, E denotes expectation under the true model. Jiang et al. [25] showed that the MSPE has another expression, which is the key to our derivation:…”
Section: Predictive Model Selection
Mentioning; confidence: 99%
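The quoted passage cuts off before stating the alternative expression. In generic notation (not necessarily the exact form used in Jiang et al. [25]), identities of this type follow from expanding the squared prediction error of a predictor $\hat\theta$ for a mixed effect $\theta$:

```latex
\mathrm{MSPE}(\hat\theta)
  \;=\; \mathrm{E}\,|\hat\theta - \theta|^{2}
  \;=\; \mathrm{E}\{|\hat\theta|^{2}\}
        \;-\; 2\,\mathrm{E}\{\hat\theta^{\top}\theta\}
        \;+\; \mathrm{E}\{|\theta|^{2}\},
```

where E is expectation under the true model. The last term does not involve the candidate model, so it can be dropped when the MSPE is used to compare models.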
“…For example, Copas and Eguchi (2005) discuss a similar issue that they term incomplete-data bias, in which maximum likelihood estimators can be (sometimes severely) biased when incomplete data are present and an incorrect model is being fit, yet the model still appears to give a good fit to the available data. Jiang et al. (2011a) showed that if one derives the parameter estimators by evaluating the best predictor (BP) under the assumed model, say M, using the distribution also under M, the resulting predictor is not robust in the sense that it may perform poorly when M is not the true model. Here, the failure of the BP is due to a similar double-dipping strategy: (1) the measure of lack-of-fit (the sum of squared prediction errors) is for the BP under M; and (2) the distribution under which the measure of lack-of-fit is evaluated is also under M.…”
Section: Outline of Our Main Contributions
Mentioning; confidence: 99%
“…Previous research (Jiang et al. 2011 and Bondell et al. 2010) discussed two issues in the linear mixed model (LMM), which is the GLMM with identity link and a Gaussian assumption. Jiang et al. (2011) show how to obtain the best prediction in the LMM, and Bondell et al. (2010) use the Lasso to select random effects in the LMM for estimation purposes.…”
Section: Introduction
Mentioning; confidence: 99%
“…Jiang et al. (2011) show how to obtain the best prediction in the LMM, and Bondell et al. (2010) use the Lasso to select random effects in the LMM for estimation purposes. However, there are no existing methods for selecting both random effects and fixed effects for the purpose of best prediction via the Lasso (Tibshirani 2011) in the LMM.…”
Section: Introduction
Mentioning; confidence: 99%
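As a rough illustration of the Lasso-based selection idea in the quoted passage, the sketch below applies coordinate-descent Lasso to the fixed effects of a simple linear model. This is a minimal, self-contained toy (the data, `lam`, and `lasso_cd` are illustrative choices, not the methods of Jiang et al. or Bondell et al., which handle random effects and mixed-model structure):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent Lasso: minimize 0.5*||y - X b||^2 / n + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]     # partial residual excluding coordinate j
            rho = X[:, j] @ r / n
            # soft-thresholding update: shrinks small coefficients exactly to zero
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0])   # only two active fixed effects
y = X @ beta_true + rng.normal(scale=0.5, size=n)

b_hat = lasso_cd(X, y, lam=0.2)
print(np.round(b_hat, 2))                      # inactive effects are shrunk to (near) zero
```

The Lasso penalty selects a sparse subset of effects, which is the mechanism the cited papers adapt to mixed models; selecting effects for *prediction* rather than estimation, as the quote notes, requires a different objective than the squared-error fit used here.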