2016
DOI: 10.1007/s10463-016-0577-6

Moment convergence of regularized least-squares estimator for linear regression model

Abstract: In this paper we study uniform tail-probability estimates of a regularized least-squares estimator for the linear regression model, by making use of the polynomial type large deviation inequality for the associated statistical random fields, which may not be locally asymptotically quadratic. Our results provide a measure of the rate of consistency in variable selection in sparse estimation, which in particular enables us to verify various arguments requiring convergence of moments of estimator-dependent statistics…
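For orientation, here is a minimal sketch of the setting, with a bridge-type penalty as an illustrative assumption (the paper itself allows a general regularization term): observations follow the linear regression model $Y_i = X_i^\top \theta_0 + \varepsilon_i$, $i = 1, \dots, n$, and the regularized least-squares estimator is

$$\hat\theta_n \in \operatorname*{arg\,min}_{\theta \in \mathbb{R}^p} \left\{ \sum_{i=1}^n \big(Y_i - X_i^\top \theta\big)^2 + \lambda_n \sum_{j=1}^p |\theta_j|^q \right\}, \qquad q \in (0, 1],\ \lambda_n > 0.$$

The associated statistical random field is the localized, centered objective $u \mapsto M_n(u; \theta_0)$ obtained by substituting $\theta = \theta_0 + a_n u$ for a normalizing sequence $a_n$ and subtracting the value at $u = 0$. A polynomial type large deviation inequality then gives uniform tail bounds of the form $\sup_n P\big(\|a_n^{-1}(\hat\theta_n - \theta_0)\| \ge r\big) \le C_L r^{-L}$ for every $L > 0$, which is what yields uniform integrability, and hence moment convergence, of the normalized estimator.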

Cited by 7 publications (4 citation statements) · References 11 publications (14 reference statements)

“…We may also apply our moment-convergence results to validate the celebrated AIC methodology ([3]; see also [10], [26], and [27]) even under the sparse asymptotics. Recent studies in exactly this direction include [25] and [29], where the uniform integrability of the sparse maximum-likelihood estimator with bridge-like regularization played an important role in validating the asymptotic bias correction. Below we briefly discuss how the result of [29] can be extended to cover a broader range of statistical models with locally asymptotically normal structure.…”
Section: Prediction-related Issues
confidence: 99%
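To see why uniform integrability is the crux of the bias correction, here is a hedged sketch in generic notation (not taken from [29]): writing $\hat u_n := \sqrt{n}\,(\hat\theta_n - \theta_0)$ and letting $I(\theta_0)$ denote the Fisher information, the AIC-type correction rests on a limit of the form

$$E\big[\hat u_n^\top I(\theta_0)\,\hat u_n\big] \longrightarrow E\big[Z^\top I(\theta_0)\,Z\big] = \dim(\theta), \qquad Z \sim N\big(0, I(\theta_0)^{-1}\big).$$

Convergence in distribution $\hat u_n \Rightarrow Z$ alone does not justify this step; uniform integrability of $\|\hat u_n\|^2$, i.e. second-moment convergence, does.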
“…In that case, it may even happen that the random function $M_n(u; \theta_0)$ diverges in probability for each $u$; indeed, sparse-type estimation, a particular case of regularized estimation, falls into the regime of mixed-rates asymptotics. In this case, convergence of moments does not follow from a direct application of [33], and we are aware of only the following previous studies in this direction: [29] deduced the convergence of moments of a regularized sparse maximum-likelihood estimator in the generalized linear model, and applied it to verify AIC-type variable selection; [25] (also [20]) deduced the moment convergence in regularized estimation of a linear regression model with a general regularization term. Nevertheless, the proofs of these results made particular use of the special structure of the models considered and/or a convexity argument, and neither of them tells us much about general regularized M-estimation.…”
Section: Introduction
confidence: 99%
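A hedged illustration of the pointwise divergence (the lasso-type penalty and the rate condition are assumptions made here for illustration): for the linear model with penalty $\lambda_n \sum_j |\theta_j|$ and localization $\theta = \theta_0 + u/\sqrt{n}$, the centered field splits as

$$M_n(u; \theta_0) = \underbrace{\sum_{i=1}^n \Big[\big(Y_i - X_i^\top(\theta_0 + u/\sqrt{n})\big)^2 - \big(Y_i - X_i^\top \theta_0\big)^2\Big]}_{O_p(1)\ \text{for each fixed } u} + \lambda_n \sum_{j=1}^p \Big(\big|\theta_{0,j} + u_j/\sqrt{n}\big| - |\theta_{0,j}|\Big).$$

For a zero component $\theta_{0,j} = 0$ the penalty contributes $\lambda_n |u_j| / \sqrt{n}$, which diverges for $u_j \neq 0$ whenever $\lambda_n / \sqrt{n} \to \infty$; zero and nonzero components thus call for different normalizations, which is precisely the mixed-rates regime.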
“…These properties are useful for investigating the asymptotic behavior of statistics that depend on moments of $a_T^{-1}(\hat\theta_T - \theta^*)$; see e.g. Chan and Ing (2011), Shimizu (2017), Suzuki and Yoshida (2018) and Umezu et al. (2019).…”
Section: Introduction
confidence: 99%
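For concreteness, a sketch of the kind of statement meant here, in generic notation: if $\hat u_T := a_T^{-1}(\hat\theta_T - \theta^*)$ converges in distribution to some $\hat u$ and $\sup_T E\big[\|\hat u_T\|^{p+\varepsilon}\big] < \infty$ for some $\varepsilon > 0$, then uniform integrability gives

$$E\big[f(\hat u_T)\big] \longrightarrow E\big[f(\hat u)\big]$$

for every continuous $f$ with $|f(u)| \le C(1 + \|u\|^p)$. This is what licenses plugging $\hat\theta_T$ into bias, risk, and information-criterion formulas that involve moments of the normalized error.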
“…The importance of such precise tail-probability estimates is well recognized in asymptotic decision theory, prediction, the theory of information criteria for model selection, asymptotic expansion, etc. The QLA is rapidly expanding the range of its applications: for example, sampled ergodic diffusion processes (Yoshida [30]), contrast-based information criterion for diffusion processes (Uchida [23]), approximate self-weighted LAD estimation of discretely observed ergodic Ornstein-Uhlenbeck processes (Masuda [12]), jump diffusion processes (Ogihara and Yoshida [17]), adaptive estimation for diffusion processes (Uchida and Yoshida [24]), adaptive Bayes type estimators for ergodic diffusion processes (Uchida and Yoshida [27]), asymptotic properties of the QLA estimators for volatility in regular sampling of finite time horizon (Uchida and Yoshida [25]) and in non-synchronous sampling (Ogihara and Yoshida [18]), Gaussian quasi-likelihood random fields for ergodic Lévy driven SDE (Masuda [15]), hybrid multi-step estimators (Kamatani and Uchida [6]), parametric estimation of Lévy processes (Masuda [13]), ergodic point processes for limit order book (Clinet and Yoshida [1]), a non-ergodic point process regression model (Ogihara and Yoshida [19]), threshold estimation for stochastic processes with small noise (Shimizu [21]), AIC for non-concave penalized likelihood method (Umezu et al. [28]), Schwarz type model comparison for LAQ models (Eguchi and Masuda [2]), adaptive Bayes estimators and hybrid estimators for small diffusion processes based on sampled data (Nomura and Uchida [16]), moment convergence of regularized least-squares estimator for linear regression model (Shimizu [22]), moment convergence in regularized estimation under multiple and mixed-rates asymptotics (Masuda and Shimizu [14]), and asymptotic expansion in quasi-likelihood analysis for volatility (Yoshida [31]), among others.…”
Section: Introduction
confidence: 99%