2021
DOI: 10.1002/cjs.11661

Robust estimation and variable selection for function‐on‐scalar regression

Abstract: Function-on-scalar regression is commonly used to model the dynamic effects of a set of scalar predictors of interest on the functional response. In this article, we develop a robust variable selection procedure for function-on-scalar regression with a large number of scalar predictors based on exponential squared loss combined with the group smoothly clipped absolute deviation (SCAD) regularization method. The proposed procedure simultaneously selects relevant predictors and provides estimates for the functional c…
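For concreteness, the following is a minimal sketch of the kind of penalized objective the abstract describes, assuming the coefficient functions are expanded in a spline basis; the symbols γ (loss scale), λ (penalty level), B_k (basis functions) and K (basis dimension) are illustrative and need not match the paper's notation:

\[
\min_{b_1,\dots,b_p}\;
\sum_{i=1}^{n}\int_{\mathcal{T}}
\left[\,1-\exp\!\left(-\frac{\bigl\{Y_i(t)-\sum_{j=1}^{p}X_{ij}\,\beta_j(t)\bigr\}^{2}}{\gamma}\right)\right]dt
\;+\;\sum_{j=1}^{p} p_{\lambda}\!\left(\lVert b_j\rVert_{2}\right),
\qquad
\beta_j(t)=\sum_{k=1}^{K} b_{jk}\,B_k(t).
\]

Here ρ_γ(r) = 1 − exp(−r²/γ) is the exponential squared loss, which is bounded and therefore downweights outlying responses, and p_λ(·) is the SCAD penalty applied to each predictor's entire coefficient group b_j = (b_{j1}, …, b_{jK}), so an irrelevant predictor has its whole coefficient function β_j(·) estimated as zero.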


Cited by 5 publications (2 citation statements)
References 38 publications
“…In the absence of measurement error, Chen et al. (2016) and Wang et al. (2007) established faster convergence rates for spline methods than (5.24), but they ignored the spline approximation error by treating the original function-on-scalar model as a parametric model. By contrast, the L2 convergence rate established by Cai et al. (2022) for robust function-on-scalar linear regression is slower than (5.24). The next theorem presents the selection consistency and the pointwise limiting distribution. Define Σ_ε = Cov(ε_i(t)), where ε_i(t) is the error vector given in the model (2.2).…”
mentioning
confidence: 91%
“…Parodi and Reimherr (2018) developed a functional linear adaptive mixed estimation procedure to simultaneously achieve sparsity and smoothness of the coefficient functions. Cai et al. (2022) considered a robust variable selection approach that employs the exponential squared loss function.…”
Section: Introduction
mentioning
confidence: 99%