2016
DOI: 10.1007/s11222-016-9662-1
Boosting flexible functional regression models with a high number of functional historical effects

Cited by 27 publications (87 citation statements)
References 26 publications
“…We used the FDboost package in R (Brockhaus, Melcher, Leisch, & Greven; Brockhaus, Scheipl, Hothorn, & Greven) to fit a model of the following form:

$$y(t) = \alpha + \int_0^t \beta_1(s,t)\, x_1(s)\, ds + \int_0^t \beta_2(s,t)\, x_2(s)\, ds$$

where the functional response variable $y(t)$ is the rate of spring onion growth over time in each school. An important note in this model is that $s$ is always less than $t$, so only past temperature can inform the current growth rate.…”
Section: Methods
confidence: 99%
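The restriction that only past covariate values ($s \le t$) enter the response can be illustrated with a small numerical sketch: the integral is approximated by a Riemann sum on a common grid, with a lower-triangular mask enforcing the historical constraint. The grid, covariate, and coefficient surface below are made-up illustrations, not the FDboost implementation or the cited study's data.

```python
import numpy as np

# Discretized historical functional term
#   y(t) = alpha + integral_0^t beta(s, t) x(s) ds,
# approximated by a Riemann sum. All objects here are illustrative.

n_grid = 50
t_grid = np.linspace(0.0, 1.0, n_grid)     # shared grid for s and t
ds = t_grid[1] - t_grid[0]                 # grid spacing for the Riemann sum

alpha = 0.5
x = np.sin(2 * np.pi * t_grid)             # example functional covariate x(s)
beta = np.outer(t_grid, t_grid)            # example coefficient surface beta(s, t) = s * t

# Historical constraint: only s <= t contributes to y(t)
mask = t_grid[:, None] <= t_grid[None, :]  # mask[i, j] is True iff s_i <= t_j

# y(t_j) = alpha + sum over s_i <= t_j of beta(s_i, t_j) * x(s_i) * ds
y = alpha + (beta * mask * x[:, None]).sum(axis=0) * ds

# At t = 0 only s = 0 survives the mask, and beta(0, 0) = 0, so y(0) = alpha
```

In FDboost itself such terms are specified through a dedicated historical-effect base-learner rather than an explicit mask, but the masked sum above mirrors the mathematics of the quoted model.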
“…In the case considered here, it is more appropriate to introduce the historical functional linear model (HFLM), as it allows the domain over which the predictor influences the predictand to be restricted (Malfait and Ramsay; Harezlak et al; Gervini). The HFLM has seen recent developments regarding its implementation (Brockhaus et al; Brockhaus, Melcher, et al) and is defined as follows:

$$y_i(t) = \alpha(t) + \int_{l(t)}^{u(t)} \beta(t,s)\, x_i(s)\, ds + \varepsilon_i(t), \qquad t \in \Omega_1,$$

where $l(t)$ and $u(t)$ are, respectively, the lower and upper bounds between which the predictor $x_i(\cdot)$ can influence $y_i(\cdot)$…”
Section: Methods
confidence: 99%
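The structural novelty of the HFLM relative to a plain historical model is the pair of integration limits $l(t)$ and $u(t)$. A minimal sketch of how those limits translate into a mask over the $(s, t)$ grid, assuming a hypothetical fixed-lag window $l(t) = t - \delta$, $u(t) = t$ (the window and grid are illustrative choices, not taken from the cited papers):

```python
import numpy as np

# Build the integration-limit mask for an HFLM term with
# l(t) <= s <= u(t). Here a fixed maximal lag delta is assumed.

n_grid = 100
s_grid = np.linspace(0.0, 1.0, n_grid)
t_grid = s_grid.copy()
delta = 0.2                                 # assumed maximal lag (illustrative)

l = np.maximum(t_grid - delta, 0.0)         # lower bound l(t) = max(t - delta, 0)
u = t_grid                                  # upper bound u(t) = t

# mask[i, j] is True when s_grid[i] lies in [l(t_grid[j]), u(t_grid[j])]
mask = (s_grid[:, None] >= l[None, :]) & (s_grid[:, None] <= u[None, :])

# The purely historical model of the previous statement is the special
# case l(t) = 0, i.e. the full past is allowed to influence y(t):
mask_hist = (s_grid[:, None] >= 0.0) & (s_grid[:, None] <= t_grid[None, :])
```

With the lag window, early values of $s$ drop out of the integral for large $t$ (e.g. $s = 0$ no longer influences $y(1)$ when $\delta = 0.2$), whereas the historical mask keeps them.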
“…In order to fit functional models, the functional linear array model (FLAM) implemented in R (R Core Team) by Brockhaus et al () in the package FDboost (Brockhaus, Ruegamer, et al) was used, as it can fit both models (3) and (4) with different configurations of $l(t)$ and $u(t)$. The FLAM is fitted using a component-wise gradient boosting algorithm, a machine learning procedure that estimates the model parameters by minimizing an empirical loss function (Freund et al; Bühlmann and Hothorn; Brockhaus, Melcher, et al). In our case, the minimized loss function is the root mean squared error (Brockhaus et al):

$$\frac{1}{n} \sum_{i=1}^{n} \int_{\Omega_1} \left( y_i(t) - \hat{y}_i(t) \right)^2 dt,$$

where $\hat{y}_i(\cdot)$ is the stream temperature curve simulated by the functional model.…”
Section: Methods
confidence: 99%
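The quoted loss can be evaluated numerically: discretize $t$, integrate the pointwise squared error for each curve with the trapezoidal rule, and average over the $n$ curves. The curves below are synthetic placeholders, not the stream-temperature data of the cited study.

```python
import numpy as np

# Numerical evaluation of the integrated squared-error loss
#   (1/n) * sum_i integral_{Omega_1} (y_i(t) - yhat_i(t))^2 dt
# using the trapezoidal rule on a regular grid.

rng = np.random.default_rng(0)
n_curves, n_grid = 5, 200
t_grid = np.linspace(0.0, 1.0, n_grid)
dt = t_grid[1] - t_grid[0]

y = rng.normal(size=(n_curves, n_grid))    # observed curves y_i(t)
y_hat = y + 0.1                            # predictions off by a constant 0.1

sq_err = (y - y_hat) ** 2                  # pointwise squared error, shape (n, n_grid)

# Trapezoidal rule per curve: average adjacent points, sum intervals, scale by dt
per_curve = ((sq_err[:, :-1] + sq_err[:, 1:]) / 2).sum(axis=1) * dt
loss = per_curve.mean()                    # average over the n curves
```

With a constant prediction offset of 0.1 the integrand is constant at 0.01 over $[0, 1]$, so the trapezoidal rule recovers each integral exactly and the loss is 0.01.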
“…They fit their models either using existing generalized additive model software (Wood, 2016) in the R package refund (Huang et al 2016) or through component-wise gradient boosting (Bühlmann and Hothorn, 2007) in the R package FDboost (Brockhaus, 2016).…”
Section: Summary of Functional Linear Array Model Framework
confidence: 99%
“…The formulation and building up of this general framework has been a topic of this research group’s work over the past few years: introducing much of this structure for Gaussian functions, largely represented by splines and fit using generalized additive model (GAM) software, in Scheipl, Staicu and Greven (2015); incorporating functional principal components (fPC) to flexibly model sparse, irregularly sampled outcomes in Cederbaum et al (2015); extending to generalized outcomes in Scheipl, Gertheiss, and Greven (2016); and introducing a new boosting-based fitting procedure that allows extension to robust functional regression and confers other benefits in Brockhaus et al (2015). Other specific work has been done developing details for additive scalar-on-function models (McLean et al 2014) and function-on-function regression (Ivanescu et al 2015; Scheipl and Greven 2016; Brockhaus et al 2016), and undoubtedly this productive group will continue to develop this framework further in the coming years.…”
Section: Introduction
confidence: 99%