2021
DOI: 10.1002/sta4.393

Estimation of the mean function of functional data via deep neural networks

Abstract: In this work, we propose a deep neural networks-based method to perform nonparametric regression for functional data. The proposed estimators are based on sparsely connected deep neural networks with rectified linear unit (ReLU) activation function. We provide the convergence rate of the proposed deep neural networks estimator in terms of the empirical norm. Through Monte Carlo simulation studies, we examine the finite sample performance of the proposed method. Finally, the proposed method is applied to analys…
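To make the setup in the abstract concrete, here is a hedged sketch of the model and the least-squares DNN estimator in LaTeX; the notation (n subjects, J_i observations per curve, network class \mathcal{F}(L,\mathbf{p},s)) is assumed for illustration rather than taken from the paper:

\[
Y_{ij} = f_0(X_{ij}) + \varepsilon_{ij}, \qquad i = 1,\dots,n,\; j = 1,\dots,J_i,
\]
\[
\hat{f}_n = \arg\min_{f \in \mathcal{F}(L,\mathbf{p},s)} \; \frac{1}{n}\sum_{i=1}^{n}\frac{1}{J_i}\sum_{j=1}^{J_i}\bigl(Y_{ij} - f(X_{ij})\bigr)^2,
\]

where \mathcal{F}(L,\mathbf{p},s) denotes the class of sparsely connected ReLU networks with depth L, width vector \mathbf{p}, and at most s nonzero weights.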

Cited by 18 publications (14 citation statements). References 30 publications.
“…It is interesting to observe that our proposed RDNN estimator enjoys the same asymptotic rate as the least squares DNN estimator [24] does. Specifically, the convergence rate for the M-type DNN estimator in the functional regression model depends on the smoothness β* and the intrinsic dimension t of the true mean function f_0, and on the decay rate of the maximal value of the variance function E{(ψ(ε_1j))^2}.…”
Section: Unified Rate of Convergence
confidence: 80%
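For readers unfamiliar with such results, a hedged sketch of the typical form of the empirical-norm bound, assuming a Schmidt-Hieber-type rate; the precise logarithmic factors and constants in [24] are assumptions here:

\[
\|\hat{f}_n - f_0\|_n^2 = O_P\!\left(n^{-\frac{2\beta^*}{2\beta^* + t}}\,\log^2 n\right),
\]

so a smoother true mean function (larger β*) and a lower intrinsic dimension t yield a faster rate.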
“…Unlike classical SGD procedures, Adam is a method for efficient stochastic optimization that requires only first-order gradients and has low memory requirements. Hence, it is well suited to problems with large sample sizes and many parameters [11], and it is widely used in network training for FDA, e.g., [24]. In our numerical studies, Adam provides the best results and is the most computationally efficient of the gradient-based algorithms considered.…”
Section: Training Neural Network
confidence: 96%
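As an illustration of that training setup, below is a minimal sketch (not the authors' code) of fitting a small ReLU network with Adam to estimate the mean function from simulated functional data; the architecture, learning rate, and simulated design are illustrative assumptions.

```python
# Minimal sketch: least-squares ReLU network trained with Adam to estimate
# the mean function of functional data. All sizes and data are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Simulated functional data: n curves observed at J common time points,
# Y_ij = f0(t_j) + noise, with f0 the true mean function.
n, J = 100, 50
t = torch.linspace(0.0, 1.0, J)
f0 = torch.sin(2 * torch.pi * t)
Y = f0 + 0.2 * torch.randn(n, J)

# Pool all (t_j, Y_ij) pairs for least-squares regression on the mean function.
X = t.repeat(n).unsqueeze(1)   # shape (n*J, 1)
y = Y.reshape(-1, 1)           # shape (n*J, 1)

# Small fully connected ReLU network (sparsity is not enforced here).
net = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Adam: first-order, adaptive, modest memory overhead.
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    optimizer.step()

# Estimated mean function evaluated on the observation grid.
f_hat = net(t.unsqueeze(1)).squeeze(1).detach()
print(f"empirical L2 error: {torch.mean((f_hat - f0) ** 2).item():.4f}")
```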
“…Farrell et al. (2021) study the rates of convergence for deep feedforward neural nets in semiparametric inference. Successful applications include, but are not limited to, computer vision (He et al., 2016), natural language processing (Bahdanau et al., 2014), drug discovery and toxicology (Jiménez-Luna et al., 2020), dynamical systems (Li et al., 2021), and functional data analysis (Wang et al., 2021).…”
Section: Related Work
confidence: 99%