2020
DOI: 10.1214/19-aos1910

Discussion of: “Nonparametric regression using deep neural networks with ReLU activation function”

Cited by 8 publications (5 citation statements). References 2 publications.
“…Further, it is shown that estimators which are not based on a composition structure do not possess the same adaptation property. For more on the results and limitations of Schmidt‐Hieber (2019), see the published discussions (Ghorbani, Mei, Misiakiewicz, and Montanari (2019), Shamir (2019), Kutyniok (2019)). Other work in this direction is Bach (2017) and Bauer and Kohler (2019).…”
Section: Deep Neural Network (mentioning)
confidence: 99%
“…For high-dimensional data with a large d, it is not clear when such an error bound is useful in a non-asymptotic sense. Similar concerns about this type of error bound as established in Schmidt-Hieber (2020) are raised in the discussion by Ghorbani et al. (2020), who looked at the example of additive models and pointed out that in the upper bound of the form $R(\hat{f}_n, f_0) \le C(d)\, n^{-\epsilon^*} \log^2 n$ obtained in Schmidt-Hieber (2020), the d-dependence of the prefactor C(d) is not characterized. The bound also assumes that n is large enough, that is, $n \ge n_0(d)$ for an unspecified $n_0(d)$.…”
Section: Approximation Error (mentioning)
confidence: 65%
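As a rough illustration of this non-asymptotic concern, the following minimal Python sketch evaluates a bound of the form C(d) · n^(-ε*) · log²(n) for a few hypothetical prefactor values; the exponent and the prefactors are assumptions chosen for illustration, not quantities taken from Schmidt-Hieber (2020) or the discussions.

# Illustrative only: the same rate n^(-eps_star) * log(n)^2 gives very
# different bounds depending on the unknown prefactor C(d).
# eps_star and the candidate prefactors below are assumed values.
import math

eps_star = 0.4  # assumed rate exponent

for n in (10**3, 10**5, 10**7):
    rate = n ** (-eps_star) * math.log(n) ** 2
    for C_d in (1.0, 1e3, 1e6):  # hypothetical prefactor values
        print(f"n={n:>9}, C(d)={C_d:>9}: bound = {C_d * rate:.3g}")

Without knowing how C(d) (and the sample-size threshold n_0(d)) grows with d, none of these bounds can actually be evaluated at a given finite n.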
“…For such an $f_0$, the optimal convergence rate of the prediction error is $C_d\, n^{-2\beta/(2\beta+d)}$ under mild conditions (Stone, 1982), where $C_d$ is a prefactor independent of n but depending on d and other model parameters. In low-dimensional models with a small d, the impact of $C_d$ on the convergence rate is not significant; however, in high-dimensional models with a large d, the impact of $C_d$ can be substantial; see, for example, Ghorbani et al. (2020). Therefore, it is crucial to elucidate how this prefactor depends on the dimensionality so that the error bounds remain meaningful in high-dimensional settings.…”
Section: Introduction (mentioning)
confidence: 99%
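To make the role of the exponent concrete, here is a minimal Python sketch of how the rate n^(-2β/(2β+d)) deteriorates as d grows, ignoring the prefactor C_d entirely; the smoothness β and the target error are assumed values used purely for illustration.

# Illustrative only: sample size needed to push n^(-2*beta/(2*beta+d))
# below a target error, ignoring the prefactor C_d.
beta, target = 2.0, 0.1  # assumed smoothness and target error

for d in (2, 10, 50, 100):
    exponent = 2 * beta / (2 * beta + d)
    n_needed = target ** (-1 / exponent)  # solve n^(-exponent) = target
    print(f"d={d:>3}: rate exponent = {exponent:.3f}, n needed ≈ {n_needed:.3g}")

Even before accounting for C_d, the required sample size grows rapidly with d, which is why the dependence of the prefactor on d matters so much in high dimensions.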
“…Also, the existing error bound results for nonparametric regression estimators using deep neural networks contain prefactors that depend exponentially on the ambient dimension d of the predictor (Schmidt-Hieber, 2020; Farrell, Liang and Misra, 2021; Padilla, Tansey and Chen, 2020). This adversely affects the quality of the error bounds, especially in high-dimensional settings where d is large (Ghorbani et al., 2020). In particular, error bounds of this type lead to a sample complexity that depends exponentially on d. Such a sample size requirement is difficult to meet even for a moderately large d.…”
Section: Introduction (mentioning)
confidence: 99%
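As a minimal sketch of why an exponentially growing prefactor translates into a sample complexity exponential in d, the following Python snippet assumes a hypothetical prefactor of the form c^d and a rate n^(-ε), and solves for the n needed to bring the bound below a target error; all numbers are illustrative assumptions, not values from the cited papers.

# Illustrative only: if the bound is c^d * n^(-eps), the n required to reach
# a fixed target error grows exponentially in d.
c, eps, target = 2.0, 0.5, 0.1  # assumed prefactor base, rate exponent, target error

for d in (5, 10, 20, 40):
    n_needed = (c ** d / target) ** (1 / eps)  # solve c^d * n^(-eps) = target
    print(f"d={d:>3}: n needed ≈ {n_needed:.3g}")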