1994
DOI: 10.1023/a:1022650905902

Approximation and Estimation Bounds for Artificial Neural Networks

Abstract: For a common class of artificial neural networks, the mean integrated squared error between the estimated network and a target function f is shown to be bounded by O(Cf^2/n) + O((nd/N) log N), where n is the number of nodes, d is the input dimension of the function, N is the number of training observations, and Cf is the first absolute moment of the Fourier magnitude distribution of f. The two contributions to this total risk are the approximation error and the estimation error. Approximation error refers to the distance between…
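The display below restates that bound in LaTeX, with the two risk components labelled. Constants are suppressed, and the estimator symbol f_{n,N} and input distribution mu are placeholder notation, since the truncated abstract does not fix these symbols.

% Risk decomposition stated in the abstract (constants suppressed).
% f_{n,N}: network with n nodes fit to N observations; \mu: input distribution
% (placeholder notation -- not fixed by the truncated abstract).
\[
  \underbrace{\mathbb{E} \int \bigl( f_{n,N}(x) - f(x) \bigr)^{2} \, \mu(dx)}_{\text{total risk (MISE)}}
  \;\le\;
  \underbrace{O\!\left( \frac{C_f^{2}}{n} \right)}_{\text{approximation error}}
  \;+\;
  \underbrace{O\!\left( \frac{n d}{N} \log N \right)}_{\text{estimation error}} .
\]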

Cited by 9 publications (9 citation statements, published 2017–2024); references 17 publications.

“…This last scaling is reasonable because when the input layer is wide enough, expansion in the hidden layer is unnecessary. In all regions, L*_h shows a square-root dependence on N, as suggested from previous studies [6,8]. To further illustrate the dependence of L*_h on L_x and N, in Fig.…”
Section: Optimal Hidden Layer Size (supporting)
confidence: 73%
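The square-root dependence of the optimal hidden layer size on N noted in this excerpt lines up with the bound above; a minimal sketch, assuming the hidden layer size plays the role of n and treating C_f and d as fixed:

% Balancing the two terms of the bound in n (C_f and d held fixed):
\[
  \frac{C_f^{2}}{n} \asymp \frac{n d}{N} \log N
  \quad\Longrightarrow\quad
  n^{*} \asymp C_f \sqrt{\frac{N}{d \log N}} ,
  \qquad
  \text{total risk at } n^{*} \asymp O\!\left( C_f \sqrt{\frac{d \log N}{N}} \right).
\]
% Up to the logarithmic factor, n^{*} grows like \sqrt{N}.
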
“…Similarly, model selection in neural networks was previously studied mostly in the large sample size limit [7,66]. The upper bound on the network size was also studied from VC (Vapnik-Chervonenkis) theory [5], and the minimum description length principle [6].…”
Section: Discussion (mentioning)
confidence: 99%
“…This, combined with the universality of the artificial neural network proved in [31], suggests that extreme outliers in errors can be overcome by generating more data that increases the information entropy of the data set and re-training the neural network. The relationship between errors and size of data set for various configurations of artificial neural networks is further explored in [47][48][49][50].…”
Section: E. Effect of the Amount of Training Data on Accuracy (mentioning)
confidence: 99%