2001
DOI: 10.1007/3-540-45718-6_104

Implementation of Kolmogorov Learning Algorithm for Feedforward Neural Networks

Abstract: We present a learning algorithm for feedforward neural networks based on Kolmogorov's theorem on the composition of n-dimensional continuous functions from one-dimensional continuous functions. A thorough analysis of the algorithm's time complexity is presented, together with serial and parallel implementation examples.
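For background (standard material the paper builds on, not text quoted from it), Kolmogorov's superposition theorem states that every continuous function f on [0,1]^n can be written as

    f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)

with continuous one-dimensional outer functions \Phi_q and inner functions \phi_{q,p}; Kolmogorov-type networks typically fix the inner functions and learn the outer ones, which is the setting the citation statements below refer to.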

Cited by 4 publications (4 citation statements)
References 7 publications (3 reference statements)

“…Note that the activation functions g_k in the output layer depend on f and have to be determined by learning procedures. Some practically useful learning algorithms for such networks were discussed in [19]. By using cubic spline technique of approximation, both for activation and internal transfer functions in Kolmogorov type networks, more efficient approximation of multivariate functions was achieved (see [6]).…”
Section: Introduction
confidence: 99%
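As a hedged illustration of the spline idea mentioned in the statement above (a sketch under assumed inputs, not code from [6] or from the paper), a learned one-dimensional output-layer function g_k can be stored and evaluated as a cubic spline:

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Hypothetical setup: t are sample points of g_k's one-dimensional argument,
    # g_samples stand in for values of g_k obtained by some learning procedure.
    t = np.linspace(0.0, 1.0, 20)
    g_samples = np.sin(2.0 * np.pi * t)      # placeholder values only
    g_k = CubicSpline(t, g_samples)          # smooth, cheaply evaluable g_k
    y = g_k(0.37)                            # evaluate anywhere in [0, 1]

The design point is simply that a spline gives a continuous, differentiable one-dimensional function from finitely many learned samples.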
“…[5]). Functions θ and previous iteration error function e_{r−1} are then used to compile the outer functions φ_q^r for q = 0, …, m…”
Section: Algorithm Proposal
confidence: 99%
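A minimal sketch of the kind of iteration this statement describes, under assumptions: theta(q, x) is a fixed inner mapping returning a scalar key for branch q, X holds the training inputs, and residual holds e_{r−1} on those inputs. None of these names come from the paper, and the binned averaging below merely stands in for the paper's actual construction of φ_q^r.

    import numpy as np

    def refine(X, residual, theta, m, n_bins=32):
        """One iteration r: build outer functions phi_q^r from the previous
        residual e_{r-1} and subtract their contribution (illustration only)."""
        update = np.zeros_like(residual)
        for q in range(m + 1):
            keys = np.array([theta(q, x) for x in X])   # inner-function values
            edges = np.linspace(keys.min(), keys.max(), n_bins)
            bins = np.digitize(keys, edges)             # 1-D argument of phi_q^r
            phi_q = np.zeros(n_bins + 1)
            for b in np.unique(bins):
                # phi_q^r takes a share of the mean residual over the samples
                # whose inner-function value falls into bin b.
                phi_q[b] = residual[bins == b].mean() / (m + 1)
            update += phi_q[bins]
        return residual - update                        # new residual e_r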
“…The following Theorem 3 summarizes our estimation of the total amount of computational time for one iteration (cf. [5]). …”
Section: Serial Implementation
confidence: 99%