1992
DOI: 10.1016/s0893-6080(05)80138-8

Fast generating algorithm for a general three-layer perceptron

Abstract: A fast iterative algorithm is proposed for the construction and the learning of a neural net achieving a classification task, with an input layer, one intermediate layer, and an output layer. The network is able to learn an arbitrary training set. The algorithm does not depend on a special learning scheme (e.g., the couplings can be determined by modified Hebbian prescriptions or by more complex learning procedures). During the process the intermediate units are constructed systematically by collecting the pat…
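The abstract describes building the hidden layer unit by unit, with each new unit "collecting" a subset of the training patterns. Below is a minimal sketch of that constructive idea, assuming plain perceptron learning for each candidate unit; the collection rule, the safeguard step, and all names here are illustrative, not the paper's exact prescription.

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Plain perceptron learning on bipolar (+1/-1) targets.
    The bias is folded in as an extra, always-on input component."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, t in zip(Xb, y):
            if np.sign(x @ w) != t:
                w += t * x          # classic perceptron update
                errors += 1
        if errors == 0:             # training set separated: done
            break
    return w

def grow_hidden_layer(X, y):
    """Constructive loop: keep adding hidden units, each 'collecting'
    the still-unclassified patterns it happens to get right, until
    every training pattern is collected by some unit."""
    remaining = np.arange(len(X))
    hidden = []
    while remaining.size:
        w = train_perceptron(X[remaining], y[remaining])
        Xb = np.hstack([X[remaining], np.ones((remaining.size, 1))])
        correct = np.sign(Xb @ w) == y[remaining]
        if not correct.any():
            # Safeguard so each pass makes progress: build a unit that
            # at least classifies the first remaining pattern correctly.
            w = y[remaining[0]] * Xb[0]
            correct = np.sign(Xb @ w) == y[remaining]
        hidden.append(w)
        remaining = remaining[~correct]
    return np.array(hidden)

# Toy usage on XOR with bipolar targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])
H = grow_hidden_layer(X, y)
print(len(H), "hidden units constructed")
```

As the abstract notes, the per-unit couplings could equally be set by a modified Hebbian prescription; the growth loop itself is independent of the learning rule used for each unit.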

Cited by 21 publications (11 citation statements; citing years 1994–2009) · References 7 publications
“…Sequential Window Learning method [25] uses window transfer functions, for which the weights are obtained from the solution of a system of algebraic equations. The Target Switch algorithm [33] uses traditional perceptron learning. Fig.…”
Section: Boolean Functions
confidence: 99%
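The "window" transfer function mentioned in this citing statement responds only when the projection of the input falls inside an interval, which is what allows its parameters to be read off from algebraic conditions on the training patterns. Here is a minimal sketch, assuming a simple rectangular window unit; the interval bounds and the way they are derived below are illustrative, not the method's exact equations.

```python
import numpy as np

def window_unit(w, a, b):
    """Rectangular 'window' transfer function: the unit fires (+1) only
    when the projection w.x of the input lies inside [a, b]."""
    def activate(X):
        proj = X @ w
        return np.where((proj >= a) & (proj <= b), 1, -1)
    return activate

# Illustrative use: separate patterns whose projection on w falls in a band.
X = np.array([[0.0, 0.1], [1.0, 1.1], [2.0, 2.2], [3.0, 2.9]])
w = np.array([1.0, 1.0])        # projection direction (assumed given)
proj = X @ w                    # [0.1, 2.1, 4.2, 5.9]
# A window [a, b] picking out the two middle patterns can be read off
# directly from the sorted projections -- the 'algebraic' flavour of the
# weight determination, in miniature.
a = (proj[0] + proj[1]) / 2     # midpoint below the band
b = (proj[2] + proj[3]) / 2     # midpoint above the band
unit = window_unit(w, a, b)
print(unit(X))                  # -> [-1  1  1 -1]
```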
“…One type is algorithm-oriented. Many polynomial-time network construction/training algorithms [3], [14], [16], [17], [20] have been developed, but they solve only approximate versions of the NCP, because they do not take the exact minimum size into account and cannot necessarily produce it.…”
Section: Introduction
confidence: 99%
“…To limit the effective size of a network in order to avoid overfitting, one can use either additive or subtractive methods, or regularization. Additive (often called constructive) methods start out from a small network and then insert additional units (also known as nodes or neurons) and connections (also known as weights or links) until the network can represent the required function (Ash, 1989; Fahlman and Lebiere, 1990; Frean, 1990; Gallant, 1986; Hanson, 1989; Mézard and Nadal, 1989; Simon, 1993; Wang et al., 1994; Wynne-Jones, 1991; Zollner et al., 1992). Subtractive (often called pruning) methods start out from a large network and remove superfluous parts until the network can just still represent the required function (Le Cun et al., 1989; Finnoff et al., 1993; Hassibi and Stork, 1993; Levin et al., 1994).…”
Section: Introduction
confidence: 99%
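As a counterpart to the constructive sketch above, the subtractive family this citing statement describes can be illustrated with simple magnitude-based pruning, the most basic member of that family. The threshold rule and retraining policy below are illustrative assumptions, not any cited paper's criterion; methods like OBD and OBS use second-order (curvature) information instead of raw magnitudes.

```python
import numpy as np

def prune_by_magnitude(weights, keep_fraction=0.5):
    """Subtractive step: zero out the smallest-magnitude weights across
    all layers. Real pruning criteria (e.g. OBD's saliencies) use
    curvature information; plain magnitude is the simplest stand-in."""
    flat = np.abs(np.concatenate([w.ravel() for w in weights]))
    threshold = np.quantile(flat, 1.0 - keep_fraction)
    masks = [np.abs(w) >= threshold for w in weights]
    return [w * m for w, m in zip(weights, masks)], masks

# Illustrative use: prune a small two-layer weight set; one would then
# normally retrain the surviving weights with the masks held fixed.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 1))
(pruned_W1, pruned_W2), masks = prune_by_magnitude([W1, W2], keep_fraction=0.5)
print(sum(m.sum() for m in masks), "of", W1.size + W2.size, "weights kept")
```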