1992
DOI: 10.6028/nist.ir.4893

Topological separation versus weight sharing in neural net optimization

Abstract: Recent advances in the application of neural networks to real-life problems have drawn attention to network optimization. Most known optimization methods rely heavily on a weight-sharing concept for pattern separation and recognition. The shortcoming of the weight-sharing method is attributed to a large number of extraneous weights that play a minimal role in pattern separation and recognition. Our experiments have shown that up to 97% of the connections in the network can be eliminated with litt…
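The abstract's claim that up to 97% of connections can be eliminated with little loss corresponds to keeping only the most significant weights. A minimal illustrative sketch of magnitude-based pruning is shown below; the function name, keep fraction, and array shapes are assumptions for illustration, not the report's actual pruning criterion:

```python
import numpy as np

def prune_by_magnitude(weights, keep_fraction=0.03):
    """Zero out all but the largest-magnitude weights.

    Keeping ~3% of connections mirrors the abstract's figure that
    up to 97% of connections can be eliminated. This is a sketch of
    the general pruning idea, not the report's method.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_fraction * flat.size))
    # Threshold = magnitude of the k-th largest weight.
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(10, 10))
pruned, mask = prune_by_magnitude(w, keep_fraction=0.03)
```

The surviving network uses fewer connections, which is the source of the speed advantage noted in the citation statements below the abstract.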

Cited by 1 publication (2 citation statements)
References 8 publications
“…A standard method of error minimization for real-world problems is backpropagation [27], although more powerful methods of optimization have also been used [28,29]. In addition to the problem of error reduction, effective generalization also requires that the information content of the network be reduced to some minimum value [30,31,32]. The resulting reduced network has the advantage of increased speed achieved by using fewer connections and is more effective in terms of the use of information capacity to achieve a specified pattern recognition accuracy.…”
Section: Generalization
confidence: 99%
“…This results in a smaller network with a very high information content that allows the use of a reasonably small training set. We have used the Boltzmann method as a secondary method of optimization to prune the networks used here [30,31]. The method can be used in conjunction with a primary method of optimization such as a scaled conjugate gradient scheme [29].…”
Section: Generalization
confidence: 99%
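The "Boltzmann method" of pruning mentioned in the citation statement above can be read as a simulated-annealing-style acceptance rule: tentatively delete a connection and keep the deletion with probability exp(-Δloss/T) when the loss rises. The sketch below illustrates that general idea only; the function, the `loss_fn` callable, and the toy loss are hypothetical and not the cited reports' exact algorithm:

```python
import math
import random

def boltzmann_prune_step(weights, loss_fn, temperature):
    """One Boltzmann-style pruning trial.

    Tentatively zero a random connection; if the loss increases by
    delta, accept the deletion with probability exp(-delta/T),
    otherwise restore the weight. Deletions that do not increase
    the loss are always accepted. Illustrative sketch only.
    """
    i = random.randrange(len(weights))
    if weights[i] == 0.0:
        return weights  # connection already pruned
    before = loss_fn(weights)
    saved = weights[i]
    weights[i] = 0.0
    delta = loss_fn(weights) - before
    if delta > 0 and random.random() >= math.exp(-delta / temperature):
        weights[i] = saved  # reject the deletion: restore the weight
    return weights

# Toy usage: the loss prefers weights close to a sparse target,
# so the small weights are the ones annealing tends to remove.
target = [0.0, 0.0, 1.0, 0.0]
loss = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
w = [0.1, -0.2, 1.0, 0.05]
random.seed(42)
for t in [1.0, 0.1, 0.01]:  # cooling schedule
    for _ in range(50):
        boltzmann_prune_step(w, loss, t)
```

Used as a secondary pass alongside a primary optimizer (e.g. a scaled conjugate gradient scheme, as the statement notes), such a rule removes connections while the primary method keeps the remaining weights trained.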