A locally iterative learning (LIL) rule is adapted to a model of associative memory based on evolving recurrent-type neural networks composed of growing neurons. The model involves two widely separated time scales: the learning time of an individual and the generation time of evolution. This separation permits a definite investigation of the interaction between learning and evolution. The evolutionary scheme also reinforces the robustness of the memory against noise.
Abstract. In this paper, we investigate associative memory in recurrent neural networks, based on the model of evolving neural networks proposed by Nolfi, Miglino and Parisi. The experimentally developed network has highly asymmetric synaptic weights and dilute connections, quite different from those of the Hopfield model. Some results on the effect of learning efficiency on the evolution are also presented.

Introduction

As a model of associative memory in terms of recurrent-type artificial neural networks, the Hopfield model [1] and its variants have been developed successfully and have produced important results through statistical-mechanical techniques based on the analogy with spin glasses [2,3]. Their assumption of symmetric, fully connected synaptic weights is, however, unrealistic, since such connectivity does not arise naturally in biological neural systems. Some studies allow asymmetric synaptic weights or dilute the network connections [2,3], but they do not concern the network structure itself; most of them merely investigate the effects on storage capacity under limited conditions.

In none of the above-mentioned models are the physical positions of neurons, or the distances between them, taken into account, although these must play an important role in the formation of biological networks in the brain. There should be an attempt to construct more natural models based on biologically and genetically plausible considerations, in which asymmetric and dilute connections emerge simply as a result. The genetic algorithm [4] can be a key ingredient in this direction. Recently, evolutionary search procedures have been combined with artificial neural networks (so-called "evolutionary artificial neural networks") [5]. Almost all such models deal with feedforward multi-layered networks in order to raise network performance, and in many of them the evolution of weights and architectures is not integrated.

In this paper we apply a genetic evolutionary scheme to recurrent neural networks and investigate associative memory. Aiming to construct a biologically inspired neural network, we take the following two points into account in network formation:

• The correspondence between genotype and phenotype should not be direct. Only developmental rules for neurons are encoded in the genotype representation.

• Information on the physical positions of neurons and the distances between them should be included in the model. The resulting network is a physical object in space.

While the first point, the indirect encoding scheme, has been studied [6,7], few attempts have been made to incorporate both of the above points, except for the model of "evolving (growing) neural networks" by Nolfi et al. [8,9]. We therefore follow their scheme, treating a neural network as a creature in a physical environment.

The outline of our model is as follows (a rough code sketch of this loop is given after the list).

(1) Prepare individuals whose genotypes encode the information for generating networks.

(2) Learn the patterns according to an appropriate learning rule.

(3...
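The remainder of the outline is cut off above, but the first steps suggest a develop–learn–evaluate evolutionary loop. The following Python sketch is purely illustrative and rests on our own assumptions, not the authors' exact procedure: a genotype holding 2-D neuron positions and a per-neuron axonal "reach", connections grown by proximity (which naturally yields dilute, asymmetric connectivity), Hebbian learning restricted to the grown connections, an overlap-based recall fitness, and simple truncation selection with Gaussian mutation. All names and parameter values here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

N_NEURONS   = 30    # neurons per individual (assumed)
POP_SIZE    = 20    # individuals per generation (assumed)
N_PATTERNS  = 3     # patterns each network must store (assumed)
GENERATIONS = 50

# Random +/-1 patterns to be memorized.
patterns = rng.choice([-1, 1], size=(N_PATTERNS, N_NEURONS))

def make_genotype():
    """Genotype: a 2-D position and an axonal reach for every neuron.
    Only these developmental parameters are inherited; the synaptic
    weights themselves are acquired by learning (indirect encoding)."""
    return {
        "pos":   rng.uniform(0.0, 1.0, size=(N_NEURONS, 2)),
        "reach": rng.uniform(0.1, 0.5, size=N_NEURONS),
    }

def develop(gen):
    """Grow the network: neuron i connects to neuron j only if j lies
    within i's axonal reach. Because reach differs per neuron, the
    resulting connectivity is dilute and asymmetric."""
    d = np.linalg.norm(gen["pos"][:, None, :] - gen["pos"][None, :, :], axis=-1)
    return (d < gen["reach"][:, None]) & ~np.eye(N_NEURONS, dtype=bool)

def learn(mask):
    """Hebbian (outer-product) learning, masked to grown connections."""
    w = (patterns.T @ patterns) / N_NEURONS
    return w * mask

def recall_fitness(w, flip=0.1, sweeps=5):
    """Fitness: mean final overlap with each stored pattern, recalling
    from a noisy initial state via asynchronous update sweeps."""
    score = 0.0
    for p in patterns:
        s = np.where(rng.random(N_NEURONS) < flip, -p, p)  # flip some bits
        for _ in range(sweeps):
            for i in rng.permutation(N_NEURONS):
                s[i] = 1 if w[i] @ s >= 0 else -1
        score += (s @ p) / N_NEURONS
    return score / N_PATTERNS

def mutate(gen, sigma=0.05):
    """Gaussian mutation of the developmental parameters only."""
    child = {k: v.copy() for k, v in gen.items()}
    child["pos"]   += rng.normal(0.0, sigma, child["pos"].shape)
    child["reach"]  = np.clip(child["reach"] + rng.normal(0.0, sigma, N_NEURONS), 0.0, None)
    return child

population = [make_genotype() for _ in range(POP_SIZE)]
for g in range(GENERATIONS):
    fits  = [recall_fitness(learn(develop(ind))) for ind in population]
    order = np.argsort(fits)[::-1]
    elite = [population[i] for i in order[: POP_SIZE // 4]]  # truncation selection
    population = elite + [mutate(elite[i % len(elite)])
                          for i in range(POP_SIZE - len(elite))]
    print(f"generation {g:3d}: best overlap {max(fits):.3f}")

Note that the symmetric Hebbian weight matrix becomes asymmetric once the directed proximity mask is applied, which is the sense in which asymmetry and dilution arise here as a by-product of development rather than by construction.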