Models of associative memory usually have full connectivity or, if diluted, random symmetric connectivity. In contrast, biological neural systems have predominantly local, non-symmetric connectivity. Here we investigate sparse networks of threshold units trained with the perceptron learning rule. The units are given positions and arranged in a ring. The connectivity graph varies from local to random via a small-world regime, with short path lengths between any two neurons. The connectivity may be symmetric or non-symmetric. The results show that small-world networks with non-symmetric weights and non-symmetric connectivity perform best as associative memories. It is also shown that in highly diluted networks, small-world architectures produce efficiently wired associative memories that still exhibit good pattern-completion abilities.
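To make the setup concrete, the following is a minimal sketch (not the authors' code) of such a network in Python: N threshold units with states ±1 placed on a ring, each receiving K inputs from its nearest neighbours with a fraction of connections rewired at random (the small-world regime), and each unit's incoming weights trained independently with the perceptron rule. All parameter values (N, K, P, the rewiring probability) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, P = 200, 20, 10                 # units, inputs per unit, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

def ring_inputs(n, k, rewire_p):
    """K nearest ring neighbours per unit, each rewired with prob. rewire_p."""
    inputs = []
    for i in range(n):
        near = [(i + d) % n for d in range(-k // 2, k // 2 + 1) if d != 0]
        inputs.append(np.array([rng.integers(n) if rng.random() < rewire_p
                                else j for j in near]))
    return inputs

inputs = ring_inputs(N, K, rewire_p=0.1)   # 0.1: assumed small-world regime
W = [np.zeros(K) for _ in range(N)]

# Perceptron rule: each unit learns to reproduce its own bit of every stored
# pattern from the bits of the units it receives input from.
for epoch in range(100):
    errors = 0
    for x in patterns:
        for i in range(N):
            y = 1 if W[i] @ x[inputs[i]] >= 0 else -1
            if y != x[i]:
                W[i] += x[i] * x[inputs[i]]
                errors += 1
    if errors == 0:
        break

# Recall: corrupt a stored pattern and update all units synchronously.
state = patterns[0] * np.where(rng.random(N) < 0.1, -1, 1)
for _ in range(20):
    state = np.array([1 if W[i] @ state[inputs[i]] >= 0 else -1
                      for i in range(N)])
print("overlap after recall:", state @ patterns[0] / N)
```

Note that nothing here forces symmetry: a symmetric variant would additionally require that unit j feeds unit i whenever i feeds j (and that the two weights agree). The abstract's finding is that leaving both weights and connectivity non-symmetric performs best.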
In physical implementations of associative memory, wiring costs play a significant role in shaping patterns of connectivity. In this study of sparsely-connected associative memory, a range of architectures is explored in search of connection strategies that maximise pattern-completion performance while minimising wiring costs. It is found that architectures in which the probability of connection between any two nodes follows a relatively tight Gaussian or exponential distribution perform well, and that for optimum performance the width of the Gaussian distribution should be made proportional to the number of connections per node. A study of other connection strategies further establishes that distal connections are not necessary for good pattern-completion performance. Convergence times and network scalability are also addressed in this wide-ranging study.
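One plausible reading of the Gaussian strategy is sketched below: the probability that a unit receives a connection from another decays with their ring distance as a Gaussian whose width sigma is proportional to the number of connections per node K. The proportionality constant c and all sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 200, 20
c = 0.75                        # assumed proportionality constant
sigma = c * K                   # "width proportional to connections per node"

def ring_distance(i, j, n):
    d = abs(i - j)
    return min(d, n - d)        # shortest arc between the two units

inputs, lengths = [], []
for i in range(N):
    others = np.array([j for j in range(N) if j != i])
    d = np.array([ring_distance(i, j, N) for j in others])
    p = np.exp(-d**2 / (2 * sigma**2))      # Gaussian fall-off with distance
    src = rng.choice(others, size=K, replace=False, p=p / p.sum())
    inputs.append(src)
    lengths += [ring_distance(i, j, N) for j in src]

print(f"mean wiring length: {np.mean(lengths):.1f} on a ring of {N}")
```

The exponential variant mentioned in the abstract would simply replace the kernel with `np.exp(-d / sigma)`; the mean of `lengths` is the wiring-cost quantity the two strategies are compared on.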
The problem we address in this paper is that of finding effective and parsimonious patterns of connectivity in sparse associative memories. This problem must also be solved in real neuronal systems, so results in artificial systems could throw light on real ones. We show that there are efficient patterns of connectivity, and that these patterns are effective in models with either spiking or non-spiking neurons, suggesting that some underlying general principles may govern good connectivity in such networks. We also show that the clustering of the network, measured by the clustering coefficient, has a strong negative linear correlation with the performance of the associative memory. This result is important because a purely static measure of network connectivity appears to determine an important dynamic property of the network.
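The abstract does not spell out the measure, but the standard local clustering coefficient of Watts and Strogatz fits the description; a sketch for an undirected 0/1 adjacency matrix:

```python
import numpy as np

def clustering_coefficient(adj):
    """Mean local clustering coefficient of an undirected 0/1 adjacency
    matrix: for each node, the fraction of possible links among its
    neighbours that actually exist, averaged over all nodes."""
    n = adj.shape[0]
    coeffs = []
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2   # each link counted twice
        coeffs.append(links / (k * (k - 1) / 2))
    return float(np.mean(coeffs))
```

The reported correlation then amounts to computing this static number for each architecture, measuring pattern-completion performance dynamically, and regressing one against the other.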
Abstract⎯The performance of sparsely-connected associative memories built from sets of perceptrons configured in a ring structure is investigated using different patterns of connectivity. Architectures based on uniform and linear distributions of restricted maximum connection length are compared to those based on Gaussian distributions and to networks created by progressively rewiring a locally-connected network. It is found that while all four architectures are capable of good pattern-completion performance in sparse networks, the Gaussian, restricted-linear and restricted-uniform architectures require lower mean wiring lengths to achieve the same results. It is shown that in order to achieve good pattern completion at low wiring costs, connectivity should be localized, though not completely local, and that distal connections are not necessary.

Index Terms⎯Associative memory, sparse connectivity, high performance, patterns of connectivity
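The restricted-uniform and restricted-linear distributions could be realised as in the sketch below, where connection distances are drawn either uniformly on [1, d_max] or with linearly decreasing probability up to d_max, and each connection is placed at that distance on a randomly chosen side of the node. The cutoff d_max and all sizes are assumed values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, d_max = 200, 20, 30       # d_max: restricted maximum connection length

def connection_distances(kind, size):
    d = np.arange(1, d_max + 1)
    if kind == "uniform":
        p = np.ones_like(d, dtype=float)        # flat up to d_max
    else:
        p = (d_max + 1 - d).astype(float)       # linearly decreasing to d_max
    return rng.choice(d, size=size, p=p / p.sum())

for kind in ("uniform", "linear"):
    dist = connection_distances(kind, N * K)
    side = rng.choice([-1, 1], size=N * K)      # left or right along the ring
    srcs = (np.repeat(np.arange(N), K) + side * dist) % N   # source units
    print(f"restricted-{kind}: mean wiring length {dist.mean():.1f}")
```

The progressively rewired architecture, by contrast, starts fully local and relocates a growing fraction of connections uniformly over the whole ring, which is what drives its mean wiring length above that of the three distance-restricted strategies.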
This study examines the performance of sparsely-connected associative memory models built using a number of different connection strategies, applied to one- and two-dimensional topologies. Efficient patterns of connectivity are identified which yield high performance at relatively low wiring costs in both topologies. Networks with displaced connectivity are seen to perform particularly well. It is found that two-dimensional models are more tolerant of variations in connection strategy than their one-dimensional counterparts, though networks built with both topologies become less tolerant as their connection density is decreased.
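Displaced connectivity can be read as centring the connection-distance distribution at an offset away from the node rather than at the node itself. A sketch for both topologies follows; the offset, width, Gaussian kernel, and torus wrap-around are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def displaced_sources_1d(i, n, k, offset, sigma):
    """k input sources whose ring distances cluster around `offset`."""
    others = np.array([j for j in range(n) if j != i])
    d = np.minimum(np.abs(others - i), n - np.abs(others - i))
    p = np.exp(-(d - offset) ** 2 / (2 * sigma ** 2))
    return rng.choice(others, size=k, replace=False, p=p / p.sum())

def displaced_sources_2d(pos, shape, k, offset, sigma):
    """Same idea on a 2-D torus, with wrap-around Euclidean distance."""
    h, w = shape
    coords = np.array([(r, c) for r in range(h) for c in range(w)])
    dr = np.minimum(np.abs(coords[:, 0] - pos[0]),
                    h - np.abs(coords[:, 0] - pos[0]))
    dc = np.minimum(np.abs(coords[:, 1] - pos[1]),
                    w - np.abs(coords[:, 1] - pos[1]))
    d = np.hypot(dr, dc)
    p = np.exp(-(d - offset) ** 2 / (2 * sigma ** 2))
    p[d == 0] = 0.0                    # exclude self-connection
    idx = rng.choice(len(coords), size=k, replace=False, p=p / p.sum())
    return coords[idx]

print(displaced_sources_1d(0, 200, 10, offset=15, sigma=3))
print(displaced_sources_2d((0, 0), (20, 20), 10, offset=5, sigma=2))
```

Setting offset to zero recovers the ordinary local Gaussian strategy, so the displaced variant can be compared against it by sweeping a single parameter.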