Research in neuroevolution, that is, evolving artificial neural networks (ANNs) through evolutionary algorithms, is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.
Looking to nature as inspiration, for at least the last 25 years researchers in the field of neuroevolution (NE) have developed evolutionary algorithms designed specifically to evolve artificial neural networks (ANNs). Yet the ANNs evolved through NE algorithms lack the distinctive characteristics of biological brains, perhaps explaining why NE is not yet a mainstream subject of neural computation. Motivated by this gap, this paper shows that when geometry is introduced to evolved ANNs through the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) algorithm, they begin to acquire characteristics that are indeed reminiscent of biological brains. That is, if the neurons in evolved ANNs are situated at locations in space (i.e., if they are given coordinates), then, as experiments in evolving checkers-playing ANNs in this paper show, topographic maps with symmetries and regularities can evolve spontaneously. The ability to evolve such maps is shown in this paper to provide an important advantage in generalization. In fact, the evolved maps are sufficiently informative that their analysis yields the novel insight that the geometry of the connectivity patterns of more general players is significantly smoother and more contiguous than that of less general ones. Thus, the results in this paper reveal a correlation between generality and smoothness in connectivity patterns. This result hints at the intriguing possibility that, as NE matures as a field, its algorithms can evolve ANNs of increasing relevance to those who study neural computation in general.
Connectivity patterns in biological brains exhibit many repeating motifs. This repetition mirrors inherent geometric regularities in the physical world. For example, stimuli that excite adjacent locations on the retina map to neurons that are similarly adjacent in the visual cortex. That way, neural connectivity can exploit geometric locality in the outside world by employing local connections in the brain. If such regularities could be discovered by methods that evolve artificial neural networks (ANNs), then they could be similarly exploited to solve problems that would otherwise require optimizing too many dimensions to solve. This paper introduces such a method, called Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT), which evolves a novel generative encoding called connective compositional pattern-producing networks (connective CPPNs) to discover geometric regularities in the task domain. Connective CPPNs encode connectivity patterns as concepts that are independent of the number of inputs or outputs, allowing functional large-scale neural networks to be evolved. In this paper, this approach is tested in a simple visual task for which it effectively discovers the correct underlying regularity, allowing the solution both to generalize and to scale without loss of function to an ANN of over eight million connections.
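The core mechanism described in these abstracts can be sketched compactly: a CPPN is queried with the spatial coordinates of every source/target neuron pair in a geometric "substrate," and outputs whose magnitude exceeds a threshold become connection weights. The sketch below is a minimal illustration, not the papers' implementation: the `cppn` function is a hypothetical hand-written stand-in (real CPPNs are evolved network topologies), and the threshold value is an illustrative assumption.

```python
import math

def cppn(x1, y1, x2, y2):
    # Hypothetical stand-in for an evolved CPPN: a composition of a
    # periodic and a symmetric function over the four hypercube
    # coordinates (x1, y1, x2, y2) of a source/target neuron pair.
    return math.sin(3.0 * (x1 - x2)) * math.exp(-4.0 * (y1 - y2) ** 2)

def make_substrate(n):
    # Place an n-by-n grid of neurons at coordinates in [-1, 1] x [-1, 1].
    axis = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    return [(x, y) for x in axis for y in axis]

def build_connections(nodes, threshold=0.2):
    # Query the CPPN once per ordered pair of neurons; any output whose
    # magnitude exceeds the threshold becomes a weighted connection.
    return [(src, dst, w)
            for src in nodes
            for dst in nodes
            if abs(w := cppn(*src, *dst)) > threshold]

# Because the CPPN is a function of continuous coordinates, the same
# connectivity pattern can be sampled at any substrate resolution
# without further evolution.
coarse = build_connections(make_substrate(5))
fine = build_connections(make_substrate(11))
```

Note that scaling the substrate only changes how densely the same underlying pattern is sampled, which is the sense in which connective CPPNs encode connectivity "as concepts" independent of the number of inputs or outputs.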