Recent advances in the biophysics of computation and neurocomputing models have brought to the foreground the importance of dendritic structures in a single neuron cell. Dendritic structures are now viewed as the primary autonomous computational units capable of realizing logical operations. By replacing the classic simplified model of a single neuron with a more realistic one that incorporates the dendritic processes, a novel paradigm in artificial neural networks is being established. In this work, we introduce and develop a mathematical model of dendrite computation in a morphological neuron based on lattice algebra. The computational capabilities of this enriched neuron model are demonstrated by means of several illustrative examples and by proving that any single layer morphological perceptron endowed with dendrites and their corresponding input and output synaptic processes is able to approximate any compact region in higher dimensional Euclidean space to within any desired degree of accuracy. Based on this result, we describe a training algorithm for single layer morphological perceptrons and apply it to some well-known nonlinear problems in order to exhibit its performance.
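To make the lattice-algebra dendrite model concrete, the following is a minimal NumPy sketch (an illustration, not the paper's implementation). It assumes the standard morphological formulation in which each dendrite combines excitatory terms x_i + w_exc and inhibitory terms -(x_i + w_inh) by a minimum, and the neuron hard-limits the maximum over its dendrite responses; the function and parameter names are our own. Under that formulation, a single accepting dendrite responds non-negatively exactly on the hyperbox [-w_exc, -w_inh]:

```python
import numpy as np

def dendrite_response(x, w_exc, w_inh, p=1):
    """Lattice-algebra response of one dendrite.

    Excitatory terms x[i] + w_exc[i] and inhibitory terms
    -(x[i] + w_inh[i]) are combined by a minimum; p = +1 marks an
    accepting dendrite, p = -1 a rejecting one. For p = +1 the
    response is >= 0 iff x lies in the hyperbox [-w_exc, -w_inh].
    """
    x, w_exc, w_inh = map(np.asarray, (x, w_exc, w_inh))
    terms = np.concatenate([x + w_exc, -(x + w_inh)])
    return p * terms.min()

def morphological_neuron(x, dendrites):
    """Neuron output: hard-limit the max over all dendrite responses."""
    tau = max(dendrite_response(x, *d) for d in dendrites)
    return 1 if tau >= 0 else 0

# One dendrite recognizing the unit square [0, 1] x [0, 1]:
# x_i + 0 >= 0 and -(x_i - 1) >= 0  <=>  0 <= x_i <= 1.
box = ([0.0, 0.0], [-1.0, -1.0], 1)
print(morphological_neuron([0.5, 0.5], [box]))  # inside  -> 1
print(morphological_neuron([2.0, 0.5], [box]))  # outside -> 0
```

The max-of-mins structure is why a single layer of such neurons can tile a compact region with hyperboxes, which is the intuition behind the approximation result stated in the abstract.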
Recent advances in neurobiology and the biophysics of neural computation have brought to the foreground the importance of the dendritic structures of neurons. These structures are now viewed as the primary basic computational units of the neuron, capable of realizing logical operations. Based on these new biophysical neural models, we develop a new paradigm for single layer perceptrons that incorporates dendritic processes. The basic computational processes in dendrites as well as neurons are based on lattice algebra. The computational capabilities of this new perceptron model are demonstrated by means of several illustrative examples and two theorems, including the result that patterns belonging to m distinct classes can be classified to within any desired degree of accuracy. We conclude this paper with several pertinent observations concerning the training of morphological perceptrons and the differences between morphological perceptrons and classical perceptrons.
Abstract. Morphological associative memories (MAMs) use a lattice algebra approach to store and recall pattern associations. The lattice matrix operations endow MAMs with properties that are completely different from those of traditional associative memory models. In the present paper, we focus our attention on morphological bidirectional associative memories (MBAMs) capable of storing and recalling non-boolean patterns degraded by random noise. The notions of morphological strong independence (MSI), minimal representations, and kernels are extended to provide the foundation of bidirectional recall when dealing with noisy inputs. For arbitrary pattern associations, we present a practical solution to compute kernels in MBAMs by induced MSI.
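The lattice matrix operations behind MAM storage and recall can be sketched in a few lines of NumPy (an illustration of the standard min-memory/max-plus-recall formulation, not the implementation from this paper; the example patterns are invented). The memory W_XY takes a pointwise minimum of difference matrices over all stored pairs, and recall replaces the usual matrix product with a max-plus product:

```python
import numpy as np

def min_memory(X, Y):
    """W_XY with entries w_ij = min over patterns xi of (y_i^xi - x_j^xi).

    X has shape (n, k) with input patterns as columns;
    Y has shape (m, k) with the associated output patterns as columns.
    """
    diffs = Y[:, None, :] - X[None, :, :]   # shape (m, n, k)
    return diffs.min(axis=2)

def max_product(W, x):
    """Max-plus product: (W x)_i = max_j (w_ij + x_j)."""
    return (W + x[None, :]).max(axis=1)

# Two associations (x^1, y^1) and (x^2, y^2), stored as columns:
X = np.array([[0.0, 2.0], [1.0, 0.0], [2.0, 1.0]])
Y = np.array([[1.0, 0.0], [0.0, 2.0]])
W = min_memory(X, Y)
print(max_product(W, X[:, 0]))  # recall for x^1 (equals y^1 here)
print(max_product(W, X[:, 1]))  # recall for x^2 (equals y^2 here)
```

The min-memory W is the member of the pair that tolerates erosive noise; the dual memory (pointwise maximum with a min-plus recall) handles dilative noise, which is why kernel methods such as those discussed in the abstract are needed for patterns corrupted by arbitrary random noise.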