Parallel processing has emerged as a key enabling technology in modern computing. Recent software advances have allowed collections of heterogeneous computers to be used as a concurrent computational resource. In this work we explore how Differential Evolution can be parallelized in a virtual parallel environment so as to improve both the speed and the performance of the method. Experimental results indicate that the extent of information exchange among subpopulations assigned to different processor nodes has a significant impact on the performance of the algorithm. Furthermore, not all mutation strategies of the Differential Evolution algorithm are equally sensitive to the value of this parameter.
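The subpopulation scheme described above can be sketched as an island-model Differential Evolution. This is a minimal illustration under our own assumptions, not the paper's implementation: we assume a ring migration topology in which each island's best individual periodically replaces a neighbour's worst, with the migration period `migrate_every` standing in for the information-exchange parameter discussed in the abstract.

```python
import random

random.seed(0)  # for reproducibility of the toy run below

def de_rand_1_bin(pop, obj, f=0.5, cr=0.9):
    """One generation of classic DE/rand/1/bin on a subpopulation."""
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = random.randrange(dim)  # ensure at least one mutated coordinate
        trial = [a[j] + f * (b[j] - c[j])
                 if (random.random() < cr or j == jrand) else x[j]
                 for j in range(dim)]
        new_pop.append(trial if obj(trial) <= obj(x) else x)
    return new_pop

def island_de(obj, dim, n_islands=4, island_size=10,
              generations=50, migrate_every=5):
    """Island-model DE: islands evolve independently and periodically
    exchange individuals with a ring neighbour (migration)."""
    islands = [[[random.uniform(-5, 5) for _ in range(dim)]
                for _ in range(island_size)] for _ in range(n_islands)]
    for g in range(1, generations + 1):
        islands = [de_rand_1_bin(p, obj) for p in islands]
        if g % migrate_every == 0:
            bests = [min(p, key=obj) for p in islands]
            for k in range(n_islands):
                # best of island k replaces the worst of island (k+1) mod n
                target = islands[(k + 1) % n_islands]
                worst = max(range(island_size), key=lambda i: obj(target[i]))
                target[worst] = bests[k][:]
    return min((x for p in islands for x in p), key=obj)

# toy run: sphere function, global minimum 0 at the origin
best = island_de(lambda x: sum(v * v for v in x), dim=3)
```

A smaller `migrate_every` (more frequent exchange) tightens coupling between islands, which is the parameter whose effect the abstract reports as strategy-dependent.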
Advances in data technology have enabled streaming acquisition of real-time information in a wide range of settings, including consumer credit, electricity consumption, and internet user behavior. Streaming data consist of transiently observed, temporally evolving data sequences, and pose novel challenges to statistical analysis. Foremost among these challenges are the need for online processing and for temporal adaptivity in the face of unforeseen changes, both smooth and abrupt, in the underlying data generation mechanism. In this paper, we develop streaming versions of two widely used parametric classifiers, namely quadratic and linear discriminant analysis. We rely on computationally efficient, recursive formulations of these classifiers. We additionally equip them with exponential forgetting factors that enable temporal adaptivity by smoothly down-weighting the contribution of older data. Drawing on ideas from adaptive filtering, we develop an online method for self-tuning the forgetting factors on the basis of an approximate gradient scheme. We provide extensive simulation studies and real-data analyses that demonstrate the effectiveness of the proposed method in handling diverse types of change, while simultaneously offering monitoring capabilities via the interpretable behavior of the adaptive forgetting factors.
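The recursive, exponentially forgetting estimates that such streaming classifiers build on can be sketched as follows. This is a minimal illustration with a fixed forgetting factor `lam` (the paper tunes the factor adaptively online); the class name `StreamingGaussian` and all numerical choices are ours, not the authors'.

```python
import numpy as np

class StreamingGaussian:
    """Exponentially weighted running estimate of a class mean and
    covariance: each step down-weights older observations by a factor
    lam in (0, 1); lam -> 1 recovers the static (equal-weight) estimate."""
    def __init__(self, dim, lam=0.99):
        self.lam = lam
        self.w = 0.0                  # effective (decayed) sample weight
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)        # overwritten on the first update

    def update(self, x):
        self.w = self.lam * self.w + 1.0
        eta = 1.0 / self.w            # effective step size
        delta = x - self.mean
        self.mean = self.mean + eta * delta
        # standard weighted-covariance recursion (exact for lam = 1)
        self.cov = (1.0 - eta) * (self.cov + eta * np.outer(delta, delta))

    def log_density(self, x):
        """Gaussian log-density up to an additive constant; per-class
        scores like this underpin quadratic discriminant rules."""
        d = x - self.mean
        _, logdet = np.linalg.slogdet(self.cov)
        return -0.5 * (logdet + d @ np.linalg.solve(self.cov, d))

# toy usage: two well-separated classes observed as a stream
rng = np.random.default_rng(0)
g0, g1 = StreamingGaussian(2), StreamingGaussian(2)
for _ in range(500):
    g0.update(rng.normal([0.0, 0.0], 1.0))
    g1.update(rng.normal([4.0, 4.0], 1.0))
x_new = np.array([4.2, 3.8])
pred = int(g1.log_density(x_new) > g0.log_density(x_new))
```

Maintaining one such estimator per class and classifying by the largest (prior-adjusted) log-density gives a streaming quadratic discriminant rule; pooling the covariances across classes would give the linear variant.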
Networks of spiking neurons can perform complex non-linear computations using fast temporal coding just as well as rate-coded networks. These networks differ from previous models in that spiking neurons communicate information through the timing, rather than the rate, of spikes. To apply spiking neural networks to particular tasks, a learning process is required. Most existing training algorithms are based on unsupervised Hebbian learning. In this paper, we investigate the performance of the Parallel Differential Evolution algorithm as a supervised training algorithm for spiking neural networks. The approach was successfully tested on well-known and widely used classification problems.

I. INTRODUCTION

Artificial Neural Networks (ANNs) are parallel computational models comprising densely interconnected, simple, adaptive processing units, characterized by an inherent propensity for storing experiential knowledge and rendering it available for use. ANNs resemble the human brain in two fundamental respects: first, knowledge is acquired by the network from its environment through a learning process; second, synaptic weights are employed to store the acquired knowledge [1]. The building block of ANNs is the model of the artificial neuron. In [2], three generations of artificial neuron models are distinguished. The first generation of neurons gave rise to multilayer perceptrons, Hopfield nets, and Boltzmann machines. These networks can compute any boolean function, as well as all digital functions. Second-generation neurons apply activation functions with a continuous set of possible output values to a weighted sum of the inputs (e.g., sigmoid functions, linear saturated functions, piecewise exponential functions). From these emerged feedforward and recurrent sigmoidal neural nets, as well as networks of radial basis function units.
These networks can further approximate, arbitrarily closely, any continuous function defined on a compact domain, and they support learning algorithms based on gradient descent. All these models require the timing of individual computation steps to adhere to a global schedule that is independent of the values of the input parameters. Spiking neurons are considered to comprise the third generation of artificial neuron models. In these models, incoming spikes induce a postsynaptic potential according to an impulse response function.
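The impulse-response mechanism mentioned above can be sketched with a generic spike-response-model neuron. This is an illustrative sketch only: the alpha-shaped kernel, the time constant `tau`, and the firing threshold are our assumptions, not necessarily the choices made in the paper.

```python
import math

def alpha_psp(s, tau=4.0):
    """Alpha-function impulse response: the postsynaptic potential
    evoked s time units after a presynaptic spike (0 for s <= 0).
    It peaks with value 1 at s = tau and decays thereafter."""
    return (s / tau) * math.exp(1.0 - s / tau) if s > 0 else 0.0

def membrane_potential(t, spike_times, weights):
    """Spike-response-model neuron: the potential at time t is the
    weighted sum of PSPs from all incoming spikes."""
    return sum(w * alpha_psp(t - ts) for w, ts in zip(weights, spike_times))

def firing_time(spike_times, weights, threshold=1.0, t_max=30.0, dt=0.1):
    """The neuron emits a spike when its potential first crosses the
    threshold; the output is thus encoded in spike *timing*."""
    t = 0.0
    while t <= t_max:
        if membrane_potential(t, spike_times, weights) >= threshold:
            return t
        t += dt
    return None  # no output spike within the simulated window

# toy usage: a single input spike at t = 0 with synaptic weight 2.0
t_fire = firing_time([0.0], [2.0])
```

Under temporal coding, training such a network amounts to adjusting the weights so that output neurons fire at target times, which is the supervised problem the Parallel Differential Evolution algorithm is applied to here.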