Parallel algorithms are presented for modules of learning automata with the objective of improving their speed of convergence without compromising accuracy. A general procedure suitable for parallelizing a large class of sequential learning algorithms on a shared-memory system is proposed. Results are derived that quantify the improvement in speed obtainable through parallelization. The efficacy of the procedure is demonstrated by simulation studies on algorithms for common-payoff games, parametrized learning automata, and pattern classification problems with noisy classification of training samples.
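The following is a minimal sketch of the general idea behind such parallelization, not the paper's exact procedure: several automaton modules sample actions and environment responses independently, and their reward-inaction (L_R-I) updates are pooled into a common action-probability vector, which reduces the variance of each step without changing the expected update. All function names, parameters, and the simulated reward probabilities are illustrative assumptions.

```python
import numpy as np

def parallel_lri(reward_probs, n_modules=8, lr=0.01, steps=5000, seed=0):
    """Pooled L_R-I update across n_modules parallel automaton modules.

    reward_probs: per-action probability of receiving a unit reward
    (a stand-in for the stochastic environment).
    """
    rng = np.random.default_rng(seed)
    r = len(reward_probs)
    p = np.full(r, 1.0 / r)  # shared action-probability vector
    for _ in range(steps):
        # Each module independently samples an action and a binary reward.
        actions = rng.choice(r, size=n_modules, p=p)
        rewards = rng.random(n_modules) < np.asarray(reward_probs)[actions]
        # Pooled L_R-I step: each rewarded module pulls p toward its action,
        # with the step size scaled down by the number of modules.
        for a, rewarded in zip(actions, rewards):
            if rewarded:
                e = np.zeros(r)
                e[a] = 1.0
                p += (lr / n_modules) * (e - p)
        p /= p.sum()  # guard against round-off drift
    return p

p = parallel_lri([0.2, 0.8, 0.5])
```

Because the n_modules rewarded updates are averaged, the expected motion of `p` per step matches the sequential algorithm with rate `lr`, while the per-step variance shrinks, which is the source of the speedup when the modules run concurrently on shared memory.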
This paper proposes a backpropagation-based feedforward neural network for learning probability distributions of outputs conditioned on inputs, using only incoming input-output samples. The backpropagation procedure is shown to locally minimize the Kullback-Leibler measure in an expected sense. The procedure is enhanced to ensure boundedness of the weights and exploration of the search space in order to reach a global minimum. Weak convergence theory is employed to show that the long-term behavior of the resulting algorithm can be approximated by that of a stochastic differential equation, whose invariant distributions are concentrated around the global minima of the Kullback-Leibler measure within a region of interest. Simulation studies on problems involving samples arriving from a mixture of labeled densities and the well-known Iris data problem demonstrate the speed and accuracy of the proposed procedure.
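A minimal sketch of the core connection, under assumed settings (the toy data, network size, and learning rate are illustrative, and the paper's enhancements for weight boundedness and global exploration are omitted): minimizing the sample cross-entropy by backpropagation is equivalent, up to a constant, to minimizing the expected Kullback-Leibler divergence between the true conditional output distribution and the network's output distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy stream of input-output samples: the label depends stochastically
# on the sign of x, so the target is a conditional distribution P(y|x).
X = rng.normal(size=(2000, 1))
p_true = np.where(X[:, 0] > 0, 0.9, 0.1)          # P(y = 1 | x)
y = (rng.random(2000) < p_true).astype(int)
Y = np.eye(2)[y]                                   # one-hot labels

# One-hidden-layer network with a softmax output.
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)

lr = 0.1
for _ in range(500):
    H = np.tanh(X @ W1 + b1)
    P = softmax(H @ W2 + b2)
    # Gradient of the mean cross-entropy (hence of the KL divergence,
    # up to the entropy of the data) with respect to the logits.
    G = (P - Y) / len(X)
    GH = (G @ W2.T) * (1 - H ** 2)                 # backprop through tanh
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(axis=0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)

pred = softmax(np.tanh(X @ W1 + b1) @ W2 + b2)
```

After training, the network's softmax outputs approximate the conditional probabilities (near 0.9 for positive inputs, near 0.1 for negative ones) rather than hard labels, which is exactly what minimizing the KL measure, as opposed to classification error, yields.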