Abstract - In this paper, a hardware implementation of an artificial neural network on Field Programmable Gate Arrays (FPGAs) is presented. A digital system architecture is designed to realize a feedforward multilayer neural network. The designed architecture is described using the Very High Speed Integrated Circuits Hardware Description Language (VHDL). The parallel structure of a neural network makes it potentially fast for the computation of certain tasks, and the same feature makes a neural network well suited for implementation in VLSI technology. Hardware realization of a Neural Network (NN) depends, to a large extent, on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are suitable for hardware implementation of neural networks; however, FPGA realization of ANNs with a large number of neurons is still a challenging task.

Index Terms - Artificial neural network, hardware description language, field programmable gate arrays (FPGAs), sigmoid activation function.
I. INTRODUCTION

Artificial neural networks (ANNs) have found widespread deployment in a broad spectrum of classification, perception, association and control applications [1]. The aspiration to build intelligent systems, complemented by advances in high-speed computing, has proved through simulation the capability of ANNs to map, model and classify nonlinear systems. Real-time applications are possible only if low-cost, high-speed neural computation is made realizable. Towards this goal, numerous works on the implementation of neural networks (NNs) have been proposed [2].

ANNs have mostly been implemented in software. This has benefits, since the designer does not need to know the inner workings of neural network elements, but can concentrate on the application of the neural network. However, a disadvantage in real-time applications of software-based ANNs is their slower execution compared with hardware-based ANNs.

Hardware-based ANNs have been implemented as both analogue and digital circuits. The analogue implementations exploit the nonlinear characteristics of CMOS (complementary metal-oxide semiconductor) devices.

The basic artificial neuron has N inputs, denoted as x1, x2, ..., xN. Each line connecting these inputs to the neuron is assigned a weight, denoted as w1, w2, ..., wN, respectively. The activation, a, determines whether the neuron is to be fired or not, and is given by the formula

a = \sum_{j=1}^{N} w_j x_j

A negative value for a weight indicates an inhibitory connection, while a positive value indicates an excitatory connection. The output, y, of the neuron is given as

y = f(a)

Originally, the neuron output function f(a) in the McCulloch-Pitts model was proposed as a threshold function; however, linear, ramp, and sigmoid functions are also used in different situations. The vector notation

a = w^T x

can be used for expressing the activation of a neuron. Here, the jth element of the input vector x is x_j and the jth element of the weight vector w is w_j.
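To make the single-neuron computation above concrete in hardware terms, the following is a minimal VHDL sketch of one way a neuron datapath could be described: it serially accumulates the products w_j * x_j over successive clock cycles and applies a simple threshold activation. The entity name, port names, 8-bit signed fixed-point widths, and the serial multiply-accumulate scheme are illustrative assumptions, not the architecture presented in this paper.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Illustrative single-neuron datapath: one weight/input pair is presented
-- per clock cycle and accumulated; the output implements a threshold f(a).
entity neuron is
  port (
    clk    : in  std_logic;
    reset  : in  std_logic;              -- synchronous clear of the activation
    enable : in  std_logic;              -- accumulate one product per cycle
    x_in   : in  signed(7 downto 0);     -- current input x_j (assumed 8-bit)
    w_in   : in  signed(7 downto 0);     -- current weight w_j (assumed 8-bit)
    y_out  : out std_logic               -- fired / not fired (threshold output)
  );
end entity neuron;

architecture rtl of neuron is
  -- Accumulator holding the activation a; sized with headroom for the sum
  -- of several 16-bit products.
  signal acc : signed(19 downto 0) := (others => '0');
begin
  -- Serial multiply-accumulate: a := a + w_j * x_j
  process (clk)
  begin
    if rising_edge(clk) then
      if reset = '1' then
        acc <= (others => '0');
      elsif enable = '1' then
        acc <= acc + resize(x_in * w_in, acc'length);
      end if;
    end if;
  end process;

  -- Threshold activation: y = f(a), firing when the activation is positive.
  y_out <= '1' when acc > 0 else '0';
end architecture rtl;
```

In a complete design, a sigmoid activation function would typically replace the threshold comparator; in hardware it is commonly approximated with a look-up table or a piecewise-linear circuit rather than computed exactly.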