While inspired by the brain, currently successful artificial neural networks lack key features of the biological original. In particular, deep convolutional networks (DCNs) neither use pulses as signals exchanged among neurons, nor do they include recurrent connections, both of which are core properties of real neuronal networks. This not only calls into question the relevance of DCNs for explaining information processing in nervous systems but also limits their potential for modeling natural intelligence.
Spike-By-Spike (SbS) networks are a promising new approach that combines the computational power of artificial networks with biological realism. Instead of separate neurons, they consist of neuronal populations performing inference. Even though the underlying equations are rather simple, implementations of such networks on currently available hardware are several orders of magnitude slower than comparable non-spiking deep networks.
Here, we develop and investigate a framework for SbS networks on chip. Thanks to the communication via spikes, already moderately sized deep networks based on the SbS approach allow parallelization into thousands of simple and fully independent computational cores. We demonstrate the feasibility of our design on a Xilinx Virtex 6 FPGA while avoiding proprietary cores (except block memory) that cannot be realized on a custom-designed ASIC. We present memory-access-optimized circuits for updating the internal variables of the neurons based on incoming spikes as well as for learning the connection strengths. Both the optimized computational circuits and the representation of variables fully exploit the non-negativity of all data in the SbS approach. We compare the sizes of the resulting circuits for floating-point and fixed-point numbers. In addition, we show how to minimize the number of components required for the computational cores by reusing their components for different functions.
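The update equations themselves are not reproduced in this excerpt. As an orientation, the following sketch illustrates the kind of spike-driven inference update an SbS population performs, assuming the standard SbS rule from the literature: each incoming spike from input neuron s multiplicatively reweights the non-negative latent activities h by the corresponding weight column and blends the result into the running estimate. The function name `sbs_update` and the parameter `eps` are illustrative, not taken from the paper.

```python
import numpy as np

def sbs_update(h, W, spike, eps=0.1):
    """One Spike-By-Spike inference step (illustrative sketch).

    h     : non-negative latent activities of one population, summing to 1
    W     : non-negative weights, W[s, i] ~ p(input s | hidden i)
    spike : index s of the input neuron that just fired
    eps   : update strength per spike
    """
    # Posterior over hidden units given the single observed spike;
    # all quantities stay non-negative, which the hardware exploits.
    p = h * W[spike]
    p /= p.sum()
    # Blend the old estimate with the evidence from this one spike;
    # dividing by (1 + eps) keeps h normalized to sum 1.
    return (h + eps * p) / (1.0 + eps)
```

Note that every intermediate value here is non-negative and h remains normalized after each spike, which is why unsigned fixed-point representations are a natural fit for such circuits.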
Rotermund & Pawelzik
SbS FPGA

INTRODUCTION

Nowadays, deep neuronal networks (Schmidhuber, 2015) are a basis for successfully applying neuronal networks to problems from artificial intelligence research (Azkarate Saiz, 2015; Silver et al., 2016; Guo et al., 2016; Gatys et al., 2016). The revival of neuronal networks was provoked by the increase of computational power in modern computers and boosted even further by modern 3D graphics cards as well as specialized application-specific integrated circuits (ASICs) (Sze et al., 2017; Jouppi et al., 2018) and field-programmable gate arrays (FPGAs) (Lacey et al., 2016). The most successful type of network is based on multilayer perceptrons (Rumelhart et al., 1986; Rosenblatt, 1958) and consists of several so-called hidden layers. Typically, information is processed and fed forward from one hidden layer to the next, beginning at the input layer and ending at the network's output layer. In theory, such a network is able to c...