In this paper, we investigate Hebbian learning strategies applied to Convolutional Neural Network (CNN) training. We consider two unsupervised learning approaches, Hebbian Winner-Takes-All (HWTA) and Hebbian Principal Component Analysis (HPCA), which are compared to Variational Auto-Encoder (VAE) training. We also consider a Supervised Hebbian Classifier (SHC) approach for training the final classification layer, which is compared to Stochastic Gradient Descent (SGD) training. The Hebbian learning rules are used to train the layers of a CNN in order to extract features that are then used for classification, without requiring backpropagation (backprop). We also investigate hybrid learning methodologies, in which some layers are trained with the Hebbian approach and others with backprop. We tested our approach on the MNIST, CIFAR10, and CIFAR100 datasets. Our results suggest that Hebbian learning is generally suitable for training early feature extraction layers, or for retraining higher network layers, in fewer training epochs than backprop.

This work was partially supported by the H2020 project AI4EU under GA 825619 and by the H2020 project AI4Media under GA 951911.
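To make the Hebbian Winner-Takes-All idea concrete, the sketch below shows a generic competitive-learning update for a single fully connected layer: the neuron most responsive to an input pulls its weight vector toward that input, with no error signal propagated from later layers. This is a minimal illustrative sketch, not the exact HWTA rule or convolutional formulation used in the paper; the layer sizes, learning rate, and random data are hypothetical placeholders.

```python
import numpy as np

def hwta_update(W, x, lr=0.01):
    """Generic Hebbian WTA step.

    W: (num_neurons, input_dim) weight matrix
    x: (input_dim,) input vector
    """
    activations = W @ x                 # response of each neuron to the input
    winner = np.argmax(activations)     # winner-takes-all competition
    W[winner] += lr * (x - W[winner])   # move only the winner's weights toward x
    return W

# Usage: an unsupervised pass over stand-in data (hypothetical sizes, random inputs).
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))           # 16 neurons, 64-dimensional inputs
for x in rng.normal(size=(100, 64)):    # placeholder for real image patches
    W = hwta_update(W, x)
```

Because each update depends only on the layer's own input and output, layers trained this way can be stacked or mixed with backprop-trained layers, which is the setting explored by the hybrid methodologies mentioned above.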