2022
DOI: 10.1007/978-3-031-17849-8_20

FastHebb: Scaling Hebbian Training of Deep Neural Networks to ImageNet Level

Cited by 3 publications (5 citation statements)
References 24 publications

“…Another disadvantage of biologically based models is related to the computational cost of simulating certain synaptic dynamics or unfolding the temporal evolution of complex neural circuitry. A recent work [137] partly addresses this problem by leveraging GPU parallelization more carefully.…”
Section: Discussion
confidence: 99%
“…Furthermore, Hebbian learning was successfully used to retrain the higher layers of a pre-trained network, achieving results comparable to backprop while requiring fewer training epochs, thus suggesting potential applications in transfer learning (see also [32,156,157]). Some contributions [132,137] showed promising results of unsupervised Hebbian algorithms for semi-supervised network training in scenarios with scarce data availability, achieving superior results compared to other backprop-based unsupervised methods for semi-supervised training, such as Variational Auto-Encoders (VAE) [118]. In further developments [132,137], a more efficient formulation of Hebbian learning was proposed that enabled scaling experiments to complex image recognition datasets such as ImageNet [56], large-scale image retrieval, and complex network architectures, improving training speed by up to a factor of 50.…”
Section: Synaptic Plasticity Models In Deep Learning
confidence: 99%
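To make the "more efficient formulation" referred to above more concrete, the following is a minimal Python/NumPy sketch of a Hebbian winner-take-all update expressed as dense matrix operations over a whole minibatch, the kind of batched reformulation that maps well to GPU parallelization. The function name, shapes, and learning rate are illustrative assumptions, not the exact formulation proposed in FastHebb.

# Illustrative sketch only: a batched Hebbian winner-take-all update written
# as dense matrix operations, so the whole minibatch is processed in one pass.
import numpy as np

def hebbian_wta_batch_update(W, X, lr=0.01):
    """One Hebbian WTA step over a minibatch.

    W : (n_units, n_inputs) weight matrix
    X : (batch, n_inputs) input minibatch
    """
    S = X @ W.T                                # (batch, n_units) similarities
    winners = S.argmax(axis=1)                 # winner unit per sample
    M = np.zeros_like(S)
    M[np.arange(X.shape[0]), winners] = 1.0    # one-hot winner mask

    # Batched Hebbian rule: each winner moves toward the inputs it won,
    # i.e. Delta W_i is the sum over won samples of (x - W_i).
    counts = M.sum(axis=0, keepdims=True).T    # (n_units, 1) wins per unit
    delta = M.T @ X - counts * W               # (n_units, n_inputs)
    return W + lr * delta

# Usage: random data, 16 units, 64-dimensional inputs
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64)) * 0.1
X = rng.standard_normal((256, 64))
W = hebbian_wta_batch_update(W, X)

Because every step is a matrix product or a reduction over the batch, the same code ported to GPU tensors avoids any per-sample Python loop, which is the essence of the speed-up discussed in the statement above.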
“…Hebbian winner-take-all models are deemed feasible for neuromorphic computing in light of their local, unsupervised weight update rule [55,56]. In contrast to previously mentioned approaches, the Hebbian learning method has demonstrated reasonable convergence when trained on large datasets such as ImageNet [57,58], while also exhibiting training time comparable to backpropagation [59]. Despite the progress made in the field, opportunities remain for improvement.…”
Section: New Training Strategies
confidence: 99%
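As a complement to the batched sketch above, the per-sample version below illustrates why such a winner-take-all rule is considered local and unsupervised: each weight update uses only the presynaptic input and the winning unit's own weights, with no labels and no backpropagated error signal. The hard-WTA variant and all names here are assumptions for illustration only, not a specific neuromorphic implementation from the cited works.

# Minimal sketch (assumed hard winner-take-all variant) of a local,
# unsupervised Hebbian update on a single input sample.
import numpy as np

def local_wta_step(W, x, lr=0.05):
    """Single-sample Hebbian WTA update. W is (n_units, n_inputs), x is (n_inputs,)."""
    activations = W @ x              # postsynaptic activations
    k = int(np.argmax(activations))  # competition: only the winner adapts
    W[k] += lr * (x - W[k])          # local rule: winner moves toward its input
    return W

# Usage: stream of unlabeled inputs, 8 units, 32-dimensional inputs
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 32)) * 0.1
for _ in range(100):
    x = rng.standard_normal(32)
    W = local_wta_step(W, x)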