2019
DOI: 10.3389/fnins.2019.00483

Neuromorphic Hardware Learns to Learn

Abstract: Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand to suit a particular task. In contrast, networks of neurons in the brain were optimized through extensive evolutionary and developmental processes to work well on a range of computing and learning tasks. Occasionally this process has been emulated through genetic algorithms, but these themselves require hand-design of their details and tend to provide a limited range of improvements. We employ instead other powerful gr…
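The learning-to-learn (L2L) setup the abstract describes, an outer loop that tunes a learner's hyperparameters with a gradient-free method over a whole family of tasks, can be sketched in miniature as follows. Everything here (the one-parameter task family, the inner SGD learner, plain random search as the outer optimizer) is an illustrative assumption, not the authors' code; the paper applies more powerful gradient-free optimizers to spiking networks on neuromorphic hardware.

```python
# Toy L2L sketch: outer loop = gradient-free search over a hyperparameter
# (the learning rate); inner loop = a learner trained on one sampled task.
# Fitness is average inner-loop performance across the task family.
import random

random.seed(0)

def inner_loop(lr, task_slope, steps=50):
    """Train a 1-parameter model w on y = task_slope * x via SGD;
    return the final squared parameter error (lower is better)."""
    w = 0.0
    for _ in range(steps):
        x = random.uniform(-1, 1)
        err = w * x - task_slope * x
        w -= lr * err * x  # gradient of 0.5 * err**2 w.r.t. w
    return (w - task_slope) ** 2

def fitness(lr, n_tasks=20):
    """Average inner-loop error over a family of randomly drawn tasks."""
    return sum(inner_loop(lr, random.uniform(-2, 2))
               for _ in range(n_tasks)) / n_tasks

# Outer loop: gradient-free (random) search over the hyperparameter.
candidates = [10 ** random.uniform(-3, 0) for _ in range(30)]
best_lr = min(candidates, key=fitness)
print(f"best learning rate found: {best_lr:.4f}")
```

The point of the sketch is the two-loop structure: the outer optimizer never sees gradients of the inner learning process, which is what makes the approach applicable to hardware whose learning dynamics are not differentiable.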



Cited by 40 publications (38 citation statements)
References 29 publications (33 reference statements)
“…Few-shot learning using biological models and synaptic plasticities is an appealing open problem, but there are few relevant studies, mainly because of a lack of algorithms and data. Existing algorithms attempt to solve it by exploring the learning-to-learn (L2L; Bellec et al, 2018; Bohnstingl et al, 2019) or transfer learning (Stewart et al, 2020) approach. However, these methods have poor performance and are subject to many restrictions, such as specific structures or simple tasks.…”
Section: Discussion
confidence: 99%
“…A special recurrent SNN with adapting neurons (LSNN) is suitable for L2L and can approach the knowledge transfer performance of LSTM networks, but the universality of this algorithm to other SNN structures is not addressed (Bellec, Salaj, Subramoney, Legenstein, & Maass, 2018). With the aid of powerful gradient-free optimization tools, L2L also enhances the reward-based learning capability of SNNs on neuromorphic hardware (Bohnstingl, Scherr, Pehle, Meier, & Maass, 2019). Even so, these methods are limited to relatively simple learning tasks, such as learning nonlinear functions from a teacher network or reinforcement learning of multi-armed bandits.…”
Section: Introduction
confidence: 99%
“…This learning-to-learn framework was recently applied to SNNs to obtain properties of LSTM networks and use them to solve complex sequence learning tasks (Bellec et al, 2018). In Bohnstingl et al (2019), the learning-to-learn framework was also applied to a neuromorphic hardware platform.…”
Section: Algorithms For Biologically Plausible Continual Learning
confidence: 99%
“…For artificial neural networks (ANNs), these techniques span from simplifying models, such as pruning and quantization (Han et al, 2015; Wen et al, 2016; Yang et al, 2018; Zoph et al, 2018), to designing energy-efficient architectures (Jin et al, 2014; Panda et al, 2016; Parsa et al, 2017; Wang et al, 2017), and neural architecture search (Zoph et al, 2018). In the spiking neuromorphic domain, these include different training algorithms such as Schuman et al (2016) and Bohnstingl et al (2019) based on evolutionary optimization, Esser et al (2015, 2016) on modified backpropagation techniques, Severa et al (2019) on binary communication, and Rathi et al (2020) as a hybrid approach, and then deploying these on neuromorphic hardware such as Schmitt et al (2017) and Koo et al (2020). In this section, we briefly introduce each of these methods and continue with the added complexity of co-designing hardware and software for artificial neural networks and spiking neuromorphic systems.…”
Section: Background and Related Work
confidence: 99%
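Two of the ANN compression techniques the excerpt above lists, magnitude pruning and quantization, can be illustrated with a minimal sketch. The 50% pruning threshold, the 4-bit level count, and the weight matrix are illustrative assumptions, not details taken from the cited works.

```python
# Sketch of magnitude pruning (zeroing the smallest weights) followed by
# uniform quantization (snapping surviving weights to a coarse grid).
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 1.0, size=(4, 4))

# Magnitude pruning: drop the weakest 50% of weights by absolute value.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Uniform 4-bit quantization of the surviving weights.
levels = 2 ** 4
w_max = np.abs(pruned).max()
step = 2 * w_max / (levels - 1)
quantized = np.round(pruned / step) * step

sparsity = (pruned == 0).mean()
print(f"sparsity after pruning: {sparsity:.0%}")
print(f"distinct weight values after quantization: {np.unique(quantized).size}")
```

Both steps shrink the model's memory footprint (zeros can be stored sparsely, and each surviving weight needs only a few bits), which is the shared motivation behind the compression methods surveyed in the excerpt.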