2022
DOI: 10.1038/s41467-021-27653-2
Brain-inspired global-local learning incorporated with neuromorphic computing

Abstract: There are two principal approaches for learning in artificial intelligence: error-driven global learning and neuroscience-oriented local learning. Integrating them into one network may provide complementary learning capabilities for versatile learning scenarios. At the same time, neuromorphic computing holds great promise, but still needs plenty of useful algorithms and algorithm-hardware co-designs to fully exploit its advantages. Here, we present a neuromorphic global-local synergic learning model by introdu…

Cited by 51 publications (60 citation statements)
References 46 publications
“…They suffer from expensive computation during the training process on complex network architectures [45]. The choice of surrogate gradient function has also been studied extensively: [27] optimizes the surrogate gradient using the finite difference method, and [46] uses meta-learning to estimate the activation function of spiking neurons. PLIF [47] proposes to learn the parameters of LIF neurons via gradient descent.…”
Section: Direct Training SNN
confidence: 99%
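The statement above concerns direct SNN training, where the non-differentiable spike is handled by a surrogate gradient and, in PLIF-style approaches, neuron parameters are themselves learned. A minimal sketch of one discrete LIF update with a rectangular surrogate, assuming illustrative constants (the time constant, threshold, and surrogate width are not taken from the cited papers):

```python
# Sketch of a leaky integrate-and-fire (LIF) neuron step plus a
# rectangular surrogate gradient; all names and constants are
# illustrative assumptions, not the cited papers' exact formulation.

def lif_step(v, x, tau=2.0, v_th=1.0):
    """One discrete LIF update: leaky integration, threshold, hard reset."""
    v = v + (x - v) / tau              # leak toward the input
    spike = 1.0 if v >= v_th else 0.0  # non-differentiable threshold
    v = v * (1.0 - spike)              # hard reset after a spike
    return v, spike

def surrogate_grad(v, v_th=1.0, width=1.0):
    """Rectangular stand-in for d(spike)/dv, used only in the backward pass."""
    return 1.0 / width if abs(v - v_th) < width / 2 else 0.0

# Drive the neuron with a constant input and count spikes.
v, spikes = 0.0, 0
for _ in range(10):
    v, s = lif_step(v, 1.5)
    spikes += int(s)
# spikes == 5: the neuron fires every second step under this drive
```

In a PLIF-like setting, `tau` would be a trainable parameter updated through the surrogate gradient rather than a fixed constant.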
“…few-shot learning, energy efficiency, and explainability) but with limited flexibility. In the future, we will focus on solving the generalization issue along several technology paths: 1) designing an SNN-based Network Architecture Search (NAS) mechanism similar to AutoML (42); 2) introducing a reinforcement-learning-based agent to generate learning rules equivalent to human prior knowledge (43); 3) utilizing biological brain-assembly theories to build a learning-logic-based architecture (28,25).…”
Section: Discussion
confidence: 99%
“…The developed system shows strong one-shot learning behavior compared to DL. Meanwhile, (25) proposed a spike-based hybrid plasticity model for solving few-shot learning, continual learning, and fault-tolerant learning problems; it combines local plasticity with global supervision for multi-task learning.…”
Section: Introduction
confidence: 99%
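The hybrid plasticity idea quoted above, mixing a local activity-driven term with a global error-driven term in a single weight update, can be sketched as follows. The rule shapes, names, and the mixing coefficient `alpha` are assumptions for illustration, not the paper's exact model:

```python
# Illustrative hybrid local/global weight update: a Hebbian
# co-activity term blended with a gradient-like error term.
# All names and coefficients here are assumptions.

def hybrid_update(w, pre, post, error, lr=0.1, alpha=0.5):
    """Return the new weight after one mixed local/global update."""
    local_term = pre * post        # local: Hebbian co-activity
    global_term = -error * pre     # global: error-driven, gradient-like
    return w + lr * (alpha * local_term + (1 - alpha) * global_term)
```

With `alpha = 1` the rule degenerates to purely local Hebbian plasticity, and with `alpha = 0` to a purely error-driven update, which is the sense in which the two learning signals are complementary.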
“…Cortical neurons are sparsely connected via dynamical synapses that can be weakened or strengthened (Waters and Helmchen, 2006; Seeman et al, 2018) by mechanisms such as activity-dependent or retrograde signaling from other neurons. Understanding such network architectures and molecular mechanisms, and then implementing them in artificial systems, may lead to brain-like machines able to perform complex tasks (Faghihi and Moustafa, 2017; Li et al, 2019; Wu et al, 2022). Biological neurons are composed of dendrites that take up signals from other neurons, the soma, which is involved in information processing, and the axon, which passes the generated action potential (AP) on to the terminal synapses of the axon (Figure 1A).…”
Section: Introduction
confidence: 99%