2019
DOI: 10.3389/fnins.2019.00754
Sparse Coding Using the Locally Competitive Algorithm on the TrueNorth Neurosynaptic System

Abstract: The Locally Competitive Algorithm (LCA) is a biologically plausible computational architecture for sparse coding, in which a signal is represented as a linear combination of elements from an over-complete dictionary. In this paper we map the LCA onto the brain-inspired IBM TrueNorth Neurosynaptic System. We discuss data structures and representation, as well as the architecture of the functional processing units that perform nonlinear thresholding and vector-matrix multiplication. We also present the design of the…
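As background for the abstract, the LCA drives a layer of units with the dictionary's match to the input signal, lets units inhibit one another in proportion to dictionary overlaps, and applies a threshold to obtain sparse coefficients. A minimal NumPy sketch of these dynamics (all names and parameter values here are illustrative choices, not the paper's):

```python
import numpy as np

# Minimal sketch of LCA sparse coding. Phi, lam, tau, etc. are our own
# illustrative names and values, not taken from the paper.
rng = np.random.default_rng(0)

n, m = 16, 64                       # signal dimension, dictionary size (over-complete)
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)  # unit-norm dictionary elements

# Test signal: a sparse combination of two dictionary elements.
s = Phi[:, [3, 40]] @ np.array([1.5, -0.8])

lam, tau, dt, steps = 0.1, 10.0, 1.0, 500
b = Phi.T @ s                       # feed-forward drive
G = Phi.T @ Phi - np.eye(m)         # lateral inhibition from dictionary overlaps

def soft_threshold(u, lam):
    """Sparse activations: units below threshold stay silent."""
    return np.where(np.abs(u) > lam, u - lam * np.sign(u), 0.0)

u = np.zeros(m)                     # membrane potentials
for _ in range(steps):
    a = soft_threshold(u, lam)
    u += (dt / tau) * (b - u - G @ a)   # leaky integration with competition

a = soft_threshold(u, lam)
print("active coefficients:", np.nonzero(a)[0])
```

Only a handful of units survive the competition, so the reconstruction `Phi @ a` is a sparse approximation of `s`.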

Cited by 10 publications (12 citation statements)
References 33 publications
“…Demonstrating progress toward the above goals, we recently integrated multiple neuromorphic sensory-motor networks into a single Loihi chip to control a humanoid robot's interaction with its environment in an object-learning task [12]. In this work, three SNNs were implemented on Loihi: an object-recognition network receiving input from an event-based camera and labels extracted from speech commands; a spatial-memory network that tracked the pose of the robot's head from its motor encoders and memorized object locations using on-chip plasticity; and a DNF-based neural state machine responsible for reconfiguring different parts of the architecture as required by the current behavior (looking, learning an object, recognizing it, or communicating with the user). While we are still far from implementing the robust, adaptive brains that future robotic systems will require to interact freely in the real world, this work shows that we can already build relatively complex neuromorphic applications by composing heterogeneous modules drawing from a toolbox of common algorithmic primitives, such as spike-based communication, attractor networks, top-down and bottom-up attention, working memory, and local learning rules.…”
Section: Robotics
“…We speculate that this is due to its slow speed, exacerbated by a restrictive feature set. For example, a recent demonstration of the locally competitive algorithm (LCA) on TrueNorth [12] operates at power levels similar to Loihi's for the same problem size, but requires six to seven orders of magnitude longer to converge to a solution as a result of the contortions necessary to run LCA on that architecture.…”
“…1) Complications of signed VMM: Because rate-encoded input spikes lack sign, to make signed VMM mappable to TrueNorth, Fair et al. [27] divide axons and neurons into positive and negative groups: positive and negative input spikes are routed to their respective groups, allowing each group to represent the positive and negative outputs of its connected axons. We illustrate Fair's representation in Figure 6 using an example multiplication of the input vector [-1 3] with the matrix [2 -3]^T.…”
Section: A. VMM Optimization
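The positive/negative splitting described in that statement can be checked numerically. Below is a minimal NumPy illustration of the idea (not TrueNorth code; all names are ours), using the statement's example of the input vector [-1 3] and the matrix [2 -3]^T:

```python
import numpy as np

def split(v):
    """Split a signed array into non-negative positive and negative parts."""
    return np.maximum(v, 0), np.maximum(-v, 0)

x = np.array([-1, 3])          # signed input vector from the example
W = np.array([[2], [-3]])      # signed matrix [2 -3]^T from the example

x_pos, x_neg = split(x)        # spikes routed to positive / negative axon groups
W_pos, W_neg = split(W)        # weights likewise split into two groups

# Each output group accumulates only non-negative (rate-codable) contributions;
# the signed result is recovered as (positive group) - (negative group).
y_pos = x_pos @ W_pos + x_neg @ W_neg   # contributions with + sign
y_neg = x_pos @ W_neg + x_neg @ W_pos   # contributions with - sign
y = y_pos - y_neg

print(y)   # equals x @ W, i.e. (-1)*2 + 3*(-3) = -11
```

All intermediate quantities are non-negative, which is what makes the scheme compatible with unsigned rate-coded spikes.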
“…To present a deterministic verification of our architecture's ability to replicate TrueNorth, we implement signed VMM in RANC using Fair's method of mapping VMM onto TrueNorth [27].…”
Section: VMM Verification
“…Inspired by the information-processing mechanisms of the biological brain, competitive learning neural networks (CLNNs) have received widespread attention [5–9]. Their corresponding network structure is shown in Figure 1b [10]. As a traditional artificial neural network (ANN) model, the CLNN is used to discover patterns in the distribution of data, mainly through unsupervised learning based on similarity measurements between input samples and weight vectors.…”
Section: Introduction
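The similarity-driven competition that statement describes can be sketched with a generic winner-take-all update; everything below (data, learning rate, initialization) is an illustrative assumption, not taken from the cited CLNN work:

```python
import numpy as np

# Generic winner-take-all competitive learning: the unit whose weight vector
# is most similar to the input wins and moves toward that input.
rng = np.random.default_rng(1)

centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
data = np.vstack([rng.normal(loc=c, scale=0.1, size=(50, 2)) for c in centers])

# Initialize each unit's weight vector from a data sample -- a common trick
# that avoids "dead" units which never win the competition.
W = data[[0, 50, 100]].copy()
eta = 0.1                                # learning rate

for epoch in range(20):
    for x in rng.permutation(data):
        winner = np.argmin(np.linalg.norm(W - x, axis=1))  # most similar unit
        W[winner] += eta * (x - W[winner])                 # move winner toward input

print(np.round(W, 2))   # each row ends up near one cluster center
```

Because only the winning unit updates, the weight vectors self-organize into prototypes of the input distribution without any labels, which is the unsupervised pattern discovery the passage refers to.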