2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489114
Encoding symbolic sequences with spiking neural reservoirs

Abstract: Biologically inspired spiking networks are an important tool to study the nature of computation and cognition in neural systems. In this work, we investigate the representational capacity of spiking networks engaged in an identity mapping task. We compare two schemes for encoding symbolic input, one in which input is injected as a direct current and one where input is delivered as a spatio-temporal spike pattern. We test the ability of networks to discriminate their input as a function of the number of distinc…

Cited by 11 publications (11 citation statements) · References 9 publications (10 reference statements)
“…In some cases, the measured responses are quantified using the low-pass filtered spike trains of the individual neurons, obtained by convolving them with an exponential kernel with τ = 20 ms and temporal resolution equal to the simulation resolution, 0.1 ms. However, for most of the analyses, we consider the membrane potential V m as the primary state variable, as it is parameter-free and constitutes a more natural choice (van den Broek et al., 2017; Duarte et al., 2018).…”
Section: Stimulus Input and Computational Tasks
confidence: 99%
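The filtering described in the statement above can be sketched in a few lines of numpy: a spike train is placed on the simulation grid (0.1 ms resolution) and convolved with a causal exponential kernel with τ = 20 ms. This is an illustrative reconstruction, not code from the cited papers; the function name and the 5τ kernel truncation are choices made here.

```python
import numpy as np

def filter_spike_train(spike_times_ms, t_stop_ms, dt_ms=0.1, tau_ms=20.0):
    """Low-pass filter a spike train by convolving it with a causal
    exponential kernel exp(-t/tau), sampled at resolution dt."""
    t = np.arange(0.0, t_stop_ms, dt_ms)
    # Binary spike indicator on the simulation grid.
    spikes = np.zeros_like(t)
    idx = np.round(np.asarray(spike_times_ms) / dt_ms).astype(int)
    spikes[idx[idx < len(t)]] = 1.0
    # Causal exponential kernel, truncated at 5 * tau (an arbitrary cutoff).
    kt = np.arange(0.0, 5.0 * tau_ms, dt_ms)
    kernel = np.exp(-kt / tau_ms)
    # Full convolution, cropped back to the original time axis.
    return np.convolve(spikes, kernel)[: len(t)]

trace = filter_spike_train([10.0, 12.0, 50.0], t_stop_ms=100.0)
```

Each spike adds a unit-height exponential tail to the trace; overlapping tails sum, which is what makes the filtered signal a smooth, rate-like state variable.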
“…Relatively low-dimensional input streams are thus non-linearly projected onto the circuit's high-dimensional representational space. Through this expansive transformation, the neural substrate can develop suitable dynamic representations (Duarte and Morrison, 2014; Duarte et al., 2018) and resolve non-linearities such that classes that are not linearly separable in the input space can be separated in the system's representational space. This property relies on the characteristics of the neural substrate, acting as a non-linear operator, and the ensuing input-driven dynamics (Maass et al., 2002).…”
Section: Introduction
confidence: 99%
“…Relatively low-dimensional input streams are thus non-linearly projected onto the circuit’s high-dimensional representational space. Through this expansive transformation, the neural substrate can develop suitable dynamic representations (Duarte & Morrison, 2014; Duarte et al., 2018) and resolve non-linearities such that classes that are not linearly separable in the input space can be separated in the system’s representational space. This property, commonly referred to as the kernel trick, relies on the characteristics of the neural substrate, acting as a non-linear operator, and the ensuing input-driven dynamics (Maass et al., 2002).…”
Section: Introduction
confidence: 99%
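The expansive-projection argument in the statements above can be demonstrated with a static toy model: XOR is not linearly separable in its 2-D input space, but after a random non-linear projection into a higher-dimensional space, a plain linear readout separates it. This is a deliberately simplified stand-in for the input-driven reservoir dynamics the quotes describe (the projection here is static, the dimensionality 100 and seed are arbitrary), not the cited papers' method.

```python
import numpy as np

# XOR: not linearly separable in the 2-D input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

rng = np.random.default_rng(0)
# Random expansive projection into a 100-dimensional "reservoir" space,
# followed by a static non-linearity (a stand-in for the substrate's
# non-linear, input-driven dynamics).
W = rng.normal(size=(2, 100))
b = rng.normal(size=100)
H = np.tanh(X @ W + b)

# Linear readout (with bias) trained by least squares on the
# expanded representation.
Hb = np.c_[H, np.ones(len(X))]
w_out, *_ = np.linalg.lstsq(Hb, y, rcond=None)
pred = (Hb @ w_out > 0.5).astype(float)
```

In the expanded space the four patterns are generically in general position, so the linear readout fits XOR exactly, which is precisely the separation property the quoted passages attribute to the high-dimensional representational space.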