2014
DOI: 10.1162/neco_a_00590
Short-Term Memory Capacity in Networks via the Restricted Isometry Property

Abstract: Cortical networks are hypothesized to rely on transient network activity to support short-term memory (STM). In this paper we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous non-asymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. Our an…
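The abstract's setup admits a compact numerical illustration. The sketch below is not the authors' code: the orthogonal choice of W, the canonical sparsity basis, and all sizes are assumptions made for illustration. It simulates the linear recurrent network x[n] = W x[n-1] + z s[n], whose state after N steps equals A s for the structured matrix A = [W^(N-1) z, ..., W z, z], and then reads the k-sparse input stream back with a generic sparse-recovery routine (a bare-bones orthogonal matching pursuit).

```python
# Minimal sketch (not the authors' code): short-term memory in a random
# linear recurrent network, read out by compressed-sensing recovery.
import numpy as np

rng = np.random.default_rng(0)
M, N, k = 100, 300, 5          # neurons, stream length, sparsity

# Random orthogonal recurrent weights (QR of a Gaussian matrix) and a
# random feed-forward vector; both are illustrative assumptions.
W, _ = np.linalg.qr(rng.standard_normal((M, M)))
z = rng.standard_normal(M) / np.sqrt(M)

# k-sparse input stream s[0..N-1] in the canonical basis.
s = np.zeros(N)
s[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

# Run the network: x[n] = W x[n-1] + z s[n].
x = np.zeros(M)
for n in range(N):
    x = W @ x + z * s[n]

# Equivalent measurement matrix: column n is W^(N-1-n) z, so x = A @ s.
A = np.zeros((M, N))
col = z.copy()
for n in range(N - 1, -1, -1):
    A[:, n] = col
    col = W @ col

# Bare-bones orthogonal matching pursuit: greedily pick the column most
# correlated with the residual, then refit by least squares.
def omp(A, y, k):
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    s_hat = np.zeros(A.shape[1])
    s_hat[support] = coef
    return s_hat

s_hat = omp(A, x, k)
print("relative recovery error:", np.linalg.norm(s_hat - s) / np.linalg.norm(s))
```

When a matrix of this form satisfies the restricted isometry property for k-sparse vectors, such recovery is provably stable; quantifying when that happens, as a function of the sparsity level, sparsity basis, and network parameters, is what the paper does.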

Cited by 26 publications (36 citation statements)
References 43 publications (91 reference statements)
“…Our theory predicts that there exists an optimal memory length between these two regions that allows for optimal recall of the input stream (bottom), which can be calculated as a function of the network parameters (Charles et al., 2014).…”
Section: Simulations
confidence: 99%
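A rough numerical companion to this prediction, under one loud assumption of mine: memory length is set by a scalar gain g on an otherwise orthogonal recurrent matrix, so inputs fade at rate g per step. Too much decay drives older inputs within the recall window below the readout noise; no decay at all lets inputs outside the window interfere. Sweeping g probes the interior optimum; the quoted papers calculate it from network parameters, whereas this sweep only makes the tradeoff tangible.

```python
# Illustrative sketch only (not from Charles et al., 2014): recall the last
# L inputs by least squares while sweeping the recurrent gain g.
import numpy as np

rng = np.random.default_rng(1)
M, N, L, sigma = 100, 400, 50, 1e-3   # neurons, stream length, window, noise

U, _ = np.linalg.qr(rng.standard_normal((M, M)))   # orthogonal base matrix
z = rng.standard_normal(M) / np.sqrt(M)            # feed-forward weights
s = rng.standard_normal(N)                         # dense input stream

for g in [0.80, 0.90, 0.95, 0.98, 1.00]:
    W = g * U
    x = np.zeros(M)
    for n in range(N):                   # run the network over the stream
        x = W @ x + z * s[n]
    x += sigma * rng.standard_normal(M)  # readout noise
    # Measurement columns for the L most recent inputs: A[:, j] = W^(L-1-j) z;
    # inputs older than the window act as decayed interference.
    A = np.zeros((M, L))
    col = z.copy()
    for j in range(L - 1, -1, -1):
        A[:, j] = col
        col = W @ col
    s_hat, *_ = np.linalg.lstsq(A, x, rcond=None)
    err = np.linalg.norm(s_hat - s[-L:]) / np.linalg.norm(s[-L:])
    print(f"g={g:.2f}  relative recall error = {err:.3f}")
```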
“…In the low-rank case a set of prototypical signals V combine linearly through a set of coefficients Q to generate more generally correlated input streams. (Maass et al., 2002; Ganguli & Sompolinsky, 2010; Wallace, Hamid, & Latham, 2013; Verstraeten, Schrauwen, D'Haene, & Stroobandt, 2007; White, Lee, & Sompolinsky, 2004; Lukoševičius & Jaeger, 2009; Buonomano & Maass, 2009; Charles, Yap, & Rozell, 2014). For example, if the state of the network at time N can recover s_1 up to s_N, then we can say that the number of recoverable inputs (the STM) is LN.…”
Section: Introduction
confidence: 99%
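The low-rank construction in this snippet is easy to make concrete. The shape conventions below are hypothetical (the cited paper's notation may differ): rows of V are the L prototypical signals and row n of Q holds the mixing coefficients at time n.

```python
# Minimal sketch of a correlated (low-rank) input stream, illustrative only.
import numpy as np

rng = np.random.default_rng(2)
L, D, N = 3, 64, 200              # prototypes, input dimension, time steps

V = rng.standard_normal((L, D))   # prototypical signals, one per row
Q = rng.standard_normal((N, L))   # mixing coefficients, one row per step
S = Q @ V                         # row n of S is the input at time n

# The whole stream lies in an L-dimensional subspace, so recovering all N
# inputs amounts to recovering the N*L entries of Q.
print(S.shape, np.linalg.matrix_rank(S))   # expect (200, 64) and rank 3
```

Counting L recoverable coefficients per time step over N steps is what makes LN the natural measure of STM here.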
“…A recent study used the RIP mathematical formalization (Charles et al., 2014) to show that the memory capacity of randomly connected recurrent networks (like the ones in CA3) receiving inputs that are approximately sparse in some basis can scale superlinearly with the number of neurons. Moreover, under certain conditions, memory capacity was found to largely exceed network size.…”
Section: Conclusion and Future Considerations
confidence: 99%
“…After it was shown that sparseness, combined with unsupervised learning using natural images, was sufficient to develop features which resemble receptive fields of primary visual cortex Olshausen and Field (1996, 1997); Bell and Sejnowski (1997); Rehn and Sommer (2007), a number of extensions have been proposed that have successfully explained many other aspects of visual information processing, such as complex cell properties and topographic organization. Moreover, a form of code based on sparseness has many potential benefits for neural systems, being energy efficient Niven and Laughlin (2008), increasing storage capacity in associative memories Baum et al. (1988); Charles et al. (2014), and making the structure of natural signals explicit and easier to read out at subsequent levels of processing Olshausen and Field (2004). Particularly noteworthy is the fact that these statistical models can be reformulated as dynamical systems Rozell et al. (2008), where processing units can be identified with real neurons having a temporal dynamics that can be implemented with various degrees of biophysical plausibility: using local learning rules Zylberberg et al. (2011), spiking neurons Hu et al. (2012); Shapero et al. (2013), and even employing distinct classes of inhibitory neurons King et al. (2013); Zhu and Rozell (2015).…”
Section: Introduction
confidence: 99%
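The dynamical-systems reformulation mentioned at the end of this passage can be sketched directly. Below is a bare-bones version of sparse coding run as a continuous-time competition, in the spirit of the Locally Competitive Algorithm of Rozell et al. (2008); the dictionary, step size, and threshold are illustrative choices, not values from any cited paper.

```python
# Minimal LCA-style sketch: sparse coding as a dynamical system.
import numpy as np

rng = np.random.default_rng(3)
D, K = 64, 128                     # signal dimension, dictionary size
lam, dt, steps = 0.1, 0.01, 500    # threshold, Euler step, iterations

Phi = rng.standard_normal((D, K))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm dictionary elements

a_true = np.zeros(K)                        # a few active elements
a_true[rng.choice(K, 4, replace=False)] = 1.0
x = Phi @ a_true                            # observed signal

b = Phi.T @ x                 # constant feed-forward drive
G = Phi.T @ Phi - np.eye(K)   # lateral inhibition between units

def threshold(u):             # soft threshold ties the dynamics to an l1 cost
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.zeros(K)               # internal (membrane-like) states
for _ in range(steps):
    a = threshold(u)
    u += dt * (b - u - G @ a)  # Euler step of the LCA dynamics (tau = 1)

a = threshold(u)
print("active units:", np.flatnonzero(a))
print("reconstruction error:", np.linalg.norm(Phi @ a - x))
```

Each unit's state is driven by the stimulus and inhibited by active competitors, which is what makes the identification with real neurons in the quoted passage natural.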