2004
DOI: 10.1088/0305-4470/37/37/002

Analytic solution of attractor neural networks on scale-free graphs

Abstract: We study the influence of network topology on retrieval properties of recurrent neural networks, using replica techniques for diluted systems. The theory is presented for a network with an arbitrary degree distribution p(k) and applied to power-law distributions p(k) ∼ k^−γ, i.e. to neural networks on scale-free graphs. A bifurcation analysis identifies phase boundaries between the paramagnetic phase and either a retrieval phase or a spin-glass phase. Using a population dynamics algorithm, the retrieval …
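As a concrete illustration of the setting the abstract describes, the sketch below simulates a diluted Hopfield-style network on a scale-free graph and measures the retrieval overlap with a stored pattern. It is only a Monte Carlo illustration, not the replica or population-dynamics calculation of the paper; the Barabasi-Albert graph (a stand-in for p(k) ∼ k^−γ), the parameter values N, m, P, the coupling normalization and the zero-temperature sequential dynamics are all illustrative assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Hypothetical sizes; a Barabasi-Albert graph stands in for a p(k) ~ k^-gamma topology.
N, m, P = 2000, 10, 3                      # neurons, BA attachment parameter, stored patterns
G = nx.barabasi_albert_graph(N, m, seed=0)
A = nx.to_numpy_array(G)                   # adjacency matrix of the scale-free graph
c = A.sum() / N                            # mean degree

# Hebbian couplings restricted to existing edges (diluted Hopfield network).
xi = rng.choice([-1, 1], size=(P, N))      # P random +/-1 patterns
J = A * (xi.T @ xi) / c

# Zero-temperature retrieval from a corrupted copy of pattern 0.
s = xi[0].copy()
flip = rng.random(N) < 0.2                 # flip 20% of the spins initially
s[flip] *= -1
for _ in range(20):                        # random sequential updates
    for i in rng.permutation(N):
        h = J[i] @ s
        if h != 0.0:
            s[i] = 1 if h > 0 else -1

print("retrieval overlap:", (s @ xi[0]) / N)
```

A final overlap close to 1 indicates successful retrieval; lowering the mean degree or increasing the number of stored patterns pushes the simulated network toward the spin-glass regime that the paper's bifurcation analysis maps out.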

Cited by 35 publications (46 citation statements) · References 29 publications

“…Davey et al also give results for small-world networks of perceptrons (Davey et al, 2006). Other investigations have been reported in which sparse Hopfield networks are connected with scale-free connection graphs, for example (da Fontoura Costa and Stauffer, 2003; Kosinski and Sinolecka, 1999; Perez Castillo, Wemmenhove, Hatchett, Coolen, Skantzos, and Nikoletopoulos, 2004; Stauffer, Aharony, da Fontoura Costa, and Adler, 2003; Torres, Munoz, Marro, and Garrido, 2004). Another approach is to build networks with a modular structure: see for example (Horn, Levy, and Ruppin, 1999; Renart, Parga, and Rolls, 1999).…”
Section: Related Work (mentioning)
confidence: 99%
“…In particular due to the unexpectedly rich and varied range of multi-disciplinary applications of finite connectivity replica techniques which emerged subsequently in, for example, spin-glass modelling [6][7][8][9], error correcting codes [10][11][12][13], theoretical computer science [14][15][16][17], recurrent neural networks [18][19][20] and 'small-world' networks [21], this field is presently enjoying a renewed interest and popularity. Until very recently, analysis was limited to the equilibrium properties of such models, but now attention has also turned to the dynamics of finitely connected spin systems [22][23][24][25], using combinatorial and generating functional methods.…”
Section: Introduction (mentioning)
confidence: 99%
“…Moreover, we consider a highly sparse network, with values of κ/N ∈ [10⁻³, 10⁻²], which can be homogeneous (i.e., every node having roughly the same connectivity degree) or heterogeneous, with the formation of hubs. Both sparseness and heterogeneity damage severely the memory retrieval ability of the neural network that, for such cases, diminishes fast with P compared with the case of highly connected and homogeneous neural networks (Stauffer et al, 2003; Castillo et al, 2004; Morelli et al, 2004; Torres et al, 2004; Oshima and Odagaki, 2007; Akam and Kullmann, 2014). However, there is experimental evidence that the configurations of neural activity related to particular memories in the animal brain involve many more silent neurons, ξ_i^μ = 0, than active ones, ξ_i^μ = 1 (Chklovskii et al, 2004; Akam and Kullmann, 2014). Notice that in this case there is a positive correlation between different patterns due to the sparseness, since a_0 ≠ 0.5, which is also known to improve the storage capacity of a neural network (Knoblauch et al, 2014; Knoblauch and Sommer, 2016), and in particular that of heterogeneous and sparse neural networks (Morelli et al, 2004).…”
Section: Model and Methods (mentioning)
confidence: 99%
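The point in the last excerpt, that sparse 0/1 patterns with mean activity a_0 ≠ 0.5 are positively correlated, can be checked numerically. The following is a minimal sketch with hypothetical values of N, P and a_0, not taken from any of the cited papers; the centring step illustrates the standard covariance-type remedy for the bias, not necessarily the specific learning rule used in those works.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes, chosen only for illustration.
N, P, a0 = 2000, 10, 0.1              # neurons, patterns, mean activity (sparse coding)

# Sparse {0,1} patterns: each unit is active with probability a0.
xi = (rng.random((P, N)) < a0).astype(float)

# Because a0 != 0.5, raw patterns overlap positively on average (about a0**2 per unit).
raw = [xi[m] @ xi[n] / N for m in range(P) for n in range(m + 1, P)]
print(f"mean raw overlap      {np.mean(raw):.4f}   (a0**2 = {a0**2:.4f})")

# Covariance-type learning rules remove this bias by centring the patterns
# before building the Hebbian couplings J_ij ~ sum_mu (xi_i^mu - a0)(xi_j^mu - a0).
eta = xi - a0
cen = [eta[m] @ eta[n] / N for m in range(P) for n in range(m + 1, P)]
print(f"mean centred overlap  {np.mean(cen):+.4f}")
```

The raw overlaps cluster around a_0² while the centred overlaps fluctuate around zero, which is one reason sparse coding combined with covariance-type rules can improve storage capacity, as the excerpt notes.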