2013
DOI: 10.1140/epjb/e2013-40502-8
Sparse Hopfield network reconstruction with ℓ1 regularization

Abstract: We propose an efficient strategy to infer a sparse Hopfield network based on magnetizations and pairwise correlations measured through Glauber sampling. This strategy incorporates ℓ1 regularization into the Bethe approximation via a quadratic approximation to the log-likelihood, and further reduces the inference error of the unregularized Bethe approximation. The optimal regularization parameter is observed to be of the order of M^{-ν}, where M is the number of independent samples. The va…
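The abstract describes ℓ1 regularization applied through a quadratic approximation to the log-likelihood. The paper's own algorithm works through the Bethe free energy, which is not reproduced here; the sketch below only illustrates the generic proximal-gradient (soft-thresholding) form that such a quadratic-plus-ℓ1 objective takes. The names model_correlations, lam, and lr are illustrative placeholders, not the paper's notation.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm: shrink each entry toward zero by lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def l1_couplings(C_data, model_correlations, J0, lam, lr=0.05, n_steps=1000):
    """Proximal-gradient (ISTA) loop for an l1-penalized pairwise model.

    C_data             : (N, N) empirical pairwise correlations.
    model_correlations : callable J -> model correlations under some
                         tractable approximation (e.g. Bethe); a placeholder.
    lam                : l1 strength; the paper observes the optimum scales
                         as M**(-nu) with the number of samples M.
    """
    J = J0.copy()
    for _ in range(n_steps):
        grad = model_correlations(J) - C_data   # gradient of the negative log-likelihood
        J = soft_threshold(J - lr * grad, lr * lam)
        np.fill_diagonal(J, 0.0)                # no self-couplings
    return J
```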

Cited by 6 publications (10 citation statements) · References 38 publications
“…where t and η denote the learning step and learning rate, respectively. The maximum likelihood learning shown here has a simple interpretation of minimizing the Kullback-Leibler divergence between the empirical probability and the model probability [27,28]. In the learning equation (Eq.…”
Section: Maximum Entropy Model
confidence: 99%
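The learning equation this quote refers to is cut off by the excerpt. For a pairwise maximum-entropy (Ising) model, the standard maximum-likelihood gradient update, consistent with the step index t and learning rate η in the quote, is (a plausible reconstruction, not text from the citing paper):

\[
J_{ij}^{t+1} = J_{ij}^{t} + \eta\left(\langle\sigma_i\sigma_j\rangle_{\text{data}} - \langle\sigma_i\sigma_j\rangle_{\text{model}}\right),
\qquad
h_i^{t+1} = h_i^{t} + \eta\left(\langle\sigma_i\rangle_{\text{data}} - \langle\sigma_i\rangle_{\text{model}}\right).
\]

Ascending this gradient until the model moments match the empirical ones is precisely what minimizes the Kullback-Leibler divergence mentioned in the quote.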
“…Winterhalder et al. (2005) reviewed the nonparametric methods and Granger causality–based methods. Much progress has been made recently using the kinetic Ising model and Hopfield network (Huang, 2013; Dunn & Roudi, 2013; Battistin, Hertz, Tyrcha, & Roudi, 2015; Capone, Filosa, Gigante, Ricci-Tersenghi, & Del Giudice, 2015; Roudi & Hertz, 2011) with sparsity regularization (Pernice & Rotter, 2013). The GLM method (Okatan et al., 2005; Truccolo et al., 2005; Pillow et al., 2008) and the maximum entropy method (Schneidman, Berry, Segev, & Bialek, 2006) are two popular classes of methods and the main modern approaches for modeling multiunit recordings (Roudi, Dunn, & Hertz, 2015).…”
Section: Discussion
confidence: 99%
“…I examined a simple question: what features does the Ising model learn when trained on images of the letters A-J? Using the variational pseudolikelihood approach, it is simple to infer coupling matrices with a specific structure, such as that of a Hopfield neural network [5,21,26,29]. The couplings in a Hopfield network are described by a Hebbian rule such that…”
Section: Methods
confidence: 99%
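The Hebbian rule is truncated in the excerpt. Its standard form for a Hopfield network of N spins storing P binary patterns ξ^μ ∈ {−1, +1}^N is (the citing paper may use a different normalization):

\[
J_{ij} = \frac{1}{N}\sum_{\mu=1}^{P}\xi_i^{\mu}\xi_j^{\mu} \quad (i \neq j), \qquad J_{ii} = 0.
\]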
“…As a result, the inverse Ising problem has to be solved approximately. A number of approximate methods for inferring the parameters of the Ising model have been introduced, including naive mean-field theory [15], the Thouless-Anderson-Palmer (TAP) approximation [16], the isolated spin pair approximation [14], the Sessak-Monasson expansion [17], and others [5,18–27]. The second obstacle, overfitting, is more fundamental and generally affects all high-dimensional problems in statistical inference.…”
confidence: 99%
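Of the approximations listed, naive mean-field inversion is the simplest to state: invert the connected correlation matrix and read the couplings off its negated off-diagonal entries. A minimal numpy sketch follows, assuming an (M, N) array of ±1 samples; the TAP and Bethe variants add correction terms not shown here.

```python
import numpy as np

def nmf_inverse_ising(samples):
    """Naive mean-field inverse Ising estimate.

    samples : (M, N) array of +/-1 spins, M independent samples of N spins.
    Returns (h, J): fields and couplings under the nMF approximation.
    """
    m = samples.mean(axis=0)            # magnetizations <s_i>
    C = np.cov(samples, rowvar=False)   # connected correlations C_ij
    J = -np.linalg.inv(C)               # nMF couplings: J_ij = -(C^-1)_ij
    np.fill_diagonal(J, 0.0)            # discard self-couplings
    # nMF self-consistency m_i = tanh(h_i + sum_j J_ij m_j), solved for h_i:
    h = np.arctanh(np.clip(m, -0.999, 0.999)) - J @ m
    return h, J
```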