1986
DOI: 10.1103/physreva.34.4217

Collective computational properties of neural networks: New learning mechanisms

Cited by 309 publications (140 citation statements)
References 11 publications
“…The algorithms employed in this section are all designed to generate weight matrices that are good approximations of the weight matrix generated by the pseudo-inverse rule of Personnaz et al [30]. According to this rule…”
Section: The Pseudo-inverse Class Of Models (mentioning)
confidence: 99%
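The rule these citing papers refer to is the projection (pseudo-inverse) learning rule of Personnaz et al. A minimal NumPy sketch, assuming the stored patterns are bipolar vectors collected as the columns of a matrix Xi (this is an illustrative reconstruction, not code from the cited papers):

```python
import numpy as np

def pseudo_inverse_weights(patterns):
    """Projection (pseudo-inverse) rule: W = Xi Xi^+,
    where the columns of Xi are the patterns to be stored.

    Because Xi Xi^+ Xi = Xi, every stored pattern xi satisfies
    W @ xi = xi, i.e. it is a fixed point of sign(W @ s)."""
    Xi = np.column_stack(patterns)       # shape (N, p)
    return Xi @ np.linalg.pinv(Xi)       # shape (N, N)
```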
“…For very small networks it is possible to explore the state space exhaustively (see, for example, [30]), in order to calculate R exactly, but for more realistic network sizes the nature of the attractors is very hard to compute [14,25] and only empirical methods, as described here, are available.…”
Section: Attractor Basin Size (mentioning)
confidence: 99%
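For the exhaustive state-space exploration mentioned in this citation, all 2^N states of a very small network can be enumerated directly. A rough sketch, assuming synchronous sign updates and simply counting which terminal state each initial state reaches; the exact dynamics and the quantity R in the citing paper may differ:

```python
import itertools
from collections import Counter
import numpy as np

def basin_sizes(W, max_steps=100):
    """Iterate sign dynamics from every one of the 2^N states of a small
    network and count how many initial states end at each terminal state.
    If no fixed point is reached within max_steps (synchronous updates can
    cycle), the last state is counted as-is: a deliberate simplification."""
    N = W.shape[0]
    counts = Counter()
    for bits in itertools.product([-1.0, 1.0], repeat=N):
        s = np.array(bits)
        for _ in range(max_steps):
            s_new = np.sign(W @ s)
            s_new[s_new == 0] = 1.0      # break ties deterministically
            if np.array_equal(s_new, s):
                break
            s = s_new
        counts[tuple(s.astype(int))] += 1
    return counts
```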
“…When correlated patterns are embedded by this scheme, the storage performance of the network decreases significantly. Thus, much research on storing correlated patterns is directed at orthogonalizing these patterns, for example via the pseudo-inverse matrix method [2] and iterative learning schemes [3]. Although these methods make all the correlated patterns stable local minima of the network, their computational costs grow with the network size and the number of patterns to be embedded.…”
Section: Introduction (mentioning)
confidence: 99%
“…It was developed by Personnaz et al. [3][4] and later refined by Kanter and Sompolinsky [5] and Gorodnichy [6]. This method allows up to p < N linearly independent patterns to be memorized.…”
Section: Introduction (mentioning)
confidence: 99%
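The p < N capacity claim quoted above can be checked numerically with the pseudo_inverse_weights sketch given earlier. A hypothetical quick test, using random bipolar patterns (which are almost surely linearly independent for p well below N):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 32, 10                                # p < N stored patterns
patterns = [rng.choice([-1.0, 1.0], size=N) for _ in range(p)]

Xi = np.column_stack(patterns)
W = Xi @ np.linalg.pinv(Xi)                  # projection rule, as above

for xi in patterns:
    out = np.sign(W @ xi)
    out[out == 0] = 1.0
    assert np.array_equal(out, xi)           # each pattern is a fixed point
print("all", p, "patterns are fixed points of sign(W @ s)")
```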