High Performance Recording Algorithm For Hopfield Model Associative Memories (1989)
DOI: 10.1117/12.7976900

Cited by 25 publications (17 citation statements)
References 27 publications
“…The value corresponds to ; taking into account that there are 45 vectors at , the recall accuracy is 86% with the SVM and 58% with the perceptron. For the sake of comparison, we consider the results in [17], based on the Ho-Kashyap method [18]: they are % for and % for and . Example 2: Here we consider a specific design example and compare four different design strategies: standard SVM, SVM with fixed threshold according to (23), perceptron, and the designer neural network of Chan and Zak.…”
Section: Results
confidence: 99%
“…We also examined the capacity measure proposed in [18], based on the concept of recall accuracy (RA) for a given Hamming radius .…”
Section: Results
confidence: 99%
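The recall-accuracy measure referred to above can be illustrated with a minimal Hopfield sketch: store a few bipolar patterns, perturb a stored pattern within a given Hamming radius, and check whether the network settles back to it. This is not code from any of the cited papers; the Hebbian storage rule, the synchronous update, and all sizes and parameter values below are standard textbook choices used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Hebbian (outer-product) weight matrix for a Hopfield network.

    patterns: (p, n) array of bipolar (+1/-1) vectors.
    """
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def recall(W, x, n_steps=20):
    """Synchronous Hopfield updates until a fixed point or step limit."""
    for _ in range(n_steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1  # break ties toward +1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

def recall_accuracy(W, patterns, radius, n_trials=50):
    """Fraction of probes at Hamming distance `radius` from a stored
    pattern that converge back to that exact pattern."""
    n = patterns.shape[1]
    hits = 0
    for _ in range(n_trials):
        p = patterns[rng.integers(len(patterns))]
        flip = rng.choice(n, size=radius, replace=False)
        probe = p.copy()
        probe[flip] *= -1      # flip `radius` randomly chosen bits
        hits += np.array_equal(recall(W, probe), p)
    return hits / n_trials
```

At low storage load (e.g. 3 patterns in 100 neurons), RA at a small radius is close to 1; as the load or the radius grows, RA drops, which is what makes it usable as a capacity measure.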
“…In this paper, we extend our earlier work on the Ho-Kashyap training procedure [7,8] and derive, based on gradient-descent strategies, three new adaptive Ho-Kashyap (AHK) training rules: AHK I, AHK II, and AHK III. We propose these training rules as alternatives to the LMS and perceptron training rules for classification problems.…”
Section: Introduction
confidence: 99%
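For context, the classic batch Ho-Kashyap procedure that the AHK rules extend can be sketched as below. This is the standard textbook algorithm (alternating a least-squares weight update with a monotone margin update), not the adaptive variants derived in the cited paper; the learning rate, iteration count, and tolerance are illustrative assumptions.

```python
import numpy as np

def ho_kashyap(Y, lr=0.1, n_iter=500, tol=1e-6):
    """Classic batch Ho-Kashyap procedure: seek weights a with Y a > 0.

    Y: (n_samples, n_features) margin matrix, with each row already
       sign-normalized so that Y a > 0 means every sample is classified
       correctly.
    Returns the weight vector a and the final margin vector b.
    """
    n, d = Y.shape
    b = np.ones(n)              # positive margin targets, kept > 0 throughout
    Y_pinv = np.linalg.pinv(Y)  # pseudoinverse, reused at every step
    a = Y_pinv @ b              # least-squares solution for the current b
    for _ in range(n_iter):
        e = Y @ a - b           # error between achieved and target margins
        # Increase b only where the error is positive, so b never decreases:
        # e + |e| is 2e where e > 0 and exactly 0 where e <= 0.
        b = b + lr * (e + np.abs(e))
        a = Y_pinv @ b
        if np.all(np.abs(e) < tol):
            break               # margins met: a linear separator was found
    return a, b
```

For linearly separable data the iteration drives all margins positive; the AHK rules of the cited paper replace this batch pseudoinverse step with sample-by-sample gradient-descent updates.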
“…Other related work can be found in [16], [17], [18] and [19], in which the performance of BAM is improved either by adding a dummy neuron, increasing the number of layers, or manipulating the interconnections among neurons in each layer. Some new learning algorithms were also introduced to improve the performance of the original BAM; see [20]-[24] for details.…”
Section: Related Work
confidence: 99%