Proceedings of the 41st SICE Annual Conference. SICE 2002.
DOI: 10.1109/sice.2002.1196557
FPGA implementation of bidirectional associative memory via simultaneous perturbation rule

Cited by 3 publications (7 citation statements).
References: 0 publications.
“…Winner-take-all networks, which utilize recurrent inhibitory connections to force a single output, have also been implemented in neuromorphic systems [1107]- [1109]. Hopfield networks were especially common in earlier neuromorphic implementations, as is consistent with neural network research at that time [5], [6], [13], [15], [759], [764], [813], [1110]- [1148], but there are also more recent implementations [838], [1149]- [1159]. Similarly, associative memory based implementations were also significantly more popular in earlier neuromorphic implementations [1053], [1160]- [1182].…”
Section: Network Models
confidence: 87%
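The paper indexed here implements a bidirectional associative memory (BAM), the two-layer heteroassociative model alluded to in the excerpt above. As a point of reference only, here is a minimal NumPy sketch of BAM storage and recall, assuming bipolar patterns and a Hebbian outer-product weight matrix; the function names and dimensions are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal BAM sketch: store bipolar (+1/-1) pattern pairs in a
# Hebbian outer-product weight matrix, then recall by bouncing a
# probe between the two layers until a fixed point. Illustrative
# only; not the paper's FPGA formulation.

def bam_weights(pairs):
    """Sum of outer products x y^T over the stored pairs."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def bam_recall(W, x, steps=10):
    """Alternate y = sgn(W^T x), x = sgn(W y) until x stabilizes."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(steps):
        y = sign(W.T @ x)
        x_new = sign(W @ y)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y

pairs = [(np.array([1, -1, 1, -1]), np.array([1, 1, -1])),
         (np.array([-1, -1, 1, 1]), np.array([-1, 1, 1]))]
W = bam_weights(pairs)
x_recalled, y_recalled = bam_recall(W, np.array([1, -1, 1, 1]))
```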
“…Other approaches for on-chip supervised weight training have been utilized. These approaches include the least-mean-squares algorithm [750], [787], [1025], [1026], weight perturbation [19], [625], [655], [669], [682], [698], [699], [708], [710], [712], [713], [715], [736], [834], [835], [841], [845]-[847], [856], [1078]-[1080], [1098], [1099], [1148], [1304], training specifically for convolutional neural networks [1305], [1306], and others [169], [220], [465], [714], [804], [864], [865], [1029], [1049], [1307]-[1320]. Other on-chip supervised learning mechanisms are built for particular model types, such as Boltzmann machines, restricted Boltzmann machines, or deep belief networks [12], [627], [1135], [1193]…”
Section: A. Supervised Learning
confidence: 99%
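The "weight perturbation" family named in this excerpt includes the simultaneous perturbation rule that the indexed paper maps onto an FPGA. Below is a rough sketch of the idea in its two-measurement (SPSA-style) form; the loss function, gain constants, and names are assumptions for illustration, not details of any cited implementation.

```python
import numpy as np

# Simultaneous-perturbation sketch (SPSA-style, two measurements):
# every weight is perturbed at once with a random +/-1 sign vector,
# and a gradient estimate comes from just two loss evaluations.
# `loss`, `a`, and `c` are illustrative placeholders.

def sp_update(w, loss, a=0.05, c=0.01, rng=np.random.default_rng()):
    """One simultaneous-perturbation step on the weight vector w."""
    delta = rng.choice([-1.0, 1.0], size=w.shape)  # Bernoulli +/-1 perturbation
    # For +/-1 perturbations, dividing by delta equals multiplying by it.
    g_hat = (loss(w + c * delta) - loss(w - c * delta)) / (2 * c) * delta
    return w - a * g_hat

# Toy quadratic loss with minimum at w = (1, -2).
loss = lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2
w = np.zeros(2)
for _ in range(500):
    w = sp_update(w, loss)
```

The attraction for hardware is that each update needs only two forward evaluations of the loss, with no backpropagation datapath, which keeps the FPGA logic small.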
“…Consequently, this optimal allocation can be obtained by properly including different constraints (i.e., channel and queue status for the different users) in the definition of the HNN energy. From an implementation point of view, the HNN methodology can be carried out either by iteratively solving a numerical differential equation with the Euler technique or by means of hardware implementations (the HNN was derived with a hardware implementation in mind), such as the field-programmable gate array (FPGA) chip [18], which has proved practical for implementation purposes.…”
Section: Introduction
confidence: 99%
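As a rough illustration of the "iterative Euler" route this excerpt contrasts with hardware, the sketch below integrates the standard continuous Hopfield dynamics du/dt = -u + Wv + b with v = tanh(u) by forward Euler; the weights, bias, and step size are placeholders rather than values from the cited work.

```python
import numpy as np

# Forward-Euler integration of continuous Hopfield network (HNN)
# dynamics: du/dt = -u + W v + b, with outputs v = tanh(u).
# With a symmetric, zero-diagonal W and a small enough step, the
# HNN energy decreases along the trajectory toward a minimum.

def hopfield_euler(W, b, u0, dt=0.01, steps=1000):
    u = u0.copy()
    for _ in range(steps):
        v = np.tanh(u)              # neuron outputs
        u += dt * (-u + W @ v + b)  # one Euler step on the ODE
    return np.tanh(u)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
W = (A + A.T) / 2                   # symmetric coupling matrix
np.fill_diagonal(W, 0.0)            # no self-connections
b = np.zeros(4)
v_final = hopfield_euler(W, b, rng.standard_normal(4))
```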