Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan)
DOI: 10.1109/ijcnn.1993.717042

Development of a high-performance general purpose neuro-computer composed of 512 digital neurons

Cited by 15 publications (8 citation statements)
References 3 publications
“…On-chip trainable neural hardware implementations have also been reported in the literature. Most of the reported ones are custom ASIC implementations, such as the GRD chip by Murakawa et al. [61], the on-chip backpropagation implementation of Ayala et al. [15], CNAPS by Hammerstrom [62], MY-NEUPOWER by Sato et al. [63], and FPNA by Farquhar et al. [66]. FPGA-based implementations of on-chip training algorithms have also been reported, such as the backpropagation algorithm implementations in [48,49,57,58].…”
Section: Discussion
Confidence: 99%
“…Multiple GRD chips can be connected for a scalable neural architecture. Two commercially available neurochips from the early 1990s, the CNAPS [62] and MY-NEUPOWER [63], support on-chip training. CNAPS was a SIMD array of 64 processing elements per chip, each comparable to a low-precision DSP, and was marketed commercially by Adaptive Solutions.…”
Section: Design
Confidence: 99%
“…Fixed-point arithmetic is often chosen for NN accelerators to produce smaller PEs that can be integrated in large numbers [11]-[14]. As already mentioned, it has been shown by simulation that the use of fixed-point variables with an appropriate number of bits does not significantly hinder the convergence of NN algorithms [6].…”
Section: Fixed-Point Arithmetic
Confidence: 99%
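
As a minimal illustration of that claim (a sketch for this page, not code from the cited works): the following Python/NumPy snippet trains a small least-squares model while snapping the weights to a signed 16-bit fixed-point grid after every update. The quantize helper, the bit widths, and the toy data are all assumptions chosen for the example.

import numpy as np

# Hypothetical helper: snap values to a signed 16-bit fixed-point grid
# (12 fractional bits), saturating at the representable range.
def quantize(x, frac_bits=12, total_bits=16):
    scale = 2.0 ** frac_bits
    lo = -(2 ** (total_bits - 1)) / scale
    hi = (2 ** (total_bits - 1) - 1) / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))              # toy inputs
w_true = np.array([0.5, -1.0, 2.0, 0.25])
y = X @ w_true                             # noiseless targets

w = np.zeros(4)
lr = 0.1
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(X)      # least-squares gradient
    w = quantize(w - lr * grad)            # keep weights on the fixed-point grid

print("max weight error:", np.abs(w - w_true).max())  # on the order of 2**-12

With 12 fractional bits the learned weights land within roughly one grid step of the floating-point solution; shrinking frac_bits widens that gap, which is the bit-width trade-off the statement refers to.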
“…We have also succeeded in developing a neuro-computer that has a high-speed learning function [Sato 1993]. Making the most of that experience, we here propose new microprocessor-based evolvable hardware.…”
Section: Introduction
Confidence: 98%