1993
DOI: 10.1109/4.192041
A self-learning digital neural network using wafer-scale LSI

Abstract: FIELD OF THE INVENTION The present invention concerns a digital hardware architecture for realizing neural networks. Throughout the description, reference is made to the following list of references:

Cited by 30 publications (9 citation statements)
References 14 publications (17 reference statements)

“…As we deal with scalable designs, layouts usually reach the technology limits in size and power consumption, so the risk of defects in the silicon circuits is greater than ever. Being able to use circuits that contain one or more faulty PEs makes it possible to reach a higher degree of parallelism for a given price; this has often been applied to massively parallel architectures such as CAMs [11] and neural networks [22]. Fault tolerance can be applied to 2-D architectures [15]; however, it is far simpler to implement in a 1-D organisation.…”
Section: One-Dimensional Nets
Citation type: mentioning
confidence: 99%
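The bypass idea in the statement above is easy to make concrete. Below is a minimal sketch, not taken from the cited paper, of mapping a logical 1-D pipeline onto a physical PE chain while skipping faulty elements; the PE class, its process step, and the fault flags are all illustrative assumptions.

```python
# Minimal sketch (not from the cited paper): map a logical 1-D pipeline
# onto a physical PE chain, bypassing elements flagged as faulty.

class PE:
    """Illustrative processing element with a fault flag."""
    def __init__(self, index, faulty=False):
        self.index = index
        self.faulty = faulty

    def process(self, x):
        # Placeholder compute step for a working PE.
        return x + 1

def build_logical_chain(pes):
    # In a 1-D organisation, fault tolerance reduces to skipping faulty
    # PEs and wiring their immediate neighbours together.
    return [pe for pe in pes if not pe.faulty]

physical = [PE(i, faulty=(i in {2, 5})) for i in range(8)]
chain = build_logical_chain(physical)

x = 0
for pe in chain:
    x = pe.process(x)
print(f"{len(chain)} of {len(physical)} PEs usable; result = {x}")
```

Because each PE only talks to its immediate neighbour, routing around a faulty element is a purely local rewiring; a 2-D grid must reroute whole rows or columns, which is the simplicity gap the citing text points to.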
“…Since Eq. (4) is the sum of the errors over the faults used in the BP learning algorithm, the weight modification for the case where the faults are injected becomes as follows, where Δw_i(·) is the modification of the weights when the n-multiple weight fault X is injected into the n-multiple links i. From Eq. (6), the proposed learning algorithm performs the BP algorithm while injecting the multiple value S sequentially into each link in an MNN.…”
Citation type: mentioning
confidence: 99%
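Since the quoted Eqs. (4) and (6) are not reproduced in this excerpt, the following is only a hedged sketch of the general idea: accumulate the BP weight update as a sum of errors obtained by injecting a fault value into each link in turn. The single-layer network, tanh activation, and names such as FAULT_VALUE are assumptions for illustration, not the paper's definitions.

```python
# Hedged sketch of fault-injected backpropagation: the loss is treated as a
# sum of errors over injected link faults, per the citing text. All model
# details below are illustrative assumptions, not the cited paper's.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 3))   # weights: 3 inputs -> 4 outputs
FAULT_VALUE = 0.0                         # the injected value S (assumed)
LR = 0.05

def forward(W, x):
    return np.tanh(W @ x)

def train_step(W, x, t):
    # One BP step: each link is replaced by the fault value in turn,
    # and the gradients of the per-fault errors are summed.
    grad = np.zeros_like(W)
    for i in np.ndindex(W.shape):
        Wf = W.copy()
        Wf[i] = FAULT_VALUE               # inject the fault on link i
        y = forward(Wf, x)
        delta = (y - t) * (1.0 - y ** 2)  # BP delta for 0.5*||y-t||^2, tanh
        grad += np.outer(delta, x)
    return W - LR * grad / W.size

x = np.array([0.5, -0.2, 0.8])
t = np.array([0.0, 1.0, 0.0, 1.0])
for _ in range(100):
    W = train_step(W, x, t)
```

The point of summing over injected faults is that the resulting weights are trained to keep the output error small even when any single link is perturbed, which is what makes the learned network fault tolerant.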
“…As an integer-representation architecture is used in the machine [8], all output states from 2^0 to 2^(n-1) have the same probability of being stuck. Therefore, from the above evaluation of p_d = 0.06, the global-ordering state can be achieved in the above SOM condition (N = 100, m = 11); that is, only 6% of all stuck outputs impede the global ordering.…”
Section: Results
Citation type: mentioning
confidence: 99%
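As a rough illustration of the uniform stuck-bit assumption, the sketch below injects random stuck-at faults into m-bit integer-coded outputs and counts how often the perturbation exceeds a disturbance threshold. The threshold and the ordering criterion are hypothetical, so the resulting fraction only loosely echoes, and does not reproduce, the paper's p_d = 0.06.

```python
# Illustrative Monte Carlo: with an m-bit integer representation, a stuck
# bit at position k perturbs an output by up to 2^k, and every position is
# assumed equally likely to be hit. TOLERANCE is a hypothetical threshold
# for "disturbs the global ordering"; it is not from the cited paper.
import random

M_BITS = 11          # m = 11, from the citing text
TOLERANCE = 512      # hypothetical disturbance threshold (assumption)

def stuck_output(value, bit, stuck_to):
    """Force one bit of an integer-coded output to 0 or 1."""
    mask = 1 << bit
    return (value | mask) if stuck_to else (value & ~mask)

random.seed(0)
impeding = 0
trials = 100_000
for _ in range(trials):
    v = random.randrange(2 ** M_BITS)
    bit = random.randrange(M_BITS)        # uniform over 2^0 .. 2^(m-1)
    faulty = stuck_output(v, bit, random.randrange(2))
    if abs(faulty - v) > TOLERANCE:
        impeding += 1
print(f"fraction of impeding stuck faults ≈ {impeding / trials:.3f}")
```

The qualitative takeaway matches the citing statement: most stuck faults land on low-order bits and shift an output only slightly, so only a small fraction of stuck outputs is large enough to impede global ordering.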