2002
DOI: 10.1109/tnn.2002.1031948

A CMOS feedforward neural-network chip with on-chip parallel learning for oscillation cancellation

Abstract: This paper presents a mixed-signal CMOS feedforward neural-network chip with on-chip error-reduction hardware for real-time adaptation. The chip has compact on-chip weights capable of high-speed parallel learning; the implemented learning algorithm is a genetic random search algorithm, the random weight change (RWC) algorithm. The algorithm does not require a known desired neural-network output for error calculation and is suitable for direct feedback control. With hardware experiments, we demonstrate t…
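The RWC rule named in the abstract is, in its usual formulation, a parallel perturbation scheme: every weight is changed by a fixed ±δ at once, the same perturbation vector is reapplied as long as the scalar error keeps decreasing, and a fresh random vector is drawn otherwise. A minimal software sketch of that rule (the function name, toy error surface, and constants are illustrative assumptions, not taken from the chip paper):

```python
import numpy as np

def rwc_train(error_fn, weights, delta=0.01, steps=1000, rng=None):
    """Random Weight Change (RWC): a model-free, gradient-free learning rule.

    error_fn : callable returning a scalar error for a weight vector; only
               the sign of the error change is used, so no desired network
               output is needed (any feedback signal works).
    weights  : initial weight vector (np.ndarray).
    delta    : fixed perturbation magnitude.
    """
    rng = rng or np.random.default_rng(0)
    # initial random perturbation: every weight moves by +delta or -delta
    dw = delta * rng.choice([-1.0, 1.0], size=weights.shape)
    prev_err = error_fn(weights)
    for _ in range(steps):
        weights = weights + dw
        err = error_fn(weights)
        if err >= prev_err:
            # error did not decrease: draw a fresh random perturbation;
            # otherwise keep repeating the same perturbation vector
            dw = delta * rng.choice([-1.0, 1.0], size=weights.shape)
        prev_err = err
    return weights

# Toy usage on a hypothetical quadratic error surface:
w = rwc_train(lambda w: np.sum((w - 1.0) ** 2), np.zeros(4))
```

Because only the sign of the error change is consulted, the same loop runs when error_fn is any measurable feedback signal, which is what makes the rule suitable for direct feedback control.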

Cited by 29 publications (14 citation statements)
References 17 publications
“…As we may know, nonlinear elements are frequently encountered in analog/digital circuit prototypes of neural networks [19,21]; involving nonlinear activation functions can be beneficial to potential design and implementation. On the other hand, faster convergence is indeed required for solving GLME (1) when the linear model might not satisfy increasing computational requirements.…”
Section: Theorem 1, The Neural State Matrix; citation type: mentioning
confidence: 99%
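The statement above contrasts a linear neural dynamics model with one that passes the residual through a nonlinear activation before feeding it back. A minimal sketch of that idea for the simplest matrix equation AX = B (a gradient-type model with a tanh activation; the equation, gains, and step sizes are illustrative assumptions, since the quoted GLME and its Theorem 1 are not reproduced on this page):

```python
import numpy as np

def solve_AX_eq_B(A, B, gamma=10.0, dt=1e-3, steps=5000, nonlinear=True):
    """Gradient-type neural dynamics for A X = B, forward-Euler integrated."""
    X = np.zeros_like(B)
    for _ in range(steps):
        E = A @ X - B                              # residual of A X = B
        F = np.tanh(3.0 * E) if nonlinear else E   # elementwise activation
        X = X - dt * gamma * (A.T @ F)             # descent on ||A X - B||^2
    return X

A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.eye(2)
X = solve_AX_eq_B(A, B)   # X approaches A^{-1} here, since B = I
```

Near the solution tanh(3E) ≈ 3E, so the nonlinear model behaves like the linear one with a larger effective gain; this is the convergence speed-up the quoted passage alludes to.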
“…Given a training set REF of n reference vectors REF = {ref_1, ref_2, …, ref_n} and an unknown input vector IN defined in a d-dimensional space, the nearest neighbor (NN) classifier, which is a nonparametric statistical method, assigns IN to the class of its closest neighbor in the non-preprocessed REF in terms of a distance metric, according to (1).…”
Section: arg min ref; citation type: mentioning
confidence: 99%
“…Then, the nearest neighbor (NN) classifier, which is a nonparametric statistical method, assigns IN to the class of its closest neighbor in the non-preprocessed REF in terms of a distance metric, according to (1). The KNN classifier, on the other hand, assigns IN to the class holding the majority among its k closest neighbor vectors.…”
Section: arg min ref; citation type: mentioning
confidence: 99%
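The two statements define the plain NN rule of Eq. (1) and its k-nearest-neighbor extension. A minimal sketch that follows the quoted REF/IN naming (the Euclidean metric and the toy data are assumptions; the quote only says "a distance metric"):

```python
import numpy as np
from collections import Counter

def knn_classify(REF, labels, IN, k=1):
    """Assign IN to the majority class among its k nearest reference vectors.

    REF    : (n, d) array of reference vectors
    labels : length-n sequence of class labels
    IN     : (d,) unknown input vector
    k=1 reduces to the plain nearest-neighbor (NN) rule of Eq. (1).
    """
    dists = np.linalg.norm(REF - IN, axis=1)     # distance to every ref_i
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    votes = Counter(labels[i] for i in nearest)  # majority vote
    return votes.most_common(1)[0][0]

REF = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 1.1]])
labels = ["a", "b", "b"]
print(knn_classify(REF, labels, np.array([0.8, 1.0]), k=3))  # -> "b"
```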
“…The PRWC training algorithm is "model-free," like Random Weight Change (RWC) [5], Weight Perturbation (WP) [6], and Simulated Annealing (SA) [8]. Compared with the on-line gradient-descent (GD) algorithm, the PRWC algorithm allows further simplification of the circuit design, as it does not require intermediate network outputs.…”
Section: B. Probabilistic Random Weight Change (PRWC); citation type: mentioning
confidence: 99%
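RWC is sketched after the abstract above; of the other model-free rules this quote lists, weight perturbation (WP) is the most direct contrast, since it turns the same scalar error signal into a finite-difference gradient estimate, one weight at a time. A minimal sketch (function names, step sizes, and the toy error surface are illustrative assumptions):

```python
import numpy as np

def wp_step(error_fn, w, delta=1e-3, lr=0.1):
    """One Weight Perturbation (WP) update: finite-difference gradient
    estimated from the global scalar error, then a gradient-descent step."""
    base = error_fn(w)
    grad = np.zeros_like(w)
    for i in range(w.size):
        w[i] += delta                          # perturb one weight
        grad[i] = (error_fn(w) - base) / delta # measured error change
        w[i] -= delta                          # restore the weight
    return w - lr * grad

w = np.zeros(4)
for _ in range(200):
    w = wp_step(lambda v: np.sum((v - 1.0) ** 2), w)
# w approaches the minimizer [1, 1, 1, 1]
```

WP needs one error evaluation per weight per update, whereas RWC and PRWC perturb all weights in parallel with a single evaluation; that parallelism is a large part of why these rules map compactly onto hardware.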