4th International Conference on Artificial Neural Networks 1995
DOI: 10.1049/cp:19950601
Implementation issues for on-chip learning with analogue VLSI MLPs

Cited by 6 publications (3 citation statements) · References 0 publications
“…2. With Equation (18) or (19), find the slope factor n* that each HTSMC_n implements. In our case, the slope factors are n*_1 = 2.2845 for HTSMC_2 and n*_2 = 3.8711 for HTSMC_3.…”

Section: Obtaining Arbitrary Slopes (mentioning)

confidence: 99%
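
Read as a worked relationship rather than as the cited paper's Equations (18) and (19), which are not reproduced in this excerpt, the quoted factors can be understood as multiplicative scalings of a base slope. The composition below is an assumption consistent with the next excerpt, not the paper's own derivation.

```latex
% Assumed slope composition; the n*_n values are the factors quoted in the excerpt.
\[
  s_{\mathrm{eff},n} \;=\; n^{*}_{n}\, s_{\mathrm{base}},
  \qquad
  n^{*}_{1} = 2.2845 \;(\text{HTSMC}_2),
  \quad
  n^{*}_{2} = 3.8711 \;(\text{HTSMC}_3).
\]
```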
“…This time, an AHTPC_s with parameter values 1 and 0.66 is used to implement an approximation to the function tanh(0.7915x), to deal with the extra slope obtained with the two HTSMC_n. The slopes of the target functions are the result of multiplying the theoretical slopes s of the AHTPC_s by the slope multiplier factors n* of each HTSMC_n (see Figure 5 and Equations (18) and (19) for the implemented slope multiplier factors n* as a function of that parameter). Table I includes a detailed list of the slopes of both types of functions and the errors obtained in the simulations depicted in Figure 9.…”

Section: Obtaining Arbitrary Slopes (mentioning)

confidence: 99%
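
A minimal numeric sketch of the multiplication described above, assuming the target activation behaves like tanh(slope · x) near the origin; the choice s = 0.66 for the theoretical AHTPC slope and the names used below are illustrative assumptions, and only the multiplier factors are taken from the excerpts.

```python
import numpy as np

# Sketch of the slope-composition step described in the excerpt:
# target slope = theoretical AHTPC slope s  *  HTSMC slope multiplier n*.
# s = 0.66 is an assumed value; the n* factors are quoted in the excerpts.
s = 0.66
n_star = {2: 2.2845, 3: 3.8711}

for stage, factor in n_star.items():
    print(f"HTSMC_{stage}: target slope = {s * factor:.4f}")

# Assumed functional form of the resulting activation near the origin.
x = np.linspace(-1.0, 1.0, 5)
print(np.tanh(s * n_star[2] * x))
```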
“…Many digitally or analogue implemented perceptron circuits have been proposed in the literature, showing good results in simulation [5][6][7][8]; however, these works lack silicon measurement results, which are of great importance for investigating the fundamental characteristics of a perceptron circuit. Moreover, the multi-layer perceptron (MLP) is built from perceptrons, the fundamental building block of the feedforward neural network (NN), and VLSI (very-large-scale integration) implementations incorporating various learning algorithms have made the MLP a common choice that has been researched continuously for many years [9][10][11][12][13][14]. Among state-of-the-art works, the authors in [15] implement a low-latency MLP processor for real-time cancer detection on field-programmable gate arrays (FPGAs); on mass-spectrometry benchmarks it outperforms both central processing unit (CPU) and graphics processing unit (GPU) implementations, with average speedups of 144× and 21×, respectively.…”

Section: Introduction (mentioning)

confidence: 99%
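
Because the excerpt treats the perceptron as the building block of the feedforward MLP, a minimal software sketch of that structure may help fix the terminology; the layer sizes, tanh nonlinearity, and random weights below are illustrative assumptions and do not model any of the cited circuits.

```python
import numpy as np

# Minimal feedforward MLP: each layer is a bank of perceptrons
# (weighted sum plus bias, followed by a nonlinearity).
rng = np.random.default_rng(0)

def perceptron_layer(x, w, b):
    """One layer of perceptrons: tanh(W @ x + b)."""
    return np.tanh(w @ x + b)

# 4 inputs -> 3 hidden perceptrons -> 1 output perceptron (sizes assumed).
w1, b1 = rng.standard_normal((3, 4)), np.zeros(3)
w2, b2 = rng.standard_normal((1, 3)), np.zeros(1)

x = rng.standard_normal(4)
hidden = perceptron_layer(x, w1, b1)
output = perceptron_layer(hidden, w2, b2)
print(output)
```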