2020
DOI: 10.3390/s20154222

Implementation of Analog Perceptron as an Essential Element of Configurable Neural Networks

Abstract: The perceptron is an essential element in neural network (NN)-based machine learning; however, the effectiveness of its various circuit implementations is rarely demonstrated through chip testing. This paper presents measured silicon results for analog perceptron circuits fabricated in a 0.6 μm/±2.5 V complementary metal oxide semiconductor (CMOS) process, which are comprised of digital-to-analog converter (DAC)-based multipliers and phase shifters. The measurement results convince us that our imple…
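As a rough software analogue of the analog perceptron described above (a sketch, not the paper's circuit), the example below shows a single perceptron with DAC-like quantized weights and a hard-threshold activation; the function names and the quantization step are illustrative assumptions.

```python
import numpy as np

def quantize(w, step=0.1):
    # Hypothetical DAC-like weights: restrict values to a discrete grid,
    # loosely mimicking a digital-to-analog converter setting each weight.
    return np.round(np.asarray(w, dtype=float) / step) * step

def perceptron(x, w, b=0.0, step=0.1):
    # Weighted sum with quantized weights, followed by a hard threshold.
    s = float(np.dot(quantize(w, step), x)) + b
    return 1.0 if s >= 0.0 else 0.0

# Example: three-input majority vote.
print(perceptron([1, 0, 1], [0.33, 0.33, 0.33], b=-0.5))  # 1.0
```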

Cited by 4 publications (3 citation statements)
References: 47 publications (60 reference statements)
“…Self-associative neural networks (SANNs), usually in the form of linear networks or MLPs (multilayer perceptrons), have a specific topology. These networks are characterized by an identical number of neurons in the input and output layers [22, 23]. They are therefore intended to reproduce at their outputs the values presented at the input.…”
Section: Methods
confidence: 99%
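As a minimal illustration of the topology this statement describes (a sketch with assumed layer sizes, not code from the cited works), a self-associative MLP has equal input and output widths and is trained so that its output reproduces its input:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in = n_out = 4      # SANN: identical number of input and output neurons
n_hidden = 2          # assumed hidden width, for illustration only

W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))

def forward(x):
    # One hidden layer with tanh, linear output layer.
    return W2 @ np.tanh(W1 @ x)

x = rng.normal(size=n_in)
y = forward(x)
# Training (not shown) would minimize this reconstruction error so that y ≈ x.
print(np.mean((y - x) ** 2))
```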
“…Another interesting feature of source followers is that their characteristics resemble those of the rectified linear unit (ReLU) used as an activation function in neural networks [54, 55]. In the conventional ReLU, the output corresponding to the input x is 0 for negative x and increases with a slope of unity for positive x, which is similar to the source-follower Vs-Vg characteristic with Vth = 0 and a large Vd.…”
Section: Activation Function in Neural Network
confidence: 99%
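To make the comparison concrete, here is a short sketch (an idealization, not the measured characteristic from the paper) of the ReLU described above next to an ideal source-follower transfer curve with Vth = 0 and a large drain voltage:

```python
import numpy as np

def relu(x):
    # ReLU: 0 for negative x, slope of unity for positive x.
    return np.maximum(0.0, x)

def source_follower_ideal(vg, vth=0.0):
    # Idealized source follower: Vs follows Vg - Vth once the transistor
    # conducts, clamped at 0 below threshold (assumes a large Vd).
    return np.maximum(0.0, vg - vth)

vg = np.linspace(-1.0, 1.0, 5)
print(relu(vg))                   # [0.  0.  0.  0.5 1. ]
print(source_follower_ideal(vg))  # identical when vth = 0
```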
“…Given the above limitations, the focus was on networks with a single hidden layer. There are many examples in the literature of implementing perceptrons, such as CMOS circuits [10, 43–46].…”
Section: Neural Network
confidence: 99%