2021
DOI: 10.1109/jetcas.2021.3124940

A Single-MOSFET Analog High Resolution-Targeted (SMART) Multiplier for Machine Learning Classification

Abstract: Mixed-signal machine-learning classification has recently been demonstrated as an efficient alternative to classification with power-expensive digital circuits. In this paper, a single-MOSFET analog multiplier is proposed for classifying high-dimensional input data into a multi-class output space with less power and higher accuracy than state-of-the-art mixed-signal linear classifiers. A high-resolution (i.e., multi-bit) multiplication is facilitated within a single MOSFET by feeding the features and feature we…
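
The abstract is cut off above, so the exact multiplication mechanism is not stated here. Purely as a hedged illustration (a common principle for single-transistor analog multipliers, not necessarily the mechanism of this paper), a MOSFET biased in the triode region draws a drain current that approximates the product of its gate overdrive and its drain-source voltage:

\[
I_D \;\approx\; \mu_n C_{ox}\frac{W}{L}\Big[(V_{GS}-V_T)\,V_{DS}-\tfrac{1}{2}V_{DS}^{2}\Big]
\;\approx\; \mu_n C_{ox}\frac{W}{L}\,(V_{GS}-V_T)\,V_{DS}
\qquad \text{for small } V_{DS},
\]

so encoding one operand (e.g., a feature) in the gate overdrive and the other (e.g., a weight) in the drain-source voltage yields a current proportional to their product.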

Cited by 3 publications (4 citation statements) · References 31 publications

“…Here, the classifier has been trained using TensorFlow in Python and achieves an accuracy of 75% with 67.3 pJ of energy consumption per prediction [9]. The schematic of an integrated system consisting of vote extractors, a MAC (multiplication and accumulation) array, multiplexers, a resistive voltage divider, and memory is illustrated in Figure 3.…”
Section: Application of TensorFlow in Semiconductor Design
mentioning
confidence: 99%
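
The statement above notes only that the classifier was trained with TensorFlow in Python. Purely as a hypothetical sketch (the network size, optimizer, and training settings are assumptions, not the configuration reported in [9]), a small classifier over a 48-feature input could be trained along these lines:

import tensorflow as tf

def build_classifier(num_features=48, num_classes=10):
    # A single dense softmax layer: a linear classifier over the reduced feature vector.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_features,)),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# x_train: (N, 48) reduced-feature inputs; y_train: integer class labels.
# model.fit(x_train, y_train, epochs=10, batch_size=64)
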
“…Other works, such as [26], [27], solve the classification problem with an ensemble of binary classifiers and then benchmark the network on the MNIST database downscaled to 48 features, obtained as follows: the original images in the database are resized from 784 to 81 pixels, and Fisher's criterion is then applied to the 81-pixel images to reduce them further to 48 pixels. In the case of [26], the network is fully implemented with the exception of the sensing stage, whereas in [27] only the binary classifiers were implemented and the data-input and vote-extraction stages were performed off-chip. Again, it is important to stress that a vis-à-vis comparison with the works reported in [26], [27], based only on throughput and energy-consumption data, would be unfair, owing to intrinsic differences between the implemented ANN structures and to the fact that they do not explicitly consider the energy and timing overhead of the pixel-sensor matrix and the analog-to-digital conversion stage.…”
mentioning
confidence: 99%
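
The downscaling procedure quoted above (784 → 81 pixels by resizing, then 81 → 48 features by Fisher's criterion) can be sketched as follows; the resampling method, the exact form of the Fisher score, and the function names are assumptions for illustration, not the cited authors' code:

import numpy as np
from scipy.ndimage import zoom

def downscale_to_81(images_784):
    # Resize flat 784-pixel (28x28) MNIST images to 9x9 = 81 pixels.
    imgs = images_784.reshape(-1, 28, 28)
    small = np.stack([zoom(im, 9 / 28, order=1) for im in imgs])  # bilinear resampling (assumed)
    return small.reshape(-1, 81)

def fisher_scores(X, y):
    # Fisher criterion per feature: between-class scatter / within-class scatter.
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)  # epsilon guards against zero within-class variance

def select_top_features(X, y, k=48):
    # Keep the k pixels with the highest Fisher score.
    idx = np.argsort(fisher_scores(X, y))[::-1][:k]
    return X[:, idx], idx
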