2020
DOI: 10.1109/tcsi.2019.2960383
In-Hardware Training Chip Based on CMOS Invertible Logic for Machine Learning

Cited by 22 publications (23 citation statements)
References 31 publications
“…One such promising approach to on-chip learning is using Boltzmann machines to implement invertible logic [90]. Invertible logic can perform operations in both the forward and reverse directions using the same hardware circuit.…”
Section: B. Low-Complexity Training (mentioning)
confidence: 99%
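To make the forward/reverse behavior described above concrete, the sketch below models a single invertible AND gate as a small Boltzmann-machine energy function and recovers both modes by exhaustive ground-state search. The bias and coupling values are an assumed illustrative parameterization, not taken from the cited chip, and brute-force enumeration stands in for the hardware's stochastic relaxation.

```python
import itertools

# Illustrative Boltzmann-machine energy for an invertible AND gate
# (spins in {-1, +1}; node order: A, B, C with C = A AND B).
# The bias/coupling values are an assumed parameterization for this sketch.
h = {"A": 1, "B": 1, "C": -2}
J = {("A", "B"): -1, ("A", "C"): 2, ("B", "C"): 2}

def energy(s):
    """E(s) = -(sum_ij J_ij s_i s_j + sum_i h_i s_i)."""
    e = sum(J[i, j] * s[i] * s[j] for (i, j) in J)
    e += sum(h[i] * s[i] for i in h)
    return -e

def ground_states(clamped):
    """Minimize the energy over every spin not fixed in `clamped`."""
    free = [n for n in ("A", "B", "C") if n not in clamped]
    best, best_e = [], float("inf")
    for values in itertools.product((-1, +1), repeat=len(free)):
        s = dict(clamped, **dict(zip(free, values)))
        e = energy(s)
        if e < best_e:
            best, best_e = [s], e
        elif e == best_e:
            best.append(s)
    return best

# Forward mode: clamp the inputs A, B -> the minimum-energy state gives C.
print(ground_states({"A": +1, "B": +1}))   # C = +1 (1 AND 1 = 1)

# Backward mode: clamp the output C -> the minimum-energy states enumerate
# every input pair consistent with it (the "inverse" of the AND function).
print(ground_states({"C": -1}))            # (A, B) in {(0,0), (0,1), (1,0)}
```

Clamping the inputs reproduces ordinary forward logic, while clamping only the output yields all consistent input patterns, which is the reverse-direction operation the statement refers to.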
“…For example, an invertible multiplier (forward) and factorizer (backward) were implemented with a Boltzmann machine in a standard CMOS process [92]. Recent results have demonstrated a CMOS chip that implements neural inference (forward) and training (backward) in the same hardware with invertible logic [90]. In this case, it is possible to directly obtain the values of the weights with low-precision computations, enabling low-complexity on-chip learning.…”
Section: B. Low-Complexity Training (mentioning)
confidence: 99%
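The training-as-backward-inference idea in the statement above can be sketched with a toy example: clamp a set of input/output pairs and search a low-precision weight space for a zero-energy (zero-error) configuration. The binary neuron, the small dataset, and the ternary weight grid below are assumptions made for illustration; the cited chip realizes this search with stochastic invertible logic rather than enumeration.

```python
import itertools

def forward(w, x):
    """Binary neuron: sign of the weighted sum (forward/inference mode)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# Toy training set in {-1, +1} encoding (2 inputs plus a constant bias input).
data = [((-1, -1, 1), -1), ((-1, 1, 1), -1), ((1, -1, 1), -1), ((1, 1, 1), 1)]

def energy(w):
    """Energy = number of clamped samples this weight vector gets wrong."""
    return sum(forward(w, x) != y for x, y in data)

# "Backward/training mode": with the input/output pairs clamped, search the
# low-precision (ternary) weight space for zero-energy configurations.
grid = (-1, 0, 1)
solutions = [w for w in itertools.product(grid, repeat=3) if energy(w) == 0]
print(solutions)  # every ternary weight vector that realizes the mapping
```

Because the weights are restricted to a few discrete levels, the consistent weight values can be obtained directly, which is the low-precision, low-complexity on-chip learning the quote describes.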
“…The bidirectional computing capability is realized by reducing the network energy to its global minimum with noise induced by random signals (e.g., a multiplier can be used as a factorizer in the backward mode). Due to this unique feature, several challenging problems can be quickly solved, such as integer factorization (e.g., cryptography problems [1]) and machine learning (e.g., training neural networks [3], [4]).…”
Section: Introduction (mentioning)
confidence: 99%
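A minimal sketch of the noise-driven energy minimization described in this statement: a Metropolis-style bit-flip search with a temperature (the injected randomness) drives a toy multiplier energy to its global minimum, at which the free factor bits become consistent with the clamped product. The quadratic energy, the bit widths, and the annealing/reheating schedule are illustrative assumptions, not the hardware's actual design.

```python
import math, random

N = 35                        # clamped product (the multiplier's "output")
BITS = 4                      # each free factor is a 4-bit integer

def to_int(bits):
    return sum(b << i for i, b in enumerate(bits))

def energy(state):
    p, q = to_int(state[:BITS]), to_int(state[BITS:])
    return (p * q - N) ** 2   # zero exactly when p * q = N

random.seed(1)
state = [random.randint(0, 1) for _ in range(2 * BITS)]
T = 50.0
for step in range(200_000):
    if energy(state) == 0:
        break                               # global minimum reached
    i = random.randrange(len(state))        # propose a single bit flip
    cand = state[:]
    cand[i] ^= 1
    dE = energy(cand) - energy(state)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        state = cand                        # accept downhill, or uphill via noise
    T = 50.0 if step % 5_000 == 4_999 else max(1.0, T * 0.999)  # anneal, reheat periodically

p, q = to_int(state[:BITS]), to_int(state[BITS:])
print(p, "x", q, "=", p * q)                # typically prints 5 x 7 = 35 (or 7 x 5)
```

The same energy function evaluated with the inputs clamped instead of the output would act as an ordinary multiplier, which is the bidirectional operation the quote highlights.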
“…Recently, researchers have been engaged in the demanding work of building brainware computing systems [7]-[10]. An in-hardware training chip has been fabricated and demonstrated for data classification, exhibiting a noticeable reduction in power dissipation and latency [11]. Pei et al. have proposed the hybrid Tianjic chip architecture, which consists of multiple cores as reconfigurable building blocks to achieve precise control and real-time object detection for an unmanned bicycle [12].…”
Section: Introduction (mentioning)
confidence: 99%