2019
DOI: 10.1145/3304110
A Mixed Signal Architecture for Convolutional Neural Networks

Abstract: Deep neural network (DNN) accelerators with improved energy and delay are desirable for meeting the requirements of hardware targeted for IoT and edge computing systems. Convolutional neural networks (CoNNs) are one of the most popular types of DNN architectures. This paper presents the design and evaluation of an accelerator for CoNNs. The system-level architecture is based on mixed-signal, cellular neural networks (CeNNs). Specifically, we present (i) the implementation of different layers, including convolu…

Cited by 18 publications (28 citation statements) · References 61 publications
“…We follow the treatment of cellular neural networks in [22]. Application of CeNN to CoNN was considered in [42]. Due to both feedback and feedforward connections in a CeNN and due to more connections than just nearest neighbors, the number of synapses is doubled.…”
Section: Cennmentioning
confidence: 99%
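The statement above refers to the standard (Chua-Yang) CeNN cell dynamics, with a feedback template A applied to cell outputs and a feedforward template B applied to inputs. As a minimal sketch (function name, Euler step size, and zero boundary condition are our own assumptions, not from the paper):

```python
import numpy as np

def cenn_step(x, u, A, B, z, dt=0.05):
    """One forward-Euler step of the Chua-Yang CeNN cell dynamics
    dx/dt = -x + (A * y) + (B * u) + z, where * denotes a 3x3
    neighborhood correlation and y = f(x) = 0.5*(|x+1| - |x-1|)
    is the piecewise-linear cell output."""
    y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

    def neigh(img, T):
        # accumulate the weighted 3x3 neighborhood (zero boundary)
        p = np.pad(img, 1)
        h, w = img.shape
        out = np.zeros_like(img)
        for di in range(3):
            for dj in range(3):
                out += T[di, dj] * p[di:di + h, dj:dj + w]
        return out

    # both a feedback (A) and a feedforward (B) synapse per neighbor,
    # which is why the quoted text says the synapse count is doubled
    return x + dt * (-x + neigh(y, A) + neigh(u, B) + z)
```

With A set to zero the state settles to x* = B*u + z, i.e. a plain convolution of the input with template B — the mapping that CeNN-based CoNN accelerators exploit for convolution layers.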
“…We show in this section that the core CoNN computation can be readily mapped to CeNN hardware. In previous work, an analog implementation of a CeNN-based CoNN was demonstrated using CMOS devices to perform energy-efficient non-Boolean computation [30], with 4-bit synapse weights. In this section, we describe a few major changes to the activation and pooling layers and apply them to both charge-based analog and spintronic implementations.…”
Section: B Layer Implementations In Cenn-based Connmentioning
confidence: 99%
“…CeNNs are attractive because: 1) each cell is connected only to its neighbors, keeping the interconnections between cells local, and 2) the cells and their synaptic interconnections are usually space-invariant, which makes CeNNs very suitable for CMOS very-large-scale integration (VLSI) implementation [26]-[29]. CeNNs have shown great potential for convolutional neural network (CoNN) computations [30], with 8.7× and 4.3× energy-delay product (EDP) improvements over a state-of-the-art deep neural network (DNN) acceleration system on the MNIST and CIFAR-10 datasets, respectively. The CeNN architecture is well suited for the convolution operation, which is the most expensive operation in a typical CoNN.…”
Section: Introductionmentioning
confidence: 99%
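The claim that convolution dominates CoNN cost can be made concrete with a multiply-accumulate (MAC) count. As a back-of-envelope illustration (the layer shape below is illustrative, not taken from the paper):

```python
def conv_macs(height, width, c_in, c_out, k):
    """Multiply-accumulate count for one stride-1, 'same'-padded
    k x k convolution layer: one k*k*c_in dot product per output
    pixel, for each of the c_out output channels."""
    return height * width * c_in * c_out * k * k

# e.g. a 3x3 convolution with 64 filters on a 32x32 RGB input
print(conv_macs(32, 32, 3, 64, 3))  # -> 1769472
```

Even this small layer needs roughly 1.8 million MACs, which is why offloading convolution to an efficient analog substrate yields the EDP gains quoted above.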
“…The dot product has been successfully implemented in hardware in various ways [1]-[4]. Using these dot-product circuits, a cellular neural network (CeNN) accelerator for convolutional neural networks (CoNNs) was designed in [5]. There have been several proposals in recent years for spintronic implementations of CeNNs or CeNN-like structures in hardware, which could be tapped to produce an efficient spintronic CoNN accelerator.…”
Section: Introductionmentioning
confidence: 99%
“…In this article, we propose to utilize this platform as a hybrid memory/CeNN cell with a high energy efficiency that can be used as analog memory with a built-in activation function. The performance of these cells is simulated in a CeNN-accelerated CoNN performing image classification based on [5]. The spintronic cells significantly reduce the energy and time consumption relative to their charge-based counterparts, needing only ≈ 100 pJ and ≈ 42 ns to compute all but the final fully connected CoNN layer while maintaining high accuracy.…”
Section: Introductionmentioning
confidence: 99%