2016
DOI: 10.1109/led.2016.2573140

Demonstration of Convolution Kernel Operation on Resistive Cross-Point Array

Cited by 134 publications (74 citation statements)
References 17 publications

“…15. The edge extractions described in this step are also a frequent layer of convolutional neural networks (CNNs or ConvNets) 18,49,50 , which is the most computationally expensive step in the networks. Compared to previously reported convolutions operating with binary inputs, binary weights and series readout 18 , our image filtering procedure included both analogue convolution matrices and analogue inputs, as well as parallel readout of 10 feature maps.…”
Section: Nature Electronics
Mentioning confidence: 99%
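
The statement above contrasts earlier binary-input, binary-weight convolution with series readout against analogue kernels, analogue inputs, and parallel readout of 10 feature maps. A minimal NumPy sketch of that readout scheme, assuming ideal linear devices and ignoring programming noise and signed-weight encoding: each kernel is flattened into one column of a conductance matrix, an image patch is applied as row (word-line) voltages, and all 10 bit-line currents are read in a single step.

```python
# Minimal sketch (assumed ideal devices): 10 kernels flattened into the columns
# of a conductance matrix G; a patch applied as row voltages yields all 10
# feature-map values in one parallel read via I = G^T v (Ohm's law + Kirchhoff).
import numpy as np

rng = np.random.default_rng(0)

patch_size = 3                                  # 3x3 analogue convolution kernels
n_kernels = 10                                  # 10 feature maps read out in parallel
kernels = rng.normal(size=(n_kernels, patch_size, patch_size))

# Map each kernel to one crossbar column (rows = unrolled patch pixels).
G = kernels.reshape(n_kernels, -1).T            # shape (9, 10)

image = rng.random((8, 8))                      # analogue input image

out_h = image.shape[0] - patch_size + 1
out = np.zeros((out_h, out_h, n_kernels))       # valid-convolution output
for r in range(out_h):
    for c in range(out_h):
        v = image[r:r+patch_size, c:c+patch_size].ravel()  # patch -> word-line voltages
        out[r, c, :] = G.T @ v                  # bit-line currents, all kernels at once
```
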
“…A common method for implementing the convolution layer of the CNN is extracting the feature maps into smaller sizes through CBA bit lines (BLs), whereas a portion of the input image is fed into the CBA through word lines (WLs), which is commonly regarded as applying a kernel (or filter). In the neuromorphic hardware using the S‐CBA, implementing such kernel functions can be readily understood from SI Figure S1 and S2, Supporting Information, of which details are explained in SI‐I (basic idea of applying the S‐CBA to a kernel using an inverse mapping method).…”
Section: Results
Mentioning confidence: 99%
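
As a rough illustration of the word-line/bit-line mapping quoted above (the cited paper's inverse mapping method is detailed in its SI and is not reproduced here), the sketch below assumes one kernel programmed as conductances read out on a single bit line; driving successive image windows on the word lines shrinks an H x W input into a smaller feature map, one output pixel per read.

```python
# Rough sketch of the WL/BL view (ideal linear devices assumed): one kernel's
# conductances are summed on a single bit line while a portion of the image is
# driven on the word lines; sliding the window yields a smaller feature map.
import numpy as np

def crossbar_conv2d(image, kernel_conductances):
    """Single-kernel convolution where each output pixel is one BL current read."""
    k = kernel_conductances.shape[0]
    H, W = image.shape
    fmap = np.empty((H - k + 1, W - k + 1))      # feature map is smaller than the input
    for r in range(fmap.shape[0]):
        for c in range(fmap.shape[1]):
            wl_voltages = image[r:r+k, c:c+k]                        # image portion on WLs
            fmap[r, c] = np.sum(wl_voltages * kernel_conductances)   # BL current
    return fmap

sobel_x = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
feature_map = crossbar_conv2d(np.random.default_rng(1).random((28, 28)), sobel_x)  # 26 x 26
```
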
“…[61,63,64] For example, a fully connected layer can be directly mapped on one RRAM array or partitioned and implemented onto a few smaller arrays. [63,65,66] For long short term memory (LSTM) networks, the synaptic weights in an LSTM layer toward input gate, output gate, and forget gate can be deployed on different RRAM arrays.…”
Section: RRAM Basics and RRAM Array for Inference
Mentioning confidence: 99%
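
The partitioning described above can be sketched in a few lines, assuming ideal 128 x 128 crossbars (the array size here is an assumption, not taken from the cited work): a fully connected weight matrix larger than one array is split into tiles, each tile performs a partial vector-matrix multiply, and the partial bit-line outputs are accumulated off-array.

```python
# Sketch of partitioning one FC layer across several small RRAM arrays
# (idealized; ARRAY_ROWS/ARRAY_COLS are assumed tile dimensions).
import numpy as np

ARRAY_ROWS, ARRAY_COLS = 128, 128     # assumed size of one RRAM crossbar

def tiled_fc(x, W):
    """Compute y = x @ W as if W were partitioned across ARRAY_ROWS x ARRAY_COLS tiles."""
    n_in, n_out = W.shape
    y = np.zeros(n_out)
    for r0 in range(0, n_in, ARRAY_ROWS):
        for c0 in range(0, n_out, ARRAY_COLS):
            tile = W[r0:r0+ARRAY_ROWS, c0:c0+ARRAY_COLS]         # one physical array
            y[c0:c0+ARRAY_COLS] += x[r0:r0+ARRAY_ROWS] @ tile    # partial currents summed
    return y

rng = np.random.default_rng(2)
x = rng.random(512)
W = rng.normal(size=(512, 300))        # 512 x 300 layer -> 4 x 3 = 12 tiles
assert np.allclose(tiled_fc(x, W), x @ W)
```
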