2020
DOI: 10.1038/s41598-020-66413-y

Abstract: Artificial Intelligence (AI) at the edge has become a hot topic in recent technology publications. The challenges related to IoT nodes have given rise to research on efficient hardware-based accelerators. In this context, analog memristor devices are crucial elements for efficiently performing the multiply-and-add (MAD) operations found in many AI algorithms, owing to the ability of memristor devices to perform in-memory computing (IMC) in a way that mimics the synapses in the human brain. Here, we presen…

Cited by 37 publications (39 citation statements)
References 44 publications

“…The CNN is trained using the Caffe deep learning framework, and MATLAB is then used to run the inference stage. Subsequently, these weights are mapped into RRAM conductance values that fall within a specific conductance range [Roff, Ron], following the method in [10], [33]. After that, a certain number of physically available quantization levels (QL) are assumed for the conductance range.…”
Section: LeNet Network Is Structured As Follows (mentioning)
confidence: 99%
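
To make the mapping step concrete, here is a minimal Python sketch, not the cited papers' exact procedure: it linearly rescales a trained weight matrix onto an assumed conductance window and snaps each value to a fixed number of evenly spaced quantization levels. The g_off/g_on values, the linear mapping, and the function name are illustrative assumptions.

```python
import numpy as np

def weights_to_conductance(weights, g_off=1e-6, g_on=1e-4, n_levels=16):
    """Map trained weights to quantized RRAM conductances (sketch).

    Linearly rescales the weight range onto the conductance window
    [g_off, g_on] (in siemens), then snaps each value to the nearest
    of n_levels evenly spaced quantization levels (QL).
    """
    w_min, w_max = weights.min(), weights.max()
    # Normalize weights to [0, 1], then scale into the conductance window.
    g = g_off + (weights - w_min) / (w_max - w_min) * (g_on - g_off)
    # Snap each conductance to the nearest available quantization level.
    levels = np.linspace(g_off, g_on, n_levels)
    idx = np.abs(g[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# Example: quantize a small trained weight matrix to 8 levels.
w = np.random.randn(4, 4)
g_quantized = weights_to_conductance(w, n_levels=8)
```
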
“…Exploring new architectures based on dataflow [9]. Exploring new architectures based on novel bio-inspired computing paradigms such as in-memory computing (IMC) [10]. Investigating new memory and computation devices such as resistive random access memory (RRAM) technology, where high parallelism and low power can be achieved at the expense of more design complexity and some loss of accuracy (Fig.…”
Section: Introduction (mentioning)
confidence: 99%
“…In this manner, the MCA is able to perform the matrix multiplication operation in just one step. Resistive memory in its memristive crossbar array (MCA) configuration can significantly accelerate the matrix multiply operations commonly found in neural network, graph, and image processing workloads [19, 82–89]. Figure 5a shows a mathematical abstraction of one such operation for a single-layer perceptron implementing binary classification, where x are the inputs to the perceptron, w are the weights of the perceptron, sgn is the signum function, and y is the output of the perceptron [90].…”
Section: Memory Types (mentioning)
confidence: 99%
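
For intuition, the one-step multiply works because applying input voltages to the crossbar rows makes each column current the dot product of the voltage vector with that column's conductances. The sketch below shows the perceptron example y = sgn(w · x) in this form; the signed-conductance simplification is an assumption, since physical arrays typically encode signed weights with differential device pairs.

```python
import numpy as np

def crossbar_mvm(voltages, conductances):
    """Idealized one-step analog matrix-vector multiply.

    With input voltages V applied to the rows, each column current is
    I_j = sum_i V_i * G_ij (Ohm's law, summed per Kirchhoff's current
    law), i.e. one dot product per column, all computed in parallel.
    """
    return voltages @ conductances

# Single-layer perceptron y = sgn(w . x). Weights are stored here as
# signed "conductances" for simplicity; physical arrays typically use
# a differential pair of devices per weight to represent signs.
x = np.array([0.3, -0.7, 1.0])        # inputs applied as row voltages
w = np.array([[0.5], [-0.2], [0.8]])  # one column of weights
y = np.sign(crossbar_mvm(x, w))[0]    # binary classification output
```
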
“…In contrast, DRAM requires multiple load/store and computation steps to perform the same operation. This advantage is exploited by management policies that map matrix-vector multiplication onto ReRAM crossbar arrays [19, 82–89].…”
Section: Energy Efficiency (mentioning)
confidence: 99%
“…The main contribution of this work is to provide a methodology that incorporates the device's intrinsic variation, dynamic range, and available conductance states into the DNN model, in order to study the impact of device characteristics and post-training quantization on the classification accuracy of analog accelerators. The work is inspired by [13], so in our case we envision a lookup-table approach with resistance levels based on the measurement data of the aforementioned device. To the best of our knowledge, no work emphasizing device-independent post-training NN quantization has been reported before.…”
Section: Introduction (mentioning)
confidence: 99%
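
A minimal sketch of what such a lookup-table approach could look like, assuming a hypothetical table of measured conductance levels and a 5% Gaussian variation per level (both invented for illustration, not the authors' measured data):

```python
import numpy as np

# Hypothetical measured lookup table: mean conductance (S) of each
# programmable level and an assumed 5% intrinsic device variation.
LUT_MEAN = np.array([2e-6, 8e-6, 2e-5, 5e-5, 1e-4])
LUT_STD = 0.05 * LUT_MEAN

def quantize_with_variation(g_target, rng=None):
    """Post-training quantization through a measured lookup table.

    Snaps each target conductance to the nearest measured level and
    perturbs it with that level's variation, so inference runs with
    device-realistic weights.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Nearest-level index for each target conductance.
    idx = np.abs(np.asarray(g_target)[..., None] - LUT_MEAN).argmin(axis=-1)
    # Sample the programmed conductance from that level's distribution.
    return rng.normal(LUT_MEAN[idx], LUT_STD[idx])

g_ideal = np.array([[3e-6, 6e-5], [9e-6, 1.1e-4]])
g_device = quantize_with_variation(g_ideal)
```
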