Resistive switching memory (RRAM) is a promising technology for embedded memory and its application in computing. In particular, RRAM arrays can provide a convenient primitive for matrix-vector multiplication (MVM), with a strong impact on the acceleration of neural networks for artificial intelligence (AI). At the same time, RRAM is affected by intrinsic conductance variations, which might cause a degradation of accuracy in AI inference hardware. This work provides a detailed study of the multilevel-cell (MLC) programming of RRAM for neural network applications. We compare three MLC programming schemes and discuss their conductance variations in terms of the different slopes of their programming characteristics. We test the accuracy of a two-layer fully-connected neural network (FC-NN) as a function of the MLC scheme, the number of weight levels, and the weight mapping configuration. We find a trade-off among FC-NN accuracy, size, and current consumption. This work highlights the importance of a holistic approach to AI accelerators, encompassing the device properties, the overall circuit performance, and the AI application specifications.
Index Terms: Resistive switching memory (RRAM); multilevel-cell (MLC) operation; artificial neural network (ANN); in-memory computing (IMC).
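To make the weight-mapping idea concrete, the following minimal Python sketch (not taken from the paper; the level count, conductance window, and variation sigma are assumed values) maps FC-NN weights onto a small number of discrete conductance levels of a differential RRAM pair, emulates programming variation as multiplicative noise, and compares the resulting MVM against the ideal product.

# Illustrative sketch: quantize weights to N conductance levels, add variation,
# and perform the crosspoint matrix-vector multiplication. All constants are
# assumptions for illustration, not values reported in the paper.
import numpy as np

rng = np.random.default_rng(0)

G_MIN, G_MAX = 1e-6, 100e-6   # assumed conductance window (siemens)
N_LEVELS = 8                  # assumed number of MLC weight levels
SIGMA_REL = 0.05              # assumed relative programming variation

def quantize_weights(W, n_levels=N_LEVELS):
    # Map signed weights onto n_levels equally spaced conductance targets of a
    # differential pair (G_pos - G_neg encodes the signed weight).
    w_max = np.abs(W).max()
    levels = np.linspace(0.0, 1.0, n_levels)
    w_norm = np.abs(W) / w_max
    idx = np.argmin(np.abs(w_norm[..., None] - levels), axis=-1)
    g_target = G_MIN + levels[idx] * (G_MAX - G_MIN)
    g_pos = np.where(W >= 0, g_target, G_MIN)
    g_neg = np.where(W < 0, g_target, G_MIN)
    return g_pos, g_neg, w_max

def program_with_variation(g_target, sigma_rel=SIGMA_REL):
    # Emulate cell-to-cell programming variation as multiplicative Gaussian noise.
    return g_target * rng.normal(1.0, sigma_rel, size=g_target.shape)

def mvm(g_pos, g_neg, v_in, w_max):
    # Crosspoint MVM: column currents of the two arrays are subtracted and
    # rescaled back to the original weight range.
    i_out = v_in @ g_pos - v_in @ g_neg
    return i_out * w_max / (G_MAX - G_MIN)

# Toy usage: one layer of a hypothetical 16-input, 10-output network.
W = rng.normal(size=(16, 10))
g_pos, g_neg, w_max = quantize_weights(W)
g_pos_v, g_neg_v = program_with_variation(g_pos), program_with_variation(g_neg)
x = rng.normal(size=(1, 16))
print("ideal   :", (x @ W)[0, :3])
print("rram-like:", mvm(g_pos_v, g_neg_v, x, w_max)[0, :3])

Sweeping N_LEVELS and SIGMA_REL in such a sketch reproduces, in spirit, the accuracy-versus-level-count trade-off that the paper studies on real programming schemes.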
In-memory computing (IMC) has emerged as a new computing paradigm able to alleviate or eliminate the memory bottleneck, which is the major concern for energy efficiency and latency in modern digital computing. While the IMC concept is simple and promising, the details of its implementation cover a broad range of problems and solutions, including various memory technologies, circuit topologies, and programming/processing algorithms. This Perspective aims to provide an orientation map across the broad topic of IMC. First, the memory technologies will be presented, including both conventional complementary metal-oxide-semiconductor (CMOS)-based and emerging resistive/memristive devices. Then, circuit architectures will be considered, describing their aims and applications. Circuits include the popular crosspoint array as well as more advanced structures, such as closed-loop memory arrays and ternary content-addressable memory. The same circuit might serve completely different applications; e.g., a crosspoint array can be used to accelerate matrix-vector multiplication for forward propagation in a neural network and the outer-product update for backpropagation training. The different algorithms and memory properties that enable such diversification of circuit functions will be discussed. Finally, the main challenges and opportunities for IMC will be presented.
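The dual use of a crosspoint array mentioned above can be sketched in a few lines of Python (an assumption-laden illustration, not code from the Perspective): the same conductance matrix G serves both the forward MVM, where column currents sum G_ij * V_i by Ohm's and Kirchhoff's laws, and an outer-product conductance update for backpropagation-style training.

# Illustrative sketch: one crosspoint conductance matrix used for both
# forward-propagation MVM and an outer-product training update. The array
# size, conductance range, and learning rate are assumed values.
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 100e-6, size=(16, 10))   # assumed conductances (S)

def forward_mvm(G, v_in):
    # Rows driven with voltages v_in; each column collects the sum of G_ij * v_i.
    return v_in @ G

def outer_product_update(G, x, delta, eta=1e-7):
    # In-memory update: each cell changes by eta * x_i * delta_j, which can be
    # realized by overlapping row and column pulses in a crosspoint array.
    return np.clip(G + eta * np.outer(x, delta), 1e-6, 100e-6)

v = rng.normal(size=16)               # input voltages (one per row)
err = rng.normal(size=10)             # backpropagated error (one per column)
currents = forward_mvm(G, v)          # forward pass: column currents
G = outer_product_update(G, v, err)   # training step on the same array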