Modern computers are still largely based on the von Neumann architecture and designed as general-purpose machines, which makes them useful and convenient but highly inefficient for data-intensive tasks. The bus connecting memory and processor becomes a bottleneck for data transfer, commonly referred to as the von Neumann bottleneck [3]. To improve the performance of computing systems in the so-called "big data" era, we must fundamentally change the way we compute today: instead of being compute-centric, we should transition to a data-centric paradigm.

Neuroscientists and psychologists around the world have studied the functional architecture of the human brain for centuries, inspiring data-centric computing methods such as artificial neural networks (ANNs) and machine learning (ML). The human brain can be characterized by its massively parallel, reconfigurable connections (synapses, or memory) linking billions of neurons (the main processing units) [4]. Synapses play a central role in the learning and adaptability of the human brain: the weight of a synapse represents the strength of the connection between the two neurons it links, and during the learning phase the synaptic weight changes in an analog fashion according to the learning rules [5]. ML and ANNs use a high-level abstraction of human cognition and are therefore referred to as brain-inspired computing. To further exploit the potential advantages and capabilities of the human brain, we may need to mimic its functionality more faithfully in hardware. Emerging devices that can be used for such neuromorphic computing, with different levels of brain inspiration, are the topic of this paper.

In recent years, neuromorphic computing has emerged as a promising technology for the post-Moore's-law era. Neuromorphic computing systems are highly connected and parallel, consume relatively little power, and process in memory.
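The analog, activity-dependent weight change described above can be illustrated with a minimal Hebbian-style update, in which a synapse is strengthened when its pre- and post-synaptic neurons are active together. This is only a sketch; the learning rate, weight bounds, and neuron activities below are illustrative assumptions, not values from the literature.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """Analog weight change: strengthen each synapse in proportion to the
    joint activity of its pre- and post-synaptic neurons, bounded to [0, 1]."""
    return np.clip(w + lr * np.outer(post, pre), 0.0, 1.0)

# Two presynaptic and two postsynaptic neurons (illustrative values).
w = np.array([[0.5, 0.5],
              [0.5, 0.5]])
pre = np.array([1.0, 0.0])   # only the first input neuron fires
post = np.array([1.0, 1.0])  # both output neurons fire

w = hebbian_update(w, pre, post)
# Only the synapses driven by the active input neuron are strengthened.
```

In this sketch the weights connected to the silent input neuron remain unchanged, mirroring the idea that synaptic strength encodes correlated activity between the linked neurons.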
To implement a neuromorphic system in hardware, it is important to realize (1) artificial neurons that mimic biological neurons and (2) artificial synapses that emulate biological synapses, both of which must be power-efficient, scalable, and capable of implementing relevant learning rules to facilitate large-scale neuromorphic functions. To this end, numerous efforts have been made over the last few years to realize artificial synapses using post-CMOS devices, including resistive random-access memory (ReRAM) based on drift [6] and diffusive [7] memristors.

A neuromorphic computing system may be able to learn and perform a task on its own by interacting with its surroundings. Combining such a chip with complementary metal-oxide-semiconductor (CMOS)-based processors can potentially solve a variety of problems faced by today's artificial intelligence (AI) systems. Although various purely CMOS-based architectures are designed to maximize the computing efficiency of AI applications, the most fundamental operations, including matrix multiplication and convolution, heavily rely on the CMOS-based m...
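The matrix multiplication mentioned above is exactly the operation that a memristive crossbar performs in the analog domain: device conductances encode the weights, input voltages are applied on the rows, and Kirchhoff's current law sums the per-device currents along each column. The sketch below is a purely numerical illustration of this principle; the conductance and voltage values are assumptions, not measured device data.

```python
import numpy as np

# Conductances G (siemens) encode synaptic weights; each row is a word line,
# each column a bit line (illustrative values, not device data).
G = np.array([[1.0e-6, 2.0e-6],
              [3.0e-6, 4.0e-6]])

V = np.array([0.1, 0.2])  # input voltages applied to the rows (volts)

# Ohm's law gives the current through each device (I = G * V); Kirchhoff's
# current law sums currents along each column, so the column currents are
# the matrix-vector product G^T V, computed in a single analog step.
I = G.T @ V  # column (bit-line) currents, in amperes
```

The key point is that the multiply-accumulate happens in the memory array itself, which is why such crossbars are attractive for offloading the matrix-heavy workloads that otherwise dominate CMOS-based AI accelerators.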