We report an analog neuromorphic module composed of p-type carbon nanotube (CNT) synapses and an integrate-and-fire (I&F) circuit. The CNT synapse has a field-effect transistor structure with a random CNT network as its channel and an aluminum oxide dielectric layer implanted with indium ions as its gate. A positive voltage pulse (spike) applied to the gate attracts electrons into the defect sites of the gate dielectric layer, and the trapped electrons are gradually released after the pulse is removed. The trapped electrons modify the hole concentration in the CNT channel and thereby induce a dynamic postsynaptic current. Multiple input spikes induce excitatory or inhibitory postsynaptic currents via excitatory or inhibitory CNT synapses, which flow into an I&F circuit to trigger output spikes. The dynamic transfer function between the input and output spikes of the neuromorphic module is analyzed. The module could potentially be scaled up to emulate biological neural networks and their functions.
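The mechanism above can be sketched numerically: each input spike adds trapped charge that is released exponentially, producing a postsynaptic current (PSC) that an I&F stage integrates until a threshold fires an output spike. This is a minimal illustrative model, not the authors' circuit; the time constant, PSC amplitude, capacitance, and threshold below are assumed values chosen only to make the dynamics visible.

```python
import math

# Illustrative model (assumed parameters, not from the paper):
TAU = 50e-3    # assumed trap-release time constant (s)
W = 1e-9       # assumed PSC increment per spike (A); negative would model inhibition
C = 1e-12      # assumed capacitance of the integrating node (F)
V_TH = 0.5     # assumed firing threshold (V)

def simulate(spike_times, t_end=0.5, dt=1e-4):
    """Return output spike times of the I&F stage driven by one CNT-like synapse."""
    psc, v, out = 0.0, 0.0, []
    spikes = sorted(spike_times)
    t, k = 0.0, 0
    while t < t_end:
        while k < len(spikes) and spikes[k] <= t:
            psc += W                  # input spike traps charge -> PSC jumps
            k += 1
        psc *= math.exp(-dt / TAU)    # trapped charge released gradually
        v += (psc / C) * dt           # I&F stage integrates the PSC
        if v >= V_TH:
            out.append(t)             # threshold crossed -> output spike
            v = 0.0                   # reset after firing
        t += dt
    return out

# A burst of input spikes drives the neuron to emit output spikes:
print(len(simulate([0.01 * i for i in range(10)])))
```

With no input spikes the PSC stays at zero and the neuron never fires; denser input bursts raise the sustained PSC and hence the output spike rate, which is the qualitative transfer behavior the paper analyzes.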
Transistor-based circuits with parallel computing architectures and distributed memories, such as graphics processing units (GPUs) from Nvidia,[9] tensor processing units (TPUs) from Google,[3,10] field-programmable gate arrays (FPGAs) from Intel,[11] and the TrueNorth neuromorphic circuit from IBM,[12] have been developed to improve their energy efficiencies (Figure 1a) to the range of 10¹⁰–10¹¹ FLOPS W⁻¹ (floating-point operations per second per watt) by increasing parallelism and reducing global data transmission. However, their energy efficiencies are fundamentally limited by the energy consumed by memory access (≈10⁻¹⁵ J bit⁻¹) and signal transmission (≈10⁻¹¹ J bit⁻¹) in digital computing circuits.[5,6] As transistors approach their minimal sizes near the end of Moore's law, the energy efficiencies of transistor-based computing circuits asymptotically saturate[4-6,13,14] (Figure 1a). Meanwhile, the information industry generates "big data" of exponentially increasing volume, leading to exponentially increasing power requirements for computation.[4,6,14] This trajectory is unsustainable, as it would exceed the entire global power production within one or two decades[15] (Figure 1a). It is therefore imperative to develop a new platform for inference and learning from "big data" in emerging intelligent systems with significantly higher energy efficiency than the transistor-based Turing computing platform. The human brain performs inference and learning from "big data" at an estimated speed (≈10¹⁶ FLOPS)[16] comparable to that (≈10¹⁷ FLOPS) of the fastest supercomputer, Summit,[17] but consumes far less power (≈20 W) than the supercomputer (≈10⁷ W), making it far more energy-efficient (≈10¹⁵ FLOPS W⁻¹) than the supercomputer (≈10¹⁰ FLOPS W⁻¹, Figure 1a).
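The efficiency gap quoted above follows directly from the cited order-of-magnitude figures; a quick arithmetic check, using only the numbers stated in the text:

```python
# Back-of-envelope check of the efficiency figures quoted in the text
# (all inputs are the order-of-magnitude estimates cited there).

brain_flops, brain_watts = 1e16, 20        # brain speed [16] at ~20 W
summit_flops, summit_watts = 1e17, 1e7     # Summit supercomputer [17]

brain_eff = brain_flops / brain_watts      # FLOPS per watt
summit_eff = summit_flops / summit_watts   # FLOPS per watt

print(f"brain:  {brain_eff:.0e} FLOPS/W")   # ~5e14, i.e. order 10^15
print(f"summit: {summit_eff:.0e} FLOPS/W")  # 1e10
print(f"ratio:  {brain_eff / summit_eff:.0e}")
```

The ratio works out to roughly five orders of magnitude, matching the ≈10¹⁵ versus ≈10¹⁰ FLOPS W⁻¹ comparison in the text.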
By contrast, the human brain concurrently performs spatiotemporal inference and learning in an analog, parallel mode[16,18,19] (Figure 1c) via a network of neurons connected by ≈10¹⁴ synapses (Figure 1d). For inference, a wave of voltage pulses, V_i^m(t), …