The inevitable variability of electronic devices imposes strict constraints on the operation, reliability, and scalability of circuit designs. However, when a compromise must be struck among the performance metrics of area, time, and energy, variability can loosen these tight requirements and enable further savings within an alternative design scope. To that end, unconventional computing approaches are being revived in the form of approximate computing, which is particularly well suited to resource-constrained mobile computing. In this paper, a proof of concept of the approximate computing paradigm using memristors is demonstrated. Stochastic memristors serve as the main building block of probabilistic logic gates. As shown in this paper, the stochasticity of the memristors' switching characteristics is tightly bound to the supply voltage and hence to power consumption: scaling the supply voltage to appropriate levels increases the stochasticity. To guide the design of memristor-based approximate circuits, a realistic device model is elaborated with explicit emphasis on the probabilistic switching behavior. Theoretical formulation, probabilistic analysis, and simulation of the underlying logic circuits and operations are introduced. Moreover, the expected output behavior is verified against experimental measurements of valence change memory cells. It is thus shown how the output precision can be traded for attainable gains in the available design metrics. This approach represents the first proposition, together with physical verification and mapping to real devices, that incorporates stochastic memristors into unconventional computing approaches.
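The voltage–stochasticity trade-off described above can be illustrated with a minimal Monte Carlo sketch. It assumes a switching time that falls exponentially with the applied voltage, a common approximation for valence-change memory; the function names and all parameter values (`tau0`, `v0`, `t_pulse`) are illustrative assumptions, not fitted to the measured cells or to any particular device model.

```python
import math
import random

def switch_probability(v_supply, t_pulse=1e-6, tau0=1e-4, v0=0.2):
    """Probability that a cell switches within one programming pulse.

    Assumes an exponentially voltage-dependent switching time,
    tau = tau0 * exp(-v/v0); all parameters are illustrative.
    """
    tau = tau0 * math.exp(-v_supply / v0)
    return 1.0 - math.exp(-t_pulse / tau)

def stochastic_gate_error_rate(v_supply, trials=10_000, seed=42):
    """Fraction of trials in which the cell fails to switch (bit error)."""
    rng = random.Random(seed)
    p = switch_probability(v_supply)
    errors = sum(1 for _ in range(trials) if rng.random() >= p)
    return errors / trials

# Lowering the supply voltage makes switching increasingly probabilistic,
# trading output precision for reduced power.
for v in (1.2, 1.0, 0.8):
    print(f"V = {v:.1f} V: P(switch) = {switch_probability(v):.4f}, "
          f"error rate ~ {stochastic_gate_error_rate(v):.4f}")
```

Under these assumed parameters the cell switches almost deterministically at 1.2 V but behaves like a biased coin near 0.8 V, which is the knob an approximate-computing design would exploit.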
two significant directions. First, it results in serious heat issues, which may jeopardize the lifetime of circuits and create dangerous positive-feedback effects in the thermally activated physical mechanisms underlying the operating principles of certain devices. Concurrently, it leads to an inevitable upsurge in power consumption across a CMOS chip, which casts doubt on the reliability of Dennard's law [2] and prevents further increases in clock frequency. Taking also into account the extremely high costs associated with the production of cutting-edge sub-10 nm chips, semiconductor manufacturers are questioning whether maintaining the aggressive transistor downscaling rate dictated by Moore's law is still profitable. [3] An additional aspect limiting the maximum rate of information management is the classical von Neumann architecture of state-of-the-art computers, in which the physical separation between the central processing unit and the data storage system causes inevitable delays in the accomplishment of memory and computing tasks. Besides proposing clever ideas to resolve some of these open issues, for example exploiting the third, vertical dimension to increase the transistor count on the available chip area and developing multi-core computing machines with distributed memory to increase the data processing rate, device engineers are devoting considerable effort to the search for special materials that allow the development of …

The multidisciplinary field of memristors calls for theoretically inclined researchers and experimenters to join forces, merging complementary expertise and technical know-how, to develop and implement rigorous and systematic techniques for designing variability-aware memristor-based circuits and systems. The availability of a predictive physics-based model of a memristor is a necessary prerequisite for these investigations.
An interesting dynamic phenomenon, occurring ubiquitously in non-volatile memristors, is fading memory. The latter may be defined as the appearance of a unique steady-state behavior, irrespective of the choice of the initial condition from an admissible range of values, for each stimulus from a certain family, for example the DC or the purely AC periodic input class. This paper first provides experimental evidence for the emergence of fading memory effects in the response of a TaOx redox-based random access memory cell to inputs from both of these classes. Leveraging the predictive capability of a physics-based device model, called JART VCM v1, a thorough system-theoretic analysis, revolving around the Dynamic Route Map graphic tool, is presented. This analysis affords a better understanding of the mechanisms underlying the emergence of history-erase effects, and identifies the main factors that modulate this nonlinear phenomenon, toward future potential applications.
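The definition of fading memory above can be demonstrated numerically with a toy first-order state equation: under a fixed DC input, trajectories started from very different initial conditions collapse onto the same steady state. This is a minimal sketch with a generic voltage-activated growth term and a relaxation term; the model form and all parameters (`k`, `vt`, `tau`) are assumptions for illustration, not the JART VCM v1 equations.

```python
import math

def state_derivative(x, v, k=10.0, vt=0.3, tau=1.0):
    """Illustrative first-order state equation for a VCM-like cell.

    A sinh-type, voltage-activated growth term competes with a
    relaxation term; parameters are assumed, not fitted values.
    """
    return k * math.sinh(v / vt) * (1.0 - x) - x / tau

def simulate_dc(x0, v=0.4, dt=1e-5, steps=200_000):
    """Forward-Euler integration of the state under a DC input."""
    x = x0
    for _ in range(steps):
        x += dt * state_derivative(x, v)
        x = min(max(x, 0.0), 1.0)  # keep the state in its admissible range
    return x

# Two widely separated initial conditions converge to one steady state,
# i.e., the input history is erased (fading memory).
x_low, x_high = simulate_dc(0.05), simulate_dc(0.95)
print(f"x(0)=0.05 -> {x_low:.4f}, x(0)=0.95 -> {x_high:.4f}")
```

For this model the DC steady state can also be read off analytically by setting the derivative to zero, which is the kind of equilibrium analysis a Dynamic Route Map makes graphical.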
We present work toward a visible-wavelength tuneable external cavity laser (ECL) on a silicon nitride platform operating around 640 nm. A ring-resonator Vernier structure on the photonic integrated circuit (PIC) provides delayed feedback with spectral filtering and tuning. Gain is provided by a reflective semiconductor optical amplifier (SOA) grown on a GaAs substrate and integrated by pick-and-place flip-chip assembly. In a novel coupling scheme, a multi-mode edge coupler relaxes the 1-dB in-plane placement tolerance to ±2.6 µm in the direction parallel to the SOA edge and to displacements of up to 3.5 µm from the PIC interface along the SOA's optical axis. Pedestals defined in the PIC guarantee vertical alignment.