Software implementations of brain-inspired computing underlie many important computational tasks, from image processing to speech recognition, artificial intelligence and deep learning applications. Yet, unlike real neural tissue, traditional computing architectures physically separate the core computing functions of memory and processing, making fast, efficient and low-energy computing difficult to achieve. To overcome such limitations, an attractive alternative is to design hardware that mimics neurons and synapses which, when connected in networks or neuromorphic systems, process information in a way more analogous to brains. Here we present an all-optical version of such a neurosynaptic system capable of supervised and unsupervised learning. We exploit wavelength-division multiplexing techniques to implement a scalable circuit architecture for photonic neural networks, successfully demonstrating pattern recognition directly in the optical domain. Such photonic neurosynaptic networks promise access to the high speed and bandwidth inherent to optical systems, attractive for the direct processing of optical telecommunication and visual data.
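The wavelength-multiplexed weighted sum at the heart of such a photonic neuron can be sketched in a few lines. This is a toy numerical model under stated assumptions, not the authors' implementation: each input is assumed to ride its own wavelength, a phase-change cell attenuates it (acting as the synaptic weight), and a photodetector sums the optical power across all wavelengths before a threshold decides whether the neuron "spikes". All numbers are illustrative.

```python
# Toy model of a WDM photonic neuron (illustrative assumptions only):
# each input occupies a distinct wavelength, a phase-change cell
# scales its power (the synaptic weight), and a photodetector sums
# the power over all wavelengths, yielding a weighted sum followed
# by a simple threshold nonlinearity.

def wdm_neuron(inputs, weights, threshold=0.5):
    """Weighted sum of per-wavelength powers, then a spike decision."""
    assert len(inputs) == len(weights)
    total_power = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total_power >= threshold else 0

# Two input patterns on three wavelengths:
print(wdm_neuron([1.0, 0.0, 1.0], [0.4, 0.9, 0.3]))  # fires (0.7 >= 0.5)
print(wdm_neuron([0.0, 1.0, 0.0], [0.4, 0.2, 0.3]))  # silent (0.2 < 0.5)
```

Because the summation happens in the optical power domain, the weighted sum costs no extra time as more wavelengths (inputs) are added, which is the scalability argument behind the WDM scheme.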
Ríos, Stegmaier et al., "Integratable all-photonic nonvolatile multi-level memory" (these authors contributed equally). We show that individual memory elements can be addressed using a wavelength-multiplexing scheme. Our multi-level, multi-bit devices provide a pathway towards eliminating the von Neumann bottleneck and portend a new paradigm in all-photonic memory and non-conventional computing.

The advent of photonic technologies, in particular in the area of optical signaling, coupled with advances made in nanofabrication capabilities, has created a growing need for practical all-photonic memories [3,7-10]. Such memories are essential to boosting computational performance in serial computers by relieving the von Neumann bottleneck, i.e. the information traffic jam between the processor and the memory. This bottleneck limits the speed of almost all processors today; it has already led to the introduction of multicore processor architectures and drives the search for viable on-chip optical interconnects. However, shuttling information optically from the processor to electronic memories is presently not efficient, because electrical signals have to be converted to optical ones and vice versa. Instead, information transfer and storage exclusively by optical means is highly desirable because of the inherently large bandwidth [1,3], low residual cross-talk and high speed of optical information transfer. On a chip this has been challenging to achieve, because practical photonic memories would need to retain information for long periods of time and be fully integrated with the ancillary electronic circuitry, thus requiring compatibility with semiconductor processing [11].

Ideal candidates for all-optical memories are phase-change materials (PCMs), already the subject of intense research and development over the last decade, but in the context of electronic memories [12-14].
A striking and functional feature of these materials is the high contrast between their crystalline and amorphous phases in both their electrical and optical properties [15,16]. In particular, chalcogenide-based PCMs can switch between these two states in response to appropriate heat stimuli (crystallization) or melt-quenching processes down to nanoscale cell sizes, which enables dense packing and low-power memory switching. In our devices, data are stored in a nanoscale GST (Ge2Sb2Te5) cell placed directly on top of a nanophotonic waveguide. Both writing into the memory cell and read-out of the stored information are carried out via evanescent coupling to the phase-change material and are thus not subject to the diffraction limit; because this is done directly within the waveguide using nanosecond optical pulses, our approach provides a promising route towards fast all-optical data storage in photonic circuits.

The geometry of our memory cell and its operating principle are shown schematically in Fig. 1a. We store information in the GST (yellow region) by employing evanescent coupling…
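The "multi-level, multi-bit" claim can be made concrete with a small sketch. This is a hypothetical encoding model, not the authors' calibration: we assume the cell can be programmed to a set of evenly spaced, distinguishable transmission levels between an assumed minimum and maximum, and that a stored value is recovered by rounding the measured transmission to the nearest level. The level count, transmission range, and function names are all illustrative.

```python
import math

def bits_per_cell(num_levels):
    """Bits storable in one cell with `num_levels` distinguishable
    transmission levels (e.g. 8 levels -> 3 bits)."""
    return int(math.log2(num_levels))

def encode(value, num_levels, t_min=0.2, t_max=1.0):
    """Map an integer value to a target optical transmission,
    spacing the levels evenly between t_min and t_max
    (an illustrative assumption, not a measured device curve)."""
    assert 0 <= value < num_levels
    step = (t_max - t_min) / (num_levels - 1)
    return t_min + value * step

def decode(transmission, num_levels, t_min=0.2, t_max=1.0):
    """Recover the stored value by rounding the measured
    transmission to the nearest programmed level."""
    step = (t_max - t_min) / (num_levels - 1)
    return round((transmission - t_min) / step)

# An 8-level cell stores 3 bits; every level round-trips:
assert bits_per_cell(8) == 3
assert all(decode(encode(v, 8), 8) == v for v in range(8))
```

The rounding in `decode` is also where noise tolerance lives: levels can be packed only as densely as read-out noise allows before adjacent levels become indistinguishable.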
With the proliferation of ultra-high-speed mobile networks and internet-connected devices, along with the rise of artificial intelligence, the world is generating exponentially increasing amounts of data that need to be processed in a fast, efficient and 'smart' way. These developments are pushing the limits of existing computing paradigms, and highly parallelized, fast and scalable hardware concepts are becoming progressively more important. Here, we demonstrate a computationally specific integrated photonic tensor core, the optical analog of an ASIC, capable of operating at tera-multiply-accumulate-per-second (TMAC/s) speeds. The photonic core achieves parallelized photonic in-memory computing using phase-change-memory arrays and photonic chip-based optical frequency combs (soliton microcombs). The computation is reduced to measuring the optical transmission of reconfigurable and non-resonant, i.e. broadband, passive components operating at a bandwidth exceeding 14 GHz, limited only by the speed of the modulators and photodetectors. Given recent advances in hybrid integration of soliton microcombs at microwave line rates, ultra-low-loss silicon nitride waveguides, and high-speed on-chip detectors and modulators, our approach provides a path towards full CMOS wafer-scale integration of the photonic tensor core.
While we focus on convolution processing, more generally our results indicate the major potential of integrated photonics for parallel, fast, efficient and wafer-scale-manufacturable computational hardware in demanding AI applications such as autonomous driving, live video processing, and next-generation cloud-computing services.

The increased demand for machine learning on very large datasets [1] and the growing offering of artificial-intelligence services on the cloud [2-4] have driven a resurgence in custom hardware designed to accelerate multiply-and-accumulate (MAC) computations, the fundamental mathematical element needed for matrix-vector multiplication (MVM) operations. Whilst various custom silicon computing hardware platforms (i.e. FPGAs [5], ASICs [6] and GPUs [7]) have been developed to improve computational throughput and efficiency, they still depend on the same underlying electrical components, which are fundamentally limited in both speed and energy by Joule heating, RF crosstalk and capacitance [8]. The last of these (capacitance) dominates energy consumption and limits the maximum operating speeds in neural-network hardware accelerators [9], since the movement of data (e.g. trained network weights), rather than arithmetic operations, requires the charging and discharging of chip-level metal interconnects. Thus, improving the efficiency of logic gates at the device level provides diminishing returns in such applications if the flow of data during computation is not simultaneously addressed [10]. Even recent developments in the use of memristive crossbar arrays [11-13] to compute in the analog domain, whilst promising, do not have the potential for parallelizing the MVM operations (save for physically repli...
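The relationship between MACs and MVM that the passage above describes can be stated in a few lines of code. This is a generic numerical sketch, not the photonic hardware itself: the matrix entries play the role the abstract assigns to the transmissions of passive photonic elements, and each output is the accumulated product of one matrix row with the input vector.

```python
def mac(acc, a, b):
    """One multiply-accumulate step: acc + a * b."""
    return acc + a * b

def mvm(T, x):
    """Matrix-vector multiplication built purely from MACs.
    In the photonic analogy, T[j][i] corresponds to the optical
    transmission of a passive element and x[i] to an input power,
    so each output y[j] is read off a photodetector."""
    y = []
    for row in T:
        acc = 0.0
        for t, xi in zip(row, x):
            acc = mac(acc, t, xi)
        y.append(acc)
    return y

T = [[0.5, 0.25],
     [0.1, 0.9]]
x = [2.0, 4.0]
print(mvm(T, x))  # approximately [2.0, 3.8]
```

Electronically, every `mac` call costs a clock cycle and a data movement; the photonic claim is that an entire row-by-vector product collapses into a single transmission measurement, and wavelength multiplexing evaluates all rows at once.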
Research in photonic computing has flourished due to the proliferation of optoelectronic components on photonic integration platforms. Photonic integrated circuits have enabled ultrafast artificial neural networks, providing a framework for a new class of information-processing machines. Algorithms running on such hardware have the potential to address the growing demand for machine learning and artificial intelligence, in areas such as medical diagnosis, telecommunications, and high-performance and scientific computing. In parallel, the development of neuromorphic electronics has highlighted challenges in that domain, in particular related to processor latency. Neuromorphic photonics offers sub-nanosecond latencies, providing a complementary opportunity to extend the domain of artificial intelligence. Here, we review recent advances in integrated photonic neuromorphic systems, discuss current and future challenges, and outline the advances in science and technology needed to meet those challenges.

Conventional computers are organized around a centralized processing architecture (i.e. with a central processor and memory), which is suited to running sequential, digital, procedure-based programs. Such an architecture is inefficient for computational models that are distributed, massively parallel and adaptive, most notably those used for neural networks in artificial intelligence (AI). AI is an attempt to approach human-level accuracy on tasks that are challenging for traditional computers but easy for humans. Major achievements have been realized by machine learning (ML) algorithms based on neural networks [1], which process information in a distributed fashion and adapt to past inputs rather than being explicitly designed by a programmer. ML has had an impact on many aspects of our lives, with applications ranging from translating languages [2] to cancer diagnosis [3].
Neuromorphic engineering is partly an attempt to move elements of ML and AI algorithms to hardware that reflects their massively distributed nature. Matching hardware to algorithms potentially leads to faster and more energy-efficient information processing. Neuromorphic hardware is also applied to problems outside of ML, such as robot control, mathematical programming, and neuroscientific hypothesis testing [4,5]. Massively distributed hardware relies heavily, more so than other computer architectures, on massively parallel interconnections between lumped elements (i.e. neurons). Dedicated metal wiring for every connection is not practical. Therefore, current state-of-the-art neuromorphic electronics use some form of shared digital communication bus that is time-division multiplexed, trading bandwidth for interconnectivity [4]. Optical interconnects could negate this trade-off and thus have the potential to accelerate ML and neuromorphic computing.

Light is established as the communication medium of telecom and datacenters, but it has not yet found widespread use in information processing and computing. The same properties that allow optoelectronic ...
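The bandwidth-for-interconnectivity trade-off of a time-division-multiplexed bus can be illustrated with a toy scheduler. This is a generic sketch with illustrative numbers, not a model of any specific neuromorphic chip: N neuron outputs share one bus round-robin, so each connection effectively sees 1/N of the bus bandwidth.

```python
# Toy illustration of time-division multiplexing on a shared bus:
# N sources take turns, so each link gets a 1/N slice of the
# bus bandwidth. All figures are illustrative.

def tdm_schedule(n_sources, n_slots):
    """Assign bus time slots to sources in round-robin order."""
    return [slot % n_sources for slot in range(n_slots)]

def per_link_bandwidth(bus_rate_hz, n_sources):
    """Effective bandwidth each source sees on a fairly shared bus."""
    return bus_rate_hz / n_sources

# 4 neurons sharing a 1 GHz bus: each effectively gets 250 MHz.
assert tdm_schedule(4, 8) == [0, 1, 2, 3, 0, 1, 2, 3]
assert per_link_bandwidth(1e9, 4) == 2.5e8
```

Optical interconnects sidestep this division because wavelength-division multiplexing lets many links share one waveguide concurrently rather than sequentially.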
The force exerted by photons is of fundamental importance in light-matter interactions. For example, in free space, optical tweezers have been widely used to manipulate atoms and microscale dielectric particles. This optical force is expected to be greatly enhanced in integrated photonic circuits in which light is highly concentrated at the nanoscale. Harnessing the optical force on a semiconductor chip will allow solid state devices, such as electromechanical systems, to operate under new physical principles. Indeed, recent experiments have elucidated the radiation forces of light in high-finesse optical microcavities, but the large footprint of these devices ultimately prevents scaling down to nanoscale dimensions. Recent theoretical work has predicted that a transverse optical force can be generated and used directly for electromechanical actuation without the need for a high-finesse cavity. However, on-chip exploitation of this force has been a significant challenge, primarily owing to the lack of efficient nanoscale mechanical transducers in the photonics domain. Here we report the direct detection and exploitation of transverse optical forces in an integrated silicon photonic circuit through an embedded nanomechanical resonator. The nanomechanical device, a free-standing waveguide, is driven by the optical force and read out through evanescent coupling of the guided light to the dielectric substrate. This new optical force enables all-optical operation of nanomechanical systems on a CMOS (complementary metal-oxide-semiconductor)-compatible platform, with substantial bandwidth and design flexibility compared to conventional electrical-based schemes.