Research in photonic computing has flourished due to the proliferation of optoelectronic components on photonic integration platforms. Photonic integrated circuits have enabled ultrafast artificial neural networks, providing a framework for a new class of information-processing machines. Algorithms running on such hardware have the potential to address the growing demand for machine learning and artificial intelligence in areas such as medical diagnosis, telecommunications, and high-performance and scientific computing. In parallel, the development of neuromorphic electronics has highlighted challenges in that domain, particularly processor latency. Neuromorphic photonics offers subnanosecond latencies, providing a complementary opportunity to extend the domain of artificial intelligence. Here, we review recent advances in integrated photonic neuromorphic systems, discuss current and future challenges, and outline the advances in science and technology needed to meet those challenges.

Conventional computers are organized around a centralized processing architecture (i.e., a central processor and memory), which is suited to running sequential, digital, procedure-based programs. Such an architecture is inefficient for computational models that are distributed, massively parallel, and adaptive, most notably those used for neural networks in artificial intelligence (AI). AI aims to approach human-level accuracy on tasks that are challenging for traditional computers but easy for humans. Major achievements have been realized by machine learning (ML) algorithms based on neural networks [1], which process information in a distributed fashion and adapt to past inputs rather than being explicitly designed by a programmer. ML has had an impact on many aspects of our lives, with applications ranging from translating languages [2] to cancer diagnosis [3].
Neuromorphic engineering is partly an attempt to move elements of ML and AI algorithms into hardware that reflects their massively distributed nature. Matching hardware to algorithms potentially leads to faster and more energy-efficient information processing. Neuromorphic hardware is also applied to problems outside of ML, such as robot control, mathematical programming, and neuroscientific hypothesis testing [4,5]. Massively distributed hardware relies heavily, more so than other computer architectures, on massively parallel interconnections between lumped elements (i.e., neurons). Dedicated metal wiring for every connection is not practical. Therefore, current state-of-the-art neuromorphic electronics use some form of shared digital communication bus that is time-division multiplexed, trading bandwidth for interconnectivity [4]. Optical interconnects could negate this trade-off and thus have the potential to accelerate ML and neuromorphic computing.

Light is established as the communication medium of telecom and datacenters, but it has not yet found widespread use in information processing and computing. The same properties that allow optoelectronic ...
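The bandwidth-for-interconnectivity trade-off described above can be made concrete with a back-of-envelope calculation. The numbers below are illustrative assumptions, not figures from [4]: a shared time-division-multiplexed bus of aggregate rate B divided among a fan-in of N connections leaves each connection only B/N.

```python
# Illustrative sketch of the TDM bus trade-off (assumed numbers, not from [4]):
# every connection takes turns on one shared channel, so per-connection
# signaling rate falls linearly with the number of connections served.

bus_rate_hz = 1e9       # assumed 1 GHz aggregate shared digital bus
fan_in = 1000           # assumed connections (synapses) multiplexed onto it

per_connection_rate = bus_rate_hz / fan_in
print(per_connection_rate)  # MHz-scale per connection, despite a GHz bus
```

A dedicated optical channel per connection would avoid this division entirely, which is the trade-off photonic interconnects are claimed to negate.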
Photonic systems for high-performance information processing have attracted renewed interest. Neuromorphic silicon photonics has the potential to integrate processing functions that vastly exceed the capabilities of electronics. We report the first observations of a recurrent silicon photonic neural network, in which connections are configured by microring weight banks. A mathematical isomorphism between the silicon photonic circuit and a continuous neural network model is demonstrated through dynamical bifurcation analysis. Exploiting this isomorphism, a simulated 24-node silicon photonic neural network is programmed using a "neural compiler" to solve a differential-system emulation task. A 294-fold acceleration over a conventional benchmark is predicted. We also propose and derive a power-consumption analysis for modulator-class neurons that, as opposed to laser-class neurons, are compatible with silicon photonic platforms. At increased scale, neuromorphic silicon photonics could access new regimes of ultrafast information processing for radio, control, and scientific computing.
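The "continuous neural network model" referenced above is of the continuous-time recurrent (CTRNN) type. The sketch below is a minimal, assumed version of such a model, integrated with forward Euler; the weight values, node count, and time step are illustrative choices, not parameters from the paper.

```python
import numpy as np

# Minimal CTRNN sketch (assumed model, illustrative parameters):
#   dy/dt = -y + W @ sigmoid(y) + b
# W plays the role of the microring weight bank; sigmoid stands in for the
# neuron's saturating transfer function.

N = 24                                    # node count matching the simulated network
rng = np.random.default_rng(0)
W = 0.5 * rng.standard_normal((N, N))     # recurrent weights (illustrative scale)
b = 0.1 * rng.standard_normal(N)          # per-node bias input

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(y, dt=1e-3):
    """One forward-Euler step of the CTRNN dynamics."""
    return y + dt * (-y + W @ sigmoid(y) + b)

y = np.zeros(N)
for _ in range(5000):
    y = step(y)
print(y.shape)  # (24,)
```

Bifurcation analysis of the kind mentioned in the abstract would sweep a parameter of W (e.g., an overall feedback gain) and track how the steady states of this system appear and disappear.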
It has long been known that photonic communication can alleviate the data-movement bottlenecks that plague conventional microelectronic processors. More recently, there has also been interest in its capability to implement low-precision linear operations, such as matrix multiplications, quickly and efficiently. We characterize the performance of photonic and electronic hardware underlying neural network models using multiply-accumulate (MAC) operations. First, we investigate the limits of analog electronic crossbar arrays and on-chip photonic linear computing systems. Photonic processors are shown to have advantages in the limit of large processor sizes (>100 µm), large vector sizes (N > 500), and low precision (≤4 bits). We discuss several proposed tunable photonic MAC systems and provide a concrete comparison between electronic deep-learning hardware and photonic hardware using several empirically validated device and system models. We show significant potential improvements over digital electronics in energy (>10²), speed (>10³), and compute density (>10²).

Index Terms: Artificial intelligence, neural networks, analog computers, analog processing circuits, optical computing.

I. INTRODUCTION

Photonics has been well studied for its role in communication systems. Fiber-optic links currently form the backbone of the world's telecommunications infrastructure, vastly overshadowing the best electronic communication standards in information capacity. Light signals have many advantageous properties for the transfer of information. For one, a photonic waveguide, with diameters ranging from those in fiber (∼80 μm) to those fabricated on-chip (∼500 nm), can carry information at enormous bandwidth densities (i.e., terabits per second) with an energy efficiency that scales nearly independently of distance. This density is possible thanks to signal parallelization in photonic waveguides, in which hundreds of high-speed, multiplexed channels can be independently modulated and detected.
Photonic channels also experience less distortion, jitter,
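The connection between analog noise and bit precision discussed above can be illustrated numerically. The sketch below is not the paper's model: the noise scale, vector length, and the uniform-quantizer equivalence used to convert noise into "bits" are all assumptions for the example.

```python
import numpy as np

# Illustrative analog multiply-accumulate (MAC): y = w . x plus additive
# Gaussian readout noise, and the effective bit precision that noise implies.
# All parameters are assumed values for demonstration only.

rng = np.random.default_rng(1)

N = 512                                   # vector size in the regime discussed above
w = rng.uniform(-1, 1, N) / np.sqrt(N)    # weights normalized so |y| stays within [-1, 1]
x = rng.uniform(-1, 1, N)

def analog_mac(w, x, noise_std=0.01):
    """One noisy analog dot product (detector/readout noise lumped together)."""
    return float(w @ x) + rng.normal(0.0, noise_std)

samples = np.array([analog_mac(w, x) for _ in range(1000)])
err = samples.std()

# Equivalent bits for a uniform quantizer over full scale [-1, 1]:
# quantization noise std = range / (2**bits * sqrt(12)); solve for bits.
bits = np.log2(2.0 / (err * np.sqrt(12.0)))
print(round(float(bits), 1))
```

Under these assumed numbers the analog MAC lands in the few-bit regime, which is where the abstract argues photonic processors are most competitive.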
There has been recently renewed interest in neuromorphic photonics, a field promising access to pivotal and unexplored regimes of machine intelligence. Progress has been made on isolated neurons and analog interconnects; nevertheless, this renewal has yet to produce a demonstration of a silicon photonic neuron capable of interacting with other like neurons. We report a modulator-class photonic neuron fabricated in a conventional silicon photonic process line. We demonstrate behaviors of transfer-function configurability, fan-in, inhibition, time-resolved processing, and, crucially, autaptic cascadability: a sufficient set of behaviors for a device to act as a neuron participating in a network of like neurons. The silicon photonic modulator neuron constitutes the final piece needed to make photonic neural networks fully integrated on currently available silicon photonic platforms.

In this work, we fabricate and demonstrate a silicon photonic modulator neuron. It consists of a balanced photodetector directly connected to a microring resonator (MRR) modulator. We demonstrate that this device possesses the necessary capabilities of a network-compatible neuron: fan-in, high-gain optical-to-optical nonlinearity, and ...

arXiv:1812.11898v1 [physics.app-ph]
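The signal path described above (balanced photodetector driving a microring modulator) can be sketched with a toy model. The Lorentzian notch shape, responsivity, linewidth, and bias values below are illustrative assumptions, not device parameters from the paper.

```python
# Toy model of a modulator-class photonic neuron (assumed parameters):
# two optical inputs hit a balanced photodetector, whose difference current
# detunes a microring modulator; the ring's Lorentzian transmission supplies
# the optical-to-optical nonlinearity on a separate pump wavelength.

def balanced_pd(p_plus, p_minus, responsivity=1.0):
    """Photocurrent proportional to the power difference (signed fan-in)."""
    return responsivity * (p_plus - p_minus)

def mrr_transmission(detuning, linewidth=1.0):
    """All-pass microring notch: Lorentzian dip in transmitted power."""
    return detuning**2 / (detuning**2 + (linewidth / 2.0)**2)

def neuron(p_plus, p_minus, pump=1.0, bias_detuning=0.5, gain=2.0):
    """Optical output: pump power shaped by the current-driven ring."""
    i = balanced_pd(p_plus, p_minus)
    return pump * mrr_transmission(bias_detuning + gain * i)

# Excitatory input pushes the ring off resonance (brighter output);
# inhibitory input pulls it toward resonance (dimmer output).
print(round(neuron(0.3, 0.0), 3), round(neuron(0.0, 0.3), 3))
```

Because the output is again an optical power at the pump wavelength, feeding it back to the neuron's own input (autapse) or to other like neurons is what the cascadability claim in the abstract refers to.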