It has long been known that photonic communication can alleviate the data-movement bottlenecks that plague conventional microelectronic processors. More recently, there has also been interest in its ability to implement low-precision linear operations, such as matrix multiplications, quickly and efficiently. We characterize the performance of photonic and electronic hardware underlying neural network models using multiply-accumulate (MAC) operations. First, we investigate the limits of analog electronic crossbar arrays and on-chip photonic linear computing systems. Photonic processors are shown to have advantages in the limit of large processor sizes (>100 µm), large vector sizes (N > 500), and low precision (≤4 bits). We discuss several proposed tunable photonic MAC systems and provide a concrete comparison between deep learning and photonic hardware using several empirically validated device and system models. We show significant potential improvements over digital electronics in energy (>10²), speed (>10³), and compute density (>10²).

Index Terms: Artificial intelligence, neural networks, analog computers, analog processing circuits, optical computing.

I. INTRODUCTION

Photonics has been well studied for its role in communication systems. Fiber-optic links currently form the backbone of the world's telecommunications infrastructure, vastly overshadowing the best electronic communication standards in information capacity. Light signals have many advantageous properties for the transfer of information. For one, a photonic waveguide, with diameters ranging from those in fiber (∼80 μm) to those fabricated on-chip (∼500 nm), can carry information at enormous bandwidth densities (i.e., terabits per second) with an energy efficiency that scales nearly independently of distance. This density is possible thanks to signal parallelization in photonic waveguides, in which hundreds of high-speed, multiplexed channels can be independently modulated and detected.
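As an illustration of the MAC-based accounting used above, here is a minimal sketch of how the operation count of a matrix-vector product scales with vector size N. The energy-per-MAC figures are placeholder assumptions for illustration, not the measured values from any of the device models discussed:

```python
# Sketch: counting multiply-accumulate (MAC) operations in an
# N x N matrix-vector product, the core linear operation above.

def matvec_macs(n: int) -> int:
    """An n x n matrix-vector product needs n*n MACs
    (each of the n outputs is a dot product of length n)."""
    return n * n

# Hypothetical energy-per-MAC figures (illustrative assumptions only):
E_DIGITAL_J = 1e-12   # ~1 pJ/MAC, rough digital-electronics scale
E_PHOTONIC_J = 1e-14  # ~10 fJ/MAC, assuming a >10^2 improvement

n = 500  # the vector-size regime where photonics is argued to win
macs = matvec_macs(n)
print(f"MACs for N={n}: {macs:,}")
print(f"digital energy:  {macs * E_DIGITAL_J:.2e} J")
print(f"photonic energy: {macs * E_PHOTONIC_J:.2e} J")
```

Because the MAC count grows as N², fixed per-operation savings compound quickly at the large vector sizes cited above.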
Photonic channels also experience less distortion and jitter than their electrical counterparts.
There has been recently renewed interest in neuromorphic photonics, a field promising access to pivotal and unexplored regimes of machine intelligence. Progress has been made on isolated neurons and analog interconnects; nevertheless, this renewal has yet to produce a demonstration of a silicon photonic neuron capable of interacting with other like neurons. We report a modulator-class photonic neuron fabricated in a conventional silicon photonic process line. We demonstrate behaviors of transfer-function configurability, fan-in, inhibition, time-resolved processing, and, crucially, autaptic cascadability: a sufficient set of behaviors for a device to act as a neuron participating in a network of like neurons. The silicon photonic modulator neuron constitutes the final piece needed to make photonic neural networks fully integrated on currently available silicon photonic platforms (arXiv:1812.11898v1 [physics.app-ph]).

In this work, we fabricate and demonstrate a silicon photonic modulator neuron. It consists of a balanced photodetector directly connected to a microring resonator (MRR) modulator. We demonstrate that this device possesses the necessary capabilities of a network-compatible neuron: fan-in, high-gain optical-to-optical nonlinearity, and cascadability.
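A toy numerical sketch of the neuron behavior described above: a balanced photodetector sums weighted inputs (negative weights via the complementary detector), and the resulting photocurrent drives a modulator nonlinearity. The sigmoid here is an assumed stand-in for the device's actual electro-optic transfer curve, and `modulator_neuron` with its `gain` and `bias` parameters is a hypothetical illustration, not the paper's model:

```python
import math

def modulator_neuron(inputs, weights, gain=4.0, bias=0.0):
    """Toy model of a modulator-class photonic neuron.

    A balanced photodetector performs fan-in: it sums weighted
    optical input powers, with negative weights realized by the
    complementary detector. The photocurrent then drives a
    modulator whose transmission we approximate with a sigmoid
    (an assumption; the real electro-optic curve differs).
    """
    photocurrent = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-gain * (photocurrent - bias)))

# Three optical inputs; the negative weight acts as inhibition.
y = modulator_neuron([0.2, 0.8, 0.5], [1.0, -0.5, 0.7])
print(y)
```

The output is again an optical power level in (0, 1), which is what makes the device cascadable: its output can serve directly as the input to another like neuron.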
Convolutional Neural Networks (CNNs) are powerful and ubiquitous tools for extracting features from large datasets, with applications such as computer vision and natural language processing. However, a convolution is a computationally expensive operation in digital electronics. In contrast, neuromorphic photonic systems, which have seen a surge of interest in recent years, promise higher bandwidths and energy efficiencies for neural network training and inference. Neuromorphic photonics exploits the advantages of optical electronics, including the ease of analog processing and the ability to bus multiple signals on a single waveguide at the speed of light. Here, we propose a Digital Electronic and Analog Photonic (DEAP) CNN hardware architecture that has the potential to be 2.8 to 14 times faster while maintaining the same power usage as current state-of-the-art GPUs.
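To make the cost claim concrete, a small sketch of why convolutions are MAC-heavy: each output activation of a convolutional layer is a dot product over a k × k × C_in receptive field. The layer dimensions below are illustrative assumptions, not figures from the DEAP paper:

```python
def conv2d_macs(h_out: int, w_out: int, c_in: int, c_out: int, k: int) -> int:
    """MACs for one conv layer: every one of the
    h_out * w_out * c_out output activations is a dot product
    over a k x k x c_in receptive field."""
    return h_out * w_out * c_out * c_in * k * k

# Example (assumed, ResNet-like dimensions): a 3x3 convolution,
# 64 -> 64 channels, on a 56x56 output feature map.
macs = conv2d_macs(56, 56, 64, 64, 3)
print(f"{macs:,} MACs")  # over a hundred million MACs for one layer
```

A single such layer already requires on the order of 10⁸ MACs per input image, which is the workload that analog photonic MAC hardware aims to accelerate.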