Cancers are often impossible to visually distinguish from normal tissue. This is critical for brain cancer where residual invasive cancer cells frequently remain after surgery, leading to disease recurrence and a negative impact on overall survival. No preoperative or intraoperative technology exists to identify all cancer cells that have invaded normal brain. To address this problem, we developed a handheld contact Raman spectroscopy probe technique for live, local detection of cancer cells in the human brain. Using this probe intraoperatively, we were able to accurately differentiate normal brain from dense cancer and normal brain invaded by cancer cells, with a sensitivity of 93% and a specificity of 91%. This Raman-based probe enabled detection of the previously undetectable diffusely invasive brain cancer cells at cellular resolution in patients with grade 2 to 4 gliomas. This intraoperative technology may therefore be able to classify cell populations in real time, making it an ideal guide for surgical resection and decision-making.
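The reported diagnostic performance can be made concrete with the standard definitions of sensitivity and specificity. The counts below are hypothetical, chosen only to reproduce the reported ~93% sensitivity and ~91% specificity; they are not the study's actual sample sizes.

```python
# Sensitivity and specificity from a binary confusion matrix.
# All counts are illustrative, not the study's data.

def sensitivity(tp, fn):
    """Fraction of cancer-containing samples correctly flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of normal-tissue samples correctly cleared."""
    return tn / (tn + fp)

tp, fn = 93, 7   # cancer-invaded samples: detected vs. missed
tn, fp = 91, 9   # normal samples: cleared vs. falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.93
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.91
```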
Recent success in deep neural networks has generated strong interest in hardware accelerators to improve speed and energy consumption. This paper presents a new type of photonic accelerator based on coherent detection that is scalable to large (N ≳ 10⁶) networks and can be operated at high (GHz) speeds and very low (sub-aJ) energies per multiply-and-accumulate (MAC), using the massive spatial multiplexing enabled by standard free-space optical components. In contrast to previous approaches, both weights and inputs are optically encoded so that the network can be reprogrammed and trained on the fly. Simulations of the network using models for digit- and image-classification reveal a "standard quantum limit" for optical neural networks, set by photodetector shot noise. This bound, which can be as low as 50 zJ/MAC, suggests that performance below the thermodynamic (Landauer) limit for digital irreversible computation is theoretically possible in this device. The proposed accelerator can implement both fully-connected and convolutional networks. We also present a scheme for back-propagation and training that can be performed in the same hardware. This architecture will enable a new class of ultra-low-energy processors for deep learning.

In recent years, deep neural networks have tackled a wide range of problems including image analysis [1], natural language processing [2], game playing [3], physical chemistry [4], and medicine [5]. This is not a new field, however. The theoretical tools underpinning deep learning have been around for several decades [6,7,8]; the recent resurgence is driven primarily by (1) the availability of large training datasets [9], and (2) substantial growth in computing power [10] and the ability to train networks on GPUs [11]. Moving to more complex problems and higher network accuracies requires larger and deeper neural networks, which in turn require even more computing power [12].
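The ~50 zJ/MAC shot-noise floor quoted above can be sanity-checked with a back-of-the-envelope calculation: the energy per MAC is the number of detected photons per MAC times the single-photon energy. The telecom wavelength (1550 nm) and the ~0.4 photons/MAC operating point below are illustrative assumptions, not the paper's exact analysis.

```python
# Back-of-the-envelope estimate of the shot-noise ("standard quantum
# limit") energy floor. Wavelength and photons/MAC are assumed values.

h = 6.626e-34         # Planck constant, J*s
c = 2.998e8           # speed of light, m/s
wavelength = 1.55e-6  # telecom band, m (assumed)

photon_energy = h * c / wavelength   # ~1.28e-19 J per photon
photons_per_mac = 0.4                # sub-photon regime (assumed)
energy_per_mac = photons_per_mac * photon_energy

print(f"photon energy: {photon_energy:.3e} J")
print(f"energy/MAC:    {energy_per_mac * 1e21:.0f} zJ")  # ~51 zJ
```

Operating below one photon per MAC is possible because each photodetector coherently sums contributions from many multiplications, so the signal-to-noise requirement applies to the accumulated output, not to each individual product.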
This motivates the development of special-purpose hardware optimized to perform neural-network inference and training [13]. To outperform a GPU, a neural-network accelerator must significantly lower the energy consumption, since the performance of modern microprocessors is limited by on-chip power [14]. In addition, the system must be fast, programmable, scalable to many neurons, compact, and ideally compatible with training as well as inference. Application-specific integrated circuits (ASICs) are one obvious candidate for this task. State-of-the-art ASICs can reduce the energy per multiply-and-accumulate (MAC) from 20 pJ/MAC for modern GPUs [15] to around 1 pJ/MAC [16,17]. However, ASICs are based on CMOS technology and therefore suffer from the interconnect problem: even in highly optimized architectures where data is stored in register files close to the logic units, a majority of the energy consumption comes from data movement, not logic [13,16]. Analog crossbar arrays based on CMOS gates [18] or memristors [19,20] promise better performance, but as analog electronic devices, they suffer from calibration issues and li...
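The cited per-MAC figures translate directly into a per-inference energy budget. The MAC count below (10⁹, roughly a ResNet-scale model) is an illustrative assumption, not a number from the text.

```python
# Per-inference energy from the per-MAC figures cited in the text.
# The MAC count is an assumed, illustrative workload size.

macs_per_inference = 1e9                    # assumed model size
gpu_energy  = macs_per_inference * 20e-12   # 20 pJ/MAC (GPU)
asic_energy = macs_per_inference * 1e-12    # ~1 pJ/MAC (ASIC)

print(f"GPU:  {gpu_energy * 1e3:.0f} mJ per inference")   # 20 mJ
print(f"ASIC: {asic_energy * 1e3:.0f} mJ per inference")  # 1 mJ
```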
Advanced machine learning models are currently impossible to run on edge devices such as smart sensors and unmanned aerial vehicles owing to constraints on power, processing, and memory. We introduce an approach to machine learning inference based on delocalized analog processing across networks. In this approach, named Netcast, cloud-based “smart transceivers” stream weight data to edge devices, enabling ultraefficient photonic inference. We demonstrate image recognition at an ultralow optical energy of 40 attojoules per multiply at 98.8% classification accuracy (93% at <1 photon per multiply). We reproduce this performance in a Boston-area field trial over 86 kilometers of deployed optical fiber, wavelength multiplexed over 3 terahertz of optical bandwidth. Netcast allows milliwatt-class edge devices with minimal memory and processing to compute at teraFLOPS rates reserved for high-power (>100 watts) cloud computers.
As deep neural network (DNN) models grow ever-larger, they can achieve higher accuracy and solve more complex problems. This trend has been enabled by an increase in available compute power; however, efforts to continue to scale electronic processors are impeded by the costs of communication, thermal management, power delivery and clocking. To improve scalability, we propose a digital optical neural network (DONN) with intralayer optical interconnects and reconfigurable input values. The path-length-independence of optical energy consumption enables information locality between a transmitter and a large number of arbitrarily arranged receivers, which allows greater flexibility in architecture design to circumvent scaling limitations. In a proof-of-concept experiment, we demonstrate optical multicast in the classification of 500 MNIST images with a 3-layer, fully-connected network. We also analyze the energy consumption of the DONN and find that digital optical data transfer is beneficial over electronics when the spacing of computational units is on the order of >10 μm.
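The >10 μm break-even spacing follows from the different scaling of the two link types: electrical interconnect energy grows with wire length (roughly E ≈ C′·L·V², with C′ the capacitance per unit length), while optical link energy is approximately length-independent. The parameter values below are assumptions for illustration, not the paper's measured numbers.

```python
# Illustrative break-even length for optical vs. electrical data
# transfer. All parameter values are assumed, not measured.

c_per_mm  = 0.2e-12  # wire capacitance per mm, F/mm (assumed)
v         = 1.0      # logic swing, V (assumed)
e_optical = 2e-15    # length-independent optical energy/bit, ~2 fJ (assumed)

# Electrical energy equals optical energy at L = E_opt / (C' * V^2).
breakeven_um = e_optical / (c_per_mm * v**2) * 1e3  # mm -> um
print(f"break-even distance: {breakeven_um:.0f} um")  # ~10 um
```

Below the break-even distance the wire is cheaper; above it, the fixed cost of the optical link is amortized and optics wins, which is what makes optical multicast to many arbitrarily placed receivers attractive.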