Abstract: Imaging through diffusers presents a challenging problem with various digital image reconstruction solutions demonstrated to date using computers. Here, we present a computer-free, all-optical image reconstruction method to see through random diffusers at the speed of light. Using deep learning, a set of transmissive diffractive surfaces are trained to all-optically reconstruct images of arbitrary objects that are completely covered by unknown, random phase diffusers. After the training stage, which is a one-t…
“…Some of these errors can be mitigated by selecting appropriate fabrication methods, e.g., high-precision lithography, and using less absorptive materials. Moreover, our previous results 23 , 38 , 44 , 49 , 50 showed that some of these uncontrolled physical errors and imperfections did not lead to a significant discrepancy between the experimental results and the expected numerical results, indicating the correctness of the assumptions involved in our optical forward model and training procedures. Even if these errors and imperfections become considerable, the performance degradation of a diffractive network caused by some of these experimental factors can be compensated by incorporating them as random variables into the physical forward model of the diffractive network during the training process.…”
Section: Discussion
confidence: 61%
“…Motivated by the massive success of artificial intelligence and deep learning in particular, a myriad of new hardware designs for optical computing have been reported recently, including, e.g., on-chip integrated photonic circuits 16 – 22 , free-space optical platforms 23 – 28 , and others 29 – 31 . Among these different optical computing systems, the integration of successive transmissive diffractive layers (forming an optical network) has been demonstrated for optical information processing, such as object classification 23 , 32 – 43 , image reconstruction 38 , 44 , all-optical phase recovery and quantitative phase imaging 45 , and logic operations 46 – 48 . A diffractive network is trained using deep learning and error-backpropagation methods implemented in a digital computer, after which the resulting transmissive layers are fabricated to form a physical network that computes based on the diffraction of the input light through these spatially-engineered transmissive layers.…”
Section: Introduction
confidence: 99%
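The train-then-fabricate workflow quoted above rests on a differentiable optical forward model: light is numerically propagated between layers and multiplied by each layer's trainable phase profile. A minimal sketch of such a forward pass, using the angular spectrum method of free-space propagation (function names and parameter values are our own illustrative choices, not the authors' code):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a square complex field by distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)               # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2  # (kz / 2*pi)^2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)        # suppress evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_forward(field, phase_layers, wavelength, dx, dz):
    """Pass an input field through successive phase-only diffractive layers."""
    for phi in phase_layers:
        field = angular_spectrum_propagate(field, wavelength, dx, dz)
        field = field * np.exp(1j * phi)       # transmissive phase modulation
    field = angular_spectrum_propagate(field, wavelength, dx, dz)
    return np.abs(field) ** 2                  # detector records intensity
```

In training, the `phi` arrays would be the optimizable parameters of an autodiff framework; after convergence they are converted into physical surface-thickness maps and fabricated.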
“…It is also scalable since an increase in the input field-of-view (FOV) can be handled by fabricating larger transmissive layers and/or deeper diffractive designs with more successive layers positioned one after another. Furthermore, both the phase and the amplitude information channels of the input scene/FOV can be processed by a diffractive optical network, without the need for phase retrieval or for digitizing and vectorizing an image of the scene, which makes diffractive computing highly desirable for machine vision applications 38 , 44 . Harnessing light-matter interactions using engineered diffractive surfaces has also enabled the inverse design of optical elements for, e.g., spatially-controlled wavelength demultiplexing 49 , pulse engineering 50 , and orbital angular momentum multiplexing/demultiplexing 51 , 52 .…”
Research on optical computing has recently attracted significant attention due to transformative advances in machine learning. Among different approaches, diffractive optical networks composed of spatially-engineered transmissive surfaces have been demonstrated for all-optical statistical inference and for performing arbitrary linear transformations using passive, free-space optical layers. Here, we introduce a polarization-multiplexed diffractive processor to all-optically perform multiple, arbitrarily-selected linear transformations through a single diffractive network trained using deep learning. In this framework, an array of pre-selected linear polarizers is positioned between trainable, isotropic transmissive diffractive materials, and different target (complex-valued) linear transformations are uniquely assigned to different combinations of input/output polarization states. The transmission layers of this polarization-multiplexed diffractive network are trained and optimized via deep learning and error-backpropagation using thousands of examples of the input/output fields corresponding to each of the complex-valued linear transformations assigned to the different input/output polarization combinations. Our results and analysis reveal that a single diffractive network can successfully approximate and all-optically implement a group of arbitrarily-selected target transformations with negligible error when the number of trainable diffractive features/neurons ($N$) approaches $N_p N_i N_o$, where $N_i$ and $N_o$ represent the number of pixels at the input and output fields-of-view, respectively, and $N_p$ refers to the number of unique linear transformations assigned to different input/output polarization combinations. This polarization-multiplexed all-optical diffractive processor can find various applications in optical computing and polarization-based machine vision tasks.
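The scaling condition quoted above is essentially parameter counting: one complex-valued transformation between Ni input and No output pixels is an No-by-Ni matrix with Ni·No free complex entries, so Np independently chosen transformations carry roughly Np·Ni·No degrees of freedom that the trainable diffractive features must supply. A toy check (function name is ours):

```python
import numpy as np

def required_neurons(N_p, N_i, N_o):
    # one complex N_o x N_i matrix has N_i * N_o free entries, so N_p
    # independently chosen transforms need ~ N_p * N_i * N_o features
    return N_p * N_i * N_o

# four input/output polarization combinations, 8x8 input and output
# fields-of-view (64 pixels each)
n = required_neurons(4, 64, 64)

# sanity check: the total entry count of four arbitrary complex matrices
# matches the feature count the scaling law asks for
mats = [np.ones((64, 64), dtype=complex) for _ in range(4)]
assert sum(m.size for m in mats) == n
```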
“…In stark contrast, spatial analog computing modulates incident wavefronts in real space, enabling massive, high-throughput parallel operations for signal-processing tasks such as spatial differentiation 8 – 10 , integration 11 and solving equations 12 , 13 . Conventional physical architectures of spatial-domain analog computers rely on phase accumulation through stacked or serial optical elements 14 , 15 , making the whole system bulky and lossy. Metasurfaces, by contrast, have emerged as a promising candidate for highly efficient, compact and ultrathin analog processors 16 – 18 .…”
Computational meta-optics offers a new route to hardware acceleration, with the benefits of ultrafast speed, ultra-low power consumption, and parallel information processing across versatile applications. The recent advent of metasurfaces has enabled full manipulation of electromagnetic waves at subwavelength scales, promising multifunctional, high-throughput, compact and flat optical processors. Following this trend, metasurfaces with nonlocality or multi-layer structures have been proposed to perform analog optical computations based on Green’s functions or Fourier transforms, but these are intrinsically constrained by limited operations or large footprints/volumes. Here, we showcase a Fourier-based metaprocessor that imparts customized, highly flexible transfer functions for analog computing using a single-layer Huygens’ metasurface. Basic mathematical operations, including differentiation and cross-correlation, are performed by directly modulating complex wavefronts in the spatial Fourier domain, facilitating edge detection and pattern recognition in various image-processing tasks. Our work substantiates an ultracompact and powerful kernel processor, which could find important applications in optical analog computing and image processing.
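The Fourier-domain operation described above can be emulated digitally: in a 4f-style system, the metasurface sits in the Fourier plane and multiplies the image spectrum by a transfer function H(kx, ky). A minimal sketch of this kernel-processor idea (our own illustrative code, not the authors' implementation), using a first-derivative transfer function i·2π·(kx + ky) so that a uniform object produces output only at its edges:

```python
import numpy as np

def fourier_filter(image, transfer):
    """Apply a spatial-frequency transfer function H(kx, ky), mimicking a
    metasurface placed in the Fourier plane of a 4f optical system."""
    return np.fft.ifft2(np.fft.fft2(image) * transfer)

def gradient_transfer(shape):
    """Transfer function H = i*2*pi*(kx + ky): first-order spatial
    differentiation, useful for all-optical edge detection."""
    ky = np.fft.fftfreq(shape[0])[:, None]   # cycles/sample along rows
    kx = np.fft.fftfreq(shape[1])[None, :]   # cycles/sample along columns
    return 1j * 2 * np.pi * (kx + ky)

# edge detection on a uniform square: the interior differentiates to ~0,
# so the output intensity concentrates along the square's boundary
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = np.abs(fourier_filter(img, gradient_transfer(img.shape)))
```

Swapping `gradient_transfer` for another H(kx, ky), e.g. the spectrum of a matched filter, turns the same layout into the cross-correlation (pattern-recognition) mode the abstract mentions.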
“…By combining novel deep neural network (DNN) architectures and domain knowledge in optical physics, the performance limits of various systems are continuously being re-defined, including spatial resolution 3 , 4 , depth-of-field 5 , space-bandwidth product 6 , imaging speed 6 , 7 , sensitivity at low photon counts 8 , and resilience to random scattering 9 , 10 . Of particular interest is the work by Luo et al 11 on the ability of a DNN to overcome random scattering.…”
Diffractive Deep Neural Network enables computer-free, all-optical “computational imaging” for seeing through unknown random diffusers at the speed of light.