Recent years have witnessed promising artificial intelligence (AI) applications in many disciplines, including optics, engineering, medicine, economics, and education. In particular, the synergy of AI and meta-optics has greatly benefited both fields. Meta-optics are advanced flat optics with novel functions and light-manipulation abilities, whose optical properties can be engineered through unique designs to meet various optical demands. This review offers comprehensive coverage of the synergy between meta-optics and artificial intelligence. After providing an overview of AI and meta-optics, we categorize and discuss recent developments at the intersection of these two topics, namely AI for meta-optics and meta-optics for AI. The former describes how AI is applied to meta-optics research for design, simulation, optical information analysis, and applications. The latter reports developments in optical AI systems and optical computation via meta-optics. This review also provides an in-depth discussion of the challenges of this interdisciplinary field and indicates future directions. We expect that this review will inspire researchers in these fields and benefit the next generation of intelligent optical device design.
Optical illusions affect depth sensing because single-lens imaging acquires only limited, view-specific light-field information. Incomplete depth information or visual deception can cause cognitive errors. To resolve this problem, an intelligent and compact depth-sensing meta-device that is miniaturized, integrated, and applicable to diverse scenes at all light levels is demonstrated. The compact, multifunctional stereo vision system adopts an array of 3600 achromatic meta-lenses with a footprint of 1.2 × 1.2 mm² to measure depth over a 30 cm range with deep-learning support. The meta-lens array can act as multiple imaging lenses to collect light-field information, and it can also work with a light source as an active optical device to project structured light. The meta-lens array can thus serve as the core functional component of a light-field imaging system under bright conditions or of a structured-light projection system in the dark. In both modes, the depth information can be analyzed and extracted by a convolutional neural network. This work provides a new avenue for applications such as autonomous driving, machine vision, human–computer interaction, augmented reality, biometric identification, etc.
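The depth sensing described above ultimately relies on disparities between sub-images formed by different lenses in the array. As a rough illustration of that underlying principle only (not the paper's CNN-based pipeline), the following hypothetical Python sketch estimates disparity between two one-dimensional image rows by brute-force block matching:

```python
import numpy as np

def match_disparity(left_row, right_row, patch=3, max_disp=10):
    """Brute-force 1D block matching: for each pixel in the left row,
    find the horizontal shift in the right row minimizing the sum of
    squared differences (SSD) over a small patch."""
    n = len(left_row)
    disp = np.zeros(n, dtype=int)
    for i in range(patch, n - patch):
        ref = left_row[i - patch:i + patch + 1]
        best, best_cost = 0, np.inf
        for d in range(0, min(max_disp, i - patch) + 1):
            cand = right_row[i - d - patch:i - d + patch + 1]
            cost = np.sum((ref - cand) ** 2)
            if cost < best_cost:
                best, best_cost = d, cost
        disp[i] = best
    return disp

# Synthetic example: the right view is the left view shifted by 4 pixels.
left = np.zeros(32)
left[16:20] = 1.0
right = np.roll(left, -4)  # feature appears 4 px earlier in the right view
```

With these synthetic rows, pixels on the bright feature recover a disparity of 4, while featureless regions default to 0; larger disparities correspond to nearer objects.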
Meta-lenses have been successfully developed for a variety of optical functions. We demonstrate a light-field edge detection imaging system based on a gallium nitride achromatic meta-lens array, which enables edge detection from one dimension to three dimensions. The designed meta-lens array consists of 60 × 60 achromatic meta-lenses operating in the visible range from 400 to 660 nm. All of the light-field information of objects in the scene can be captured and computed. Focused edge images from one dimension to three dimensions are extracted, with depth estimation, by image rendering; three-dimensional edge detection is two-dimensional edge imaging augmented with depth information. The focused edge images can be obtained by sub-image reconstruction of the light-field image. Our multidimensional edge detection system based on an achromatic meta-lens array brings novel advantages, such as broadband detection, data-volume reduction, and device miniaturization. Our experimental results offer new insight into applications in biological diagnosis and robotic vision.
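Edge extraction of the kind described above is commonly implemented with a discrete Laplacian filter. The following minimal Python sketch (an illustrative stand-in, not the paper's rendering pipeline) applies a 3 × 3 Laplacian kernel to a small synthetic sub-image:

```python
import numpy as np

def detect_edges(image):
    """Convolve with a 3x3 Laplacian kernel; the absolute response
    is zero in flat regions and large at intensity boundaries."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=float)
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.sum(kernel * image[i - 1:i + 2, j - 1:j + 2])
    return np.abs(out)

# Synthetic sub-image: a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = detect_edges(img)
```

The response is nonzero only along the square's border, which is why edge images carry far less data than the raw frames, consistent with the data-volume reduction noted above.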
Underwater optics in aquatic environments is vital for environmental management, biogeochemistry, phytoplankton ecology, benthic processes, global-change studies, etc. Many optical techniques and observational systems for underwater sensing, imaging, and related applications have been developed. To meet the demands for compact, miniaturized, portable, lightweight, and low-power devices, a novel underwater binocular depth-sensing and imaging meta-optic device is developed and reported here. A GaN binocular meta-lens is specifically designed and fabricated to demonstrate underwater stereo vision and depth sensing. The diameter of each meta-lens is 2.6 mm, and the measured distance between the two meta-lens centers is 4.04 mm. An advantage of our binocular meta-lens is that no distortion correction or camera calibration is required, both of which are necessary for traditional two-camera stereo vision systems. Based on the experimental results, we developed a generalized depth-calculation formula for binocular vision systems of any size. With deep-learning support, this stereo vision system enables fast computation of underwater object depth and images for real-time processing. Our AI-assisted imaging results show a depth-measurement accuracy down to 50 μm. Besides the aberration-free advantage of flat meta-optic components, the intrinsic superhydrophobicity of our nanostructured GaN meta-lens enables an anti-adhesion, stain-resistant, and self-cleaning underwater imaging device. This binocular meta-lens stereo vision system will significantly benefit underwater micro/nanorobots, autonomous submarines, machine vision in the ocean, marine ecological surveys, etc.
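For reference, the textbook pinhole-triangulation relation behind binocular depth sensing is depth = f · b / d (focal length × baseline / disparity). The sketch below shows only this standard formula; the paper's generalized formula and any underwater refraction corrections are not reproduced here, and all numbers are illustrative except for the 4.04 mm baseline quoted above:

```python
def stereo_depth(focal_length_px, baseline_mm, disparity_px):
    """Textbook pinhole triangulation: depth = f * b / d.
    Focal length and disparity are in pixels, so depth carries
    the units of the baseline (here, mm)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_mm / disparity_px

# Illustrative values (hypothetical focal length and disparity,
# baseline of 4.04 mm as reported for the binocular meta-lens):
depth_mm = stereo_depth(focal_length_px=1000, baseline_mm=4.04,
                        disparity_px=20)
```

Because depth is inversely proportional to disparity, depth resolution degrades with distance, which is why short-baseline systems like this one are best suited to near-field measurement.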
Sixth-generation (6G) communication technology is under intensive development and is expected to be faster and better than the fifth generation. Precise directivity of information transfer and concentration of signal strength are key topics in 6G technology. We report the synthetic phase design of rotary doublet Airy-beam and triplet Gaussian-beam varifocal meta-devices that fully control the terahertz beam's propagation direction and coverage area. The focal spot can be delivered to arbitrary positions in a two-dimensional plane or a three-dimensional space. The highly concentrated signal can be delivered to a specific position, and the transmission direction can be adjusted freely, enabling secure, flexible, and high-directivity 6G communication systems. This technology avoids the high costs associated with extensive use of active components. 6G communication systems, wireless power transfer, zoom imaging, and remote sensing will all benefit from large-scale adoption of such a technology.
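Airy beams are conventionally generated by imposing a cubic spectral phase on a plane wave and then Fourier-transforming the field. The following Python sketch (a standard textbook construction, not the paper's synthetic phase design, with the scale parameter `beta` chosen arbitrarily) builds such a cubic phase mask wrapped to the principal interval:

```python
import numpy as np

def cubic_airy_phase(n=256, beta=30.0):
    """Cubic phase profile phi = beta * (x^3 + y^3) on a normalized
    [-1, 1] x [-1, 1] grid, wrapped to (-pi, pi]. A beam carrying this
    phase produces an Airy beam after an optical Fourier transform."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    phase = beta * (X**3 + Y**3)
    return np.angle(np.exp(1j * phase))  # wrap into (-pi, pi]

mask = cubic_airy_phase()
```

In a meta-device, each wrapped phase value would be realized by a local nanostructure geometry; varying the relative rotation of stacked phase plates is one way such designs steer the resulting beam.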