The natural compound eye has received much attention in recent years due to its remarkable properties, such as its large field of view (FOV), compact structure, and high sensitivity to moving objects. Many studies have been devoted to mimicking the imaging system of the natural compound eye. This paper reviews state-of-the-art artificial compound eye imaging systems. First, we introduce the imaging principles of the three types of natural compound eyes. Then, we divide current artificial compound eye imaging systems into four categories according to their structural composition, so that readers can readily grasp how to build such a system from a structural perspective. Moreover, we compare the imaging performance of state-of-the-art artificial compound eye imaging systems, providing a reference for readers designing the system parameters of their own. Next, we present applications of artificial compound eye imaging systems, including large-FOV imaging, high-resolution imaging, object distance detection, medical imaging, egomotion estimation, and navigation. Finally, we offer an outlook on artificial compound eye imaging systems.
The potential of random-pattern-based computational ghost imaging (CGI) for real-time applications has been offset by its long image reconstruction time and inefficient reconstruction of complex, diverse scenes. To overcome these problems, we propose a fast image reconstruction framework for CGI, called "DeepGhost", which uses a deep convolutional autoencoder network to achieve real-time imaging at very low sampling rates (10-20%). By transferring prior knowledge from the STL-10 dataset to a physical-data-driven network, the proposed framework can reconstruct complex unseen targets with high accuracy. The experimental results show that the proposed method outperforms existing deep learning and state-of-the-art compressed sensing methods used for ghost imaging under similar conditions. The proposed method employs a deep architecture with fast computation, and tackles the shortcomings of existing schemes, i.e., inappropriate architecture, training on limited data under controlled settings, and reliance on shallow networks for fast computation. Computational ghost imaging 1 acquires spatial information about an unknown target by illuminating it with a series of random binary patterns generated by a spatial light modulator (SLM). For each projected pattern, the light intensity back-reflected from the target plane is recorded by an ordinary photodiode. By correlating the intensity measurements with the corresponding projected patterns, the target image is reconstructed. One downside of CGI is the requirement of a large number of measurements to produce a good-quality image, which increases its imaging time. Despite the emergence of basis scan schemes 2 , CGI (using random patterns) is still employed in many applications due to its simplicity, inherent encryption of patterns 3 , and ease of deployment 4.
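The correlation step described above can be sketched in a few lines. Below is a minimal simulation using the standard second-order correlation estimator G = ⟨I·P⟩ − ⟨I⟩⟨P⟩; the toy target, image size, and measurement count are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 32x32 binary target (for illustration only)
N = 32
target = np.zeros((N, N))
target[8:24, 8:24] = 1.0  # a bright square

M = 4000  # number of random binary illumination patterns
patterns = rng.integers(0, 2, size=(M, N, N)).astype(float)

# Bucket detector: total back-reflected intensity for each pattern
intensities = np.einsum('mij,ij->m', patterns, target)

# Second-order correlation: G(x, y) = <I * P(x, y)> - <I> <P(x, y)>
recon = (intensities[:, None, None] * patterns).mean(axis=0) \
        - intensities.mean() * patterns.mean(axis=0)
```

With enough patterns, `recon` is proportional to the target plus zero-mean noise that shrinks as M grows, which is why CGI needs many measurements for good image quality.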
Therefore, it is important to improve the efficiency of CGI by integrating it with an optimization technique, avoiding complex (hardware-based) methods 5 that fail to reap the benefits of reduced cost and simplicity in ghost imaging (GI). Owing to its advantages of low cost, robustness against noise and scattering, and ability to operate over a long spectral range, CGI is widely used in many applications 6-8. To make CGI practical, specifically for real-time imaging, it is important to reduce its imaging time, which can be subcategorized into data acquisition time and image reconstruction time. The data acquisition time of CGI depends on the required number of measurements and mainly on the projection rate of the SLM. Recent advances in SLM technology make it easy to reduce data acquisition time by employing commercially available high-resolution digital micromirror devices (DMDs) operating at ~ 20 kHz. The acquisition time can also be reduced by employing some simple yet novel solutions 9,10. Therefore, the image reconstruction time remains the main bottleneck towards achieving high-speed imaging in CGI. This image reconstruction time can be reduced by employing an efficient image reconstruction framework. Recently, comp...
In this study, we propose a training method that enables convolutional neural networks to identify and classify images with higher classification accuracy. The method describes images in both the Cartesian and polar coordinate systems and is applied to the recognition and classification of plankton images. Optimized classification and recognition networks are constructed that are suitable for in situ plankton images, exploiting the advantages of both coordinate systems during training. The two types of feature vectors, derived from the different coordinate descriptions, are fused and used as input to conventional machine learning models for classification; support vector machines (SVMs) are selected as the classifiers to combine them. The accuracy of the proposed model was markedly higher than that of the initial classical convolutional neural networks on the in situ plankton image data, with increases in classification accuracy and recall rate of 5.3% and 5.1%, respectively. In addition, the proposed training method considerably improves classification performance on the public CIFAR-10 dataset.
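The coordinate-combination idea above can be sketched minimally: resample the image onto a polar grid, then concatenate the Cartesian and polar descriptions into one feature vector. The function name, grid sizes, and toy image below are illustrative assumptions (the paper itself fuses CNN-derived feature vectors via an SVM):

```python
import numpy as np

def to_polar(img, n_r=16, n_theta=16):
    """Resample a square image onto an (r, theta) grid via nearest neighbour."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.linspace(0, min(cy, cx), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing='ij')
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

# Toy 32x32 image standing in for a plankton frame
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0

cart_feat = img.ravel()                          # Cartesian description
polar_feat = to_polar(img).ravel()               # polar description
fused = np.concatenate([cart_feat, polar_feat])  # joint vector fed to the classifier
```

A rotation of the input becomes a cyclic shift along the theta axis in the polar description, which is what makes the polar view complementary to the Cartesian one.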
Fourier single-pixel imaging (FSPI) is well known for reconstructing high-quality images, but only at the cost of long imaging time. For real-time applications, FSPI relies on under-sampled reconstructions, failing to provide high-quality images. To improve the imaging quality of real-time FSPI, a fast image reconstruction framework based on deep learning (DL) is proposed. More specifically, a deep convolutional autoencoder network with a symmetric skip-connection architecture is employed for real-time 96 × 96 imaging at very low sampling rates (5–8%). The network is trained on a large image set and is able to reconstruct diverse images unseen during training. The promising experimental results show that the proposed FSPI coupled with DL (termed DL-FSPI) outperforms conventional FSPI in terms of image quality at very low sampling rates.
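The under-sampled reconstruction that conventional FSPI performs amounts to keeping only the lowest-frequency Fourier coefficients and inverse-transforming. A hedged sketch, assuming the 96 × 96 size from the abstract and a roughly 5% sampling ratio (the toy image and square low-pass mask are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 96                           # image size quoted in the abstract
img = rng.random((N, N))
img[30:66, 30:66] += 2.0         # a bright block as a toy target

# Full Fourier spectrum that FSPI would acquire coefficient by coefficient
F = np.fft.fftshift(np.fft.fft2(img))

# Under-sampling: retain only a small central (low-frequency) square
ratio = 0.06
k = int(np.sqrt(ratio) * N / 2)  # half-width of the retained square
mask = np.zeros((N, N), dtype=bool)
c = N // 2
mask[c - k:c + k, c - k:c + k] = True

# Conventional under-sampled reconstruction: inverse transform of masked spectrum
recon = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```

The result is a blurred, ringing-prone image; the DL network in the abstract is trained to restore the detail lost to this low-pass truncation.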
A pulsed-laser range-finding method based on a differential optical path is proposed, and its mathematical models are developed and verified. Simulations based on the method lead to three important conclusions: (1) background power is suppressed effectively; (2) compared with the signal-to-noise ratio (SNR) of the traditional method, the SNR of the proposed method is better suited to long-range finding and large target tilt angles; (3) regardless of the target tilt angle, the zero-crossing sensitivity is optimal as long as the differential distance equals the light speed multiplied by the received pulse length and the two echoes overlap.
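The optimal-sensitivity condition stated in conclusion (3) can be expressed directly as d = c · τ. A minimal sketch; the function name and the 10 ns example pulse width are illustrative assumptions, not values from the paper:

```python
C = 3.0e8  # speed of light in m/s (approximate)

def optimal_differential_distance(pulse_width_s):
    """Differential distance giving optimal zero-crossing sensitivity,
    per the condition in the abstract: d = c * tau (received pulse length)."""
    return C * pulse_width_s

# Example: a 10 ns received pulse gives a 3 m differential distance
d = optimal_differential_distance(10e-9)
```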
Silicon-based complementary metal oxide semiconductor (CMOS) devices have dominated the technological revolution of the past decades. With increasing demands in machine vision, autonomous driving, and artificial intelligence, silicon CMOS imagers, as the major optical information input devices, face great challenges in their spectral sensing ranges. In this paper, we demonstrate the development of CMOS-compatible infrared colloidal quantum-dot (CQD) imagers in the broadband short-wave and mid-wave infrared ranges (SWIR and MWIR, 1.5–5 μm). A new device architecture of trapping-mode detectors is proposed, fabricated, and demonstrated with lowered dark currents and improved responsivity. The CMOS-compatible fabrication process is completed with two-step sequential spin-coating of intrinsic and doped HgTe CQDs on an 8 in. CMOS readout wafer, achieving photoresponse non-uniformity (PRNU) down to 4%, a dead pixel rate of 0%, external quantum efficiency up to 175%, and detectivity as high as 2 × 1011 Jones for extended SWIR at 300 K and 8 × 1010 Jones for MWIR at 80 K. Both SWIR images and MWIR thermal images are demonstrated, with great potential for semiconductor inspection, chemical identification, and temperature monitoring.
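The detectivity figures quoted in Jones follow the standard definition D* = R·√(A·Δf)/i_n, where R is responsivity, A is the pixel area, Δf is the measurement bandwidth, and i_n is the noise current. A sketch with illustrative single-pixel values (none of these numbers are taken from the paper):

```python
import math

def detectivity_jones(responsivity_A_per_W, area_cm2, bandwidth_Hz, noise_current_A):
    """Specific detectivity D* = R * sqrt(A * df) / i_n, in Jones (cm Hz^0.5 / W).
    Standard detector figure of merit; inputs here are illustrative assumptions."""
    return responsivity_A_per_W * math.sqrt(area_cm2 * bandwidth_Hz) / noise_current_A

# Illustrative single-pixel values: 15 um pitch, 100 Hz bandwidth
d_star = detectivity_jones(
    responsivity_A_per_W=1.0,
    area_cm2=(15e-4) ** 2,
    bandwidth_Hz=100.0,
    noise_current_A=7.5e-14,
)
```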
Complementary metal oxide semiconductor (CMOS) silicon sensors play a central role in optoelectronics, with widespread applications from small cell-phone cameras to large-format imagers for remote sensing. Despite numerous advantages, their sensing ranges are limited to the visible (0.4−0.7 μm) and near-infrared (0.8−1.1 μm), as defined by silicon's energy gap (1.1 eV). However, the ultraviolet (UV) and short-wave infrared (SWIR) bands, which lie outside that spectral range, enable numerous applications such as fingerprint identification, night vision, and composition analysis. In this work, we demonstrate the implementation of multispectral broadband CMOS-compatible imagers with UV-enhanced visible pixels and SWIR pixels by layer-by-layer direct optical lithography of colloidal quantum dots (CQDs). High-resolution single-color images and merged multispectral images were obtained using a single imager. The photoresponse nonuniformity (PRNU) is below 5% with a 0% dead pixel rate and room-temperature responsivities of 0.25 A/W at 300 nm, 0.4 A/W at 750 nm, and 0.25 A/W at 2.0 μm.