This paper presents a novel calibration method for solid-state LiDAR devices based on a geometrical description of their scanning system, which has variable angular resolution. Determining this distortion across the entire field of view yields accurate and precise measurements that enable the system to be combined with other sensors. The geometrical model is formulated using the well-known Snell's law and the intrinsic optical assembly of the system, while the proposed method describes the scanned scene with an intuitive camera-like approach that relates pixel locations to scanning directions. Simulations and experimental results show that the model fits real devices and that the calibration procedure accurately maps their variable resolution, so undistorted representations of the observed scene can be provided. The calibration method proposed in this work is therefore applicable and valid for existing scanning systems, improving their precision and accuracy by an order of magnitude.
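The core geometrical ingredient of such a model, the refraction of a scanning ray at an optical interface, can be sketched with the vector form of Snell's law. This is an illustrative implementation only, assuming unit vectors and a simple two-media interface; the function name and conventions are not taken from the paper:

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Refract a unit direction vector at an interface (vector Snell's law).

    incident: unit vector pointing into the surface
    normal:   unit surface normal pointing against the incident ray
    n1, n2:   refractive indices of the incident and transmitting media
    Returns the refracted unit direction, or None on total internal reflection.
    """
    eta = n1 / n2
    cos_i = -np.dot(incident, normal)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal
```

Tracing each pixel's scanning direction through the optical assembly with such a refraction step is what allows the camera-like model to map pixel locations to angular directions across the field of view.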
This paper focuses on exploring ways to improve the performance of LiDAR imagers through fog. One of the known weaknesses of LiDAR technology is its lack of tolerance to adverse environmental conditions, such as the presence of fog, which hampers the future development of LiDAR in several markets. Within this paper, a LiDAR unit is designed and constructed to apply temporal and polarimetric discrimination, detecting the number of signal photons received with detailed control of their temporal and spatial distribution under co-polarized and cross-polarized configurations. The system is evaluated in different experiments in a macro-scale fog chamber under controlled fog conditions. Using the complete digitization of the acquired signals, we analyze the media response to natural light and show that, owing to its characteristics, it can be filtered out directly. Moreover, we confirm that a polarization memory effect exists which, by using a polarimetric cross-configuration detector, improves object detection in point clouds. These results are useful for computer vision applications, in fields like autonomous vehicles or outdoor surveillance, where many variable types of environmental conditions may be present.
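The idea of filtering the media response out of a fully digitized return can be illustrated with a simple range gate: fog backscatter arrives as a broad early return, while the target echo arrives later at roughly 2·range/c. This is a minimal sketch under those assumptions, not the unit's actual signal chain:

```python
import numpy as np

def gate_waveform(waveform, t, gate_start_ns, gate_end_ns):
    """Zero out samples outside a range gate.

    waveform: digitized return intensity, one sample per time bin
    t:        sample times in nanoseconds
    The fog's broad early backscatter is suppressed before peak
    detection by keeping only the gate where targets are expected.
    """
    mask = (t >= gate_start_ns) & (t <= gate_end_ns)
    return np.where(mask, waveform, 0.0)
```

With the early fog hump gated out, a simple peak search on the remaining samples recovers the object return even when the fog response dominates the raw waveform.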
The polarization behavior of light transmitted through scattering media is studied quantitatively. A division-of-focal-plane (DoFP) imaging polarimeter modified with a wideband quarter-wave plate (QWP) is used to evaluate the linear and circular depolarization signals. This system allows the linear and circular co-polarization and cross-polarization channels to be measured simultaneously. The experiments are carried out at CEREMA's 30 m fog chamber under controlled fog density conditions. The polarization memory effect with circularly polarized light is demonstrated to be superior in forward transmission compared to the same phenomenon with linearly polarized light when imaging inside a scattering medium. This paves the way for its use in imaging through scattering media for hazard detection in different applications.
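Comparing co- and cross-polarized channels typically reduces to a per-pixel degree-of-polarization metric. The following is an illustrative sketch of that computation (function name and sign convention are assumptions, not the instrument's processing code):

```python
import numpy as np

def depolarization_index(I_co, I_cross):
    """Per-pixel degree of polarization from co- and cross-polarized channels.

    For circular illumination this corresponds to the degree of circular
    polarization: values near +1 mean the input polarization state is
    preserved (polarization memory), values near 0 mean full
    depolarization by multiple scattering in the medium.
    """
    total = I_co + I_cross
    return np.where(total > 0, (I_co - I_cross) / np.maximum(total, 1e-12), 0.0)
```

Evaluating this index for linear and circular channels across fog densities is the kind of comparison that quantifies the polarization memory effect.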
Most pedestrian detection methods focus on bounding boxes based on fusing RGB with lidar. These methods do not relate to how the human eye perceives objects in the real world. Furthermore, lidar and vision can have difficulty detecting pedestrians in scattered environments, and radar can be used to overcome this problem. The motivation of this work is therefore to explore, as a preliminary step, the feasibility of fusing lidar, radar, and RGB for pedestrian detection, potentially applicable to autonomous driving, using a fully convolutional neural network architecture for multimodal sensors. The core of the network is based on SegNet, a pixel-wise semantic segmentation network. In this context, lidar and radar data were incorporated by transforming the 3D point clouds into 2D gray images with 16-bit depth, and RGB images were incorporated with three channels. The proposed architecture uses a single SegNet for each sensor reading, and the outputs are then applied to a fully connected neural network to fuse the three sensor modalities. Afterwards, an up-sampling network is applied to recover the fused data. Additionally, a custom dataset of 60 images was proposed for training the architecture, with an additional 10 for evaluation and 10 for testing, giving a total of 80 images. The experimental results show a training mean pixel accuracy of 99.7% and a training mean intersection over union (IoU) of 99.5%. The testing mean IoU was 94.4%, and the testing pixel accuracy was 96.2%. These metrics successfully demonstrate the effectiveness of using semantic segmentation for pedestrian detection with three sensor modalities. Despite some overfitting in the model during experimentation, it performed well in detecting people in test mode. It is therefore worth emphasizing that the focus of this work is to show that the method is feasible, as it works regardless of the size of the dataset.
Nevertheless, a larger dataset would be necessary for more appropriate training. This method offers the advantage of detecting pedestrians as the human eye does, thereby resulting in less ambiguity. Additionally, this work proposes an extrinsic calibration matrix method for sensor alignment between radar and lidar based on singular value decomposition.
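SVD-based extrinsic calibration between two sensors is commonly done with the Kabsch algorithm on corresponding points. The sketch below shows that standard procedure under the assumption of known radar/lidar correspondences; it is illustrative and not the paper's exact implementation:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) aligning point set P onto Q via SVD (Kabsch).

    P, Q: (N, 3) arrays of corresponding points, e.g. radar and lidar
    detections of the same calibration targets.
    Returns rotation R and translation t such that R @ p + t ~ q.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against a reflection solution
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

Given at least three non-collinear target correspondences, this yields the least-squares rigid transform between the radar and lidar frames.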
In recent times, there has been a surge of interest in LiDAR imaging systems, particularly in outdoor terrestrial applications associated with computer vision. However, a significant hurdle preventing their widespread implementation lies in their limited tolerance for adverse weather conditions, such as fog. To address this challenge, researchers have explored the capability of polarization to improve detection in such media. This paper explores the potential of LiDAR technology to obtain polarized images through fog and investigates the impact of fog on object detection using digitized temporal signals and point clouds. The study utilizes a LiDAR-polarized imaging system using circular polarization, which has been shown to enhance image contrast in highly dispersive media. The analysis of the polarimetric information of the backscattered light signal in fog reveals its influence on object detection and evaluates the range difference between orthogonal polarimetric channels: coplanar and cross-configuration. The results demonstrate that cross-configuration detection provides a larger range and more detailed point clouds than the coplanar configuration, particularly benefiting metallic objects, under the same foggy conditions. By utilizing circularly polarized incident light and cross-configuration detection, the LiDAR system can improve the signal-to-noise ratio by filtering out the co-polarized fog responses. However, the range of the system may be reduced compared to nonpolarized detection. Overall, our findings indicate that a cross-polarization detection setup can effectively reduce the impact of fog backscatter while preserving the return signal from objects of interest in the majority of cases.