Semantic 3D mapping is one of the most important capabilities in robotics and has been used in many applications, such as robot navigation, surveillance, and virtual reality. In general, semantic 3D mapping comprises 3D reconstruction and semantic segmentation. As these technologies have evolved, semantic 3D mapping has made great progress in recent years. Furthermore, the number of robotic applications that require semantic information in a 3D map to perform high-level tasks has increased, and many studies on semantic 3D mapping have been published. Most existing methods use a camera for both 3D reconstruction and semantic segmentation; however, this is not suitable for large-scale environments and incurs high computational complexity. To address this problem, we propose a multimodal sensor-based semantic 3D mapping system that combines a 3D Lidar with a camera. In this study, we build a 3D map by estimating odometry based on a global positioning system (GPS) and an inertial measurement unit (IMU), and use a state-of-the-art 2D convolutional neural network (CNN) for semantic segmentation. To build a semantic 3D map, we integrate the 3D map with semantic information using a coordinate transformation and a Bayes update scheme. To further improve the semantic 3D map, we propose a 3D refinement process that corrects mislabeled voxels and removes traces of moving vehicles from the map. Through experiments on challenging sequences, we demonstrate that our method outperforms state-of-the-art methods in terms of accuracy and intersection over union (IoU). Thus, our method can be used for various applications that require semantic information in a 3D map.
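The abstract above mentions fusing semantic information into the 3D map with a Bayes update scheme but gives no implementation details. The following is a minimal sketch of a recursive Bayes update of a per-voxel class distribution, assuming independent per-frame CNN observations and a flat (uninformative) class prior; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def bayes_label_update(prior, likelihood):
    """Fuse one semantic observation into a voxel's label distribution.

    prior      : (C,) current per-class probabilities of the voxel
    likelihood : (C,) per-class probabilities from the current CNN frame
    Returns the normalized posterior (recursive Bayes update under an
    independence assumption across frames).
    """
    posterior = prior * likelihood
    s = posterior.sum()
    if s == 0.0:
        # Degenerate case (contradictory observations): fall back to uniform.
        return np.full_like(prior, 1.0 / prior.size)
    return posterior / s
```

Applied repeatedly, the update concentrates probability mass on classes observed consistently across frames: two successive observations favoring class 0 with probability 0.8 sharpen a uniform prior to roughly 0.94 for that class.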
Semantic 3D maps are required for various applications, including robot navigation and surveying, and their importance has increased significantly. Most existing studies on semantic mapping take camera-based approaches, which cannot operate in large-scale environments owing to their computational burden. Recently, combining a 3D Lidar with a camera was introduced to address this problem, and both sensors have been utilized for semantic 3D mapping. In this study, our algorithm consists of semantic mapping and map refinement. In semantic mapping, a GPS and an IMU are integrated to estimate the odometry of the system, and the point clouds measured by a 3D Lidar are registered using this information. Furthermore, we use state-of-the-art CNN-based semantic segmentation to obtain semantic information about the surrounding environment. To integrate the point cloud with semantic information, we developed incremental semantic labeling comprising coordinate alignment, error minimization, and semantic information fusion. Additionally, to improve the quality of the generated semantic map, map refinement is performed in batch; it enhances the spatial distribution of labels and effectively removes traces produced by moving vehicles. We conduct experiments on challenging sequences to demonstrate that our algorithm outperforms state-of-the-art methods in terms of accuracy and intersection over union.
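The coordinate alignment step above (transferring 2D segmentation labels onto Lidar points) typically requires projecting each 3D point into the camera image. As a rough sketch, assuming a standard pinhole camera model with known Lidar-to-camera extrinsics and intrinsics (the function name and argument layout are illustrative, not the paper's API):

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D Lidar points onto the camera image plane.

    points_lidar : (N, 3) points in the Lidar frame
    T_cam_lidar  : (4, 4) homogeneous transform from Lidar to camera frame
    K            : (3, 3) pinhole camera intrinsic matrix
    Returns (N, 2) pixel coordinates and a boolean mask of points that lie
    in front of the camera (only those can receive a 2D label).
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ homo.T).T[:, :3]           # into camera frame
    in_front = pts_cam[:, 2] > 0
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                         # perspective divide
    return uv, in_front
```

Each projected point that lands inside the image and in front of the camera can then be assigned the semantic class of the pixel it falls on, before the Bayesian fusion step.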
Abstract: In this paper, we propose a signal-processing-based method to reduce the blind spot, which determines the minimum measurable length of a cable under test, in time-frequency domain reflectometry, without using an extension cable or adding a new high-speed hardware component. Time-frequency domain reflectometry adopting the proposed method requires only a simple modification of the previous system together with a simple signal-processing technique. The experimental results show that the proposed method allows us to estimate the fault distance on the cable with high spatial resolution. Keywords: time-frequency domain reflectometry, blind spot, spatial resolution, fault detection. Classification: Science and engineering for electronics.
Accurate vehicle localization is important for autonomous driving and advanced driver assistance systems. Existing precise localization systems based on the global navigation satellite system cannot always provide lane-level accuracy, even in open-sky environments. Map-based localization using high-definition (HD) maps is a promising method for achieving greater accuracy. We propose a map-based localization method using a single camera. Our method relies on road link information in the HD map to achieve lane-level accuracy. Initially, we process the image, acquired using the camera of a mobile device, via inverse perspective mapping, which shows the entire road at a glance in the driving image. Subsequently, we use the Hough transform to detect the lane lines and acquire driving link information regarding the lane on which the vehicle is moving. The vehicle position is estimated by matching the global positioning system (GPS) trajectory against the reference HD map. We employ iterative closest point (ICP)-based map matching to determine and eliminate the disparity between the GPS trajectories and the reference map. Finally, we perform experiments using the data of a sophisticated GPS/inertial navigation system as the ground truth and demonstrate that the proposed method provides lane-level position accuracy for vehicle localization.
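The ICP-based map matching mentioned above can be illustrated with a minimal 2D point-to-point ICP that aligns a GPS trajectory to reference-map points. This is a generic sketch of the technique, not the paper's implementation: it uses brute-force nearest neighbors and the closed-form Kabsch (SVD) rigid alignment at each iteration.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Estimate a rigid 2D transform (R, t) aligning src toward dst.

    src, dst : (N, 2) point sets, e.g. GPS trajectory points and
               reference-map points (no correspondences given)
    Returns rotation R (2x2) and translation t (2,) such that
    src @ R.T + t approximates dst.
    """
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force for clarity).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[d.argmin(axis=1)]
        # Closed-form rigid alignment via SVD (Kabsch algorithm).
        mu_s, mu_d = cur.mean(axis=0), nn.mean(axis=0)
        H = (cur - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Rk = Vt.T @ U.T
        if np.linalg.det(Rk) < 0:        # guard against reflections
            Vt[-1] *= -1
            Rk = Vt.T @ U.T
        tk = mu_d - Rk @ mu_s
        cur = cur @ Rk.T + tk
        R, t = Rk @ R, Rk @ t + tk       # accumulate the transform
    return R, t
```

When the initial GPS-to-map disparity is small relative to point spacing, the nearest-neighbour matches are correct and the alignment converges in very few iterations; in practice a k-d tree would replace the brute-force search.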