ABSTRACT: RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. As they are suitable for measuring distances to objects at high frame rates, such sensors are increasingly used for 3D acquisition, and more generally for applications in robotics and computer vision. Sensors of this kind became popular especially after the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology than its first device. However, because it was initially developed for video games, assessing the quality of this new device for 3D modelling remains a major investigation axis. In this paper, first experiments with the Kinect v2 sensor are reported, and its suitability for close-range 3D modelling is investigated. For this purpose, error sources in the output data as well as a calibration approach are presented.
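The abstract does not detail the calibration model. As a rough illustration only, one common approach for range cameras is to fit a per-pixel polynomial mapping measured depth to reference depth; the sketch below assumes this scheme, and the fit_depth_correction / correct_depth helpers are hypothetical names, not the authors' code.

```python
# Illustrative assumption: per-pixel polynomial depth correction fitted from
# frames captured at known reference distances. Not the paper's method.
import numpy as np

def fit_depth_correction(measured, reference, degree=3):
    """measured, reference: (F, H, W) depth stacks [m], F calibration distances.
    Returns per-pixel polynomial coefficients of shape (degree+1, H, W)."""
    F, H, W = measured.shape
    coeffs = np.empty((degree + 1, H, W))
    for r in range(H):
        for c in range(W):
            # Fit one polynomial per pixel (highest-order coefficient first).
            coeffs[:, r, c] = np.polyfit(measured[:, r, c], reference[:, r, c], degree)
    return coeffs

def correct_depth(frame, coeffs):
    """Apply the fitted per-pixel correction via Horner's rule."""
    corrected = np.zeros_like(frame, dtype=float)
    for c in coeffs:  # iterate from the highest-order coefficient down
        corrected = corrected * frame + c
    return corrected
```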
Accurate self-vehicle localization is an important task for autonomous driving and ADAS. Current GNSS-based solutions do not provide better than 2-3 m accuracy in open-sky environments [1]. Moreover, map-based localization using HD maps has become an interesting source of information for intelligent vehicles. In this paper, a map-based localization method using a multilayer LIDAR is proposed. Our method relies mainly on road lane markings and an HD map to achieve lane-level accuracy. First, road points are segmented by analysing the geometric structure of the points returned by each layer. Secondly, using LIDAR reflectivity data, road marking points are projected onto a 2D image and then detected with the Hough Transform. Detected lane markings are then matched to our HD map within a Particle Filter (PF) framework. Experiments are conducted on a highway-like test track using GPS/INS with RTK corrections as ground truth. Our method provides lane-level localization with a 22 cm cross-track accuracy.
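The paper's implementation is not published; as a rough illustration of the reflectivity-image step, the following Python sketch rasterises high-intensity LIDAR returns into a bird's-eye-view image and applies OpenCV's probabilistic Hough transform. All thresholds, resolutions and the detect_lane_markings helper are assumptions for illustration.

```python
# Hedged sketch: bird's-eye-view projection of high-reflectivity LIDAR points
# followed by Hough-based line detection, assuming numpy + OpenCV.
import numpy as np
import cv2

def detect_lane_markings(points_xyz, intensity, res=0.1, extent=40.0,
                         intensity_thresh=0.7):
    """points_xyz: (N,3) ego-frame points; intensity: (N,) in [0,1].
    res: grid resolution [m/px]; extent: half-size of the BEV window [m]."""
    # Keep only highly reflective returns (lane paint reflects strongly).
    pts = points_xyz[intensity > intensity_thresh]

    # Rasterise to a binary bird's-eye-view image.
    size = int(2 * extent / res)
    img = np.zeros((size, size), dtype=np.uint8)
    cols = ((pts[:, 0] + extent) / res).astype(int)
    rows = ((pts[:, 1] + extent) / res).astype(int)
    valid = (cols >= 0) & (cols < size) & (rows >= 0) & (rows < size)
    img[rows[valid], cols[valid]] = 255

    # Probabilistic Hough transform: each 4-tuple is a segment (x1,y1,x2,y2).
    segments = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180, threshold=30,
                               minLineLength=int(2.0 / res),
                               maxLineGap=int(1.0 / res))
    return [] if segments is None else segments[:, 0, :]
```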
Accurate localization is very important for the performance and safety of autonomous vehicles. In particular, with the appearance of High Definition (HD) sparse geometric road maps, much research has focused on deploying accurate localization systems within a previously built map. In this paper, we solve the localization problem by matching road perceptions from a 3D LIDAR sensor with HD map elements. The perception system detects High Reflective Landmarks (HRL) such as lane markings, road signs and guard rail reflectors (GRR) in a 3D point cloud. A particle filtering algorithm estimates the position of the vehicle by matching observed HRLs with HD map attributes. The proposed approach extends our work in [1] and [2], where a localization system based on lane markings and road signs was developed. Experiments have been conducted on a highway-like test track using GNSS/INS with RTK corrections as ground truth (GT). Errors are reported as cross-track (CT) and along-track (AT) errors defined in the curvilinear coordinates [3] related to the map. The obtained accuracy of our localization system is 18 cm for the cross-track error and 32 cm for the along-track error.
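To make the map-matching step concrete, here is a minimal particle filter sketch: predict each pose with odometry, weight it by how well the observed HRLs fit the map under nearest-neighbour association, and resample. The noise levels, the Gaussian weighting model and the pf_update helper are assumptions, not the authors' implementation.

```python
# Hedged sketch of a landmark map-matching particle filter (assumed model).
import numpy as np

def pf_update(particles, weights, odom, observed_hrl, map_hrl, sigma=0.3):
    """particles: (P,3) poses [x, y, yaw]; odom: (dx, dy, dyaw) in body frame;
    observed_hrl: (M,2) landmarks in body frame; map_hrl: (K,2) in map frame."""
    # Prediction: apply odometry with additive Gaussian noise.
    dx, dy, dyaw = odom
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    P = len(particles)
    particles[:, 0] += c * dx - s * dy + np.random.normal(0, 0.05, P)
    particles[:, 1] += s * dx + c * dy + np.random.normal(0, 0.05, P)
    particles[:, 2] += dyaw + np.random.normal(0, 0.01, P)

    # Correction: weight by the fit between observed landmarks and the map.
    for i, (x, y, yaw) in enumerate(particles):
        R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
        world = observed_hrl @ R.T + np.array([x, y])
        # Nearest-neighbour distance to the closest map landmark.
        d = np.min(np.linalg.norm(world[:, None, :] - map_hrl[None], axis=2), axis=1)
        weights[i] *= np.exp(-0.5 * np.sum((d / sigma) ** 2))
    weights /= np.sum(weights) + 1e-300  # guard against numerical underflow

    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * P:
        positions = (np.arange(P) + np.random.rand()) / P
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), P - 1)
        particles, weights = particles[idx], np.full(P, 1.0 / P)
    return particles, weights
```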
Commission V, WG V, ICWG I/Va. KEY WORDS: mobile mapping, system, orthoimage, urban environment, range imaging camera. ABSTRACT: 3D cameras are a new generation of sensors increasingly used in geomatics. Their main advantages are their handiness, their price, and their ability to produce range images or point clouds in real time. They are used in many areas, and the use of this kind of sensor has grown especially since the Kinect (Microsoft) arrived on the market. This paper presents a new localization system based exclusively on the combination of several 3D cameras on a mobile platform. The platform is intended to move on sidewalks, acquire the environment and enable the determination of the most appropriate routes for disabled persons. The paper presents the key features of our approach as well as promising solutions for the challenging task of localization based on 3D cameras. We give examples of mobile trajectories estimated exclusively from 3D camera acquisitions, and we evaluate the accuracy of the calculated trajectory against a reference trajectory obtained with a total station.
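The abstract does not name the registration algorithm used to recover the trajectory; a common way to do this from successive range-camera point clouds is frame-to-frame ICP with the relative motions chained into absolute poses. The Open3D-based sketch below is an illustrative assumption along those lines, not the paper's pipeline.

```python
# Hedged sketch: trajectory from consecutive point clouds via frame-to-frame
# ICP (Open3D), chaining relative transforms into absolute poses.
import numpy as np
import open3d as o3d

def estimate_trajectory(clouds, max_corr_dist=0.05):
    """clouds: list of o3d.geometry.PointCloud in acquisition order.
    Returns 4x4 poses of each frame expressed in the first frame's coordinates."""
    poses = [np.eye(4)]
    for src, tgt in zip(clouds[1:], clouds[:-1]):
        # Register the new frame against the previous one.
        reg = o3d.pipelines.registration.registration_icp(
            src, tgt, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        # Chain the relative motion onto the previous absolute pose.
        poses.append(poses[-1] @ reg.transformation)
    return poses
```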
Sensors, and their associated data fusion techniques, play a crucial role in Autonomous Vehicle (AV) decision-making applications. Accurately evaluating the performance and reliability of the perception sources is an important task in order to know the consistency of this data fusion. In this paper, a reference data generation framework for assessing perception sensor performance is proposed. Our approach relies on the complementary use of three data sources: a highly precise 3D map with semantic information, a high-density (HD) range finder sensor, and a GNSS-RTK/INS localization unit. The 3D map provides semantic knowledge of the environment, and the HD range finder precisely senses the ego-vehicle's surroundings. Finally, the 3D map and the HD scans are geometrically associated using the positioning information in order to combine them and infer reference data. Thorough experiments were conducted to evaluate and validate the proposed approach. As a proof of concept, the performance of a LiDAR-based road plane detection method was evaluated, quantified and reported in terms of precision and recall.
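As a small worked illustration of the reported metrics, the sketch below computes point-wise precision and recall by comparing a detector's labels against the map-derived reference labels; the boolean-array representation and the evaluate_detection helper are illustrative assumptions, not the paper's code.

```python
# Hedged sketch: point-wise precision/recall against map-derived reference labels.
import numpy as np

def evaluate_detection(pred_labels, ref_labels):
    """pred_labels, ref_labels: boolean arrays, one entry per LiDAR point;
    True = classified as road plane. ref_labels is the reference data."""
    tp = np.sum(pred_labels & ref_labels)    # detected and truly road
    fp = np.sum(pred_labels & ~ref_labels)   # detected but not road
    fn = np.sum(~pred_labels & ref_labels)   # road points that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```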