Abstract: In urban environments, moving obstacle detection and free space determination are key issues for driving assistance systems and autonomous vehicles. When using lidar sensors scanning in front of the vehicle, uncertainty arises from ignorance and from errors: ignorance is due to the perception of new areas, while errors come from imprecise pose estimation and noisy measurements. Complexity increases further when the lidar provides multi-echo and multi-layer information. This paper presents an occupancy grid framework designed to manage these different sources of uncertainty. The problem is addressed with grids projected onto the road surface in both a global and a local frame: the global grid builds the map, while the local grid handles moving objects. A credibilist approach is used to model the sensor information and to perform a global fusion with the world-fixed map. Outdoor experiments carried out with a precise positioning system show that this perception strategy significantly outperforms a standard approach.
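The credibilist (evidential) fusion mentioned above can be illustrated on a single grid cell. The sketch below is not the authors' implementation; it assumes each cell carries a mass triplet (free, occupied, unknown) over the frame {Free, Occupied} and combines a sensor cell with the corresponding map cell using Dempster's rule.

    def dempster_combine(m1, m2):
        # Combine two mass functions m = (m_free, m_occ, m_unknown)
        # with Dempster's rule of combination.
        f1, o1, u1 = m1
        f2, o2, u2 = m2
        conflict = f1 * o2 + o1 * f2      # mass assigned to the empty set
        k = 1.0 - conflict                # normalization factor
        if k <= 0.0:
            return (0.0, 0.0, 1.0)        # total conflict: fall back to ignorance
        free = (f1 * f2 + f1 * u2 + u1 * f2) / k
        occ  = (o1 * o2 + o1 * u2 + u1 * o2) / k
        return (free, occ, 1.0 - free - occ)

    # Example: a lidar hit (mostly "occupied") fused with a map cell
    # previously believed free; the numbers are illustrative.
    sensor_cell = (0.05, 0.80, 0.15)
    map_cell    = (0.60, 0.10, 0.30)
    print(dempster_combine(sensor_cell, map_cell))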
Advanced Driving Assistance Systems exploit exteroceptive sensors to help the driver perceive the dynamic environment, such as other vehicles or pedestrians. This paper proposes an original approach to this perception challenge in urban environments. The method detects the motion of mobile objects using grids built from a lidar range scanner and an enhanced map of the drivable space. Data fusion is performed with the Dempster-Shafer theory, a framework particularly well suited to managing sensor uncertainties. By analyzing conflicting information, object movements can be efficiently characterized. The formalism also makes it possible to introduce decay factors that are useful for forgetting old information. Experimental results obtained with an IBEO Alasca lidar and an Applanix positioning system show that such a perception strategy can be effective compared to deterministic accumulation strategies.
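Two elements of this abstract lend themselves to a small sketch: the decay factor, which in Dempster-Shafer terms is a discounting of old masses toward ignorance, and the conflict measure used to flag moving objects. Both functions below reuse the (free, occupied, unknown) triplets of the previous sketch; the decay rate and threshold are illustrative, not taken from the paper.

    def discount(m, alpha):
        # Discount a mass function toward ignorance; alpha in [0, 1] is the
        # decay factor (alpha = 0 keeps the evidence, alpha = 1 forgets it).
        f, o, u = m
        return ((1 - alpha) * f, (1 - alpha) * o, 1 - (1 - alpha) * (f + o))

    def is_moving(map_cell, scan_cell, threshold=0.5):
        # Conflict is the mass assigned to the empty set before normalization.
        # A map cell believed free that now receives an "occupied" scan (or
        # the reverse) produces high conflict, hinting at a moving object.
        f1, o1, _ = map_cell
        f2, o2, _ = scan_cell
        return f1 * o2 + o1 * f2 > threshold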
In this paper we present a new approach to semantic recognition in the context of robotics. When a robot moves through its environment, it obtains 3D information either from its sensors or from its own motion through 3D reconstruction. Our approach (i) performs 3D-coherent synthesis of scene observations, (ii) mixes them in a multi-view framework for 3D labeling, and (iii) is efficient both locally (for 2D semantic segmentation) and globally (for 3D structure labeling). This adds semantics to the observed scene beyond simple image classification, as shown on challenging datasets such as SUNRGBD or the 3DRMS Reconstruction Challenge.
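As a rough illustration of the multi-view idea (not the paper's actual pipeline), per-view class scores for a 3D point can be fused by averaging before taking the winning label; the function name and the uniform averaging are assumptions.

    import numpy as np

    def fuse_views(view_scores):
        # view_scores: list of per-view class-score vectors (one per
        # observation of the same 3D point), each of shape (num_classes,).
        fused = np.mean(np.stack(view_scores), axis=0)
        return int(np.argmax(fused)), fused

    # Three views of the same point, scored over three hypothetical classes.
    label, scores = fuse_views([np.array([0.2, 0.7, 0.1]),
                                np.array([0.3, 0.5, 0.2]),
                                np.array([0.1, 0.8, 0.1])])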
We present a new dataset dedicated to the development of simultaneous localization and mapping methods for underwater vehicles navigating close to the seabed. The data sequences composing this dataset are recorded in three different environments: a harbor at a depth of a few meters, a first archaeological site at a depth of 270 meters, and a second site at a depth of 380 meters. Data acquisition is performed using Remotely Operated Vehicles equipped with a monocular monochromatic camera, a low-cost inertial measurement unit, a pressure sensor, and a computing unit, all embedded in a single enclosure. The sensors' measurements are recorded synchronously on the computing unit, and seventeen sequences have been created from the acquired data. These sequences are made available as ROS bags and as raw data. For each sequence, a trajectory has also been computed offline using a Structure-from-Motion library to allow comparison with real-time localization methods. With the release of this dataset, we wish to provide data that are difficult to acquire and to encourage the development of vision-based localization methods dedicated to the underwater environment. The dataset can be downloaded from: http://www.lirmm.fr/aqualoc/
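Since the sequences ship as ROS bags, a minimal way to consume them is the ROS 1 rosbag Python API. The bag filename and topic names below are assumptions for illustration; the actual topics are listed in the dataset documentation.

    import rosbag  # ROS 1 Python API, available with a ROS installation

    # Hypothetical file and topic names; check the AQUALOC documentation
    # for each sequence's actual camera, IMU, and pressure topics.
    with rosbag.Bag('archaeo_sequence_1.bag') as bag:
        for topic, msg, stamp in bag.read_messages(
                topics=['/camera/image_raw', '/imu/data', '/pressure']):
            print('%.6f %s' % (stamp.to_sec(), topic))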