A trustworthy and accurate ground truth is a key requirement for benchmarking self-localization and mapping algorithms; on the other hand, collecting ground truth is a complex and daunting task, and its validation is a challenging issue. In this paper we propose two techniques for indoor ground truth collection, developed in the framework of the European project RAWSEEDS, which are mutually independent and also independent of the sensors onboard the robot. These techniques are based, respectively, on a network of fixed cameras and on a network of fixed laser scanners. We show how these systems are implemented and deployed, and, most importantly, we evaluate their performance; moreover, we investigate the possible fusion of their outputs.
Ego-motion estimation and localization in large environments are key components of any assistive technology for real-time user orientation and navigation. We consider the case where a large known environment is explored without a priori assumptions on the initial location. In particular, we propose a framework that uses a single portable 3D sensor to solve the place recognition problem and continuously track its position, even when leaving the known area or when significant changes occur in the observed environment. We cast the place recognition step as a classification problem and propose an efficient search space reduction that considers only navigable areas where the user can be localized. Classification hypotheses are then discarded by checking temporal consistency against a relative tracker that relies only on the sensor input data. The solution uses a compact classifier whose representation scales well with the map size. After being localized, the user is continuously tracked in the known environment using an efficient data structure that provides constant access time for nearest-neighbor searches and that can be streamed to keep in memory only the local region close to the last known position. Robust results are achieved by performing a geometrically stable selection of points, efficiently filtering outliers, and integrating the relative tracker based on previous observations. We experimentally show that such a framework provides good localization results and that it scales well with the environment map size, yielding real-time performance for both place recognition and tracking.
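The streamed nearest-neighbor structure described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a hashed voxel grid (a common choice for roughly constant-time lookups), where `stream` evicts voxels far from the last known pose so only the local map region stays in memory, and `nearest` searches only the 3×3×3 voxel neighborhood of the query:

```python
import math

class StreamingVoxelGrid:
    """Hashed voxel grid: near-constant-time nearest-neighbor lookup,
    with eviction of voxels far from the current pose (streaming)."""

    def __init__(self, voxel_size=0.5, keep_radius=20.0):
        self.voxel_size = voxel_size
        self.keep_radius = keep_radius
        self.voxels = {}  # (ix, iy, iz) -> list of 3D points

    def _key(self, p):
        # Integer voxel coordinates of a point.
        return tuple(int(math.floor(c / self.voxel_size)) for c in p)

    def insert(self, p):
        self.voxels.setdefault(self._key(p), []).append(tuple(p))

    def nearest(self, q):
        # Search only the 27 voxels surrounding the query point.
        kx, ky, kz = self._key(q)
        best, best_d2 = None, float("inf")
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for p in self.voxels.get((kx + dx, ky + dy, kz + dz), ()):
                        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
                        if d2 < best_d2:
                            best, best_d2 = p, d2
        return best

    def stream(self, pose):
        # Keep only voxels whose center lies within keep_radius of pose.
        r2, s = self.keep_radius ** 2, self.voxel_size
        self.voxels = {
            k: pts for k, pts in self.voxels.items()
            if sum(((i + 0.5) * s - c) ** 2 for i, c in zip(k, pose)) <= r2
        }
```

Because lookups touch a bounded number of voxels, query cost does not grow with the total map size, and eviction after each pose update keeps memory proportional to the local region, matching the scalability claim in the abstract.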
The purpose of this chapter is twofold: on one hand, it aims at defining a clear framework for the design and implementation of autonomous wheelchairs, highlighting the main challenges; on the other hand, it presents a complete and working system of this type, called LURCH. This system incorporates technology from autonomous robotics and interacts with its user through a multi-modal user interface, including joystick, touch screen, electromyographic control, or brain-computer interface. If required, other input methods and controllers can be seamlessly integrated. The result is an autonomous wheelchair capable of supporting user mobility while adapting its level of autonomy both to the abilities and to the requirements of the user. Moreover, the capabilities of such a system (in terms of perception, data processing, user interface, and communication) open the way to novel modes of interaction between the environment and wheelchair users, truly making the latter differently able, i.e., endowing them with abilities that walking people cannot access without special equipment.