This paper presents an optimal method for distributed collision avoidance among multiple non-holonomic robots, in theory and in experiments. Non-holonomic optimal reciprocal collision avoidance (NH-ORCA) builds on the concepts introduced in [2], but further guarantees smooth and collision-free motions under non-holonomic constraints. Optimal control inputs and constraints in velocity space are formally derived for the non-holonomic robots. The theoretical results are validated in several collision avoidance experiments with up to fourteen e-puck robots set on a collision course. Even in very crowded scenarios, NH-ORCA remained collision-free at all times.
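The abstract's "constraints in velocity space" are, in the ORCA family of methods, half-plane constraints on a robot's new velocity. The following is a minimal sketch of that generic projection step only, not the NH-ORCA derivation itself; the function name, the unit-normal assumption, and the single-constraint simplification are all illustrative.

```python
def project_to_halfplane(v_pref, n, offset):
    """Project a preferred 2D velocity onto the half-plane {v : v . n >= offset}.

    In ORCA-style methods, each neighboring robot induces one such
    half-plane of collision-free velocities; the robot then picks the
    admissible velocity closest to its preferred one.

    v_pref: preferred velocity (vx, vy)
    n:      unit normal of the half-plane boundary
    offset: scalar such that admissible velocities satisfy v . n >= offset
    """
    dot = v_pref[0] * n[0] + v_pref[1] * n[1]
    if dot >= offset:
        return v_pref  # preferred velocity is already collision-free
    # Shift along the normal just enough to reach the constraint boundary.
    shift = offset - dot
    return (v_pref[0] + shift * n[0], v_pref[1] + shift * n[1])
```

A full implementation intersects many such half-planes (one per neighbor) via low-dimensional linear programming; this sketch shows only the single-constraint case.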
Most existing camera calibration toolboxes require the observation of a checkerboard shown by the user at different positions and orientations. This paper presents an algorithm for the automatic detection of checkerboards, described by the position and arrangement of their corners, in blurred and heavily distorted images. The method can be applied to both perspective and omnidirectional cameras. An existing corner detection method is evaluated, and its strengths and shortcomings in detecting corners on blurred and distorted test image sets are analyzed. Starting from the results of this analysis, several improvements are proposed, implemented, and tested. We show that the proposed algorithm consistently identifies 80% of the corners on omnidirectional images at resolutions as low as VGA, and approaches 100% correct corner extraction at higher resolutions, significantly outperforming the existing implementation. The performance of the proposed method is demonstrated on several test image sets of varying resolution, distortion, and blur that are representative of different camera-mirror setups in use.
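The reported 80% and near-100% extraction rates are fractions of ground-truth corners recovered by the detector. As an illustration of how such a recall figure can be computed (this is an assumed metric for exposition, not the paper's evaluation code), a detected corner can be matched to a ground-truth corner within a pixel tolerance:

```python
def corner_recall(detected, ground_truth, tol=2.0):
    """Fraction of ground-truth checkerboard corners matched by some
    detected corner within `tol` pixels (Euclidean distance).

    detected, ground_truth: lists of (x, y) pixel coordinates
    """
    matched = 0
    for gx, gy in ground_truth:
        # A ground-truth corner counts as found if any detection is close enough.
        if any((dx - gx) ** 2 + (dy - gy) ** 2 <= tol * tol
               for dx, dy in detected):
            matched += 1
    return matched / len(ground_truth)
```

With this metric, a value of 0.8 on a VGA omnidirectional image would correspond to the 80% figure quoted in the abstract.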
In this article we present a novel display created using a group of mobile robots. In contrast to traditional displays based on a fixed grid of pixels, such as a screen or a projection, this work describes a display in which each pixel is a mobile robot of controllable color. Pixels become mobile entities, and their positioning and motion are used to produce a novel experience. The system input is a single image or an animation created by an artist. The first stage is to generate physical goal configurations and robot colors that optimally represent the input imagery with the available number of robots. The run-time system includes goal assignment, path planning, and local reciprocal collision avoidance to guarantee smooth, fast, and oscillation-free motion between images. The algorithms scale to very large robot swarms and extend to a wide range of robot kinematics. Experimental evaluation is carried out on two physical swarms of 14 and 50 differentially driven robots, and in simulations with 1,000 robot pixels.
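The goal-assignment stage maps each robot to a goal pixel so that the swarm reaches the target image efficiently. As a toy illustration of the underlying optimization (assumed here to minimize total travel distance; the paper's scalable algorithm is not reproduced), a brute-force assignment over all permutations looks like this:

```python
import math
from itertools import permutations

def assign_goals(robots, goals):
    """Return, for each robot index, the index of its assigned goal,
    minimizing the total Euclidean travel distance.

    Brute force over all permutations: fine for a handful of robots,
    but a real swarm of hundreds of pixels needs a polynomial-time
    method such as the Hungarian algorithm or a distributed auction.
    """
    best_cost, best = float("inf"), None
    for perm in permutations(range(len(goals))):
        cost = sum(math.dist(robots[i], goals[g]) for i, g in enumerate(perm))
        if cost < best_cost:
            best_cost, best = cost, perm
    return list(best)
```

Minimizing total distance also tends to yield non-crossing paths in the plane, which complements the reciprocal collision avoidance used at run time.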
Future requirements for drastic reduction of CO2 production and energy consumption will lead to significant changes in the way we see mobility in the years to come. However, the automotive industry has identified significant barriers to the adoption of electric vehicles, including reduced driving range and greatly increased refueling times. Automated cars have the potential to reduce the environmental impact of driving and increase the safety of motor vehicle travel. The current state-of-the-art in vehicle automation requires a suite of expensive sensors. While the cost of these sensors is decreasing, integrating them into electric cars will increase the price and represent another barrier to adoption. The V-Charge Project, funded by the European Commission, seeks to address these problems simultaneously by developing an electric automated car, outfitted with close-to-market sensors, which is able to automate valet parking and recharging for integration into a future transportation system. The final goal is the demonstration of a fully operational system including automated navigation and parking. This paper presents an overview of the V-Charge system, from the platform setup to the mapping, perception, and planning sub-systems.