Abstract-In this paper, we propose a novel Aerial Social Force Model (ASFM) that allows autonomous flying robots to accompany humans in urban environments in a safe and comfortable manner. To date, we are not aware of any other state-of-the-art method that accomplishes this task. The proposed approach is a 3D extension of the Social Force Model (SFM) to the field of aerial robots, which includes an interactive human-robot navigation scheme capable of predicting human motions and intentions so as to safely accompany people to their final destination. ASFM also introduces a new metric to fine-tune the parameters of the force model and to evaluate the performance of the aerial robot companion based on comfort and the distance between the robot and humans. The presented approach is extensively validated in diverse simulations and real experiments, and compared against similar works in the literature. ASFM attains remarkable results and proves to be a valuable framework for social robotics applications, such as guiding people or human-robot interaction.
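The 3D social-force idea behind ASFM can be illustrated with a minimal sketch: an attractive force relaxing the robot's velocity toward its goal plus an exponential repulsion from nearby people. The function names and the constants v_desired, tau, A and B below are illustrative assumptions, not the paper's calibrated parameters.

```python
import numpy as np

def goal_force(pos, goal, vel, v_desired=1.0, tau=0.5):
    # Attractive term: relax the current velocity toward the desired
    # velocity pointing at the goal (classic SFM driving force).
    direction = goal - pos
    direction = direction / np.linalg.norm(direction)
    return (v_desired * direction - vel) / tau

def person_repulsion(pos, person, A=2.0, B=1.0):
    # Repulsive term: exponential decay with distance, pushing the
    # robot away from the person to keep a comfortable separation.
    diff = pos - person
    d = np.linalg.norm(diff)
    return A * np.exp(-d / B) * (diff / d)

# Toy 3D scene: robot hovering at 1.5 m, goal 5 m ahead, person nearby.
pos = np.array([0.0, 0.0, 1.5])
vel = np.zeros(3)
goal = np.array([5.0, 0.0, 1.5])
person = np.array([1.0, 0.5, 0.0])
total_force = goal_force(pos, goal, vel) + person_repulsion(pos, person)
```

Because every term lives in 3D, the same resultant force can modulate altitude as well as planar motion, which is what distinguishes an aerial SFM from its ground-robot ancestor.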
We present a new social robot named IVO, a robot capable of collaborating with humans and solving different tasks. The robot is intended to cooperate and work with humans in a useful and socially acceptable manner, serving as a research platform for long-term Social Human-Robot Interaction. In this paper, we describe this new platform, its communication skills and its current capabilities, such as handing an object over to or receiving one from a person, or guiding a human through physical contact. We describe the social abilities of the IVO robot and present the experiments performed for each of the robot's capabilities using its current version.
Current industrial products must meet quality requirements defined by international standards. Most commercial surface inspection systems give qualitative detections after a long, cumbersome and very expensive configuration process performed by the vendor. In this paper, a new surface defect detection method is proposed based on 3D laser reconstruction. The method compares long products, scan by scan, with their desired shape and produces differential topographic images of the surface at very high speeds. In the proposed method, the pixel values of these images have a direct translation to real-world dimensions, which enables detection based on the tolerances defined by international standards. The images are processed using computer vision techniques to detect defects, and erroneous detections are filtered using both statistical distributions and a multilayer perceptron. Moreover, a systematic configuration procedure is proposed that is repeatable and can be performed by the manufacturer. The method has been tested on train track rails, reporting better results than two photometric systems, including a commercial one, in both defect detection and erroneous-detection rate. It has also been validated on a surface inspection rail pattern, showing excellent performance.
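The tolerance-based detection step can be sketched as follows. Because each pixel of the differential topographic image maps to a real-world height deviation, flagging a defect reduces to thresholding against the standard's tolerance. The function name, the 0.5 mm tolerance and the toy flat template below are illustrative assumptions; real tolerances come from the applicable standard.

```python
import numpy as np

def defect_mask(scan_mm, template_mm, tol_mm=0.5):
    # Differential topographic image: per-pixel deviation from the
    # desired shape, in real-world millimetres, flagged wherever the
    # deviation exceeds the standard's tolerance.
    diff = scan_mm - template_mm
    return diff, np.abs(diff) > tol_mm

template = np.zeros((4, 4))   # ideal (flat) surface patch, in mm
scan = template.copy()
scan[1, 2] = 0.8              # a 0.8 mm bump on the scanned surface
diff, mask = defect_mask(scan, template)
```

Expressing the threshold directly in millimetres is what makes the configuration repeatable: the manufacturer sets a tolerance from the standard rather than tuning opaque image-intensity parameters.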
In the present paper, we propose a highly accurate and robust people detector that works well under highly variant and uncertain conditions, such as occlusions, false positives and false detections. These adverse conditions, which initially motivated this research, occur when a robotic platform navigates in an urban environment, and although the scope is originally within the robotics field, the authors believe that our contributions can be extended to other fields. To this end, we propose a multimodal information fusion of laser and monocular camera information. Laser information is modelled using a set of weak classifiers (AdaBoost) to detect people. Camera information is processed using HOG descriptors to classify person/non-person with a linear SVM. A multi-hypothesis tracker trails the position and velocity of each of the targets, providing temporal information to the fusion and allowing recovery of detections even when the laser segmentation fails. Experimental results show that our feedback-based system outperforms previous state-of-the-art methods in performance and accuracy, and that near real-time detection can be achieved.

INTRODUCTION

Human beings are so accustomed to navigating in crowded environments, such as busy streets or shopping malls, that they do not even realize the extreme difficulty that executing such tasks entails. Within the scope of robotics, we aim to obtain a perception system that enhances the current mobile robotic navigation paradigm, and to this end, a robust and fast human detection system is mandatory. Given a robotic platform like a two-wheeled robot ([12] and [15]), the challenge is to build a system that perceives and predicts human behaviour during navigation tasks. However, the basis for the high-level interpretation of observed patterns of human motion is detecting where the human being is.
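A late-fusion step of the kind described above could look like the sketch below, combining per-candidate confidences from the laser classifier, the image classifier and the tracker's temporal prior. The weights, threshold and function name are illustrative assumptions, not the paper's trained values.

```python
def fuse(laser_score, camera_score, track_prior=0.0, w=(0.4, 0.4, 0.2)):
    # Late fusion: weighted sum of the laser classifier confidence,
    # the image classifier confidence and the tracker's temporal
    # prior, all assumed to lie in [0, 1].
    score = w[0] * laser_score + w[1] * camera_score + w[2] * track_prior
    return score, score > 0.5

# A candidate seen by both sensors and supported by an existing track...
score, is_person = fuse(0.9, 0.8, track_prior=0.9)
# ...versus a weak candidate with no track support.
weak_score, weak_is_person = fuse(0.1, 0.2)
```

The track prior is what lets a briefly occluded person keep a high fused score even when one sensor momentarily reports nothing.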
Given the nature of the project, where a highly uncertain environment is constantly sensed and the response of the system depends on human behaviour, we propose a feedback-based system that integrates a laser rangefinder, a monocular camera and temporal information to detect human beings. As we will demonstrate later, the fusion of these sensors provides tremendously robust performance, even under occlusion, thanks to the temporal information, while achieving a high level of accuracy. Although fusion systems have been thoroughly investigated, sensor fusion remains a wide-open problem: many works on robot navigation do not achieve the accuracy required for safe navigation at human walking speeds, resulting in inaccurate and extremely slow systems. Our feedback-based approach outperforms state-of-the-art methods, since it is able to detect people even when laser or image detections are not possible, thanks to the fusion of laser, camera and temporal information provided as feedback by the multi-hypothesis tracker. Moreover, it is able to work in nearly real time due to the nature of ...
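The tracker feedback can be illustrated with a minimal constant-velocity track that coasts through a missed detection and then blends the next measurement back in. This is a crude alpha-beta-style sketch under simplified assumptions, not the paper's multi-hypothesis tracker; the class name, gain and timestep are hypothetical.

```python
import numpy as np

class Track:
    """Constant-velocity track: coasts through missed detections so the
    fusion can recover targets the laser segmentation dropped."""

    def __init__(self, pos, vel, dt=0.1):
        self.pos = np.asarray(pos, dtype=float)
        self.vel = np.asarray(vel, dtype=float)
        self.dt = dt

    def predict(self):
        # No detection this frame: propagate with constant velocity.
        self.pos = self.pos + self.vel * self.dt
        return self.pos

    def update(self, measurement, gain=0.5):
        # Detection available: blend the prediction and the measurement,
        # and nudge the velocity estimate with the same innovation.
        innovation = np.asarray(measurement, dtype=float) - self.pos
        self.pos = self.pos + gain * innovation
        self.vel = self.vel + (gain / self.dt) * innovation
        return self.pos

track = Track(pos=[0.0, 0.0], vel=[1.0, 0.0])
track.predict()            # frame with a missed detection: coast
track.update([0.12, 0.0])  # next detection pulls the estimate back
```

Feeding the predicted position back into the fusion stage is what turns isolated per-frame detectors into a system that survives occlusions.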
Abstract-We present a featureless pose estimation method that, in contrast to current Perspective-n-Point (PnP) approaches, does not require n point correspondences to obtain the camera pose, allowing pose estimation from natural shapes that do not necessarily have distinctive features such as corners or intersecting edges. Instead of using n correspondences (e.g., extracted with a feature detector), we use the raw polygonal representation of the observed shape and directly estimate the pose in the pose space of the camera. Compared with a general PnP method, this approach requires neither n point correspondences nor a priori knowledge of the object model (except its scale), which is registered with a picture taken from a known robot pose. Moreover, we achieve higher precision because all the information in the shape contour is used to minimize the area between the projected and the observed shape contours. To emphasize the non-use of n point correspondences between the projected template and the observed contour shape, we call the method Planar P∅P. The method is demonstrated both in simulation and in a real application consisting of UAV localization, where comparisons against a precise ground truth are provided.
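One building block of such a contour-area objective is computing polygon area itself, for which the shoelace formula suffices; a minimal sketch follows. This toy only measures the area of a single polygon from its ordered vertices; the actual method minimizes the area enclosed between the projected template contour and the observed contour over the full camera pose, which this sketch does not attempt.

```python
import numpy as np

def polygon_area(pts):
    # Signed shoelace area of a closed polygon given as an (N, 2) array
    # of vertices in order; the closing edge pts[-1] -> pts[0] is implicit.
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
area = polygon_area(square)
```

Because an area integral aggregates the whole contour, a cost built from it uses every boundary pixel instead of a handful of feature points, which is the intuition behind the method's precision claim.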