Abstract: Three advanced natural interaction modalities for mobile robot guidance in an indoor environment were developed and compared using two tasks and quantitative metrics to measure performance and workload. The first interaction modality is based on direct physical interaction: the human user pushes the robot to displace it. The second and third interaction modalities exploit 3D vision-based human-skeleton tracking, allowing the user to guide the robot either by walking in front of it or by pointing toward a desired location. In the first task, the participants were asked to guide the robot between different rooms in a simulated physical apartment, requiring rough movement of the robot through designated areas. The second task evaluated robot guidance in the same environment through a set of waypoints, which required accurate movements. The three interaction modalities were implemented on a generic differential-drive mobile platform equipped with a pan-tilt system and a Kinect camera. Task completion time and accuracy were used as metrics to assess the users' performance, while the NASA-TLX questionnaire was used to evaluate the users' workload. A study with 24 participants indicated that the choice of interaction modality had a significant effect on completion time (F(2,61)=84.874, p<0.001), accuracy (F(2,29)=4.937, p=0.016), and workload (F(2,68)=11.948, p<0.001). Direct physical interaction required less time, provided higher accuracy, and imposed lower workload than the two contactless interaction modalities. Between the two contactless modalities, person following was systematically better than pointing control: the participants completed the tasks faster and with less workload.

Index Terms: Human-robot interaction, mobile robots, direct physical interaction, person tracking, person following, gesture recognition
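To make the person-following modality concrete, the sketch below shows the kind of proportional controller such a behavior might use on a differential-drive base, assuming the Kinect skeleton tracker supplies the person's torso position in the robot frame. The gains, limits, target distance, and the `follow_step` interface are illustrative assumptions, not the implementation evaluated in the study.

```python
import math

# Illustrative tuning values; real gains would be tuned on the platform.
K_LIN = 0.6          # proportional gain on distance error [1/s]
K_ANG = 1.2          # proportional gain on bearing error [1/s]
TARGET_DIST = 1.2    # desired following distance [m]
MAX_LIN = 0.5        # linear velocity limit [m/s]
MAX_ANG = 1.0        # angular velocity limit [rad/s]

def clamp(value, limit):
    return max(-limit, min(limit, value))

def follow_step(person_x, person_z):
    """One control step of a person-following behavior.

    person_x, person_z: tracked torso position in the robot/camera frame
    (x lateral, positive to the right; z forward), e.g. from Kinect
    skeleton tracking. Returns (v, w): linear [m/s] and angular [rad/s]
    velocity commands for a differential-drive base (positive w = left).
    """
    distance = math.hypot(person_x, person_z)
    bearing = math.atan2(person_x, person_z)  # 0 when person is straight ahead

    v = clamp(K_LIN * (distance - TARGET_DIST), MAX_LIN)
    w = clamp(-K_ANG * bearing, MAX_ANG)  # turn toward the person
    return v, w

# Example: person 2 m ahead, slightly to the right of the robot.
print(follow_step(0.3, 2.0))
```

The robot slows to a stop as the person reaches the target following distance, so walking toward or away from the robot directly modulates its forward speed.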
To achieve improved human-robot interaction, it is necessary to allow the human participant to interact with the robot in a natural way. In this work, a gesture recognition algorithm based on dynamic time warping was implemented, with natural interaction with a mobile robot as the use-case scenario. The inputs are gesture trajectories obtained using a Microsoft Kinect sensor. Trajectories are stored in the person's frame of reference. Furthermore, the recognition is position-invariant, meaning that only one learned sample is needed to recognize the same gesture performed at another position in the gestural space. In the experiments, a set of gestures for a robot waiter was used to train the gesture recognition algorithm. The experimental results show that the proposed modifications of the standard gesture recognition algorithm improve the robustness of the recognition.
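A minimal sketch of this scheme follows: trajectories are expressed in the person's frame by subtracting a body reference point (here the torso joint, an assumption), which yields the position invariance described above, and standard dynamic time warping is then used to match a sample against one stored template per gesture. The function names and the nearest-template decision rule are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def normalize(traj, torso):
    """Express a hand trajectory in the person's frame of reference by
    subtracting the torso position, making recognition position-invariant."""
    return np.asarray(traj, dtype=float) - np.asarray(torso, dtype=float)

def dtw_distance(a, b):
    """Standard dynamic-time-warping distance between two trajectories
    (arrays of shape (n, d) and (m, d)) with Euclidean point cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def recognize(sample, templates):
    """Return the label of the stored template closest to the sample;
    one template per gesture suffices thanks to the normalization."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))

# Usage: a gesture learned at one position matches the same gesture
# performed elsewhere, since both are normalized to the person's frame.
templates = {"wave": normalize([[1.0, 1.5], [1.2, 1.7], [1.0, 1.9]], [1.0, 1.0])}
sample = normalize([[3.0, 2.5], [3.2, 2.7], [3.0, 2.9]], [3.0, 2.0])
print(recognize(sample, templates))  # -> "wave"
```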
In this paper, we present the adaptation to mobile wheeled robots of a sensorless (in De Luca's sense [1], i.e., without extra sensors) collision detection approach previously used on robotic arms. The method is based on detecting torque disturbances and does not require a model of the robot's dynamics. We then consider the feasibility of developing control-by-physical-interaction strategies using the adapted technique.
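One simple, model-free way to realize this idea is sketched below: the measured wheel torque (typically inferred from motor current) is compared against its own slowly varying trend, and a sudden deviation is flagged as a collision. This is only a plausible reading of "detecting torque disturbances without a dynamic model"; the class name, filter constant, and threshold are assumptions for illustration, not the authors' algorithm.

```python
class CollisionDetector:
    """Model-free collision detection sketch: flag a collision when the
    measured wheel torque deviates sharply from its recent low-pass trend.
    On a real base the torque would typically be inferred from motor
    current (torque = kt * current)."""

    def __init__(self, alpha=0.05, threshold=0.8):
        self.alpha = alpha          # low-pass smoothing factor (assumed)
        self.threshold = threshold  # disturbance threshold [N*m] (assumed)
        self.baseline = None        # slowly varying torque estimate

    def update(self, measured_torque):
        """Feed one torque sample; return True if a collision is suspected."""
        if self.baseline is None:
            self.baseline = measured_torque
            return False
        disturbance = measured_torque - self.baseline
        # Track the slow trend so gradual load changes are absorbed over time,
        # while abrupt spikes stand out as disturbances.
        self.baseline += self.alpha * disturbance
        return abs(disturbance) > self.threshold

detector = CollisionDetector()
for torque in [0.10, 0.12, 0.11, 1.50, 0.13]:  # synthetic samples; spike = bump
    print(detector.update(torque))
```

The same disturbance signal could, in principle, be fed back as a control input, which is the control-by-physical-interaction direction the abstract alludes to.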