Most of the feedback received by operators of current robot teleoperation systems is graphical. When a large variety of robot data needs to be displayed, however, this may lead to operator overload. The research presented in this paper focuses on offloading part of the feedback to other human senses, specifically the sense of touch, to reduce the load imposed by the interface and, as a consequence, to increase the operator's level of situation awareness. Graphical and vibro-tactile versions of collision-proximity feedback were evaluated in a search task using a virtual teleoperated robot. The parameters measured included task time, number of collisions between the robot and the environment, number of objects found, and the quality of post-experiment reports in the form of sketch maps. Our results indicate that the combined use of graphical and vibro-tactile feedback led to an increase in the quality of the sketch maps, a possible indication of increased operator situation awareness, as well as a slight decrease in the number of robot collisions.

KEYWORDS: virtual reality, robot teleoperation, multi-sensory interfaces, vibro-tactile feedback, collision proximity detection.
INTRODUCTION

The process of robot teleoperation may be divided into four primary activities: sensing the state of the robot and the remote environment, making sense of that state, deciding on the next action to be taken, and carrying out that action. Any of these steps may make use of automation. The human-robot interaction (HRI) cycle in Figure 1 repeats indefinitely as the task is carried out. In the case of urban search-and-rescue (USAR), the main focus area of this paper, little automation is generally present, though the use of point navigation has become a common approach in robot teleoperation [22].

USAR teleoperation is generally performed with ordinary input devices such as a keyboard, mouse, and joystick. Most, if not all, of the information sensed by the robot is presented on a graphical display. During a mission, the operator uses this interface not only as a means to understand the state of the robot and its surrounding environment, but also as a tool to complete mission goals. Depending on how the data is represented on screen, succeeding at both of these tasks may be very cognitively demanding. This increase in cognitive load may cause a decrease in operator situation awareness (SA) [12], and hence hinder the performance of the entire HRI system [10][24][29].

The research presented here aims to evaluate the impact on SA and performance when part of the data transmitted by the robot is displayed to the operator using senses other than vision. Specifically, the proposed interface uses a body-worn vibro-tactile display to give the operator feedback on collision proximity between the robot and the remote environment. In a four-way comparison, the use of vibro-tactile feedback is compared with the use of no feedback, the use of graphical feedback, and the use of both types of feedback.
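To make the collision-proximity feedback concrete, the mapping from a range reading to a vibration level can be sketched as follows. This is a minimal illustration, not the paper's implementation: the thresholds `d_min` and `d_max` and the linear ramp are assumptions, chosen only to show the general idea of driving a tactor harder as the robot approaches an obstacle.

```python
def proximity_to_intensity(distance, d_min=0.2, d_max=1.5):
    """Map an obstacle distance (metres) to a tactor intensity in [0, 1].

    Hypothetical scheme: full vibration at or below d_min, no vibration
    at or beyond d_max, and a linear ramp in between.
    """
    if distance <= d_min:
        return 1.0
    if distance >= d_max:
        return 0.0
    return (d_max - distance) / (d_max - d_min)


# Example: readings from a (hypothetical) ring of range sensors around
# the robot, one value per body-worn tactor.
readings = [0.1, 0.85, 2.0]
intensities = [proximity_to_intensity(d) for d in readings]
```

With the assumed thresholds, a reading of 0.1 m saturates the tactor at full intensity, 2.0 m leaves it off, and intermediate distances scale linearly, so the operator feels stronger vibration on the side of the body facing the nearest obstacle.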