2007
DOI: 10.1002/rob.20215

Evolving interface design for robot search tasks

Abstract: This paper describes two steps in the evolution of human-robot interaction designs developed by the University of Massachusetts Lowell (UML) and the Idaho National Laboratory (INL) to support urban search and rescue tasks. We conducted usability tests to compare the two interfaces, one of which emphasized three-dimensional mapping while the other design emphasized the video feed. We found that participants desired a combination of the interface design approaches. As a result, we changed the UML system to augme…

Cited by 30 publications (17 citation statements)
References 28 publications
“…Yanco et al presented work comparing interface design elements in a mock urban search and rescue task using eight urban search and rescue professionals [22]. This experiment compared the visualization of information from the robots, while our current work is based on combining and orchestrating the visualization of the information and the amount of initiative afforded the robot.…”
Section: A. Expert User Experiments (mentioning)
confidence: 99%
“…Regardless of use domain, an unmanned vehicle must be able to detect and avoid obstacles, build data representations, plan paths, and accept a variety of inputs consistent with task demands. The framework has been going through an iterative development cycle where performance and behaviors have been iteratively developed in the laboratory in response to user needs and field evaluated and tested by INL and external users [3,5]. The framework ( Figure 2) consists of: (1) a robot-sensor architecture for interfacing a variety of robot platforms, perceptual sensors, and algorithmic capabilities, (2) a communications server for sending and receiving messages to trigger capabilities via the operator control unit and external processes, and (3) an application layer that consists of task level and interaction behaviors for intelligent unmanned ground vehicle navigation.…”
Section: RIK Overview (mentioning)
confidence: 99%
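The three-part framework quoted above (a robot-sensor layer, a communications server that routes messages to and from the operator control unit, and an application layer of task-level behaviors) can be sketched as a minimal publish/subscribe layering. This is an illustrative assumption only: the class and method names below (CommServer, SensorLayer, NavigationBehavior) are hypothetical and do not reflect the actual RIK API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CommServer:
    """(2) Communications server: routes messages between layers
    (and, in the real framework, the OCU and external processes)."""
    handlers: dict = field(default_factory=dict)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic: str, payload) -> None:
        for handler in self.handlers.get(topic, []):
            handler(payload)

class SensorLayer:
    """(1) Robot-sensor layer: publishes perceptual data onto the bus."""
    def __init__(self, bus: CommServer):
        self.bus = bus

    def report_obstacle(self, distance_m: float) -> None:
        self.bus.publish("obstacle", distance_m)

class NavigationBehavior:
    """(3) Application layer: a task-level behavior reacting to sensor data."""
    def __init__(self, bus: CommServer):
        self.commands = []
        bus.subscribe("obstacle", self.on_obstacle)

    def on_obstacle(self, distance_m: float) -> None:
        # Hypothetical guarded-motion rule: stop when an obstacle is close.
        self.commands.append("stop" if distance_m < 0.5 else "proceed")

bus = CommServer()
sensors = SensorLayer(bus)
nav = NavigationBehavior(bus)
sensors.report_obstacle(2.0)   # far obstacle
sensors.report_obstacle(0.3)   # near obstacle
print(nav.commands)            # ['proceed', 'stop']
```

The point of the sketch is the decoupling the quote describes: the behavior layer never touches a sensor directly, only messages routed by the communications server, which is what lets different robot platforms and capabilities be swapped underneath.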
“…These RIK interaction modes (and the underlying and associated behaviors) have empirically shown the ability to reduce remote operator workload and improve system user performance when used for search and detection or reconnaissance missions [3,4,6]. While the different interaction modes remained the same, the underlying implementation of the associated tactical behaviors was expanded significantly to accommodate the countermine mission.…”
Section: RIK Behaviors (mentioning)
confidence: 99%
“…With this technique, problems can arise when the human operator does not understand why a part of the system they do not have direct control over is behaving in a particular manner (see Figure 1), usually due to poor situation awareness [1]. Attempts have been made to correct these issues by displaying additional sensor and system state information in the operator control unit (e.g., [8]). …”
Section: Introduction (mentioning)
confidence: 99%