Objective: A series of experiments examined human operators' strategies for interacting with highly (93%) reliable automated decision aids in a binary signal detection task. Background: Operators often interact with automated decision aids in a suboptimal way, achieving performance levels lower than predicted by a statistically ideal model of information integration. To better understand operators' inefficient use of decision aids, the current study compared participants' automation-aided performance levels to the predictions of seven statistical models of collaborative decision making. Method: Participants performed a binary signal detection task that asked them to classify random dot images as either blue- or orange-dominant. They made their judgments either unaided or with assistance from a 93%-reliable automated decision aid that provided either graded (Experiments 1 and 3) or binary (Experiment 2) cues. Analysis compared automation-aided performance to the predictions of seven statistical models of collaborative decision making, including a statistically optimal model (Sorkin & Dai, 1994) and Robinson and Sorkin's (1985) contingent criterion model. Results and Conclusion: Automation-aided sensitivity hewed closest to the predictions of the two least efficient collaborative models, well short of statistically ideal levels. Performance was similar whether the aid provided graded or binary judgments. Model comparisons identified potential strategies by which participants integrated their judgments with the aid's.
Application: Results lend insight into participants' automation-aided decision strategies and provide benchmarks for predicting automation-aided performance levels. Keywords: human-automation interaction, signal detection theory, decision-making strategies, contingent criterion model

Benchmarking Aided Decision Making in a Signal Detection Task

Human operators in everyday and professional contexts work with the assistance of automated decision aids. The assisted tasks often take the form of binary signal detection judgments, which ask a decision maker to classify potentially ambiguous states of the world into either of two discrete categories (Green & Swets, 1966; Macmillan & Creelman, 2005). A credibility assessment aid, for instance, might help organizational decision makers distinguish deceptive from honest responses when questioning interviewees in negotiations or investigations (Jensen, Lowry, & Jenkins, 2011). Analogously, a combat identification system might help soldiers distinguish friends from foes on the battlefield (Wang, Jamieson, & Hollands, 2009). Ideally, assistance from an automated aid will help the human operator achieve higher levels of sensitivity, the ability to distinguish between states of the world. But like the human operator, an automated decision aid performing a signal detection task is typically required to render judgments based on incomplete or uncertain data. The aid's sensitivity will therefore be imperfect, just as the human operato...
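The statistically ideal benchmark referenced above can be made concrete. For two independent, unbiased observers in a two-alternative task, the optimal-integration model of Sorkin and Dai (1994) predicts a team sensitivity equal to the Euclidean sum of the individual d' values. A minimal sketch, assuming a standard equal-variance signal detection model (the human accuracy of 0.80 is an illustrative assumption, not a value from the study):

```python
from statistics import NormalDist

def dprime(p_correct: float) -> float:
    """Sensitivity d' of an unbiased observer from proportion correct
    in a two-alternative task (standard SDT relation: d' = 2 * z(p))."""
    return 2 * NormalDist().inv_cdf(p_correct)

def ideal_team_dprime(d_human: float, d_aid: float) -> float:
    """Statistically optimal combined sensitivity for two independent
    observers (Sorkin & Dai, 1994): the Euclidean sum of the d' values."""
    return (d_human ** 2 + d_aid ** 2) ** 0.5

d_aid = dprime(0.93)      # a 93%-reliable aid: d' ≈ 2.95 if unbiased
d_human = dprime(0.80)    # assumed unaided human accuracy, for illustration
print(round(d_aid, 2))                              # ≈ 2.95
print(round(ideal_team_dprime(d_human, d_aid), 2))  # ≈ 3.40
```

Because the ideal team d' exceeds either individual d', automation-aided sensitivity falling at or below the aid's own level, as reported above, indicates inefficient integration.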
Objective: The present study replicated and extended prior findings of suboptimal automation use in a signal detection task, benchmarking automation-aided performance to the predictions of several statistical models of collaborative decision making. Background: Though automated decision aids can assist human operators to perform complex tasks, operators often use the aids suboptimally, achieving performance lower than statistically ideal. Method: Participants performed a simulated security screening task requiring them to judge whether a target (a knife) was present or absent in a series of colored X-ray images of passenger baggage. They completed the task both with and without assistance from a 93%-reliable automated decision aid that provided a binary text diagnosis. A series of three experiments varied task characteristics including the timing of the aid's judgment relative to the raw stimuli, target certainty, and target prevalence. Results and Conclusion: Automation-aided performance fell closest to the predictions of the most suboptimal model under consideration, one which assumes the participant defers to the aid's diagnosis with a probability of 50%. Performance was similar across experiments. Application: Results suggest that human operators' performance when undertaking a naturalistic search task falls far short of optimal and far lower than prior findings using an abstract signal detection task.
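The 50%-deferral model described above has a simple closed form at the level of accuracy: expected proportion correct is the equal-weight mixture 0.5 * p_aid + 0.5 * p_human. A small Monte Carlo sketch confirms this (the human accuracy of 0.80 is an assumed value for illustration; the published model operates on trial-level decisions, which this simulation abstracts to per-trial accuracies):

```python
import random

def deferral_accuracy(p_human: float, p_aid: float, p_defer: float = 0.5,
                      trials: int = 100_000, seed: int = 1) -> float:
    """Monte Carlo estimate of accuracy under a deferral strategy:
    on each trial the participant adopts the aid's judgment with
    probability p_defer, otherwise keeps their own judgment."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        p = p_aid if rng.random() < p_defer else p_human
        correct += rng.random() < p
    return correct / trials

# Analytic expectation: 0.5 * 0.93 + 0.5 * 0.80 = 0.865
est = deferral_accuracy(p_human=0.80, p_aid=0.93)
print(round(est, 3))  # close to 0.865
```

Note that under this strategy the team can perform worse than the aid alone whenever the human is less accurate than the aid, which is why it ranks as the least efficient model considered.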
Objective: To investigate whether manipulating the format of an automated decision aid’s cues can improve participants’ information integration strategies in a signal detection task. Background: Automation-aided decision making is often suboptimal, falling well short of statistically ideal levels. The choice of format in which the cues from the aid are displayed may help users to better understand and integrate the aid’s judgments with their own. Method: Participants performed a signal detection task that asked them to classify random dot images as either blue or orange dominant. They made their judgments either unaided or with assistance from a 93% reliable automated decision aid. The aid provided a binary judgment, along with an estimate of signal strength in the form of either a raw value, a likelihood ratio, or a confidence rating (Experiments 1 and 2) or a binary judgment along with either a verbal or verbal-visuospatial expression of confidence (Experiment 3). Aided sensitivity was benchmarked to the predictions of various statistical models of collaborative decision making. Results and Conclusion: Aided performance was suboptimal, matching the predictions of some of the least efficient models. Most importantly, performance was similar across cue formats. Application: Results indicate that changes to the format in which cues from a signal detection aid are rendered are unlikely to dramatically improve the efficiency of automation-aided decision making.
Robo‐advisors, a type of automated decision aid, offer consumers a cost‐efficient alternative to traditional financial advisory services. Because aids do not always produce correct judgments, however, users may fail to act appropriately on their advice. To anticipate and protect against suboptimal aid use, designers need to understand the variables that influence automation trust and dependence, including the operators' inherent biases, and the characteristics of the automated system itself. This paper reviews the literature on human interaction with decision aids, aiming to inform the design of robo‐advisory platforms.