Automated diagnostic aids can assist human operators in signal detection tasks, providing alarms, warnings, or diagnoses. Operators often use decision aids poorly, however, falling short of achievable performance levels. Previous research has suggested that operators interact with binary signal detection aids using a sluggish contingent cutoff (CC) strategy (Robinson & Sorkin, 1985), shifting their response criterion in the direction stipulated by the aid’s diagnosis on each trial but making adjustments that are smaller than optimal. The current study tested this model by examining the efficiency of automation-aided signal detection under different levels of task difficulty. In two experiments, participants performed a numeric decision-making task requiring them to make signal or noise judgments on the basis of probabilistic gauge readings. The standard deviation of the readings differed between groups of participants, producing two levels of task difficulty. Data were fit with the CC model and two alternative accounts of automation-aided strategy: a discrete deference (DD) model, which assumes participants defer to the aid on a subset of trials, and a mixture model, which assumes participants choose randomly between the CC and DD strategies on every trial. The mixture model best accounted for the data, indicating multiple forms of inefficiency in operators’ automation use strategies.
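The two strategies contrasted above can be made concrete with a small simulation. The sketch below (not the authors' code) implements hypothetical versions of the contingent cutoff and discrete deference strategies in an equal-variance yes/no detection task with a 93% reliable aid; the observer's d′, the criterion shift, and the deferral rate are arbitrary values chosen for illustration only.

```python
import numpy as np

# Illustrative simulation of two hypothetical automation-use strategies
# in a yes/no signal detection task. All parameter values are assumptions
# for demonstration, not estimates from the study.

rng = np.random.default_rng(0)
n_trials = 100_000
d_prime = 1.0  # assumed observer sensitivity

# Half the trials contain a signal; evidence is Gaussian with unit variance.
signal = rng.random(n_trials) < 0.5
evidence = rng.normal(0.0, 1.0, n_trials) + signal * d_prime

# A 93%-reliable aid cues "signal" or "noise" each trial.
aid_correct = rng.random(n_trials) < 0.93
aid_says_signal = np.where(aid_correct, signal, ~signal)

# Contingent cutoff (CC): shift the response criterion toward the aid's
# diagnosis by a fixed amount each trial (0.5 is a hypothetical shift).
shift = 0.5
cc_criterion = np.where(aid_says_signal, d_prime / 2 - shift, d_prime / 2 + shift)
cc_response = evidence > cc_criterion

# Discrete deference (DD): defer to the aid on a fixed proportion of
# trials; otherwise respond unaided at the neutral criterion.
defer_rate = 0.5  # hypothetical deferral rate
defer = rng.random(n_trials) < defer_rate
unaided_response = evidence > d_prime / 2
dd_response = np.where(defer, aid_says_signal, unaided_response)

for name, resp in [("CC", cc_response), ("DD", dd_response)]:
    print(f"{name} accuracy: {np.mean(resp == signal):.3f}")
```

Under these assumed parameters, both strategies outperform the unaided observer (about 69% correct at d′ = 1) but fall short of the aid's 93% reliability, illustrating the inefficiency both models describe.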
When human operators make signal detection judgments with assistance from an automated decision aid, they perform better than they could unaided but fail to reach optimal sensitivity. We investigated the decision strategies that produce this suboptimal performance. Participants (N = 130) performed a two-response classification task that required them to mentally estimate the mean of a set of randomly sampled values each trial. The task was performed with and without assistance from a 93% reliable decision aid. Psychometric functions were fit to the classification data, and data were fit with two cognitive models of automation use. The first model assumed that participants made automation-aided judgments using a contingent criterion strategy, adjusting their response cutoff for yes vs. no responses following a cue from the aid. The second model, a discrete-state model, assumed that participants made aided judgments by simply deferring to the aid on some proportion of trials. A measure of model fit favored the discrete-state process model, with parameter estimates indicating large individual differences in deferral rate between participants (range = 2% to 95%).
Automated signal detection aids assist human operators in various activities (e.g., medical diagnosis). However, operators tend to interact with these aids in suboptimal ways. An under-considered factor that might influence operators’ aid use is the correlation between the aid’s and the operator’s observations. Although prior research has generally assumed that human operators and automated aids rely on independent information sources, correlated observations may be common in naturalistic automation-aided detection tasks. The present study explored the effects of correlated observations on automation use by analyzing performance in a numerical signal detection task using the Contingent Cutoff (CC) model, a statistical cognitive model of automation-aided decision making. Participants completed the task with and without the assistance of an aid, and in one of two conditions: correlated observations vs. uncorrelated observations. The CC model accurately described observed performance in both conditions. Overall aid-use efficiency was unaffected by correlated observations.
When individuals work together to make decisions in a signal detection task, they typically achieve greater sensitivity as a group than they could each achieve on their own. The present experiments investigated whether metacognitive, or Type 2, signal detection judgments would show a similar pattern of collaborative benefit. Thirty-two participants in Experiment 1 and sixty participants in Experiment 2 completed a signal detection task individually and in groups, and measures of Type 1 and Type 2 sensitivity were calculated from participants’ confidence judgments. Bayesian parameter estimates suggested that regardless of whether teams were given feedback on their performance (Experiment 1) or received no feedback (Experiment 2), no credible differences were observed in metacognitive efficiency between the teams and the better members, nor between the teams and the worse members. These findings suggest that teams may self-assess their performance by deferring metacognitive judgments to the most metacognitively sensitive individual within the team, even without trial-by-trial feedback, rather than integrating their judgments and achieving increased metacognitive awareness of their own performance.
Decision makers working in teams generally achieve higher sensitivity than individual decision makers in a signal detection task. The current experiment asked whether metacognitive, or Type 2, signal detection judgments would show a similar effect. Thirty participants performed a signal detection task both individually and in groups. Measures of Type 1 and Type 2 sensitivity were calculated from participants’ confidence ratings. Robust evidence suggested that when working collaboratively, group metacognitive efficiency exceeded what the worse group members could achieve on their own, but was no different from the metacognitive efficiency of the better members. These results suggest that collaborative teams may assess their performance by defaulting to the judgment of the most metacognitively sensitive team member.
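The Type 1 sensitivity measures referenced in the abstracts above are conventionally computed with equal-variance signal detection formulas. The sketch below (not the studies' analysis code) shows the standard calculation of d′ and the response criterion c from hypothetical hit and false-alarm counts, with a log-linear correction for extreme rates.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) under the equal-variance Gaussian SDT model.

    A log-linear correction (add 0.5 to each count, 1 to each total)
    avoids infinite z-scores when a rate equals 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)       # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Hypothetical counts for illustration only.
d, c = sdt_measures(hits=80, misses=20, false_alarms=30, correct_rejections=70)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

Type 2 (metacognitive) sensitivity measures such as meta-d′ extend this logic to confidence ratings, but require a model-fitting procedure beyond this sketch.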