Automated diagnostic aids can assist human operators in signal detection tasks by providing alarms, warnings, or diagnoses. However, operators often use decision aids inefficiently, falling short of achievable performance levels. Previous research has suggested that operators interact with binary signal detection aids using a sluggish contingent cutoff (CC) strategy (Robinson & Sorkin, 1985), shifting their response criterion each trial in the direction stipulated by the aid’s diagnosis but making adjustments that are smaller than optimal. The current study tested this model by examining the efficiency of automation-aided signal detection under different levels of task difficulty. In two experiments, participants performed a numeric decision-making task requiring them to make signal or noise judgments on the basis of probabilistic gauge readings. The standard deviation of the readings differed between groups of participants, producing two levels of task difficulty. Data were fit with the CC model and two alternative accounts of automation-aided strategy: a discrete deference (DD) model, which assumes participants defer to the aid’s judgment on a subset of trials, and a mixture model, which assumes participants choose randomly between the CC and DD strategies on each trial. The mixture model best accounted for the data, indicating multiple forms of inefficiency in operators’ automation use strategies.
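To make the three candidate strategies concrete, the following Python sketch simulates an aided observer under a standard equal-variance signal detection setup. It is purely illustrative: all numeric values (the sensitivity d′, the aid’s hit and false-alarm rates, the criterion shift, and the deference and mixture probabilities) are assumed parameters chosen for demonstration, not estimates from the experiments reported here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; all values are assumptions, not study estimates.
D_PRIME = 1.5    # separation between signal and noise evidence distributions
P_SIGNAL = 0.5   # prior probability of a signal trial
AID_HIT = 0.85   # aid's hit rate (assumed)
AID_FA = 0.15    # aid's false-alarm rate (assumed)
CC_SHIFT = 0.4   # sluggish criterion shift; the optimal shift would be larger
DD_DEFER = 0.7   # probability of deferring to the aid under the DD strategy
MIX_W = 0.5      # probability of using CC (vs. DD) on a given mixture trial

def simulate(strategy, n_trials=100_000):
    """Return the accuracy of one automation-use strategy."""
    signal = rng.random(n_trials) < P_SIGNAL
    # Observer's noisy evidence: N(0, 1) on noise trials, N(d', 1) on signal trials.
    evidence = rng.normal(0.0, 1.0, n_trials) + D_PRIME * signal
    # Aid's binary diagnosis, correct at the assumed hit/false-alarm rates.
    aid_says_signal = np.where(signal,
                               rng.random(n_trials) < AID_HIT,
                               rng.random(n_trials) < AID_FA)
    unaided_c = D_PRIME / 2  # unbiased criterion midway between distributions

    if strategy == "CC":
        # Contingent cutoff: shift the criterion toward the aid's diagnosis,
        # but by less than the optimal amount (the "sluggish" adjustment).
        criterion = unaided_c - CC_SHIFT * np.where(aid_says_signal, 1, -1)
        respond_signal = evidence > criterion
    elif strategy == "DD":
        # Discrete deference: defer to the aid on a random subset of trials,
        # otherwise respond from the unaided criterion.
        defer = rng.random(n_trials) < DD_DEFER
        respond_signal = np.where(defer, aid_says_signal, evidence > unaided_c)
    else:  # mixture
        # Mixture: choose randomly between the CC and DD strategies each trial.
        use_cc = rng.random(n_trials) < MIX_W
        cc_resp = evidence > unaided_c - CC_SHIFT * np.where(aid_says_signal, 1, -1)
        defer = rng.random(n_trials) < DD_DEFER
        dd_resp = np.where(defer, aid_says_signal, evidence > unaided_c)
        respond_signal = np.where(use_cc, cc_resp, dd_resp)
    return np.mean(respond_signal == signal)

for s in ("CC", "DD", "mixture"):
    print(f"{s:8s} accuracy: {simulate(s):.3f}")
```

Under this toy parameterization, the mixture strategy's accuracy falls between its two components, illustrating how a blend of sluggish criterion shifting and intermittent deference can produce the kinds of joint inefficiencies the model comparison is designed to detect.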