Many critical search tasks, such as airport and medical screening, involve searching for targets that are rarely present. These low-prevalence targets are associated with extremely high miss rates (Wolfe, Horowitz, & Kenner, Nature, 435, 439-440, 2005). The inflated miss rates are caused by a criterion shift, likely due to observers attempting to equate the numbers of misses and false alarms. This equalizing strategy results in a neutral criterion at 50 % target prevalence, but leads to a higher proportion of misses for low-prevalence targets. In the present study, we manipulated participants' perceived number of misses through explicit false feedback. As predicted, the participants in the false-feedback condition committed a higher number of false alarms due to a shifted criterion. Importantly, the participants in this condition were also more successful in detecting targets. These results highlight the importance of perceived prevalence in target search tasks.

Keywords: Signal detection theory · Visual search · Low prevalence · Feedback

Visual search is a task we engage in every day. While some searches are rather trivial in nature (e.g., looking for the shirt we want to wear, finding food in the refrigerator, or locating our car keys), other search tasks play a vital role in our wellbeing. Airport security officers, radiologists, and military personnel all perform critical visual searches that can have serious repercussions if the targets they are searching for go undetected. Just recently, security officials at LAX failed to detect a loaded gun in a handbag (Blankstein & Sewell, 2011) and caused a scare when they mistook an insulin pump for a gun (Blankstein, 2012). These critical search tasks become more difficult when the targets are rare (i.e., have a low prevalence rate), as is often the case.
The likelihood of missing a target is substantially higher for low-prevalence targets, a finding termed the low-prevalence (LP) effect (Wolfe, Horowitz, & Kenner, 2005). Wolfe et al. (2005) found that target miss rates were only 7 % when a target appeared in 50 % of the trials, but rose to 30 % when a target appeared in only 1 % of the trials. This effect has serious implications in critical search tasks such as medical screening, where the prevalence of a target can be less than 1 % (Fenton et al., 2007).

Analyses using signal detection theory (SDT; Green & Swets, 1966) have revealed that the LP effect is the result of a criterion shift rather than of a loss in sensitivity (Wolfe et al., 2007; Wolfe & Van Wert, 2010). As the prevalence of a target decreases, observers become biased against the "target detected" response. Some evidence has also suggested that a speed-accuracy trade-off could contribute to the LP effect (Fleck & Mitroff, 2007), but further research revealed that the trade-off was primarily responsible for misses due to motor-response errors, not misses resulting from a criterion shift (Rich et al., 2008; Van Wert, Horowitz, & Wolfe, 2009). Wolfe et...
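The SDT decomposition described above can be sketched in a few lines: sensitivity (d') and criterion (c) are computed from hit and false-alarm rates under the standard equal-variance Gaussian model. The rates below are hypothetical, chosen only to illustrate the signature of the LP effect (d' roughly constant, c shifting conservative).

```python
from statistics import NormalDist


def sdt_measures(hit_rate, fa_rate):
    """Compute sensitivity (d') and criterion (c) from hit and
    false-alarm rates under the equal-variance Gaussian SDT model."""
    z = NormalDist().inv_cdf           # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion


# Hypothetical rates: at 50 % prevalence the criterion is near neutral;
# at low prevalence both hits and false alarms fall, so d' stays similar
# while c becomes positive (conservative) -- the LP-effect signature.
d_hi, c_hi = sdt_measures(hit_rate=0.93, fa_rate=0.10)
d_lo, c_lo = sdt_measures(hit_rate=0.70, fa_rate=0.02)
print(f"50% prevalence: d'={d_hi:.2f}, c={c_hi:.2f}")
print(f" 1% prevalence: d'={d_lo:.2f}, c={c_lo:.2f}")
```

A criterion shift (c moving positive) with stable d' is what distinguishes the prevalence account from a loss-of-sensitivity account.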
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, hinders new discoveries and the progress of science. Given that blanket and variable alpha levels are both problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does, but none of these statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable.
Three theories of the informational basis for object interception strategies were tested in an experiment in which participants pursued toy helicopters. Helicopters were used as targets because their unpredictable trajectories have different effects on the optical variables that have been proposed as the basis of object interception, providing a basis for determining which variables best explain this behavior. Participants pursued helicopters while the positions of both pursuer and helicopter were continuously monitored. Using models to predict the observed optical trajectories of the helicopter and ground positions of the pursuer, optical acceleration was eliminated as a basis of object interception. A model based on control of optical velocity (COV) provided the best match to pursuer ground movements, while one based on segments of linear optical trajectories (SLOT) provided the best match to the observed optical trajectories. We suggest directions for further research to distinguish between the COV and SLOT models.
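The COV idea can be illustrated with a toy simulation: a pursuer turns in proportion to the rate of change of the target's bearing, which drives the target's optical velocity toward zero (a constant-bearing interception course). This is a minimal sketch, not the authors' model; the speed, gain, and time-step values are arbitrary illustrative choices.

```python
import math


def cov_step(px, py, heading, tx, ty, prev_bearing,
             speed=1.0, gain=3.0, dt=0.1):
    """One step of a toy 'control of optical velocity' pursuer:
    turn in proportion to the bearing rate of the target, nulling
    the target's optical drift.  Parameter values are hypothetical."""
    bearing = math.atan2(ty - py, tx - px)
    bearing_rate = (bearing - prev_bearing) / dt
    heading += gain * bearing_rate * dt      # proportional turn command
    px += speed * math.cos(heading) * dt     # move at constant speed
    py += speed * math.sin(heading) * dt
    return px, py, heading, bearing


# Pursuer starts at the origin heading east; the target drifts right.
px, py, heading = 0.0, 0.0, 0.0
tx, ty = 5.0, 5.0
prev_b = math.atan2(ty - py, tx - px)
closest = math.hypot(tx - px, ty - py)
for _ in range(300):
    tx += 0.05                               # erratic targets would vary this
    px, py, heading, prev_b = cov_step(px, py, heading, tx, ty, prev_b)
    closest = min(closest, math.hypot(tx - px, ty - py))
print(f"closest approach: {closest:.2f}")
```

Because the pursuer only corrects when the bearing drifts, the resulting ground path is the kind of signature the experiment used to discriminate COV from the SLOT and optical-acceleration accounts.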
Pigeons responded to intermittently reinforced classical-conditioning trials with erratic bouts of responding to the CS. Responding depended on whether the prior trial contained a peck, food, or both. A linear-persistence/learning model moved animals into and out of a response state, and a Weibull distribution for the number of within-trial responses governed in-state pecking. Variations of trial and intertrial durations caused correlated changes in the rate and probability of responding, and in model parameters. A novel prediction (in the protracted absence of food, response rates can plateau above zero) was validated. The model predicted smooth acquisition functions when instantiated with the probability of food, but a more accurate jagged learning curve when instantiated with trial-to-trial records of reinforcement. The Skinnerian parameter was dominant only when food could be accelerated or delayed by pecking. These experiments provide a framework for trial-by-trial accounts of conditioning and extinction that increases the information available from the data, permitting them to comment more definitively on complex contemporary models of momentum and conditioning.

Keywords: Autoshaping · Behavioral momentum · Classical conditioning · Dynamic analyses · Instrumental conditioning

Estes's stimulus sampling theory provided the first approximation to a general quantitative theory of learning; by adding a hypothetical attentional mechanism to conditioning, it carried analysis one step beyond extant linear learning models into the realm of theory (Atkinson & Estes, 1962; Bower, 1994; Estes, 1950, 1962; Healy, Kosslyn, & Shiffrin, 1992). Wagner and Rescorla (1972) added the important nuance that the asymptotic level of conditioning might be partitioned among stimuli that are associated with reinforcers, as a function of their reliability as predictors of reinforcement; that refinement has had tremendous and widespread impact (Siegel & Allan, 1996).
The attempt to couch the theory in ways that account for increasing amounts of the variance in behavior has been one of the main engines driving modern learning theory. Models have been the agents of progress, the go-betweens that reshaped both our theoretical inferences about the conditioning processes and our modes of analysis of the data. In this theoretical-empirical dialog, the Rescorla-Wagner (R-W) model has been the paragon.

Despite the elegant mathematical form of their arguments, the predictions of recent learning models are almost always qualitative: a particular constellation of cues is predicted to block or enhance conditioning more than others, due to their differential associability or their history.

Correspondence: Peter Killeen, Department of Psychology, Arizona State University, Box 871104, Tempe, Arizona 85287-1104, USA, Killeen@asu.edu.

Publisher's Disclaimer: The following manuscript is the final accepted manuscript. It has not been subjected to the final copyediting, fact-checking, and proofreading required for formal publication. It is not the definitive, publis...
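The linear-operator updating at the heart of the R-W model, and the contrast drawn above between smooth probability-based curves and jagged trial-to-trial curves, can be sketched for a single CS. The learning-rate values are illustrative, not fitted to any data set.

```python
def rescorla_wagner(outcomes, alpha=0.2, beta=1.0, lam=1.0):
    """Trial-by-trial Rescorla-Wagner updating for a single CS:
    dV = alpha * beta * (lambda - V), with lambda = 0 on
    non-reinforced trials.  Parameters are illustrative only."""
    v, history = 0.0, []
    for reinforced in outcomes:
        target = lam if reinforced else 0.0
        v += alpha * beta * (target - v)   # delta-rule update
        history.append(v)
    return history


# Continuous reinforcement yields a smooth, negatively accelerated
# acquisition curve; an intermittent trial-by-trial record yields the
# jagged trajectory emphasized in the text above.
smooth = rescorla_wagner([True] * 20)
jagged = rescorla_wagner([True, True, False, True, False] * 4)
print(f"smooth asymptote: {smooth[-1]:.3f}")
print(f"jagged endpoint:  {jagged[-1]:.3f}")
```

Instantiating the model with the actual sequence of reinforcers, rather than their average probability, is exactly the move that makes the predicted learning curve jagged.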
Our ability to detect a target in visual search relates to the prevalence of the target, whereby rare targets are missed more often than common targets. The current study sought to identify operator characteristics that could account for the higher miss rates associated with rare targets. The results showed that working-memory capacity, which is strongly related to attentional control and the inhibition of irrelevant information, was significantly correlated with the ability to detect low-prevalence targets. High-capacity observers also exhibited lengthened target-absent responses with rare targets, suggesting that they were more persistent in their searches than others.