The stop-signal task has been used to study normal cognitive control and clinical dysfunction. Its utility derives from a race model that accounts for performance and provides an estimate of the time it takes to stop a movement. This model posits a race between go and stop processes with stochastically independent finish times. However, neurophysiological studies demonstrate that the neural correlates of the go and stop processes produce movements through a network of interacting neurons. The juxtaposition of the computational model with the neural data exposes a paradox: how can a network of interacting units produce behavior that appears to be the outcome of an independent race? The authors report how a simple, competitive network can resolve this paradox and provide an account of what is measured by stop-signal reaction time.
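The independent race can be made concrete with a short simulation. The following is a minimal illustrative sketch, not the authors' model or code: go finish times are drawn from an assumed normal distribution, the stop finish time is the stop-signal delay (SSD) plus a fixed, assumed SSRT, and the two finish times are stochastically independent. All parameter values here are hypothetical.

```python
# Minimal sketch of the independent race model (illustrative parameters).
import random

random.seed(1)

def simulate_trial(ssd, go_mu=400.0, go_sigma=60.0, ssrt=200.0):
    """Return True if the subject responds, i.e., the go process wins the race."""
    go_finish = random.gauss(go_mu, go_sigma)  # go process finish time (ms)
    stop_finish = ssd + ssrt                   # stop process finish time (ms)
    return go_finish < stop_finish

def p_respond(ssd, n=20000):
    """Estimate the probability of responding at a given stop-signal delay."""
    return sum(simulate_trial(ssd) for _ in range(n)) / n

# The probability of responding rises with stop-signal delay, tracing out
# the "inhibition function" that the race model predicts.
for ssd in (50, 150, 250, 350):
    print(f"SSD {ssd:3d} ms -> P(respond) = {p_respond(ssd):.2f}")
```

The paradox described above is that neurons implementing the go and stop processes interact, yet behavior looks like the output of two independent racers such as these.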
The university participant pool is a key resource for behavioral research, and data quality is believed to vary over the course of the academic semester. This crowdsourced project examined time-of-semester variation in 10 known effects, 10 individual differences, and 3 data quality indicators in 20 participant pools (N = 2,696) and in an online sample (N = 737). Weak time-of-semester effects were observed on data quality indicators, participant sex, and a few individual differences: conscientiousness, mood, and stress. However, there was little evidence for time of semester qualifying experimental or correlational effects. The generality of this evidence is unknown because only a subset of the tested effects demonstrated evidence for the original result in the whole sample. Mean characteristics of pool samples change slightly during the semester, but these data suggest that those changes are mostly irrelevant for detecting effects.

Keywords: social psychology; cognitive psychology; replication; participant pool; individual differences; sampling effects; situational effects

Many Labs 3: Evaluating participant pool quality across the academic semester via replication

University participant pools provide access to participants for a great deal of published behavioral research. The typical participant pool consists of undergraduates enrolled in introductory psychology courses that require students to complete some number of experiments over the course of the academic semester. Common variations include recruiting participants from other courses or making study participation an option for extra credit rather than a pedagogical requirement. Research-intensive universities often have a highly organized participant pool with a participant management system for signing up for studies and assigning credit.
Smaller or teaching-oriented institutions often have more informal participant pools that are organized ad hoc each semester or for an individual class. To avoid selection bias based on study content, most participant pools have procedures to avoid disclosing the content or purpose of individual studies during the sign-up process. However, students are usually free to choose the time during the semester at which they sign up to complete the studies. This may introduce a selection bias in which data collection on different dates occurs with different kinds of participants, or in different situational circumstances (e.g., the carefree semester beginning versus the exam-stressed semester end). If participant characteristics differ across the academic semester, then the results of studies may be moderated by the time at which data collection occurs. Indeed, among behavioral researchers there are widespread intuitions, superstitions, and anecdotes about the "best" time to collect data in order to minimize error and maximize power. It is common, for example, to hear stories of an effect being obtained in the first part of the semester that then "d...
Many Labs 3 is a crowdsourced project that systematically evaluated time-of-semester effects across many participant pools. See the Wiki for a table of contents of files and to download the manuscript.
The stop-signal or countermanding task probes the ability to control action by requiring subjects to withhold a planned movement in response to an infrequent stop signal, which they do with variable success depending on the delay of the stop signal. We investigated whether the performance of humans and macaque monkeys in a saccade countermanding task was influenced by stimulus and performance history. In spite of idiosyncrasies across subjects, several trends were evident in both humans and monkeys. Response time decreased after successive trials with no stop signal. Response time increased after successive trials with a stop signal. However, post-error slowing was not observed. Increased response time was observed mainly or only after cancelled (signal-inhibit) trials and not after noncancelled (signal-respond) trials. These global trends were based on rapid adjustments of response time in response to momentary fluctuations in the fraction of stop-signal trials. The effects of trial sequence on the probability of responding were weaker and more idiosyncratic across subjects when the stop-signal fraction was fixed. However, both response time and probability of responding were influenced strongly by variations in the fraction of stop-signal trials. These results indicate that the race model of countermanding performance requires extension to account for these sequential dependencies, and they provide a basis for physiological studies of executive control of countermanding saccade performance.
To explore how eye and hand movements are controlled in a stop task, we introduced effector uncertainty by instructing subjects to initiate and occasionally inhibit eye, hand, or eye + hand movements in response to a color-coded foveal or tone-coded auditory stop signal. Regardless of stop signal modality, stop signal reaction time was shorter for eye movements than for hand movements, but notably did not vary with knowledge about which movement to cancel. Most errors on eye + hand stopping trials were combined eye + hand movements. The probability and latency of signal respond eye and hand movements corresponded to predictions of Logan and Cowan's (1984) race model applied to each effector independently.
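Applying the race model "to each effector independently," as above, means estimating a separate SSRT for the eye and for the hand from each effector's own go RT distribution and inhibition probability. The sketch below uses the standard integration method of SSRT estimation with toy data, not the study's actual measurements; all values are hypothetical.

```python
# Hedged sketch of the integration method for estimating SSRT:
# SSRT = (the p-th quantile of the go RT distribution) - SSD,
# where p is the observed probability of responding at that SSD.
def ssrt_integration(go_rts, ssd, p_respond):
    """Estimate SSRT for one effector via the integration method."""
    rts = sorted(go_rts)
    # Index of the go RT quantile corresponding to P(respond | SSD).
    idx = min(int(p_respond * len(rts)), len(rts) - 1)
    return rts[idx] - ssd

# Toy go RT distributions (ms); each effector is analyzed independently.
eye_rts = list(range(200, 400, 2))
hand_rts = list(range(300, 500, 2))
print(ssrt_integration(eye_rts, ssd=100, p_respond=0.5))   # eye SSRT
print(ssrt_integration(hand_rts, ssd=100, p_respond=0.5))  # hand SSRT
```

In a real analysis the same computation would be run per effector and per SSD; the finding above is that the eye estimate comes out shorter than the hand estimate regardless of stop-signal modality.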
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.