Increasing the number of available sources of information may impair or facilitate performance, depending on the capacity of the processing system. Tests performed on response time distributions are proving to be useful tools in determining the workload capacity (as well as other properties) of cognitive systems. In this article, we develop a framework and relevant mathematical formulae that represent different capacity assays (Miller's race model bound, Grice's bound, and Townsend's capacity coefficient) in the same space. The new space allows a direct comparison between the distinct bounds and the capacity coefficient values and helps explicate the relationships among the different measures. An analogous common space is proposed for the AND paradigm, relating the capacity index to the Colonius-Vorberg bounds. We illustrate the effectiveness of the unified spaces by presenting data from two simulated models (standard parallel, coactive) and a prototypical visual detection experiment. A conversion table for the unified spaces is provided.
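The quantities compared in this unified space have standard definitions that can be sketched numerically. Below is a minimal Python illustration (our own sketch, not the article's code): Miller's race model bound F_A(t) + F_B(t), Grice's bound max(F_A(t), F_B(t)), and Townsend's OR capacity coefficient C(t) = H_AB(t) / [H_A(t) + H_B(t)], where H(t) = -log S(t) is the cumulative hazard. All function names and the use of empirical CDFs are assumptions for illustration.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of response times, evaluated at times t."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / len(rts)

def miller_bound(rt_a, rt_b, t):
    """Miller's race model (upper) bound on the redundant-target CDF."""
    return np.minimum(ecdf(rt_a, t) + ecdf(rt_b, t), 1.0)

def grice_bound(rt_a, rt_b, t):
    """Grice's (lower) bound on the redundant-target CDF."""
    return np.maximum(ecdf(rt_a, t), ecdf(rt_b, t))

def capacity_or(rt_a, rt_b, rt_ab, t):
    """Townsend's OR capacity coefficient C(t) = H_AB / (H_A + H_B),
    with cumulative hazard H(t) = -log S(t)."""
    eps = 1e-12
    def H(rts):
        return -np.log(np.clip(1.0 - ecdf(rts, t), eps, 1.0))
    return H(rt_ab) / (H(rt_a) + H(rt_b) + eps)
```

For an unlimited-capacity, independent parallel (race) model with exponential channels, C(t) is 1 in theory, and the redundant-target CDF falls between the Grice and Miller bounds.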
Systems Factorial Technology is a powerful framework for investigating fundamental properties of human information processing, such as architecture (i.e., serial or parallel processing) and capacity (how processing efficiency is affected by increased workload). The Survivor Interaction Contrast (SIC) and the capacity coefficient are effective measures for determining these underlying properties from response-time data. Each architecture, under the assumption of independent processing, predicts a specific form of the SIC along with some range of capacity. In this study, we explored SIC predictions of discrete-state (Markov process) and continuous-state (linear dynamic) models that allow for certain types of cross-channel interaction. The interaction can be facilitatory or inhibitory: one channel can either speed up or slow down processing in its counterpart. Despite the relative generality of these models, the combination of the architecture-oriented and capacity-oriented analyses provides for precise identification of the underlying system.
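The SIC has a standard definition over the 2x2 factorial manipulation of channel salience (High/Low): SIC(t) = S_LL(t) - S_LH(t) - S_HL(t) + S_HH(t), where S is the survivor function. A minimal sketch, assuming empirical survivor functions estimated from RT samples (the function names are ours, not the article's):

```python
import numpy as np

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return 1.0 - np.searchsorted(rts, t, side="right") / len(rts)

def sic(rt_ll, rt_lh, rt_hl, rt_hh, t):
    """Survivor Interaction Contrast:
    SIC(t) = S_LL(t) - S_LH(t) - S_HL(t) + S_HH(t),
    where subscripts give the (channel 1, channel 2) salience levels."""
    return (survivor(rt_ll, t) - survivor(rt_lh, t)
            - survivor(rt_hl, t) + survivor(rt_hh, t))
```

As a check on the known signature for an independent parallel minimum-time (OR) model: with exponential channels, SIC(t) = (e^(-t) - e^(-2t))^2 >= 0, i.e., entirely non-negative.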
A large body of focused-attention experiments shows that, when presented with color words printed in color, observers report the ink color faster if the carrier word names that color rather than an alternative color: the Stroop effect. There is also a sizable number of so-called "redundant-target" studies (though fewer than for the Stroop task), which are based on divided-attention instructions. These almost always indicate that observers report the presence of a visual target ('redness' in the stimulus) faster if there are two replications of the target (the word RED in red ink) than if only one is present (RED in green, or GREEN in red). The present set of four experiments employs the same stimuli and the same participants in both designs. The evidence supports the traditional interference account of the Stroop effect, but also supports a non-interference parallel-processing account of the word and the color in the divided-attention task. Theorists are challenged to find a unifying model that parsimoniously explains both seemingly contradictory results.
Cognitive load from secondary tasks is a source of distraction causing injuries and fatalities on the roadway. The Detection Response Task (DRT) is an international standard for assessing cognitive load on drivers' attention that can be performed as a secondary task with little to no measurable effect on the primary driving task. We investigated whether decrements in DRT performance were related to the rate of information processing, levels of response caution, or the non-decision processing of drivers. We had pairs of participants take part in the DRT while performing a simulated driving task, manipulated cognitive load via the conversation between driver and passenger, and observed associated slowing in DRT response time. Fits of the single-bound diffusion model indicated that slowing was mediated by an increase in response caution. We propose the novel hypothesis that, rather than the DRT's sensitivity to cognitive load being a direct result of a loss of information processing capacity to other tasks, it is an indirect result of a general tendency to be more cautious when making responses in more demanding situations.
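A single-bound diffusion model can be simulated directly, because the first-passage time of a drift-diffusion process to a single bound follows an inverse-Gaussian (Wald) distribution with mean a/v and shape (a/s)^2. The sketch below is our own illustration, not the authors' fitting code; the parameter names v (drift rate, the rate of information processing), a (threshold, indexing response caution), and ter (non-decision time) follow standard conventions.

```python
import numpy as np

def simulate_single_bound(v, a, ter, n, s=1.0, rng=None):
    """Simulate n RTs from a one-boundary diffusion model.
    Decision times are Wald-distributed with mean a/v and shape (a/s)^2;
    ter is added as non-decision time."""
    rng = np.random.default_rng(rng)
    return ter + rng.wald(a / v, (a / s) ** 2, size=n)
```

The caution account of DRT slowing corresponds to raising a: with v = 2 and ter = 0.3, increasing a from 1.0 to 1.5 raises the mean RT from about 0.8 to about 1.05, with no change in processing rate.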
With the advancement of technologies like in-car navigation and smartphones, concerns about how cognitive functioning is influenced by "workload" are increasingly prevalent. Research shows that spreading effort across multiple tasks can impair cognitive abilities through an overuse of resources, and that similar overload effects arise in difficult single-task paradigms. We developed a novel lab-based extension of the Detection Response Task, which measures workload, and paired it with a Multiple Object Tracking Task to manipulate cognitive load. Load was manipulated either by changing within-task difficulty or by the addition of an extra task. Using quantitative cognitive modelling, we showed that these manipulations cause similar cognitive impairments through diminished processing rates, but that the introduction of a second task tends to invoke more cautious response strategies that do not occur when only difficulty changes. We conclude that more prudence should be exercised when directly comparing multitasking and difficulty-based workload impairments, particularly when relying on measures of central tendency.
This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger, Psychonomic Bulletin & Review, 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.), that was proposed by Shaw and colleagues (e.g., Mulligan & Shaw, Perception & Psychophysics, 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa, Journal of Mathematical Psychology, 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.

Keywords: Response time · Accuracy · Parallel processing · Redundant targets · Interaction contrast · No probability response contrast · Integration · Coactivation · OR task · AND task

How does the cognitive system combine information from separate sources?
This question is central to basic human information processing and also possesses many potential applications, from clinical science to human factors and engineering. In the present study, we bring together two previously distinct approaches that combine to provide strong converging evidence about some of the critical properties of human information processing. The approaches are applicable to the two primary measures of performance in psychological research, response accuracy and response times (hereafter, RTs), so when considered together they allow for strong inference regarding the mechanisms underlying cognitive performance. With regard to the measure of RT, we employed Townsend and Nozawa's (1995) systems factorial technology (hereafter, SFT) framework, and expanded it empirically as we will outline shortly. With regard to the measure of response accuracy, we built on the seminal efforts of Marilyn Shaw and colleagues (e.g., Mulligan & Shaw, 1980).