For disclosure, the data analyzed in this study were part of a larger data collection sample, and we have reported data that included some of the same tasks in other publications. The following link provides a summary of the larger data collection procedure and a reference list of all publications arising from this sample, with information on which tasks were used in each publication: https://osf.io/s5kxb. We reported data on the relationship between sensory discrimination, fluid intelligence, working memory capacity, and attention control in a separate article (Tsukahara, Harrison, Draheim, Martin, & Engle, 2019). We also reported data on the visual arrays tasks in a separate paper (Martin et al., 2019) that extensively discusses the nature of the tasks and the constructs they measure. We reported data from the attention tasks and a follow-up session in Martin, Mashburn, and Engle (2019), which focuses on the predictive validity of the attention measures. The broader issue of measurement concerns in individual differences research, with some discussion of these issues as they pertain to attention control, was discussed in another article (Draheim, Mashburn, & Engle, 2019). In addition, data and ideas from the present study were disseminated in various conference presentations (Draheim,
Reaction time is believed to be a good indicator of the speed and efficiency of mental processes and is a ubiquitous variable in the behavioral sciences. Despite this popularity, there are numerous issues associated with using reaction time (RT), specifically in differential and developmental research. Here, we identify and focus on two main problems: unreliability and sensitivity to speed–accuracy interactions. The use of difference scores is a primary factor that leads to many RT measures having demonstrably low reliability, and RT measures in general often do not properly account for speed–accuracy interactions. Both factors jeopardize the validity and interpretability of results based on RT. Here, we evaluate conceptually and empirically how these issues affect individual differences research. Although the empirical evidence we provide is primarily within the domains of attention control and task switching, we highlight examples from various other areas of psychological inquiry. We also discuss many of the statistical and methodological alternatives available to researchers conducting correlational studies. Ultimately, we encourage researchers comparing individuals of differing cognitive and developmental levels to strongly consider using these alternatives in lieu of RT, specifically RT difference scores.
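The unreliability of difference scores mentioned above can be made concrete with a standard result from classical test theory (this formula is textbook material, not drawn from the abstract itself). The reliability of a difference score D = X − Y depends on the reliabilities of the two component scores and, crucially, on how strongly the components correlate with each other:

```latex
% Reliability of a difference score D = X - Y (classical test theory):
\rho_{DD'} = \frac{\sigma_X^2 \,\rho_{XX'} + \sigma_Y^2 \,\rho_{YY'}
                   - 2\,\rho_{XY}\,\sigma_X \sigma_Y}
                  {\sigma_X^2 + \sigma_Y^2 - 2\,\rho_{XY}\,\sigma_X \sigma_Y}

% With equal variances and a common component reliability \bar{\rho},
% this simplifies to:
\rho_{DD'} = \frac{\bar{\rho} - \rho_{XY}}{1 - \rho_{XY}}
```

The simplified form shows the problem directly: RT conditions within a task (e.g., congruent and incongruent trials) typically correlate highly, so even well-measured components yield an unreliable difference. For instance, with component reliabilities of .80 and a between-condition correlation of .70, the difference score's reliability falls to (.80 − .70) / (1 − .70) ≈ .33.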
Process overlap theory provides a contemporary explanation for the positive correlations observed among cognitive ability measures, a phenomenon which intelligence researchers refer to as the positive manifold. According to process overlap theory, cognitive tasks tap domain-general executive processes as well as domain-specific processes, and correlations between measures reflect the degree of overlap in the cognitive processes that are engaged when performing the tasks. In this article, we discuss points of agreement and disagreement between the executive attention framework and process overlap theory, with a focus on attention control: the domain-general ability to maintain focus on task-relevant information and disengage from irrelevant and no-longer-relevant information. After describing the steps our lab has taken to improve the measurement of attention control, we review evidence suggesting that attention control can explain many of the positive correlations between broad cognitive abilities, such as fluid intelligence, working memory capacity, and sensory discrimination ability. Furthermore, when these latent variables are modeled under a higher-order g factor, attention control has the highest loading on g, indicating a strong relationship between attention control and domain-general cognitive ability. In closing, we reflect on the challenge of directly measuring cognitive processes and provide suggestions for future research.
Cognitive tasks that produce reliable and robust effects at the group level often fail to yield reliable and valid individual differences. An ongoing debate among attention researchers is whether conflict resolution mechanisms are task-specific or domain-general, and the lack of correlation between most attention measures seems to favor the view that attention control is not a unitary concept. We have argued that the use of difference scores, particularly in reaction time, is the primary cause of null and conflicting results at the individual differences level, and that methodological issues with existing tasks preclude making strong theoretical conclusions. The present article is an empirical test of this view in which we used a toolbox approach to develop and validate new tasks hypothesized to reflect attention processes. Here, we administered existing, modified, and new attention tasks to over 400 subjects (final N = 396). Compared to the traditional Stroop and flanker tasks, performance on the accuracy-based measures was more reliable, had stronger intercorrelations, formed a more coherent latent factor, and had stronger associations to measures of working memory capacity and fluid intelligence. Further, attention control fully accounted for the relationship between working memory capacity and fluid intelligence. These results show that accuracy-based tasks can be better suited to individual differences investigations than traditional reaction time tasks, particularly when the goal is to maximize prediction. We conclude that attention control is a unitary concept.
We evaluated the predictive value of the Armed Services Vocational Aptitude Battery (ASVAB) at the latent level, using multitasking as a proxy for real-world job performance. We also examined whether adding measures of attention control to the ASVAB could improve its predictive validity. To answer these questions, data were collected from 171 young adults recruited from the Georgia Institute of Technology and the greater Atlanta community. Both regression and latent variable analyses revealed that the ASVAB does predict multitasking at the latent level but that measures of attention control add substantial predictive validity in explaining multitasking above and beyond the ASVAB, fluid intelligence, and processing speed. Theoretical as well as practical applications of these results are discussed in terms of theories of attention control, and potential cost savings in selection for military positions.