Previous work suggests that lifespan developmental differences in cognitive control reflect maturational and aging-related changes in prefrontal cortex functioning. However, complementary explanations exist: it could be that children and older adults differ from younger adults in how they balance the effort of engaging in control against its potential benefits. Here we test whether the degree of cognitive effort expenditure depends on the opportunity cost of time (the average reward rate per unit time): if the average reward rate is high, participants should withhold cognitive effort, whereas if it is low, they should invest more. In Experiment 1, we examine this hypothesis in children, adolescents, younger adults, and older adults by applying a reward rate manipulation in two cognitive control tasks: a modified Eriksen flanker task and a task-switching paradigm. We found that younger adults and adolescents reflexively withheld effort when the opportunity cost of time was high, whereas older adults and, to a lesser degree, children invested more resources to accumulate reward as quickly as possible. We tentatively interpret these results in terms of age- and task-specific differences in the processing of the opportunity cost of time. We qualify our findings in a second experiment in younger adults, in which we address an alternative explanation of our results and show that the observed age differences in effort expenditure may not result from differences in task difficulty. To conclude, we think that our results present an interesting first step toward relating opportunity costs to motivational processes across the lifespan. We frame the implications of further work in this area within a recent developmental model of resource rationality, which points to developmental sweet spots in cognitive control.
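The opportunity-cost logic described above can be sketched as a toy decision rule (a hypothetical illustration with made-up parameter names, not the authors' model): effort is worth investing only when its expected benefit exceeds its intrinsic cost plus the reward forgone during the time it takes.

```python
def opportunity_cost_of_time(total_reward: float, elapsed_time: float) -> float:
    """Average reward rate per unit time: the reward forgone per unit of
    time spent on anything other than collecting reward."""
    return total_reward / elapsed_time

def should_invest_effort(expected_benefit: float, effort_cost: float,
                         reward_rate: float, time_required: float) -> bool:
    """Toy rule: invest cognitive effort only if its expected benefit
    exceeds its intrinsic cost plus the opportunity cost of the time
    the effortful act would consume."""
    return expected_benefit > effort_cost + reward_rate * time_required

# The same effortful act flips from worthwhile to not worthwhile as the
# background reward rate rises (all values hypothetical):
print(should_invest_effort(5.0, 1.0, reward_rate=0.5, time_required=2.0))  # True
print(should_invest_effort(5.0, 1.0, reward_rate=3.0, time_required=2.0))  # False
```

On this account, withholding effort at high reward rates is not laziness but rational time allocation: every second of slow, careful responding costs more potential reward when reward accrues quickly.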
The notion that humans avoid effortful action is one of the oldest and most persistent in psychology. Influential theories of effort propose that effort valuations are made according to a cost-benefit trade-off: we tend to invest mental effort only when the benefits outweigh the costs. While these models provide a useful conceptual framework, the affective components of effort valuation remain poorly understood. Here, we examined whether primitive components of affective response—positive and negative valence, captured via facial electromyography (fEMG)—can be used to better understand valuations of cognitive effort. Using an effortful arithmetic task, we find that fEMG activity in the corrugator supercilii—thought to index negative valence—(1) tracks the anticipation and exertion of cognitive effort and (2) is attenuated in the presence of high rewards. Together, these results suggest that activity in the corrugator reflects the integration of effort costs and rewards during effortful decision-making.
The now-classic goal-gradient hypothesis posits that organisms increase effort expenditure as a function of their proximity to a goal. Despite nearly a century having passed since its original formulation, goal-gradient-like behaviour in human cognitive performance remains poorly understood: are we more willing to engage in costly cognitive processing when we are near, versus far from, a goal state? Moreover, the computational mechanisms underpinning these potential goal-gradient effects—for example, whether goal proximity affects the fidelity of stimulus encoding, response caution, or other identifiable mechanisms governing speed and accuracy—are unclear. Here, in two experiments, we examine the effect of goal proximity, operationalized as progress towards completion of a rewarded task block, on performance in an attentionally demanding oddball task. Supporting the goal-gradient hypothesis, we found that participants responded more quickly, but not less accurately, when rewards were proximal than when they were distal. Critically, this effect was only observed when participants were given information about goal proximity. Using hierarchical Drift Diffusion Modeling, we found that these apparent goal-gradient performance effects were best explained by increased information-processing efficiency, but also reduced response caution. Taken together, these results suggest that goal gradients could help explain the oft-observed fluctuations in engagement of cognitively effortful processing, extending the scope of the goal-gradient hypothesis to the domain of cognitive tasks.
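The drift-diffusion decomposition referred to above can be illustrated with a minimal, non-hierarchical simulation (a sketch under assumed parameter values, not the authors' fitted model): raising the drift rate, the standard proxy for information-processing efficiency, speeds responses while improving accuracy, whereas lowering the decision boundary, the standard proxy for response caution, speeds responses at the expense of accuracy.

```python
import numpy as np

def simulate_ddm(v, a, n_trials=2000, dt=0.002, noise=1.0, seed=0):
    """Vectorized Euler simulation of a simple drift-diffusion process.
    Evidence starts unbiased at a/2 and drifts at rate v until it reaches
    the upper boundary a (correct) or 0 (error).
    Returns (mean RT in seconds, proportion correct)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, a / 2.0)          # evidence accumulators
    rt = np.zeros(n_trials)                 # response times
    hit_upper = np.zeros(n_trials, dtype=bool)
    done = np.zeros(n_trials, dtype=bool)
    t = 0.0
    while not done.all():
        t += dt
        active = ~done
        x[active] += v * dt + noise * np.sqrt(dt) * rng.standard_normal(active.sum())
        up, down = active & (x >= a), active & (x <= 0)
        hit_upper |= up
        rt[up | down] = t
        done |= up | down
    return rt.mean(), hit_upper.mean()

rt_base, acc_base = simulate_ddm(v=1.0, a=2.0)
rt_drift, acc_drift = simulate_ddm(v=2.0, a=2.0)  # higher processing efficiency
rt_bound, acc_bound = simulate_ddm(v=1.0, a=1.0)  # reduced response caution
# Higher drift: faster AND more accurate. Lower boundary: faster but less accurate.
```

This contrast is why the modeling result matters: a pattern of faster responding without an accuracy cost is the signature of a drift-rate change, which only a joint model of speed and accuracy can cleanly separate from a caution change.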
Multilevel modeling techniques have gained traction among experimental psychologists for their ability to account for dependencies in nested data structures, such as responses nested within participants during an experiment. Increasingly, these techniques are extended to the analysis of binary data (e.g., choices, accuracy). Despite their popularity, these logistic multilevel models are often underexploited: researchers focus primarily—or solely—on the fixed effects and ignore the important heterogeneity that exists within and between participants, which is captured by the random effects. Multilevel modeling textbooks often describe logistic multilevel models as a “simple” extension of linear models but fail to explain thoroughly why the variance components are difficult to estimate and interpret. In this tutorial, we review four techniques for estimating and quantifying residual- and cluster-level variance in logistic multilevel models in an accessible manner, using real data. First, we introduce logistic multilevel modeling, including the interpretation of fixed and random effects. Second, we review the challenges associated with estimating and interpreting within- and between-participant variation in logistic multilevel models. Third, we demonstrate four existing methods of quantifying within- and between-participant variation in logistic multilevel models and discuss their relative advantages and disadvantages. Fourth, we present bootstrapping methods for making statistical inferences about these variance estimates. To facilitate reuse, we provide R code implementing the discussed techniques throughout the text and as supplemental materials.
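The tutorial's own R code is not reproduced here, but one widely used method of quantifying between-participant variation in a logistic multilevel model, the latent-variable approach, can be sketched briefly (in Python, as an illustration; whether it is among the tutorial's four methods is an assumption). Because the level-1 residual of a logistic model has no freely estimated variance, this approach fixes it at the variance of the standard logistic distribution, π²/3 ≈ 3.29, and computes an intraclass correlation on the latent log-odds scale.

```python
import numpy as np

def latent_icc(sigma2_between: float) -> float:
    """Intraclass correlation for a random-intercept logistic model via the
    latent-variable method: the level-1 residual variance is fixed at
    pi^2 / 3 (the variance of the standard logistic distribution), so
    ICC = sigma2_u / (sigma2_u + pi^2 / 3)."""
    return sigma2_between / (sigma2_between + np.pi ** 2 / 3)

# Hypothetical example: a between-participant intercept variance of 1.0
# on the log-odds scale implies that about 23% of the latent-outcome
# variance lies between participants.
print(f"ICC = {latent_icc(1.0):.3f}")
```

Note that this ICC lives on the latent scale, not the observed probability scale; that mismatch is one reason variance components in logistic multilevel models are harder to interpret than their linear-model counterparts.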