Psychological sciences have identified a wealth of cognitive processes and behavioral phenomena, yet struggle to produce cumulative knowledge. Progress is hamstrung by siloed scientific traditions and a focus on explanation over prediction, two issues that are particularly damaging for the study of multifaceted constructs like self-regulation. Here, we derive a psychological ontology from a study of individual differences across a broad range of behavioral tasks, self-report surveys, and self-reported real-world outcomes associated with self-regulation. Though both tasks and surveys putatively measure self-regulation, they show little empirical relationship. Within tasks and surveys, however, the ontology identifies reliable individual traits and reveals opportunities for theoretic synthesis. We then evaluate the predictive power of the psychological measurements and find that while surveys modestly and heterogeneously predict real-world outcomes, tasks largely do not. We conclude that self-regulation lacks coherence as a construct, and that data-driven ontologies lay the groundwork for a cumulative psychological science.
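The evaluation of predictive power described above can be illustrated with a toy simulation: the cross-validated R² of a ridge regression predicting an outcome from two hypothetical feature sets, one (survey-like) carrying outcome-related signal and one (task-like) carrying none. All names, sample sizes, and noise levels here are illustrative assumptions, not the study's data or analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 10
signal = rng.normal(size=n)            # latent trait shared with the outcome
outcome = signal + rng.normal(0, 1.0, n)

# Hypothetical feature sets: the survey-like features share signal with the
# outcome, while the task-like features are unrelated noise.
surveys = signal[:, None] + rng.normal(0, 2.0, (n, p))
tasks = rng.normal(size=(n, p))

def cv_r2(X, y, k=5, alpha=1.0, seed=0):
    """k-fold cross-validated R^2 of closed-form ridge regression."""
    idx = np.random.default_rng(seed).permutation(len(y))
    ss_res = ss_tot = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        Xt, yt = X[train], y[train]
        mx, my = Xt.mean(0), yt.mean()   # center on training data only
        w = np.linalg.solve((Xt - mx).T @ (Xt - mx) + alpha * np.eye(X.shape[1]),
                            (Xt - mx).T @ (yt - my))
        pred = (X[fold] - mx) @ w + my
        ss_res += ((y[fold] - pred) ** 2).sum()
        ss_tot += ((y[fold] - my) ** 2).sum()
    return 1 - ss_res / ss_tot

print(round(cv_r2(surveys, outcome), 2), round(cv_r2(tasks, outcome), 2))
```

In this sketch the survey-like features achieve a clearly positive out-of-sample R² while the noise-only task-like features hover near zero, mirroring the qualitative pattern the abstract reports.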
The ability to regulate behavior in service of long-term goals is a widely studied psychological construct known as self-regulation. This wide interest is in part due to the putative relations between self-regulation and a range of real-world behaviors. Self-regulation is generally viewed as a trait, and individual differences are quantified using a diverse set of measures, including self-report surveys and behavioral tasks. Accurate characterization of individual differences requires measurement reliability, a property frequently characterized in self-report surveys but rarely assessed in behavioral tasks. We remedy this gap by (i) providing a comprehensive literature review of an extensive set of self-regulation measures and (ii) empirically evaluating the test-retest reliability of this battery in a new sample. We find that dependent variables (DVs) from self-report surveys of self-regulation have high test-retest reliability, while DVs derived from behavioral tasks do not. This holds both in the literature and in our sample, although the test-retest reliability estimates in the literature are highly variable. We confirm that this is due to differences in between-subject variability. We also compare different types of task DVs (e.g., model parameters vs. raw response times) in their suitability as individual-difference DVs, finding that certain model parameters are as stable as raw DVs. Our results provide greater psychometric footing for the study of self-regulation and provide guidance for future studies of individual differences in this domain.
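The abstract's point that low test-retest reliability stems from low between-subject variability can be illustrated with a toy simulation. Reliability is estimated here as the correlation between two sessions; the trait and noise standard deviations are illustrative assumptions, not values from the study:

```python
import numpy as np

def retest_reliability(session1, session2):
    """Pearson correlation between two testing sessions,
    a common estimator of test-retest reliability."""
    return np.corrcoef(session1, session2)[0, 1]

rng = np.random.default_rng(0)
n = 200
noise_sd = 1.0  # session-specific measurement noise, identical in both cases

# Survey-like DV: wide between-subject spread relative to the noise.
trait_wide = rng.normal(0, 3.0, n)
r_survey = retest_reliability(trait_wide + rng.normal(0, noise_sd, n),
                              trait_wide + rng.normal(0, noise_sd, n))

# Task-like DV: same noise, but compressed between-subject spread.
trait_narrow = rng.normal(0, 0.5, n)
r_task = retest_reliability(trait_narrow + rng.normal(0, noise_sd, n),
                            trait_narrow + rng.normal(0, noise_sd, n))

# Identical measurement noise, yet the narrow-spread DV is far less reliable.
print(round(r_survey, 2), round(r_task, 2))
```

The contrast comes entirely from the trait variance term in the reliability ratio var(trait) / (var(trait) + var(noise)), which is the mechanism the abstract points to.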
Age-related deterioration in cognitive ability may compromise the ability of older adults to make major financial decisions. We explore whether knowledge and expertise accumulated from past decisions can offset cognitive decline to maintain decision quality over the life span. Using a unique dataset that combines measures of cognitive ability (fluid intelligence), measures of general and domain-specific knowledge (crystallized intelligence), credit report data, and other measures of decision quality, we show that domain-specific knowledge and expertise provide an alternative route to sound financial decisions. That is, cognitive aging does not spell doom for financial decision-making in domains where the decision maker has developed expertise. These results have important implications for public policy and for the design of effective interventions and decision aids.

Over the next decades, the average age of the world's population will rise rapidly. One in five Americans is expected to be over 65 years old by 2030, and the number of people 65 and older worldwide will double by 2035. This "gray tsunami" will propel two trends. The first, described by the life cycle model of economics (1), is that more people who have accumulated wealth for retirement will face difficult decumulation decisions: how quickly to consume their wealth and how to ensure it will last for their remaining years of life. Fig. 1 shows wealth accumulation in the United States by age, with bars representing net worth and wealth held in equities (i.e., stocks and mutual funds), financial holdings that require more active monitoring and choices. In 2011, Americans over 65 collectively managed 43% of US household wealth and 47% of privately held equities.
Furthermore, policy changes [e.g., to defined-contribution retirement plans such as 401(k)s] have transferred many complex financial and healthcare decisions to individuals. The second trend results from one of the most sizable and robust findings in all of psychology: the brain slows with age.
Self-regulation is a broad construct representing the general ability to recruit cognitive, motivational and emotional resources to achieve long-term goals. This construct has been implicated in a host of health-risk behaviors, and is a promising target for fostering beneficial behavior change. Despite its clear importance, the behavioral, psychological and neural components of self-regulation remain poorly understood, which contributes to theoretical inconsistencies and hinders maximally effective intervention development. We outline a research program that seeks to define a neuropsychological ontology of self-regulation, articulating the cognitive components that compose self-regulation, their relationships, and their associated measurements. The ontology will be informed by two large-scale approaches to assessing individual differences: first purely behaviorally using data collected via Amazon's Mechanical Turk, then coupled with neuroimaging data collected from a separate population. To validate the ontology and demonstrate its utility, we will then use it to contextualize health risk behaviors in two exemplar behavioral groups: overweight/obese adults who binge eat and smokers. After identifying ontological targets that precipitate maladaptive behavior, we will craft interventions that engage these targets. If successful, this work will provide a structured, holistic account of self-regulation in the form of an explicit ontology, which will better clarify the pattern of deficits related to maladaptive health behavior, and provide direction for more effective behavior change interventions.
The administration of behavioral and experimental paradigms for psychology research is hindered by lack of a coordinated effort to develop and deploy standardized paradigms. While several frameworks (Mason and Suri, 2011; McDonnell et al., 2012; de Leeuw, 2015; Lange et al., 2015) have provided infrastructure and methods for individual research groups to develop paradigms, missing is a coordinated effort to develop paradigms linked with a system to easily deploy them. This disorganization leads to redundancy in development, divergent implementations of conceptually identical tasks, disorganized and error-prone code lacking documentation, and difficulty in replication. The ongoing reproducibility crisis in psychology and neuroscience research (Baker, 2015; Open Science Collaboration, 2015) highlights the urgency of this challenge: reproducible research in behavioral psychology is conditional on deployment of equivalent experiments. A large, accessible repository of experiments for researchers to develop collaboratively is most efficiently accomplished through an open source framework. Here we present the Experiment Factory, an open source framework for the development and deployment of web-based experiments. The modular infrastructure includes experiments, virtual machines for local or cloud deployment, and an application to drive these components and provide developers with functions and tools for further extension. We release this infrastructure with a deployment (http://www.expfactory.org) that researchers are currently using to run a set of over 80 standardized web-based experiments on Amazon Mechanical Turk. By providing open source tools for both deployment and development, this novel infrastructure holds promise to bring reproducibility to the administration of experiments, and accelerate scientific progress by providing a shared community resource of psychological paradigms.
Consistent decisions are intuitively desirable and theoretically important for utility maximization. Neuroeconomics has established the neurobiological substrate of value representation, but the brain regions that provide input to this network are less well explored. The constructed-preference tradition within behavioral decision research assigns a critical role to associative cognitive processes, suggesting a hippocampal role in making consistent decisions. We compared the performance of 31 patients with medial temporal lobe (MTL) epilepsy and hippocampal lesions, 30 patients with extratemporal lobe epilepsy, and 30 healthy controls on two tasks: binary choices between candy bars based on their preferences, and a number-comparison control task in which the larger number is chosen. MTL patients made more inconsistent choices than the other two groups on the value-based choice task but not the number-comparison task. These inconsistencies correlated with the volume of compromised hippocampal tissue. These results add to increasing evidence of critical MTL involvement in preference construction and value-based choice.
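Choice inconsistency of the kind measured in such binary-choice tasks is commonly quantified by counting intransitive triples (cycles such as a ≻ b, b ≻ c, c ≻ a) among the pairwise choices. A minimal sketch, using a made-up data structure rather than the study's actual coding scheme:

```python
from itertools import combinations

def intransitive_triples(prefs):
    """Count intransitive triples in a set of pairwise choices.

    prefs[(a, b)] = a  means item a was chosen over item b.
    The count is one common index of choice inconsistency.
    """
    items = sorted({x for pair in prefs for x in pair})

    def beats(a, b):
        # Look the pair up in either key order.
        return prefs.get((a, b), prefs.get((b, a))) == a

    count = 0
    for a, b, c in combinations(items, 3):
        # A cycle in either direction is intransitive.
        if beats(a, b) and beats(b, c) and beats(c, a):
            count += 1
        if beats(b, a) and beats(a, c) and beats(c, b):
            count += 1
    return count

# A consistent ranking a > b > c versus a preference cycle.
consistent = {("a", "b"): "a", ("b", "c"): "b", ("a", "c"): "a"}
cyclic = {("a", "b"): "a", ("b", "c"): "b", ("a", "c"): "c"}
print(intransitive_triples(consistent), intransitive_triples(cyclic))  # 0 1
```

A participant whose choices follow a stable preference order produces zero such cycles, so a higher count indicates less consistent value-based choice.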