Psychological science relies on behavioral measures to assess cognitive processing; however, the field has not yet developed a tradition of routinely examining the reliability of these behavioral measures. Reliable measures are essential to draw robust inferences from statistical analyses, and subpar reliability has severe implications for measures’ validity and interpretation. Without examining and reporting the reliability of measurements used in an analysis, it is nearly impossible to ascertain whether results are robust or have arisen largely from measurement error. In this article, we propose that researchers adopt a standard practice of estimating and reporting the reliability of behavioral assessments of cognitive processing. We illustrate the need for this practice using an example from experimental psychopathology, the dot-probe task, although we argue that reporting reliability is relevant across fields (e.g., social cognition and cognitive psychology). We explore several implications of low measurement reliability and the detrimental impact that failure to assess measurement reliability has on interpretability and comparison of results and therefore research quality. We argue that researchers in the field of cognition need to report measurement reliability as routine practice so that more reliable assessment tools can be developed. To provide some guidance on estimating and reporting reliability, we describe the use of bootstrapped split-half estimation and intraclass correlation coefficients to estimate internal consistency and test-retest reliability, respectively. For future researchers to build upon current results, it is imperative that all researchers provide psychometric information sufficient for estimating the accuracy of inferences and informing further development of cognitive-behavioral assessments.
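As a concrete illustration of the split-half procedure the abstract describes, the sketch below estimates bootstrapped split-half reliability (with Spearman-Brown correction) for a simulated reaction-time task. This is a minimal sketch under stated assumptions: the function name, the data layout (a participants-by-trials RT matrix), and the simulated data are all illustrative, not material from the article.

```python
# Hypothetical sketch: bootstrapped split-half reliability for an RT task.
import numpy as np

def splithalf_reliability(rts, n_boot=1000, seed=0):
    """Mean Spearman-Brown-corrected split-half correlation over random splits.

    rts : 2-D array, shape (n_participants, n_trials) of response times.
    """
    rng = np.random.default_rng(seed)
    n_sub, n_trials = rts.shape
    estimates = []
    for _ in range(n_boot):
        # Randomly split trials into two halves for every participant.
        perm = rng.permutation(n_trials)
        half_a = rts[:, perm[: n_trials // 2]].mean(axis=1)
        half_b = rts[:, perm[n_trials // 2 :]].mean(axis=1)
        r = np.corrcoef(half_a, half_b)[0, 1]
        # Spearman-Brown correction compensates for halving the test length.
        estimates.append(2 * r / (1 + r))
    return float(np.mean(estimates))

# Simulated data: stable individual differences plus trial-level noise.
rng = np.random.default_rng(1)
true_speed = rng.normal(500, 50, size=(100, 1))       # per-person mean RT
rts = true_speed + rng.normal(0, 30, size=(100, 80))  # trial noise
print(splithalf_reliability(rts))                     # high, close to 1
```

Because the simulated individual differences are large relative to trial noise, the estimate comes out near 1; shrinking the between-person variance or inflating trial noise drives it down, which is exactly the situation the article warns about for difference-score measures like the dot-probe bias index.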
During social interactions we automatically infer motives, intentions, and feelings from others' bodily cues, especially from the eye region of the face. This cognitive empathic ability is one of the most important components of social intelligence and is essential for effective social interaction. Females on average outperform males in cognitive empathy, and the male sex hormone testosterone is thought to be involved. Testosterone may down-regulate social intelligence not only organizationally, by affecting fetal brain development, but also activationally, through its current effects on the brain. Here, we show that administration of testosterone to 16 young women led to a significant impairment in their cognitive empathy, and that this effect is powerfully predicted by a proxy of fetal testosterone: the right-hand second-to-fourth digit ratio. Our data thus not only demonstrate down-regulatory effects of current testosterone on cognitive empathy, but also suggest that these effects are preprogrammed prenatally by the very same hormone. These findings are important for our understanding of the psychobiology of human social intelligence.
Background: New indices, calculated on data from the widely used Dot Probe Task, were recently proposed to capture variability in biased attention allocation. We observed that it remains unclear which data pattern is meant to be indicative of dynamic bias and thus to be captured by these indices. Moreover, we hypothesized that the new indices are sensitive to SD differences at the response time (RT) level in the absence of bias. Method: Randomly generated datasets were analyzed to assess properties of the Attention Bias Variability (ABV) and Trial Level Bias Score (TL-BS) indices. Sensitivity to created differences in (1) RT standard deviation, (2) mean RT, and (3) bias magnitude was assessed. In addition, two possible definitions of dynamic attention bias were explored by creating differences in (4) frequency of bias switching and (5) bias magnitude in the presence of constant switching. Results: ABV and TL-BS indices were found highly sensitive to increasing SD at the response-time level, insensitive to increasing bias, linearly sensitive to increasing bias magnitude in the presence of bias switches, and non-linearly sensitive to increasing the frequency of bias switches. The ABV index was also found responsive to increasing mean response times in the absence of bias. Conclusion: Recently proposed DPT-derived variability indices cannot uncouple measurement error from bias variability. Significant group differences may be observed even if there is no bias present in any individual dataset. This renders the new indices, in their current form, unfit for empirical purposes. Our discussion focuses on fostering debate and ideas for new research to validate the potentially very important notion of biased attention being dynamic.
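The core confound described above can be illustrated with a toy simulation. This is our own sketch, not the authors' code, and it uses a simplified successive-difference variability index rather than the full TL-BS procedure: congruent and incongruent RTs are drawn from the same distribution, so the true bias is exactly zero, yet the variability index still grows with the RT standard deviation.

```python
# Illustrative sketch: a trial-level bias-score variability index tracks
# raw RT noise even when no attention bias exists in the generating process.
import numpy as np

def tlbs_variability(rt_sd, n_trials=200, seed=0):
    rng = np.random.default_rng(seed)
    # Congruent and incongruent trials share one mean: zero true bias.
    congruent = rng.normal(500, rt_sd, n_trials)
    incongruent = rng.normal(500, rt_sd, n_trials)
    tlbs = incongruent - congruent           # trial-level bias scores
    # Simplified variability index: mean absolute successive difference.
    return np.abs(np.diff(tlbs)).mean()

low = tlbs_variability(rt_sd=20)
high = tlbs_variability(rt_sd=80)
print(low < high)  # True: the index rises with RT noise alone
```

Because the same random seed is reused, the high-noise condition scales the identical noise draws, so the index increases deterministically with `rt_sd` here, mirroring the abstract's point that such indices cannot uncouple measurement error from genuine bias variability.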
Resilience is considered to be the process by which individuals demonstrate more positive outcomes than would be expected, given the nature of the adversity experienced. We propose that a cognitive approach has the potential to guide studies investigating the relationships between adversity, stress, and resilience. We outline a preliminary cognitive model of resilience in order to facilitate the application of cognitive approaches to the investigation of resilience in the face of adversity. We argue that the situationally appropriate application of flexibility or rigidity in affective-cognitive systems is a key element in promoting resilient responses. We propose that this mapping of cognitive processing can be conceptualised as being undertaken by an overarching mapping system, which serves to integrate information from a variety of sources, including the current situation and prior experience, as well as more conscious, goal-driven processes. We propose that a well-functioning mapping system is an integral part of the cognitive basis for resilience to adversity. Our preliminary model is intended to provide an initial theoretical framework to guide research on the development of cognitive functions that are considered to be important in the resilience process.
Background: Cognitive reactivity to sad mood is a vulnerability marker of depression. Implicit self-depressed associations are related to depression status and reduced remission probability. It is unknown whether these cognitive vulnerabilities precede the first onset of depression. Aim: To test the predictive value of cognitive reactivity and implicit self-depressed associations for the incidence of depressive disorders. Methods: A prospective cohort study of 834 never-depressed individuals, followed over a two-year period. The predictive value of cognitive reactivity and implicit self-depressed associations for the onset of depressive disorders was assessed using binomial logistic regression. The multivariate model corrected for baseline levels of subclinical depressive symptoms and neuroticism, for a history of anxiety disorders, for a family history of depressive or anxiety disorders, and for the incidence of negative life events. Results: As single predictors, both cognitive reactivity and implicit self-depressed associations were significantly associated with depression incidence. In the multivariate model, cognitive reactivity was significantly associated with depression incidence, together with baseline depressive symptoms and the number of negative life events, whereas implicit self-depressed associations were not. Conclusion: Cognitive reactivity to sad mood is associated with the incidence of depressive disorders, even when various other depression-related variables are controlled for. Implicit self-depressed associations predicted depression incidence in a bivariate test, but not when controlling for other predictors.
Background: Considerable effort and funding have been spent on developing Attention Bias Modification (ABM) as a treatment for anxiety disorders, theorized to exert therapeutic effects through reduction of a tendency to orient attention towards threat. However, meta-analytical evidence that clinical anxiety is characterized by threat-related attention bias is thin. The largest meta-analysis to date included dot-probe data for n = 337 clinically anxious individuals. Baseline measures of biased attention obtained in ABM RCTs form an additional body of data that has not previously been meta-analyzed. Method: This paper presents a meta-analysis of threat-related dot-probe bias measured at baseline for 1005 clinically anxious individuals enrolled in 13 ABM RCTs. Results: Random-effects meta-analysis indicated no evidence that the mean bias index (BI) differed from zero (k = 13, n = 1005, mean BI = 1.8 ms, SE = 1.26 ms, p = .144, 95% CI [-0.6, 4.3]). Additional Bayes factor analyses also supported the point-zero hypothesis (BF10 = .23), whereas interval-based analysis indicated that mean bias in clinical anxiety is unlikely to extend beyond the 0 to 5 ms interval. Discussion: Findings are discussed with respect to strengths (relatively large samples, possible bypassing of publication bias), limitations (lack of a control comparison, repurposing of data, specificity to dot-probe data), and theoretical and practical context. We suggest that it should no longer be assumed that clinically anxious individuals are characterized by selective attention towards threat. Conclusion: Clinically anxious individuals enrolled in RCTs for Attention Bias Modification are not characterized by threat-related attention bias at baseline.
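For readers unfamiliar with the pooling step in a random-effects meta-analysis, the sketch below implements the common DerSimonian-Laird estimator (one standard choice; the paper does not specify its estimator). The study means and standard errors are invented for illustration and are not the thirteen RCTs analyzed here.

```python
# Minimal DerSimonian-Laird random-effects pooling sketch (illustrative data).
import numpy as np

def random_effects_mean(means, ses):
    means, ses = np.asarray(means, float), np.asarray(ses, float)
    w = 1.0 / ses**2                          # inverse-variance (fixed) weights
    fixed = (w * means).sum() / w.sum()       # fixed-effect pooled mean
    q = (w * (means - fixed) ** 2).sum()      # Cochran's Q heterogeneity statistic
    df = len(means) - 1
    c = w.sum() - (w**2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)             # between-study variance estimate
    w_star = 1.0 / (ses**2 + tau2)            # random-effects weights
    pooled = (w_star * means).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return pooled, se

# Hypothetical per-study bias indices (ms) and their standard errors.
pooled, se = random_effects_mean([2.0, -1.5, 3.0, 0.5], [1.2, 1.5, 1.0, 2.0])
print(pooled, se)
```

The pooled estimate always falls within the range of the study means, and its standard error reflects both within-study error and the estimated between-study variance; a confidence interval spanning zero, as in the abstract, indicates no evidence for a nonzero mean bias.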