Attention-deficit/hyperactivity disorder (ADHD) is frequently characterized as a disorder of executive function (EF). However, behavioral tests of EF, such as go/no-go tasks, often fail to capture the EF deficits revealed by questionnaire-based measures. This discrepancy is usually attributed to questionnaires and behavioral tasks assessing different EF constructs. We propose an additional explanation: the discrepancy stems from the lack of dynamic assessment of decision making (e.g., continuous monitoring of motor behavior, such as velocity and acceleration in choice reaching) in classical versions of behavioral tasks. We test this hypothesis by introducing dynamic assessment, in the form of mouse-cursor motion, into a go/no-go task. Our results indicate that, among healthy college students, self-report measures of ADHD symptoms become strongly associated with behavioral-task performance when continuous assessment (e.g., acceleration of mouse-cursor motion) is introduced.
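The "continuous assessment" described above rests on deriving velocity and acceleration from sampled cursor positions. A minimal sketch of that derivation via finite differences is below; the fixed sampling interval and (x, y) sample format are illustrative assumptions, not the authors' actual pipeline.

```python
def cursor_kinematics(samples, dt):
    """Derive speed and acceleration profiles from a cursor trajectory.

    samples: list of (x, y) positions recorded at a fixed interval dt (seconds).
    Returns (velocity, acceleration) as lists of finite-difference estimates.
    """
    velocity = []
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        # Euclidean distance covered in one sampling interval, per second
        velocity.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    # second difference: change in speed per second
    acceleration = [(v1 - v0) / dt for v0, v1 in zip(velocity, velocity[1:])]
    return velocity, acceleration
```

Peak values of these profiles (e.g., maximum acceleration within a trial) are the kind of dynamic feature the abstract contrasts with accuracy and response time alone.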
The accurate detection of attention-deficit/hyperactivity disorder (ADHD) symptoms, such as inattentiveness and behavioral disinhibition, is crucial for delivering timely assistance and treatment. ADHD is commonly diagnosed and studied with specialized questionnaires and behavioral tests such as the stop-signal task. However, in cases of late-onset or mild forms of ADHD, behavioral measures often fail to gauge the deficiencies highlighted by questionnaires. To improve the sensitivity of behavioral tests, we propose a novel version of the stop-signal task (SST) that integrates mouse-cursor tracking. In two studies, we investigated whether introducing mouse-movement measures to the stop-signal task improves associations with questionnaire-based measures, as compared to the traditional (keypress-based) version of the SST. We also scrutinized the influence of different parameters of stop-signal tasks, such as the method of setting the stop-signal delay or the definition of response-inhibition failure, on these associations. Our results show that a) the stop-signal reaction time (SSRT) shows only a weak association with impulsivity, while mouse-movement measures show strong, significant associations; b) machine learning models trained on the mouse-movement data of "known" participants using a nested cross-validation procedure can accurately predict the impulsivity ratings of "unknown" participants; c) mouse-movement features such as maximum acceleration and maximum velocity are among the most important predictors of impulsivity; d) using preset stop-signal delays prompts behavior that is more indicative of impulsivity.
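The key property of the nested cross-validation in (b) is that hyperparameters are tuned only on inner folds, so each outer-fold prediction is made for participants whose data never influenced model selection. A self-contained sketch of that logic is below; the k-nearest-neighbor regressor, fold counts, and candidate hyperparameters are illustrative assumptions, not the models used in the paper.

```python
def k_folds(n, k):
    """Yield (train_idx, test_idx) lists for k interleaved folds of n items."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        yield [j for f in folds[:i] + folds[i + 1:] for j in f], folds[i]

def knn_predict(train_x, train_y, x, k):
    """Predict a rating for x as the mean rating of its k nearest neighbors."""
    order = sorted(range(len(train_x)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_x[i], x)))
    return sum(train_y[i] for i in order[:k]) / k

def nested_cv(X, y, ks=(1, 3), outer=4, inner=3):
    """Outer folds estimate accuracy; inner folds (training data only) pick k."""
    preds = [None] * len(X)
    for tr, te in k_folds(len(X), outer):
        def inner_err(k):
            err = 0.0
            for itr, ite in k_folds(len(tr), inner):
                for j in ite:
                    p = knn_predict([X[tr[i]] for i in itr],
                                    [y[tr[i]] for i in itr], X[tr[j]], k)
                    err += (p - y[tr[j]]) ** 2
            return err
        best_k = min(ks, key=inner_err)   # tuned without touching the test fold
        for j in te:
            preds[j] = knn_predict([X[i] for i in tr], [y[i] for i in tr],
                                   X[j], best_k)
    return preds
```

Because the held-out participants are excluded from both fitting and tuning, the outer-fold predictions approximate performance on genuinely "unknown" participants.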
The human mind is multimodal, yet most behavioral studies rely on century-old measures of behavior: task accuracy and latency (response time). Multimodal and multisensory analysis of human behavior creates a better understanding of how the mind works. The problem is that designing and implementing such experiments is technically complex and costly. This paper introduces versatile and economical means of developing multimodal-multisensory human experiments. We provide an experimental design framework that automatically integrates and synchronizes measures including electroencephalogram (EEG), galvanic skin response (GSR), eye tracking, virtual reality (VR), body movement, mouse/cursor motion, and response time. Unlike proprietary systems (e.g., iMotions), our system is free and open source; it integrates PsychoPy, Unity, and Lab Streaming Layer (LSL). The system embeds LSL inside PsychoPy/Unity to synchronize multiple sensory signals (gaze motion, EEG, GSR, mouse/cursor movement, and body motion) from low-cost, consumer-grade devices in a simple behavioral task designed in PsychoPy and a virtual reality environment designed in Unity. This tutorial shows a step-by-step process by which a complex multimodal-multisensory experiment can be designed and implemented in a few hours. When the experiment is run, synchronization and recording of the data to disk are handled automatically.
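What LSL's shared clock buys the framework is the ability to relate samples from independently sampled streams after the fact. A conceptual, pure-Python sketch of that alignment step is below; the real system streams data through pylsl outlets and inlets, and the nearest-timestamp pairing, stream contents, and rates here are illustrative assumptions.

```python
import bisect

def align(reference, other):
    """Pair each (t, value) sample in `reference` with the sample in `other`
    whose timestamp is closest, assuming both streams share one clock and
    both lists are sorted by timestamp t."""
    times = [t for t, _ in other]
    paired = []
    for t, value in reference:
        i = bisect.bisect_left(times, t)
        # the nearest neighbor is at index i or i - 1 (whichever exists)
        best = min((j for j in (i - 1, i) if 0 <= j < len(other)),
                   key=lambda j: abs(times[j] - t))
        paired.append((t, value, other[best][1]))
    return paired
```

With every device stamped against the same LSL clock, the same pairing works across EEG, GSR, gaze, and cursor streams regardless of their native sampling rates.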
Mouse tracking, a new action-based measure of behavior, has advanced theories of decision making with the notion that cognitive and social decision making is fundamentally dynamic. Implicit in this theory is that people's decision strategies, such as discounting delayed rewards, are stable across task designs and that mouse trajectory features correspond to specific segments of decision making. By applying the hierarchical drift diffusion model and the Bayesian delay discounting model, we tested these assumptions. Specifically, we investigated the extent to which the "mouse-tracking" design of decision-making tasks (the delay discounting task, DDT, and the stop-signal task, SST) deviates from the standard "keypress" design. We found remarkable agreement in delay discounting rates (intertemporal impatience) obtained in the keypress and mouse-tracking versions of the DDT (ρ = .90), even though these tasks were given about 1 week apart. Rates of evidence accumulation converged well in the two versions (DDT, ρ = .86; SST, ρ = .55). Omission and commission errors in the SST also agreed across versions (ρ = .42 and ρ = .53, respectively). Mouse-motion features such as maximum velocity and AUC (area under the curve) correlated well with nondecision time (ρ = −.42) and boundary separation (ρ = .44), the amount of information that must accumulate before a response is made. These results indicate that response time (RT)-based and motion-based decision tasks converge well at a fundamental level, and that mouse-tracking features such as AUC and maximum velocity do indicate the degree of decision conflict and impulsivity.
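The AUC feature above summarizes how far a trajectory strays from the direct path between start and target. A minimal sketch of one common definition, the unsigned area enclosed between the recorded path and the straight start-to-end line, is below, computed with the shoelace formula; whether the paper uses this exact (absolute rather than signed) variant is an assumption.

```python
def mouse_auc(path):
    """Area between a cursor path [(x, y), ...] and the straight line from
    its first to its last point. Closing the path back to its start forms a
    polygon whose area the shoelace formula gives directly."""
    area2 = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:] + path[:1]):
        area2 += x0 * y1 - x1 * y0   # twice the signed area contribution
    return abs(area2) / 2.0
```

A larger AUC means a more curved approach to the chosen response, which is why the feature is read as an index of decision conflict.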