Precision psychiatry demands the rapid, efficient, and temporally dense collection of large-scale, multi-omic data across diverse samples, for better diagnosis and treatment of dynamic clinical phenomena. To achieve this, we need approaches for measuring behavior that are readily scalable, both across participants and over time. Efforts to quantify behavior at scale are impeded by the fact that our methods for measuring human behavior are typically developed and validated for single time-point assessment, in highly controlled settings, and with relatively homogeneous samples. As a result, when taken to scale, these measures often suffer from poor reliability, generalizability, and participant engagement. In this review, we attempt to bridge the gap between gold-standard behavioral measurements in the lab or clinic and the large-scale, high-frequency assessments needed for precision psychiatry. To do this, we introduce and integrate two frameworks for the translation and validation of behavioral measurements. First, borrowing principles from computer science, we lay out an approach for iterative task development that can optimize behavioral measures based on psychometric, accessibility, and engagement criteria. Second, we advocate for a participatory research framework (e.g., citizen science) that can accelerate task development as well as make large-scale behavioral research more equitable and feasible. Finally, we suggest opportunities enabled by scalable behavioral research to move beyond single time-point assessment and toward dynamic models of behavior that more closely match clinical phenomena.
Mobile- and web-based psychological research is a valuable addition to the set of tools available for scientific study, reducing logistical barriers to research participation and allowing the recruitment of larger and more diverse participant groups. However, this comes at the cost of reduced control over the technology used by participants, which can introduce new sources of variability into study results. In this study, we examined differences in measured performance on timed and untimed cognitive tests between users of common digital devices in 59,587 (Study 1) and 3,818 (Study 2) visitors to TestMyBrain.org, a web-based cognitive testing platform. Controlling for age, gender, educational background, and cognitive performance on an untimed vocabulary test, users of mobile devices, particularly Android smartphones, showed significantly slower performance on tests of reaction time than users of laptop and desktop computers, suggesting that differences in device latency affect measured reaction times. Users of devices that differ in user interface (e.g., screen size, mouse vs. touchscreen) also showed significant differences (p < 0.001) in measured performance on tests requiring fast reactions or fine motor movements. By quantifying the contribution of device differences to measured cognitive performance in an online setting, we hope to improve the accuracy of mobile- and web-based cognitive assessments, allowing these methods to be used more effectively.
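The analysis summarized above amounts to a covariate-adjusted regression: reaction time is modeled as a function of device type while holding demographics and untimed vocabulary performance constant. A minimal sketch of how such a comparison could be run is shown below. The file name and column names (device_type, age, gender, education, vocab_score, mean_rt) are hypothetical illustrations, not the authors' actual pipeline or data.

```python
# Illustrative sketch of a covariate-adjusted device comparison.
# All file and column names are assumptions, not the study's code.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per participant.
df = pd.read_csv("testmybrain_visits.csv")

# OLS regression of mean reaction time on device type (desktop as the
# reference category), controlling for age, gender, education, and
# performance on an untimed vocabulary test.
model = smf.ols(
    "mean_rt ~ C(device_type, Treatment('desktop')) "
    "+ age + C(gender) + C(education) + vocab_score",
    data=df,
).fit()

# Device-type coefficients estimate latency-related offsets relative
# to desktop users, net of the demographic and ability covariates.
print(model.summary())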