Four studies with 180 5- to 7-year-olds, 165 8- to 11-year-olds, and 199 adults show that young children appreciate the distinctive role mechanistic explanations play in tracking causal patterns. Young children attributed greater knowledge to individuals offering mechanistic reasons for a claim than to others who provided equally detailed nonmechanistic reasons. In Study 1, 5- to 7-year-olds attributed greater knowledge to those offering mechanistic reasons. In Studies 2 and 3, all ages (5- to 7-year-olds and adults in Study 2; 5- to 7-year-olds, 8- to 11-year-olds, and adults in Study 3) assigned greater knowledge to those offering mechanistic reasons about causally central features than to those offering nonmechanistic reasons. In Study 4, all ages (5- to 7-year-olds, 8- to 11-year-olds, and adults) modulated this epistemic bias as a function of embedding goals.
Online data collection methods are making developmental research easier and more accessible for researchers and participants alike. While their popularity among developmental scientists soared during the COVID-19 pandemic, their potential goes beyond a means for safe, socially distanced data collection. In particular, advances in video conferencing software have enabled researchers to engage in face-to-face interactions with participants from nearly any location at any time. Because these methods are new, however, many researchers remain uncertain about the differences among available approaches as well as the validity of online methods more broadly. In this article, we aim to address both issues with a focus on moderated (synchronous) data collected using video-conferencing software (e.g., Zoom). First, we review existing approaches for designing and executing moderated online studies with young children. We also present concrete examples of studies that implemented choice and verbal measures (Studies 1 and 2) and looking time (Studies 3 and 4) across both in-person and online moderated data collection methods. Direct comparison of the two methods within each study, as well as a meta-analysis of all studies, suggests that the results from the two methods are comparable, providing empirical support for the validity of moderated online data collection. Finally, we discuss current limitations of online data collection and possible solutions, as well as its potential to increase the accessibility, diversity, and replicability of developmental science.
An increasing number of psychological experiments with children are being conducted on online platforms, in part due to the COVID-19 pandemic. Individual replications have compared the findings of particular experiments online and in-person, but the general effect of online data collection on data collected from children is still unknown. Therefore, the current meta-analysis examines how the effect sizes of developmental studies conducted online compare to the same studies conducted in-person. Our pre-registered analysis includes 145 effect sizes calculated from 24 papers with 2,440 children, ranging in age from four months to six years. We examined several moderators of the effect of online testing, including the role of dependent measure (looking vs. verbal), online study method (moderated vs. unmoderated), and age. The mean effect size of studies conducted in-person (d = .68) was slightly larger than the mean effect size of their counterparts conducted online (d = .54), but this difference was not significant. Additionally, we found no significant moderating effect of dependent measure, online study method, or age. Overall, the results of the current meta-analysis suggest that developmental data collected online are generally comparable to data collected in-person.
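For readers unfamiliar with the effect-size units reported in this abstract, the following Python sketch illustrates how Cohen's d is computed from two group summaries and how a simple fixed-effect meta-analytic mean weights each study by its inverse variance. The numbers are hypothetical, and the weighting scheme is a textbook simplification, not the pre-registered analysis used in the paper.

```python
from math import sqrt

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference between two groups, using the pooled SD."""
    pooled = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

def weighted_mean_d(effects):
    """Fixed-effect meta-analytic mean: inverse-variance-weighted average.

    `effects` is a list of (d, n1, n2) tuples; the sampling variance of d
    is approximated as (n1 + n2)/(n1 * n2) + d**2 / (2 * (n1 + n2)).
    """
    num = den = 0.0
    for d, n1, n2 in effects:
        var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
        w = 1.0 / var
        num += w * d
        den += w
    return num / den

# Hypothetical per-study effects (d, n_group1, n_group2):
online_effects = [(0.5, 30, 30), (0.6, 24, 24), (0.45, 40, 40)]
print(round(weighted_mean_d(online_effects), 2))  # → 0.5
```

Larger studies carry more weight because their effect estimates have smaller sampling variance; a random-effects model (as is standard in developmental meta-analyses) would additionally add a between-study variance term to each weight.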
Previous research shows that children effectively extract and utilize causal information, yet we find that adults doubt children's ability to understand complex mechanisms. Since adults themselves struggle to explain how everyday objects work, why expect more from children? Although remembering details may prove difficult, we argue that exposure to mechanism benefits children via the formation of abstract causal knowledge that supports epistemic evaluation. We tested 240 6- to 9-year-olds' memory for concrete details and their ability to distinguish expertise before, immediately after, or a week after viewing a video about how combustion engines work. By around age 8, children who saw the video remembered mechanistic details and were better able to detect car-engine experts. Beyond detailed knowledge, the current results suggest that children also acquired an abstracted sense of how systems work that can facilitate epistemic reasoning.
Moving beyond distinguishing knowledge and beliefs, we propose two lines of inquiry for the next generation of theory of mind (ToM) research: (1) characterizing the contents of different mental-state representations and (2) formalizing the computations that generate such contents. Studying how children reason about what others think of the self provides an illuminating window into the richness and flexibility of human social cognition.
As adults, we intuitively understand how others' goals influence their information-seeking preferences. For example, you might recommend a dense book full of mechanistic details to someone trying to learn about a topic in depth, but a more lighthearted book filled with surprising stories to someone seeking entertainment. Moreover, you might do this with confidence despite knowing few details about either book. Even though we offer and receive such recommendations frequently as adults, we know little about how the ability to evaluate and recommend information sources to others develops. Two studies examined how children (6–9 years, Eastern U.S. residents, n = 311) and adults (U.S. residents, n = 180) select mechanistic and entertaining information sources for others depending on their goals. Participants recommended books containing mechanistic information to agents who wanted to learn and entertaining information to agents who wanted to have fun. In contrast to adults, who strongly favored entertaining books, children recommended both kinds of books equally to a generally curious agent. These results suggest children can infer others' information-seeking preferences based on their goals and recommend appropriate information sources to satisfy those goals despite possessing little topical knowledge themselves.