Preprocessing of functional MRI (fMRI) involves numerous steps to clean and standardize data before statistical analysis. Generally, researchers create ad hoc preprocessing workflows for each new dataset, building upon a large inventory of available tools. The complexity of these workflows has snowballed with rapid advances in acquisition and processing. We introduce fMRIPrep, an analysis-agnostic tool that addresses the challenge of robust and reproducible preprocessing for fMRI data. fMRIPrep automatically adapts a best-in-breed workflow to the idiosyncrasies of virtually any dataset, ensuring high-quality preprocessing without manual intervention. By introducing visual assessment checkpoints into an iterative integration framework for software testing, we show that fMRIPrep robustly produces high-quality results on a diverse collection of fMRI data. Additionally, fMRIPrep introduces less uncontrolled spatial smoothness than commonly used preprocessing tools. fMRIPrep equips neuroscientists with a high-quality, robust, easy-to-use, and transparent preprocessing workflow, which can help ensure the validity of inference and the interpretability of their results.
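As a BIDS App, fMRIPrep is typically run from the command line on a BIDS-formatted dataset. Below is a minimal sketch of a per-participant invocation; the dataset paths and participant label are hypothetical placeholders, and the many other options (output spaces, resource limits, and so on) are covered by the tool's own documentation.

```python
# Minimal sketch: running fMRIPrep on one subject of a BIDS dataset.
# Paths and the participant label are hypothetical; fMRIPrep itself is a
# BIDS App invoked as `fmriprep <bids_dir> <out_dir> participant [options]`.
import subprocess

cmd = [
    "fmriprep",
    "/data/bids_dataset",         # hypothetical BIDS-formatted input directory
    "/data/derivatives",          # hypothetical output directory for results
    "participant",                # analysis level: fMRIPrep runs per participant
    "--participant-label", "01",  # hypothetical subject label
    "--fs-license-file", "/opt/freesurfer/license.txt",  # FreeSurfer license path
]
subprocess.run(cmd, check=True)   # raises if preprocessing exits with an error
```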
Functional neuroimaging techniques have transformed our ability to probe the neurobiological basis of behaviour and are increasingly being applied by the wider neuroscience community. However, concerns have recently been raised that the conclusions drawn from some human neuroimaging studies are either spurious or not generalizable. Problems such as low statistical power, flexibility in data analysis, software errors, and lack of direct replication apply to many fields, but perhaps particularly to fMRI. Here we discuss these problems, outline current and suggested best practices, and describe how we think the field should evolve to produce the most meaningful answers to neuroscientific questions.

Main text

Neuroimaging, particularly using functional magnetic resonance imaging (fMRI), has become the primary tool of human neuroscience 1, and recent advances in the acquisition and analysis of fMRI data have provided increasingly powerful means to dissect brain function. The most common form of fMRI (known as "blood oxygen level dependent" or BOLD fMRI) measures brain activity indirectly through localized changes in blood oxygenation that occur in relation to synaptic signaling 2. These signal changes provide the ability to map activation in relation to specific mental processes, identify functionally connected networks from resting fMRI 3, characterize neural representational spaces 4, and decode or predict mental function from brain activity 5,6. These advances promise to offer important insights into the workings of the human brain, but also generate the potential for a "perfect storm" of irreproducible results. In particular, the high dimensionality of fMRI data, the relatively low power of most fMRI studies, and the great amount of flexibility in data analysis all potentially contribute to a high rate of false-positive findings.

Recent years have seen intense interest in the reproducibility of scientific results and the degree to which some problematic yet common research practices may be responsible for high rates of false findings in the scientific literature, particularly within psychology but also more generally [7][8][9]. There is growing interest in "meta-research" 10, and a corresponding growth in studies investigating factors that contribute to poor reproducibility. These factors include study design characteristics that may introduce bias, low statistical power, and flexibility in data collection, analysis, and reporting, termed "researcher degrees of freedom" by Simmons and colleagues 8. There is clear concern that these issues may be undermining the value of science: in the UK, the Academy of Medical Sciences recently convened a joint meeting with a number of other funders to explore these issues, while in the US the National Institutes of Health has an ongoing initiative to improve research reproducibility 11.

In this article we outline a number of potentially problematic research practices in neuroimaging that can lead to an increased risk of false or exaggerated results. For each problem…
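The false-positive concern raised above follows from simple arithmetic: with tens of thousands of voxel-wise tests, even a seemingly stringent uncorrected threshold yields many significant voxels by chance alone. The sketch below makes this concrete; the voxel count and threshold are illustrative values, not figures taken from the article.

```python
# Back-of-envelope illustration (numbers are illustrative, not from the paper):
# testing every voxel independently at an uncorrected threshold yields many
# false positives by chance alone, which is why corrected thresholds matter.
n_voxels = 100_000   # plausible order of magnitude for a whole-brain analysis
alpha = 0.001        # a common "uncorrected" voxel-wise threshold

expected_false_positives = n_voxels * alpha
print(f"Expected false-positive voxels under the null: {expected_false_positives:.0f}")
# -> ~100 voxels significant purely by chance, before any true effect exists.
```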
Preprocessing of functional MRI (fMRI) involves numerous steps to clean and standardize data (see "Preprocessing of fMRI in a nutshell" for a summary). Extracting a signal that is most faithful to the underlying neural activity is crucial to ensure the validity of inference and interpretability of results 6.
Summary

Data analysis workflows in many scientific domains have become increasingly complex and flexible. To assess the impact of this flexibility on functional magnetic resonance imaging (fMRI) results, the same dataset was independently analyzed by 70 teams, testing nine ex-ante hypotheses. The flexibility of analytic approaches is exemplified by the fact that no two teams chose identical workflows to analyze the data. This flexibility resulted in sizeable variation in hypothesis test results, even for teams whose statistical maps were highly correlated at intermediate stages of their analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Importantly, meta-analytic approaches that aggregated information across teams yielded significant consensus in activated regions across teams. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytic flexibility can have substantial effects on scientific conclusions, and demonstrate factors related to variability in fMRI analyses. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for multiple analyses of the same data. Potential approaches to mitigate issues related to analytical variability are discussed.
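Aggregating statistical maps across teams, as the meta-analytic consensus described above requires, can be done with Stouffer's method: summing z-values and rescaling by the square root of the number of maps. The sketch below illustrates this on toy data; it is one common image-based aggregation scheme, not necessarily the exact procedure used in the study, and it assumes all maps are registered to a common space.

```python
# Minimal sketch of one standard image-based meta-analysis (Stouffer's method)
# for combining per-team z-statistic maps; the paper's exact aggregation
# procedure may differ. Assumes all maps are aligned to a common space.
import numpy as np

def stouffer_z(z_maps: np.ndarray) -> np.ndarray:
    """Combine z-maps of shape (n_teams, n_voxels) into one consensus z-map:
    the sum of z-values divided by the square root of the number of teams."""
    n_teams = z_maps.shape[0]
    return z_maps.sum(axis=0) / np.sqrt(n_teams)

# Toy data: 70 teams, 1000 voxels, a weak shared effect (0.5) plus unit noise.
rng = np.random.default_rng(0)
z_maps = 0.5 + rng.standard_normal((70, 1000))
consensus = stouffer_z(z_maps)
print(f"Mean consensus z: {consensus.mean():.2f}")
# The shared effect is amplified roughly by sqrt(70), so consensus z ~ 4.2.
```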
BACKGROUND AND OBJECTIVES: Child mobile device use is increasingly prevalent, but research is limited by parent-report survey methods that may not capture the complex ways devices are used. We aimed to implement mobile device sampling, a set of novel methods for objectively measuring child mobile device use.
METHODS: We recruited 346 English-speaking parents and guardians of children aged 3 to 5 years to take part in a prospective cohort study of child media use. All interactions with participants were through e-mail, online surveys, and mobile device sampling; we used a passive-sensing application (Chronicle) in Android devices and screenshots of the battery feature in iOS devices. Baseline data were analyzed to describe usage behaviors and compare sampling output with parent-reported duration of use.
RESULTS: The sample comprised 126 Android users (35 tablets, 91 smartphones) and 220 iOS users (143 tablets, 77 smartphones); 35.0% of children had their own device. The most commonly used applications were YouTube, YouTube Kids, Internet browser, quick search or Siri, and streaming video services. Average daily usage among the 121 children with their own device was 115.3 minutes/day (SD 115.1; range 0.20-632.5) and was similar between Android and iOS devices. Compared with mobile device sampling output, most parents underestimated (35.7%) or overestimated (34.8%) their child's use.
CONCLUSIONS: Mobile device sampling is an unobtrusive and accurate method for assessing mobile device use. Parent-reported duration of mobile device use in young children has low accuracy, and use of objective measures is needed in future research.
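Classifying a parent report as an under- or overestimate requires some agreement criterion between reported and logged minutes. The sketch below uses a hypothetical ±20% tolerance band purely for illustration; the study's actual classification rule is not specified in this summary.

```python
# Sketch of classifying parent-report accuracy against logged device use.
# The tolerance band (±20%) is a hypothetical choice for illustration; the
# paper's actual agreement criterion is not given in this summary.
def classify_report(parent_minutes: float, logged_minutes: float,
                    tolerance: float = 0.20) -> str:
    """Label a parent report as 'under', 'over', or 'accurate' relative to
    objectively logged minutes, within a proportional tolerance band."""
    if logged_minutes == 0:
        return "accurate" if parent_minutes == 0 else "over"
    ratio = parent_minutes / logged_minutes
    if ratio < 1 - tolerance:
        return "under"
    if ratio > 1 + tolerance:
        return "over"
    return "accurate"

# A parent reporting 60 min against ~115 logged minutes underestimates use.
print(classify_report(60, 115.3))  # -> 'under'
```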
This work outlines four challenges for measuring media exposure in families with young children: measuring attitudes and practices; capturing content and context; measuring short bursts of mobile device usage; and integrating data to capture the complexity of household media usage. We illustrate how each of these challenges can be addressed with preliminary data collected with the CAFE tool and visualized on our dashboard. We conclude with future directions, including plans to test the reliability, validity, and generalizability of these measures.