Most brain research to date has focused on the amplitude of evoked fMRI responses, though there has recently been increased interest in measuring the onset, peak latency, and duration of responses as well. A number of modeling procedures provide measures of the latency and duration of fMRI responses. In this work we compare several techniques that vary in their assumptions, model complexity, and interpretation. For each technique, we describe procedures for estimating amplitude, peak latency, and duration and for performing inference in a multi-subject fMRI setting. We then assess the techniques’ relative sensitivity and their propensity for mis-attributing task effects on one parameter (e.g., duration) to another (e.g., amplitude). Finally, we introduce methods for quantifying model misspecification and for assessing the bias and power loss associated with the choice of model. Overall, the results show that it is surprisingly difficult to accurately recover true task-evoked changes in BOLD signal and that there are substantial differences among models in terms of power, bias, and parameter confusability. Because virtually all fMRI studies in cognitive and affective neuroscience employ these models, the results bear on the interpretation of hemodynamic response estimates across a wide variety of psychological and neuroscientific studies.
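As a minimal sketch of the three response parameters discussed above, the snippet below summarizes a hemodynamic response curve by its amplitude (peak height), peak latency (time of the peak), and duration (full width at half maximum). The double-gamma response shape and its parameters are standard canonical defaults, not values taken from this work, and the summary measures are illustrative rather than any specific estimator compared in the paper.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, a1=6.0, a2=16.0, undershoot_ratio=1 / 6.0):
    """Canonical double-gamma HRF sampled at times t (seconds).

    Parameters are the widely used defaults (peak gamma shape 6,
    undershoot gamma shape 16, undershoot scaled by 1/6); these are
    assumptions for illustration, not estimates from data.
    """
    return gamma.pdf(t, a1) - undershoot_ratio * gamma.pdf(t, a2)

def summarize_response(t, h):
    """Return (amplitude, peak latency, FWHM duration) of a response curve."""
    i_peak = int(np.argmax(h))
    amplitude = h[i_peak]
    latency = t[i_peak]
    above = h >= amplitude / 2.0           # samples at or above half maximum
    duration = t[above][-1] - t[above][0]  # full width at half maximum
    return amplitude, latency, duration

t = np.arange(0.0, 30.0, 0.01)             # 30 s window, 10 ms resolution
amp, lat, dur = summarize_response(t, double_gamma_hrf(t))
```

For the canonical parameters above, the peak latency comes out near 5 s, consistent with the usual description of the canonical HRF; fitting such a curve to noisy data, and doing inference on the resulting parameters across subjects, is where the methods compared in the paper diverge.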
In this paper, we examine the validity of the non-parametric spatial bootstrap as a procedure for quantifying errors in estimates of N-point correlation functions. We do this by means of a small simulation study in which we simulate simple point process models and estimate the two-point correlation functions and their errors. The coverage of confidence intervals obtained using the bootstrap is compared with that of intervals obtained by assuming Poisson errors. The bootstrap procedure considered here is adapted for use with spatial (i.e. dependent) data. In particular, we describe a marked point bootstrap where, instead of resampling points or blocks of points, we resample marks assigned to the data points. These marks are numerical values based on the statistic of interest. We describe how the marks are defined for the two- and three-point correlation functions. By resampling marks, the bootstrap samples retain more of the dependence structure present in the data. Furthermore, this method of bootstrap can be performed much more quickly than some other bootstrap methods for spatial data, making it more practical for large datasets. We find that with clustered point datasets, confidence intervals obtained using the marked point bootstrap have empirical coverage closer to the nominal level than confidence intervals obtained using Poisson errors. The bootstrap errors were also found to be closer to the true errors for the clustered point datasets.
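The mark-resampling idea can be sketched in a few lines: for a pair-count statistic at separation r, assign each point a mark equal to its contribution to the statistic (its number of neighbours within r), then bootstrap by resampling the marks with replacement and re-aggregating, rather than recomputing pair counts on resampled point sets. The point pattern, radius, and replicate count below are arbitrary choices for illustration, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy point pattern: 200 uniform (Poisson-like) points on the unit square.
pts = rng.uniform(0.0, 1.0, size=(200, 2))
r = 0.1  # separation scale of interest

# Mark for point i: number of other points within distance r of point i,
# i.e. point i's contribution to the pair count at scale r.
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
marks = (dists <= r).sum(axis=1) - 1      # subtract 1 to drop self-pairs

stat = marks.sum() / 2.0                  # each pair counted once

# Marked point bootstrap: resample marks (not points) with replacement
# and re-aggregate; no pairwise distances are recomputed per replicate.
n, n_boot = len(marks), 500
boot = np.array([
    marks[rng.integers(0, n, n)].sum() / 2.0
    for _ in range(n_boot)
])
lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile interval
```

Because each replicate only resums precomputed marks, the cost per bootstrap replicate is O(n) rather than O(n²), which is the speed advantage over resampling points or blocks and recomputing the statistic from scratch.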