2016
DOI: 10.3389/fninf.2016.00027

CoSMoMVPA: Multi-Modal Multivariate Pattern Analysis of Neuroimaging Data in Matlab/GNU Octave

Abstract: Recent years have seen an increase in the popularity of multivariate pattern (MVP) analysis of functional magnetic resonance imaging (fMRI) data and, to a much lesser extent, magneto- and electro-encephalography (M/EEG) data. We present CoSMoMVPA, a lightweight MVPA (MVP analysis) toolbox implemented in the intersection of the Matlab and GNU Octave languages that treats both fMRI and M/EEG data as first-class citizens. CoSMoMVPA supports all state-of-the-art MVP analysis techniques, including searchlight analyses, c…
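As a hedged illustration of the kind of analysis the abstract describes, the following minimal sketch runs a whole-brain LDA searchlight with CoSMoMVPA. The file names, targets, and chunks are assumptions made for the example; the functions used (cosmo_fmri_dataset, cosmo_spherical_neighborhood, cosmo_nfold_partitioner, cosmo_crossvalidation_measure, cosmo_searchlight, cosmo_map2fmri) are part of the toolbox's documented API.

% Minimal CoSMoMVPA searchlight sketch (file names and labels are assumed).
ds = cosmo_fmri_dataset('glm_betas.nii', ...                % per-condition betas
                        'mask', 'brain_mask.nii', ...
                        'targets', repmat([1;2], 5, 1), ... % 2 conditions
                        'chunks', kron((1:5)', [1;1]));     % 5 runs
nbrhood = cosmo_spherical_neighborhood(ds, 'radius', 3);    % 3-voxel spheres
args = struct();
args.classifier = @cosmo_classify_lda;                      % LDA classifier
args.partitions = cosmo_nfold_partitioner(ds);              % leave-one-run-out
res = cosmo_searchlight(ds, nbrhood, @cosmo_crossvalidation_measure, args);
cosmo_map2fmri(res, 'searchlight_accuracy.nii');            % save accuracy map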

Cited by 513 publications (373 citation statements: 1 supporting, 372 mentioning, 0 contrasting)
References 56 publications (97 reference statements)
“…To account for activation differences between runs, the mean activation across all blocks was subtracted from each voxel's values, separately for each run. Decoding analyses were performed using CoSMoMVPA (Oosterhof, Connolly, & Haxby, 2016), and were carried out separately for each ROI and participant. We used data from four runs to train linear discriminant analysis (LDA) classifiers to discriminate multi‐voxel response patterns (i.e., patterns of voxel activations across all voxels of an ROI) for two conditions (e.g., spatially intact versus spatially jumbled scenes).…”
Section: Methods (mentioning)
confidence: 99%
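A sketch of this ROI procedure under stated assumptions: the run-wise mean subtraction is written out as an explicit loop, and the mask and label setup is hypothetical; cosmo_classify_lda, cosmo_nfold_partitioner, and cosmo_crossvalidation_measure are the relevant CoSMoMVPA functions.

% Hedged sketch: ROI decoding with run-wise demeaning (labels assumed).
ds = cosmo_fmri_dataset('glm_betas.nii', 'mask', 'roi_mask.nii', ...
                        'targets', repmat([1;2], 5, 1), ... % 2 conditions
                        'chunks', kron((1:5)', [1;1]));     % 5 runs
% Subtract the mean activation across blocks from each voxel, per run.
for run = unique(ds.sa.chunks)'
    rows = ds.sa.chunks == run;
    ds.samples(rows,:) = bsxfun(@minus, ds.samples(rows,:), ...
                                mean(ds.samples(rows,:), 1));
end
args = struct();
args.classifier = @cosmo_classify_lda;
args.partitions = cosmo_nfold_partitioner(ds); % train on 4 runs, test on 1
acc = cosmo_crossvalidation_measure(ds, args);
fprintf('ROI decoding accuracy: %.3f\n', acc.samples);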
“…EEG decoding was performed separately for each time point (i.e., every 5 ms) from –200 ms to 800 ms relative to stimulus onset, using CoSMoMVPA (Oosterhof et al., 2016). We used data from all‐but‐one trials for two conditions to train LDA classifiers to discriminate topographical response patterns (i.e., patterns across all electrodes) for two conditions (e.g., spatially intact versus spatially jumbled scenes).…”
Section: Methods (mentioning)
confidence: 99%
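A sketch of such time-point-wise decoding under stated assumptions (the input file and trial labels are hypothetical): cosmo_interval_neighborhood with radius 0 restricts each "searchlight" to a single time bin, and assigning one chunk per trial makes cosmo_nfold_partitioner implement the leave-one-trial-out scheme described above.

% Hedged sketch: time-resolved M/EEG decoding (input and labels assumed).
ds = cosmo_meeg_dataset('sub01_timelock.mat'); % e.g., FieldTrip timelock data
ds.sa.targets = trial_conditions;              % 1 or 2 per trial (assumed)
ds.sa.chunks = (1:size(ds.samples,1))';        % one chunk per trial
nbrhood = cosmo_interval_neighborhood(ds, 'time', 'radius', 0); % single bins
args = struct();
args.classifier = @cosmo_classify_lda;
args.partitions = cosmo_nfold_partitioner(ds); % leave-one-trial-out
acc_map = cosmo_searchlight(ds, nbrhood, @cosmo_crossvalidation_measure, args);
% acc_map.samples holds one decoding accuracy per time point (every 5 ms here).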
“…3e,f) [36,40] or to each other (figure 3g) [30,58]. Importantly, excellent toolboxes that ease the application of RSA in different programming environments are readily available [59,60].
Figure 3. RSA as a quantitative framework for combining models and data from different neuroimaging techniques.
…”
Section: A Tripartite Approach To Tackle Current Methodological Challenges (mentioning)
confidence: 99%
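For context, the CoSMoMVPA side of such an RSA [59,60] can be sketched as follows; the model RDM and the ROI setup are hypothetical stand-ins, while cosmo_target_dsm_corr_measure is the toolbox's documented RSA measure.

% Hedged sketch: correlate a neural RDM with a model RDM (model assumed).
ds = cosmo_fmri_dataset('glm_betas.nii', 'mask', 'roi_mask.nii', ...
                        'targets', (1:6)', ...   % 6 conditions, one pattern each
                        'chunks', ones(6,1));
args = struct();
args.target_dsm = model_rdm;  % 6x6 model dissimilarity matrix (assumed)
rsa_result = cosmo_target_dsm_corr_measure(ds, args);
fprintf('model-neural RDM correlation: %.3f\n', rsa_result.samples);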
“…To calculate the z value associated with each beta value (reflecting the fit of the behavioral matrix with the neural matrix for that searchlight), we used CoSMoMVPA's Monte Carlo cluster statistics function (cosmo_montecarlo_cluster_stat) with multiple-comparison correction (see Oosterhof, Connolly, & Haxby, 2016). This function was run separately for each ROI.…”
Section: Methods (mentioning)
confidence: 99%
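A usage sketch under stated assumptions: ds_group stacks one searchlight beta map per participant (per-participant chunks, all targets set to 1 for a one-sample test); cosmo_cluster_neighborhood and cosmo_montecarlo_cluster_stat are the documented CoSMoMVPA functions.

% Hedged sketch: TFCE-based correction over a group of searchlight maps.
% ds_group: one beta map per participant; ds_group.sa.chunks = subject ids,
% ds_group.sa.targets = ones(nsubj,1) for a one-sample test against zero.
nbrhood = cosmo_cluster_neighborhood(ds_group);      % cluster connectivity
z_ds = cosmo_montecarlo_cluster_stat(ds_group, nbrhood, ...
                                     'niter', 10000, ... % permutations
                                     'h0_mean', 0);      % null: mean beta = 0
% z_ds.samples holds a corrected z value per searchlight center;
% abs(z) > 1.96 corresponds to p < .05, corrected.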
“…Next, null TFCE distributions are generated by randomly flipping the signs of the observed beta values and performing t tests on each of 10,000 permutations. Finally, a z map is derived by comparing, for each searchlight center, the number of times the observed TFCE value was smaller than the maximum TFCE value in the null maps, and dividing this count by the number of iterations (thereby correcting for all comparisons within an ROI; see Oosterhof et al., 2016). …”
Section: Methods (mentioning)
confidence: 99%
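To make the permutation scheme concrete, here is an illustrative sketch of the sign-flipping logic described above; compute_tfce is a hypothetical placeholder for the TFCE transform, since the real computation happens inside cosmo_montecarlo_cluster_stat.

% Illustrative sketch only; compute_tfce is a hypothetical placeholder.
% betas: nsubj x ncenters matrix of searchlight beta values.
nsubj = size(betas, 1);
t_obs = mean(betas,1) ./ (std(betas,[],1) / sqrt(nsubj)); % one-sample t map
tfce_obs = compute_tfce(t_obs);
niter = 10000;
n_exceed = zeros(1, size(betas,2));
for it = 1:niter
    signs = 2*(rand(nsubj,1) > 0.5) - 1;     % random sign flip per subject
    flipped = bsxfun(@times, betas, signs);
    t_null = mean(flipped,1) ./ (std(flipped,[],1) / sqrt(nsubj));
    % count centers where the observed TFCE is below the null map's maximum
    n_exceed = n_exceed + (tfce_obs < max(compute_tfce(t_null)));
end
p_corr = n_exceed / niter;  % corrected p value per searchlight center
z_map = -norminv(p_corr);   % convert to z (Statistics toolbox / package)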