The wavelet transform is widely used in image- and signal-processing applications such as denoising and compression. In this study, we explore the relationship between the wavelet representation of stimuli and MEG signals acquired during a human object-recognition experiment. To investigate the signature of wavelet descriptors in the visual system, we apply five levels of multi-resolution wavelet decomposition to the stimuli presented to participants during MEG recording and extract the approximation and detail (horizontal, vertical, and diagonal) sub-band coefficients at each decomposition level. In addition, employing multivariate pattern analysis (MVPA), a linear support vector machine (SVM) classifier is trained and tested over time on the MEG pattern vectors to decode neural information. We then compute a representational dissimilarity matrix (RDM) at each time point of the MEG data and for each set of wavelet descriptors, using classifier accuracy and one minus the Pearson correlation coefficient, respectively. From the time courses obtained by Pearson-correlating the wavelet-descriptor RDMs with the MEG decoding-accuracy RDM at each time point, our results show that the peak latency of the wavelet-approximation time courses occurs later for higher-level coefficients. Furthermore, examining the neural trace of the detail sub-bands indicates that the overall number of statistically significant time points for the horizontal and vertical detail coefficients is noticeably higher than for the diagonal detail coefficients, consistent with the oblique effect: horizontal and vertical lines are more decodable in the human brain.
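The pipeline described above can be sketched in a few lines. This is a minimal, hedged illustration, not the study's actual code: it uses a hand-rolled 2-D Haar decomposition (the paper does not name its wavelet), toy random arrays in place of the real stimuli, and builds a model RDM as one minus the Pearson correlation between stimulus feature vectors. The sub-band labels follow one common Haar sign convention and are an assumption.

```python
import numpy as np

def haar_descriptors(image, levels=5):
    """Multi-level 2-D Haar decomposition: returns the final approximation
    plus horizontal/vertical/diagonal detail sub-bands at every level."""
    approx, details = image.astype(float), []
    for _ in range(levels):
        a = approx[0::2, 0::2]; b = approx[0::2, 1::2]
        c = approx[1::2, 0::2]; d = approx[1::2, 1::2]
        details.append({"H": (a - b + c - d) / 2.0,   # horizontal detail
                        "V": (a + b - c - d) / 2.0,   # vertical detail
                        "D": (a - b - c + d) / 2.0})  # diagonal detail
        approx = (a + b + c + d) / 2.0                # approximation (LL)
    return approx, details

def model_rdm(feature_vectors):
    """RDM as one minus the Pearson correlation between stimulus features."""
    return 1.0 - np.corrcoef(np.vstack(feature_vectors))

# Toy "stimuli": 4 random 64x64 images (placeholders for the real stimulus set)
rng = np.random.default_rng(0)
stimuli = [rng.standard_normal((64, 64)) for _ in range(4)]
features = [haar_descriptors(s)[0].ravel() for s in stimuli]
rdm = model_rdm(features)  # 4x4, zero diagonal, symmetric
```

In the full analysis, one such model RDM would be built per sub-band and per level, then correlated against the MEG decoding-accuracy RDM at each time point to obtain the reported time courses.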
While the popularity of multivariate pattern classification is growing rapidly in magnetoencephalography (MEG) data analysis, the analysis pipelines used by the neuroscience community still lack some fundamental machine-learning techniques and principles that would increase their effectiveness. Here, we show that MEG decoding accuracy improves significantly when feature selection methods are added to the analysis pipeline. We compare one unsupervised and two supervised feature reduction methods in the current study. Our results show that supervised feature selection methods, such as statistical dependency and mutual information, improve decoding performance and attain higher session-to-session reliability than unsupervised dimensionality reduction methods such as principal component analysis. Furthermore, we demonstrate that the sensors selected at each time point in a visual task are consistent with the pattern reflecting the sweep of information along the ventral visual pathway.
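The comparison described above can be sketched with standard scikit-learn building blocks. This is an illustrative assumption-laden sketch, not the authors' pipeline: synthetic arrays stand in for MEG sensor patterns, a univariate F-test stands in for the "statistical dependency" criterion, and all parameter values (number of features, folds, trial counts) are arbitrary choices for the demo.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Toy stand-in for single-time-point MEG data: trials x sensors
rng = np.random.default_rng(0)
n_trials, n_sensors = 120, 60
X = rng.standard_normal((n_trials, n_sensors))
y = rng.integers(0, 2, n_trials)          # two stimulus conditions
X[y == 1, :10] += 1.0                     # signal confined to 10 "sensors"

pipelines = {
    # supervised: keep the k sensors most informative about the labels
    "mutual_info": make_pipeline(StandardScaler(),
                                 SelectKBest(mutual_info_classif, k=10),
                                 LinearSVC()),
    "f_test":      make_pipeline(StandardScaler(),
                                 SelectKBest(f_classif, k=10),
                                 LinearSVC()),
    # unsupervised: project onto the top principal components, labels unseen
    "pca":         make_pipeline(StandardScaler(),
                                 PCA(n_components=10),
                                 LinearSVC()),
}
scores = {name: cross_val_score(p, X, y, cv=5).mean()
          for name, p in pipelines.items()}
```

Running the same comparison independently at every time point, and inspecting which sensors the supervised selectors retain, is what lets the selected-sensor topography be compared against the expected posterior-to-anterior sweep along the ventral stream.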
KEYWORDS: dimensionality reduction, feature selection, MEG analysis, MEG sensor selection, multivariate pattern analysis, mutual information, statistical dependency