Understanding how brains process sensory signals in natural environments is one of the key goals of twenty-first century neuroscience. While brain imaging and invasive electrophysiology will play key roles in this endeavor, there is also an important role to be played by noninvasive, macroscopic techniques with high temporal resolution, such as electroencephalography (EEG) and magnetoencephalography (MEG). But challenges exist in determining how best to analyze such complex, time-varying neural responses to complex, time-varying, and multivariate natural sensory stimuli. There has been a long history of applying system identification techniques to relate the firing activity of neurons to complex sensory stimuli, and such techniques are now seeing increased application to EEG and MEG data. One particular example involves fitting a filter—often referred to as a temporal response function—that describes a mapping between some feature(s) of a sensory stimulus and the neural response. Here, we first briefly review the history of these system identification approaches and describe a specific technique for deriving temporal response functions known as regularized linear regression. We then introduce a new open-source toolbox for performing this analysis. We describe how it can be used to derive (multivariate) temporal response functions describing a mapping between stimulus and response in both directions. We also explain the importance of regularizing the analysis and how this regularization can be optimized for a particular dataset. We then outline specifically how the toolbox implements these analyses and provide several examples of the types of results that the toolbox can produce. Finally, we consider some of the limitations of the toolbox and opportunities for future development and application.
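The core estimation step described above—a temporal response function fit by ridge-regularized linear regression on time-lagged stimulus features—can be sketched in a few lines of NumPy. This is a minimal illustration on hypothetical toy data, not the toolbox's own implementation; the lag range, regularization value, and simulated signals are arbitrary assumptions:

```python
import numpy as np

def lag_matrix(stim, lags):
    """Design matrix of time-lagged copies of a 1-D stimulus feature."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[:n - lag]  # assumes non-negative lags
    return X

def fit_trf(stim, resp, lags, lam=1.0):
    """Forward TRF via ridge regression: w = (X'X + lam*I)^-1 X'y."""
    X = lag_matrix(stim, lags)
    XtX = X.T @ X
    return np.linalg.solve(XtX + lam * np.eye(XtX.shape[0]), X.T @ resp)

# Toy example: the "neural response" is a delayed, scaled copy of the stimulus.
rng = np.random.default_rng(0)
stim = rng.standard_normal(5000)
resp = 0.8 * np.roll(stim, 5)  # true filter: weight 0.8 at a lag of 5 samples
resp[:5] = 0.0                 # zero out the wrap-around from np.roll

w = fit_trf(stim, resp, lags=np.arange(0, 11), lam=1e-3)
print(int(np.argmax(np.abs(w))))  # prints 5: the fitted TRF peaks at the true lag
```

The regularization parameter `lam` plays the role the abstract highlights: with noisy, correlated stimulus features, increasing it trades a small bias for much lower variance in the estimated filter, and in practice it is tuned by cross-validation on held-out data.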
The human ability to localize sound is essential for monitoring our environment and helps us to analyse complex auditory scenes. Although the acoustic cues mediating sound localization have been established, it remains unknown how these cues are represented in human cortex. In particular, it is still a point of contention whether binaural and monaural cues are processed by the same or distinct cortical networks. In this study, participants listened to a sequence of auditory stimuli from different spatial locations while we recorded their neural activity using electroencephalography (EEG). The stimuli were presented over a loudspeaker array, which allowed us to deliver realistic, free-field stimuli in both the horizontal and vertical planes. Using a multivariate classification approach, we showed that it is possible to decode sound source location from scalp-recorded EEG. Robust and consistent decoding was shown for stimuli that provide binaural cues (i.e. Left vs. Right stimuli). Decoding location when only monaural cues were available (i.e. Front vs. Rear and elevational stimuli) was successful for a subset of subjects and showed less consistency. Notably, the spatio-temporal pattern of EEG features that facilitated decoding differed based on the availability of binaural and monaural cues. In particular, we identified neural processing of binaural cues at around 120 ms post-stimulus and found that monaural cues are processed later, between 150 and 200 ms. Furthermore, different spatial activation patterns emerged for binaural and monaural cue processing. These spatio-temporal dissimilarities suggest the involvement of separate cortical mechanisms in monaural and binaural acoustic cue processing.