The ability to detect sudden changes in the environment is critical for survival. Hearing is hypothesized to play a major role in this process by serving as an "early warning device," rapidly directing attention to new events. Here, we investigate listeners' sensitivity to changes in complex acoustic scenes: what makes certain events "pop out" and grab attention while others remain unnoticed? We use artificial "scenes" populated by multiple pure-tone components, each with a unique frequency and amplitude modulation rate. Importantly, these scenes lack the semantic attributes that may have confounded previous studies, thus allowing us to probe low-level processes involved in auditory change perception. Our results reveal a striking difference between "appear" and "disappear" events. Listeners are remarkably tuned to object appearance: change detection and identification performance are at ceiling, and response times are short, with little effect of scene size, suggesting a pop-out process. In contrast, listeners have difficulty detecting disappearing objects, even in small scenes: performance rapidly deteriorates with growing scene size, response times are slow, and even when a change is detected, the changed component is rarely identified successfully. We also measured change detection performance when a noise or silent gap was inserted at the time of change, or when the scene was interrupted by a distractor that occurred at the time of change but did not mask any scene elements. Gaps adversely affected the processing of item appearance but not disappearance. However, distractors reduced both appearance and disappearance detection. Together, our results suggest a role for neural adaptation and sensitivity to transients in the process of auditory change detection, similar to what has been demonstrated for visual change detection. Importantly, listeners consistently performed better for item addition (relative to deletion) across all scene interruptions used, suggesting a robust perceptual representation of item appearance.
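As a concrete illustration of the kind of stimulus described above, the following is a minimal sketch (not the authors' stimulus code) that synthesizes a scene of amplitude-modulated pure tones and creates a "disappear" change by removing one component at the midpoint. All frequencies, modulation rates, and durations are placeholder values chosen for illustration only.

```python
# Minimal sketch: an artificial "scene" of pure tones, each with its own
# carrier frequency and amplitude-modulation rate, with a "disappear" change
# created by dropping one component halfway through. Parameter values are
# hypothetical, not taken from the study.
import numpy as np

def make_scene(freqs_hz, am_rates_hz, dur_s=1.0, fs=44100):
    """Sum of pure tones, each amplitude-modulated at its own rate."""
    t = np.arange(int(dur_s * fs)) / fs
    scene = np.zeros_like(t)
    for f, am in zip(freqs_hz, am_rates_hz):
        carrier = np.sin(2 * np.pi * f * t)
        envelope = 0.5 * (1 + np.sin(2 * np.pi * am * t))  # 0..1 AM envelope
        scene += envelope * carrier
    return scene / len(freqs_hz)

fs = 44100
freqs = [440, 700, 1100, 1750]   # unique carrier frequencies (Hz)
rates = [3.0, 5.0, 8.0, 12.0]    # unique AM rates (Hz)

pre  = make_scene(freqs, rates, dur_s=1.0, fs=fs)
post = make_scene(freqs[:-1], rates[:-1], dur_s=1.0, fs=fs)  # one component removed
stimulus = np.concatenate([pre, post])  # "disappear" change at the 1 s midpoint
```

Swapping the roles of the two halves (building the longer set second) would produce the corresponding "appear" change.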
In the phenomenon of perceptual filling-in, missing sensory information can be reconstructed via interpolation or extrapolation from adjacent contextual cues by what is necessarily an endogenous, not yet well understood, neural process. In this investigation, sound stimuli were chosen to allow observation of fixed cortical oscillations driven by contextual (but missing) sensory input, thus entirely reflecting endogenous neural activity. The stimulus employed was a 5 Hz frequency-modulated tone, to which brief masker probes (noise bursts) were occasionally added. For half the probes, the rhythmic frequency modulation was additionally removed. Listeners reported whether the tone masked by each probe was perceived as rhythmic or not. Time-frequency analysis of neural responses obtained by magnetoencephalography (MEG) shows that, for maskers without the underlying acoustic rhythm, trials in which rhythm was nonetheless perceived show higher evoked sustained rhythmic power than trials for which no rhythm was reported. The results support a model in which perceptual filling-in is aided by differential co-modulations of cortical activity at rates directly relevant to human speech communication. We propose that the presence of rhythmically modulated neural dynamics predicts the subjective experience of a rhythmically modulated sound in real time, even when the perceptual experience is not supported by corresponding sensory data.
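To make the analysis concrete, here is a minimal sketch, under assumed conditions, of how evoked power at the 5 Hz FM rate might be compared between perceived-rhythm and no-rhythm trials. The epoch arrays, single-channel selection, window length, and sampling rate are hypothetical placeholders, not the authors' actual MEG pipeline.

```python
# Minimal sketch (illustration only): compare evoked power at the 5 Hz FM rate
# during the masker window for trials in which the rhythm was vs. was not
# perceived. `epochs_yes` / `epochs_no` are hypothetical arrays of shape
# (n_trials, n_samples) for one MEG channel, sampled at `fs`.
import numpy as np

def evoked_power_at(epochs, fs, target_hz=5.0):
    evoked = epochs.mean(axis=0)                    # trial average = evoked response
    spectrum = np.fft.rfft(evoked * np.hanning(evoked.size))
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - target_hz))      # frequency bin nearest the FM rate
    return np.abs(spectrum[idx]) ** 2

fs = 1000.0                                         # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
epochs_yes = rng.standard_normal((60, 1000))        # placeholder "rhythm perceived" trials
epochs_no  = rng.standard_normal((60, 1000))        # placeholder "no rhythm" trials

print(evoked_power_at(epochs_yes, fs), evoked_power_at(epochs_no, fs))
```

Averaging across trials before computing the spectrum isolates phase-consistent (evoked) activity at the stimulus rate, which is the quantity contrasted between the two perceptual reports.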
The spectrotemporal response function (STRF) model of neural encoding quantitatively associates dynamic auditory neural (output) responses with a spectrogram-like representation of a dynamic (input) stimulus. STRFs were experimentally obtained from whole-head human cortical responses to dynamic auditory stimuli using magnetoencephalography (MEG). The stimuli employed consisted of unpredictable pure tones presented at a range of rates. The predictive power of the estimated STRFs was found to be comparable to that reported in the cortical single- and multiunit-activity literature. The STRFs were also qualitatively consistent with those obtained from electrophysiological studies in animal models, in particular their local-field-potential-generated spectral distributions and multiunit-activity-generated temporal distributions. Comparison of these MEG STRFs with others obtained using natural speech and music stimuli reveals a general structure consistent with common baseline auditory processing, including evidence for a transition in low-level neural representations of natural speech by 100 ms, when an appropriately chosen stimulus representation was used. It is also demonstrated that MEG-based STRFs contain information similar to that obtained using classic auditory-evoked-potential approaches, but with extended applications to long-duration, non-repeated stimuli.

Author summary: The spectrotemporal response function (STRF) model linking dynamic acoustic stimuli to dynamic neural responses is applied to whole-head, non-invasive magnetoencephalography (MEG) recordings of the human auditory cortex. MEG STRFs were consistent predictors of neural activity, quantitatively and qualitatively, by comparison with those obtained from animal models using local field potential or multiunit activity as neural responses. Comparison of STRFs obtained using stimuli as diverse as tone clouds, natural speech, and music revealed a common structure consistent with shared baseline auditory processing, when an appropriately chosen stimulus representation was used.
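The forward model underlying an STRF is a convolution of the stimulus spectrogram S(f, t) with a spectrotemporal kernel, so that the predicted response is r(t) ≈ sum over f and lag τ of STRF(τ, f) · S(f, t − τ). The sketch below illustrates that model with a simple ridge-regression fit and correlation-based predictive power; the regularization choice, lag count, and data are my assumptions for illustration, not necessarily the estimation method used in the study.

```python
# Minimal sketch of the standard STRF forward model, fit here with ridge
# regression on placeholder data (an assumption, not the paper's method).
import numpy as np

def make_lagged(spec, n_lags):
    """Stack time-lagged copies of a (n_times, n_freqs) spectrogram."""
    n_times, n_freqs = spec.shape
    X = np.zeros((n_times, n_lags * n_freqs))
    for lag in range(n_lags):
        X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spec[:n_times - lag]
    return X

def fit_strf(spec, response, n_lags=30, ridge=1.0):
    X = make_lagged(spec, n_lags)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ response)
    return w.reshape(n_lags, spec.shape[1])          # STRF indexed by (lag, frequency)

def predictive_power(spec, response, strf):
    pred = make_lagged(spec, strf.shape[0]) @ strf.ravel()
    return np.corrcoef(pred, response)[0, 1]         # correlation of predicted vs. measured

rng = np.random.default_rng(1)
spec = rng.random((2000, 32))                        # placeholder stimulus spectrogram
resp = rng.standard_normal(2000)                     # placeholder MEG response channel
strf = fit_strf(spec, resp)
print(predictive_power(spec, resp, strf))
```

In practice the correlation would be computed on held-out data to avoid overfitting; with the random placeholder arrays above it is expected to be near zero.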
Cooperation upholds life in organized societies, but its neurobiological mechanisms remain unresolved. Recent theoretical analyses have contrasted cooperation by its fast versus slower modes of decision-making. This raises the question of the neural timescales involved in the integration of decision-related information, and of the participating neural circuits. Using time-resolved electroencephalography (EEG) methods, we characterized relevant neural signatures of feedback processing in the iterated prisoner's dilemma (iPD), an economic task that addresses cooperation-based exchange between social partners.