The auditory system has been shown to detect predictability in a tone sequence, but does it use the extracted regularities for actually predicting the continuation of the sequence? The present study sought to find evidence for the generation of such predictions. Predictability was manipulated in an isochronous series of tones in which every other tone was a repetition of its predecessor. The existence of predictions was probed by occasionally omitting either the first (unpredictable) or the second (predictable) tone of a same-frequency tone pair. Event-related electrical brain activity elicited by the omission of an unpredictable tone differed from the response to the actual tone right from the tone onset. In contrast, early electrical brain activity elicited by the omission of a predictable tone was quite similar to the response to the actual tone. This suggests that the auditory system preactivates the neural circuits for expected input, using sequential predictions to specifically prepare for future acoustic events.
The auditory system continuously parses the acoustic environment into auditory objects, usually representing separate sound sources. Sound sources typically show characteristic emission patterns, and these regular temporal sound patterns are possible cues for distinguishing sound sources. The present study was designed to test whether regular patterns are used as cues for source distinction and to specify the role that detecting these regularities may play in the process of auditory stream segregation. Participants were presented with tone sequences and asked to continuously indicate whether they perceived the tones as a single coherent sequence of sounds (integrated) or as two concurrent sound streams (segregated). Unknown to the participants, in some stimulus conditions, regular patterns were present in one or both putative streams. In all stimulus conditions, participants' perception switched back and forth between the two sound organizations. Importantly, regular patterns occurring in either one or both streams prolonged the mean duration of two-stream percepts, whereas the duration of one-stream percepts was unaffected. These results suggest that temporal regularities are utilized in auditory scene analysis. It appears that the role of this cue lies in stabilizing streams once they have been formed on the basis of simpler acoustic cues.
The remarkable capabilities displayed by humans in making sense of an overwhelming amount of sensory information cannot be explained easily if perception is viewed as a passive process. Current theoretical and computational models assume that to achieve meaningful and coherent perception, the human brain must anticipate upcoming stimulation. But how are upcoming stimuli predicted in the brain? We unmasked the neural representation of a prediction by omitting the predicted sensory input. Electrophysiological brain signals showed that when a clear prediction can be formulated, the brain activates a template of its response to the predicted stimulus before it arrives at our senses.
Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many, of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm.
Traditional auditory oddball paradigms imply the brain's ability to encode regularities, but are not optimal for investigating the process of regularity establishment. In the present study, a dynamic experimental protocol was developed that simulates a more realistic auditory environment with changing regularities. The dynamic sequences were included in a distraction paradigm in order to study regularity extraction and application. Subjects discriminated the duration of sequentially presented tones. Without relevance to the task, tones repeated or changed in frequency according to a pattern unknown to the subject. When frequency repetitions were broken by a deviating tone, behavioral distraction (prolonged reaction time in the duration discrimination task) was elicited. Moreover, event-related brain potential components indicated deviance detection (mismatch negativity), involuntary attention switches (P3a), and attentional reorientation. These results suggest that regularities were extracted from the dynamic stimulation and were used to predict forthcoming stimuli. The effects were already observed with deviants occurring after as few as two presentations of a standard frequency, that is, violating a just emerging rule. Effects of regularity violation strengthened with the number of standard repetitions. Control stimuli comprising no regularity revealed that the observed effects were due to both improvements in standard processing (benefits of regularity establishment) and deteriorations in deviant processing (costs of regularity violation). Thus, regularities are exploited in two different ways: for an efficient processing of regularity-conforming events as well as for the detection of nonconforming, presumably important events. The present results underline the brain's flexibility in its adaptation to environmental demands.
The ability to encode rules and to detect rule-violating events outside the focus of attention is vital for adaptive behavior. Our brain recordings reveal that violations of abstract auditory rules are processed even when the sounds are unattended. When subjects performed a task related to the sounds but not to the rule, rule violations impaired task performance and activated a network involving supratemporal, parietal and frontal areas although none of the subjects acquired explicit knowledge of the rule or became aware of rule violations. When subjects tried to behaviorally detect rule violations, the brain's automatic violation detection facilitated intentional detection. This shows the brain's capacity for abstraction – an important cognitive function necessary to model the world. Our study provides the first evidence for the task-independence (i.e. automaticity) of this ability to encode abstract rules and for its immediate consequences for subsequent mental processes.