Aggression observed in 2-year-old children of well and depressed mothers was examined in relation to problem behaviors at ages 5–6. Both normative (e.g., object struggles, rough play) and maladaptive (e.g., dysregulated, out-of-control behavior) forms of toddler aggression were identified. Dysregulated aggression predicted (a) externalizing problems reported by mothers when children were 5 years old, and (b) children's reports of difficulties during a structured psychiatric interview at age 6. Problems were more frequent, and continuity patterns more evident, in children of depressed than of well mothers. Early maladaptive aggression was a better predictor of later externalizing than of internalizing problems. Childrearing practices of mothers of toddlers also appeared to contribute to later outcomes: negative influences were evident, but protective patterns were present as well. Depressed mothers who used proactive childrearing approaches (e.g., anticipating the child's point of view; exerting modulated, respectful control; providing structure and organization in the play environment) had children who showed fewer externalizing problems 3 years later.
The auditory system continuously parses the acoustic environment into auditory objects, usually representing separate sound sources. Sound sources typically show characteristic emission patterns, and these regular temporal sound patterns are possible cues for distinguishing sound sources. The present study was designed to test whether regular patterns are used as cues for source distinction and to specify the role that detecting these regularities may play in the process of auditory stream segregation. Participants were presented with tone sequences, and they were asked to continuously indicate whether they perceived the tones as a single coherent sequence of sounds (integrated) or as two concurrent sound streams (segregated). Unknown to the participants, in some stimulus conditions, regular patterns were present in one or both putative streams. In all stimulus conditions, participants' perception switched back and forth between the two sound organizations. Importantly, regular patterns occurring in either one or both streams prolonged the mean duration of two-stream percepts, whereas the duration of one-stream percepts was unaffected. These results suggest that temporal regularities are utilized in auditory scene analysis. It appears that the role of this cue lies in stabilizing streams once they have been formed on the basis of simpler acoustic cues.
Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many, of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm.
Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened, or weakened on the basis of their predictive success and their conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming.
The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming.
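The first stage of the account above, pattern discovery, can be illustrated with a toy sketch. This is not the published model; the scoring rule, candidate periods, and the ABA- stimulus coding are illustrative assumptions. The idea it demonstrates: a repeating fragment in a tone sequence can be found by asking how well the sequence predicts itself at each candidate lag, and high-scoring lags correspond to predictable fragments that could support a perceptual organisation.

```python
# Toy pattern-discovery sketch (hypothetical parameters, not the
# published model): score each candidate period by how well the
# sequence predicts itself at that lag.

def predictive_score(seq, period):
    """Fraction of events correctly predicted by 'same as `period` events ago'."""
    hits = sum(seq[i] == seq[i - period] for i in range(period, len(seq)))
    return hits / (len(seq) - period)

# Classic ABA- streaming stimulus: low tone, high tone, low tone, silence.
sequence = list('ABA-' * 50)

# High-scoring periods mark predictable fragments; here the full
# four-event cycle (period 4) is perfectly self-predictive.
scores = {p: predictive_score(sequence, p) for p in range(1, 9)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # → 4 1.0
```

In the full model, such discovered fragments would then enter the second stage, where incompatible fragments compete; this sketch covers only the discovery step.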
The auditory two-tone streaming paradigm has been used extensively to study the mechanisms that underlie the decomposition of the auditory input into coherent sound sequences. Using longer tone sequences than is usual in the literature, we show that listeners hold their first percept of the sound sequence for a relatively long period, after which perception switches between two or more alternative sound organizations, each held on average for a much shorter duration. The first percept also differs from subsequent ones in that stimulus parameters influence its quality and duration to a far greater degree than those of subsequent percepts. We propose an account of auditory streaming in terms of rivalry between competing temporal associations based on two sets of processes. The formation of associations (discovery of alternative interpretations) mainly affects the first percept by determining which sound group is discovered first and how long it takes for alternative groups to be established. In contrast, subsequent percepts arise from stochastic switching between the alternatives, the dynamics of which are determined by competitive interactions between the set of coexisting interpretations.
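The stochastic switching between coexisting interpretations described above can be sketched with a generic rivalry simulation: two mutually inhibiting units (one per interpretation) with slow adaptation of the dominant unit and additive noise. All parameter values and variable names here are illustrative assumptions, not taken from the paper; the sketch only shows how adaptation plus noise yields alternating perceptual phases of variable duration.

```python
import random

random.seed(0)

# Two competing interpretations (e.g., 'integrated' vs 'segregated')
# with mutual inhibition, slow adaptation, and noise.
x = [1.0, 0.0]   # activity of each interpretation
a = [0.0, 0.0]   # adaptation level of each interpretation
INHIBIT, ADAPT, RECOVER, NOISE, DT = 2.0, 0.02, 0.01, 0.15, 1.0

dominant_history = []
for step in range(20000):
    for i in range(2):
        j = 1 - i
        # drive = baseline input minus rival's inhibition minus own
        # adaptation, perturbed by noise
        drive = 1.0 - INHIBIT * x[j] - a[i] + random.gauss(0.0, NOISE)
        x[i] = max(0.0, x[i] + 0.1 * DT * (drive - x[i]))
        # adaptation builds while active, recovers while suppressed
        a[i] += DT * (ADAPT * x[i] - RECOVER * a[i])
    dominant_history.append(0 if x[0] > x[1] else 1)

# A perceptual phase is a run of the same dominant interpretation;
# adaptation guarantees that no interpretation dominates forever.
phases = 1 + sum(dominant_history[k] != dominant_history[k - 1]
                 for k in range(1, len(dominant_history)))
print('phases:', phases)
```

Because the dominant unit's adaptation slowly erodes its own advantage, switches occur even without noise; the noise term makes phase durations stochastic, as observed for percepts after the first one.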