In this series of behavioural and electroencephalography (EEG) experiments, we investigate the extent to which repeating patterns of sounds capture attention. Work in the visual domain has revealed attentional capture by statistically predictable stimuli, consistent with predictive coding accounts which suggest that attention is drawn to sensory regularities. Here, stimuli comprised rapid sequences of tone pips, arranged in regular (REG) or random (RAND) patterns. EEG data demonstrate that the brain rapidly recognizes predictable patterns, manifested as a rapid increase in responses to REG relative to RAND sequences. This increase is reminiscent of the gain increase in neural responses to attended stimuli often seen in the neuroimaging literature, and is thus consistent with the hypothesis that predictable sequences draw attention. To test for attentional capture by auditory regularities, we used REG and RAND sequences in two different behavioural tasks designed to reveal such capture effects. Overall, the pattern of results suggests that regularity does not capture attention. This article is part of the themed issue ‘Auditory and visual scene analysis’.
The brain draws on knowledge of statistical structure in the environment to facilitate detection of new events. Understanding the nature of this representation is a key challenge in sensory neuroscience. Specifically, it is unknown whether real-time perception of rapidly-unfolding sensory signals is driven by a coarse or detailed representation of the proximal stimulus history. We recorded electroencephalography brain responses to frequency outliers in regularly-patterned (REG) versus random (RAND) tone-pip sequences which were generated anew on each trial. REG and RAND sequences were matched in frequency content and span, only differing in the specific order of the tone-pips. Stimuli were very rapid, limiting conscious reasoning in favour of automatic processing of regularity. Listeners were naïve and performed an incidental visual task. Outliers within REG evoked a larger response than matched outliers in RAND. These effects arose rapidly (within 80 msec) and were underpinned by distinct sources from those classically associated with frequency-based deviance detection. These findings are consistent with the notion that the brain continually maintains a detailed representation of ongoing sensory input and that this representation shapes the processing of incoming information. Predominantly auditory-cortical sources code for frequency deviance whilst frontal sources are associated with tracking more complex sequence structure.
How are brain responses to deviant events affected by the statistics of the preceding context? We recorded electroencephalography (EEG) brain responses to frequency deviants in matched, regularly-patterned (REG) versus random (RAND) tone-pip sequences. Listeners were naïve and distracted by an incidental visual task. Stimuli were very rapid so as to limit conscious reasoning about the sequence order and tap automatic processing of regularity. Deviants within REG sequences evoked a substantially larger response (by 71%) than matched deviants in RAND sequences from 80 ms after deviant onset. This effect was underpinned by distinct sources in right temporal pole and orbitofrontal cortex, in addition to the standard bilateral temporal and right pre-frontal network for generic frequency deviance detection. These findings demonstrate that the human brain rapidly acquires a detailed representation of regularities within the sensory input and evaluates incoming information according to the context established by the specific pattern. Detection of new events within a constantly fluctuating sensory input is a fundamental challenge to organisms in dynamic environments. Hypothesized to underlie this process is a continually refined internal model of the real-world causes of sensations, made possible by exploiting statistical structure in the sensory input [1-4]. Evidence from multiple domains, including speech [5], abstract sound sequences [6,7], vision [8] and motor control [9], reveals sensitivity to environmental statistics which influences top-down, expectation-driven perceptual processing. When the organism encounters a sensory input that is inconsistent with the established internal model, and is therefore indicative of a potentially relevant change in the environment, a 'prediction error' or 'surprise' response is generated [10], promoting a rapid reaction to the associated environmental change.
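The REG/RAND manipulation described above, sequences matched in frequency content and span that differ only in the order of the tone pips, can be sketched as follows. This is an illustrative reconstruction, not the studies' actual stimulus code: the frequency pool, cycle length and sequence length are placeholder values.

```python
import random

def make_sequences(pool, cycle_len=10, n_tones=60, seed=0):
    """Generate frequency-matched REG and RAND tone-pip sequences.

    REG: a randomly drawn set of `cycle_len` frequencies repeated in a
    fixed order, producing a regular pattern.
    RAND: exactly the same tones shuffled, so the two sequences share
    frequency content and span and differ only in tone order.
    """
    rng = random.Random(seed)
    cycle = rng.sample(pool, cycle_len)      # one regular cycle
    reg = cycle * (n_tones // cycle_len)     # repeat the cycle
    rand = reg.copy()
    rng.shuffle(rand)                        # same tones, random order
    return reg, rand

# Hypothetical pool of tone-pip frequencies (Hz)
pool = [222, 280, 353, 445, 561, 707, 891, 1122, 1414, 1782, 2244, 2828]
reg, rand = make_sequences(pool)
```

Because `rand` is a permutation of `reg`, any difference in the evoked response can be attributed to sequence order (regularity) rather than spectral content.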
Understanding what aspects of stimuli are 'surprising' is therefore central to understanding this network. The auditory system has been a fertile ground for probing sensory error responses, at multiple levels of the processing hierarchy [11-13]. A common approach involves using a stream of standard sounds to establish a regularity that is occasionally interrupted by 'deviant' sounds [14-17]. Deviants usually evoke an increased response relative to that measured for the standards [16,18,19]. Since many of the investigated sequences have been very simple, often a repeated tone, neural adaptation is likely a major contributor to this process [12,20,21]. However, accumulating evidence suggests that at least part of the response arises from neural processes associated with computing 'surprise', or detecting a mismatch between expected and actual sensory input [15,22,23]. What is the information used in calculating surprise? By modelling brain responses to two-tone sequences of different base probabilities, Rubin et al. [4] demonstrated that trial-wise neural responses in auditory cortex are well explained by the probability of occ...
Psychophysical tests of spectro-temporal resolution may aid the evaluation of methods for improving hearing by cochlear implant (CI) listeners. Here the STRIPES (Spectro-Temporal Ripple for Investigating Processor EffectivenesS) test is described and validated. Like speech, the test requires both spectral and temporal processing to perform well. Listeners discriminate between complexes of sine sweeps which increase or decrease in frequency; difficulty is controlled by changing the stimulus spectro-temporal density. Care was taken to minimize extraneous cues, forcing listeners to perform the task only on the direction of the sweeps. Vocoder simulations with normal-hearing listeners showed that the STRIPES test was sensitive both to the number of channels and to the fidelity of temporal information. An evaluation with CI listeners compared a standard processing strategy with one having very wide filters, thereby spectrally blurring the stimulus. Psychometric functions were monotonic for both strategies, and five of six participants performed better with the standard strategy. An adaptive procedure revealed significant differences, all in favour of the standard strategy, at the individual listener level for six of eight CI listeners. Subsequent measures validated a faster version of the test, and showed that STRIPES could be performed by recently implanted listeners having no experience of psychophysical testing.
We know that reading involves coordination between textual characteristics and visual attention, but research linking eye movements during reading and comprehension assessed after reading is surprisingly limited, especially for reading long connected texts. We tested two competing possibilities: (a) the weak association hypothesis: links between eye movements and comprehension are weak and short-lived, versus (b) the strong association hypothesis: the two are robustly linked, even after a delay. Using a predictive modeling approach, we trained regression models to predict comprehension scores from global eye movement features, using participant-level cross-validation to ensure that the models generalize across participants. We used data from three studies in which readers (Ns = 104, 130, 147) answered multiple-choice comprehension questions 30 min after reading a 6,500-word text, or after reading up to eight 1,000-word texts. The models generated accurate predictions of participants' text comprehension scores (correlations between observed and predicted comprehension: 0.384, 0.362, 0.372, ps < .001), in line with the strong association hypothesis. We found that making more, but shorter, fixations consistently predicted comprehension across all studies. Furthermore, models trained on one study's data could successfully predict comprehension on the others, suggesting generalizability across studies. Collectively, these findings suggest that there is a robust link between eye movements and subsequent comprehension of a long connected text, thereby connecting theories of low-level eye movements with those of higher order text processing during reading.
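The predictive-modeling approach described above, regressing comprehension on global eye-movement features with participant-level cross-validation, can be sketched as follows. The data here are synthetic and the two features (mean fixation duration, fixation count) merely echo the reported "more but shorter fixations" finding; they are not the studies' actual feature set or coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one row per participant.
n = 60
fix_dur = rng.normal(220, 25, n)     # mean fixation duration (ms)
fix_count = rng.normal(90, 10, n)    # fixations per passage
X = np.column_stack([fix_dur, fix_count])

# Hypothetical generative rule echoing the reported pattern:
# more, but shorter, fixations -> higher comprehension (plus noise).
y = 0.5 - 0.002 * (fix_dur - 220) + 0.01 * (fix_count - 90)
y = y + rng.normal(0, 0.05, n)

def loo_predict(X, y):
    """Leave-one-participant-out ordinary least squares predictions."""
    Xd = np.column_stack([np.ones(len(y)), X])  # add intercept column
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i            # hold out participant i
        beta, *_ = np.linalg.lstsq(Xd[mask], y[mask], rcond=None)
        preds[i] = Xd[i] @ beta                  # predict held-out score
    return preds

pred = loo_predict(X, y)
r = np.corrcoef(y, pred)[0, 1]   # observed vs. predicted correlation
print(round(r, 3))
```

Holding out each participant in turn is what guarantees the reported correlations reflect generalization to unseen readers rather than within-sample fit.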
Psychological science can benefit from and contribute to emerging approaches from the computing and information sciences driven by the availability of real-world data and advances in sensing and computing. We focus on one such approach, machine-learned computational models (MLCMs)—computer programs learned from data, typically with human supervision. We introduce MLCMs and discuss how they contrast with traditional computational models and assessment in the psychological sciences. Examples of MLCMs from cognitive and affective science, neuroscience, education, organizational psychology, and personality and social psychology are provided. We consider the accuracy and generalizability of MLCM-based measures, cautioning researchers to consider the underlying context and intended use when interpreting their performance. We conclude that in addition to known data privacy and security concerns, the use of MLCMs entails a reconceptualization of fairness, bias, interpretability, and responsible use.
Duncan and Humphreys (1989) identified two key factors that affect performance in a visual search task for a target among distractors. The first is the similarity of the target to distractors (TD), and the second is the similarity of distractors to each other (DD). Here we investigate whether it is the perceived similarity in foveal or peripheral vision that determines performance. We studied search using stimuli made from patches cut from colored images of natural objects; differences between targets and their modified distractors were estimated using a ratings task performed both peripherally and foveally. We used search conditions in which the targets and distractors were easy to distinguish both foveally and peripherally ("high" stimuli), in which they were difficult to distinguish both foveally and peripherally ("low"), and in which they were easy to distinguish foveally but difficult to distinguish peripherally ("metamers"). In the critical metameric condition, search slopes (change of search time with number of distractors) were similar to those in the "low" condition, indicating a key role for peripheral information in visual search, as both conditions have low perceived similarity peripherally. Furthermore, in all conditions, search slope was well described quantitatively by peripheral, but not foveal, TD and DD similarity. However, some features of search, such as error rates, do indicate roles for foveal vision too.
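The search-slope measure above (change of search time with number of distractors) is simply the least-squares slope of reaction time against set size. A minimal sketch, with invented reaction times chosen only to illustrate a shallow "high" condition versus a steep "low" condition:

```python
def search_slope(set_sizes, rts):
    """Least-squares slope of reaction time vs. set size (ms per item)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    var = sum((x - mx) ** 2 for x in set_sizes)
    return cov / var

# Hypothetical mean RTs (ms) at set sizes 4, 8, 16:
high_slope = search_slope([4, 8, 16], [520, 540, 580])   # efficient search
low_slope = search_slope([4, 8, 16], [600, 760, 1080])   # inefficient search
print(high_slope, low_slope)   # ms/item for each condition
```

A slope near zero means adding distractors barely costs time (efficient, "pop-out" search), while a steep slope indicates item-by-item inspection; the metamer result is that this slope tracks peripheral, not foveal, discriminability.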