Localization of a 2-ms-click target was previously shown to be influenced by interleaved localization trials in which the target was preceded by an identical distractor [Kopčo, Best, and Shinn-Cunningham (2007). J. Acoust. Soc. Am. 121, 420–432]. Here, two experiments were conducted to explore this contextual effect. Results show that the context-related bias is not eliminated (1) when the response method is changed so that vision is available or no hand-pointing is required, or (2) when the distractor-target order is reversed. Additionally, a keyboard-based localization response method is introduced and shown to be more accurate than traditional pointer-based methods.
Two experiments examined plasticity induced by context in a simple target localization task. The context was represented by interleaved localization trials in which the target was preceded by a distractor. In a previous study, the context induced large response shifts when the target and distractor stimuli were identical 2-ms noise clicks [Kopčo, Best, and Shinn-Cunningham (2007). J. Acoust. Soc. Am. 121, 420–432]. Here, the temporal characteristics of the contextual effect were examined for the same stimuli. Experiment 1 manipulated the context presentation rate and the distractor-target inter-stimulus interval (ISI). Experiment 2 manipulated the temporal structure of the context stimulus, replacing the one-click distractor either by a distractor consisting of eight sequentially presented clicks or by a noise burst with total energy and duration identical to the eight-click distractor. In experiment 1, the contextual shift size increased with increasing context rate while being largely independent of ISI. In experiment 2, the eight-click-distractor context induced a stronger shift than the one-click-distractor context, while the noise-distractor context induced a very small shift. These results suggest that contextual plasticity is an adaptation driven both by low-level factors, such as the spatiotemporal distribution of the context, and by higher-level factors, such as perceptual similarity between the stimuli, possibly related to precedence buildup.
The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The eye positions estimated by the new algorithm were still noisy; however, the performance in terms of Pearson product-moment correlation coefficients was significantly better than that of the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for lightweight and portable horizontal eye gaze angle estimation suitable for a broad range of applications, for instance, in hearing aids that steer the directivity of microphones in the direction of the user's eye gaze.
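The abstract does not specify the algorithm's details. As a loose illustration of the general idea it names, saccade detection followed by statistical anchoring of the absolute gaze angle in a drifting single-channel EOG signal, one might sketch something like the following in Python. The velocity thresholding, MAD-based noise scale, and the zero-median anchoring assumption here are all illustrative choices of mine, not taken from the study:

```python
import numpy as np

def estimate_gaze(eog, fs, saccade_thresh=3.0):
    """Drift-robust gaze-angle sketch from a single EOG channel.

    eog: 1-D array of EOG samples (arbitrary units ~ gaze angle)
    fs:  sampling rate in Hz
    """
    # Velocity via first difference (prepend keeps the array length)
    vel = np.diff(eog, prepend=eog[0]) * fs

    # Detect saccades: samples whose absolute velocity exceeds
    # saccade_thresh times a robust (MAD-based) velocity scale.
    scale = 1.4826 * np.median(np.abs(vel - np.median(vel)))
    is_saccade = np.abs(vel) > saccade_thresh * scale

    # Integrate only the saccadic velocity, so slow electrode drift
    # (which conventional high-pass filtering targets) is ignored.
    rel = np.cumsum(np.where(is_saccade, vel, 0.0)) / fs

    # "Statistical" anchoring assumption: gaze is straight ahead
    # (0 deg) on average, so remove the median offset to obtain an
    # absolute rather than purely relative angle estimate.
    return rel - np.median(rel)
```

The key design point this sketch is meant to convey is that integrating only saccadic segments decouples the angle estimate from baseline drift, whereas a conventional high-pass filter removes drift and the sustained gaze offset alike.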
Superdirectional acoustic beamforming technology provides a high signal-to-noise ratio, but potential speech intelligibility benefits to hearing aid users are limited by the way the users move their heads. Steering the beamformer using eye gaze instead of head orientation could mitigate this problem. This study investigated the intelligibility of target speech with a dynamically changing direction when heard through gaze-controlled (GAZE) or head-controlled (HEAD) superdirectional simulated beamformers. The beamformer provided frequency-independent noise attenuation of either 8 dB (WIDE [moderately directional]) or 12 dB (NARROW [highly directional]) relative to the no-beamformer condition, referred to as OMNI (omnidirectional). Before the main experiment, signal-to-noise ratios were normalized for each participant and each beam-width condition to yield equal percent-correct performance in a reference condition. Hence, results are presented as normalized speech intelligibility (NSI). In an ongoing presentation, the participants (n = 17), with varying degrees of hearing loss, heard single-word targets every 1.5 s coming from either the left (−30°) or the right (+30°), presented in continuous, spatially distributed, speech-shaped noise. When the target was static, NSI was better in the GAZE than in the HEAD condition, but only when the beam was NARROW. When the target switched location without warning, NSI performance dropped. In this case, the WIDE HEAD condition provided the best average NSI performance, because some participants tended to orient their head between the targets, allowing them to hear out the target regardless of location. The difference in NSI between the GAZE and HEAD conditions for individual participants was related to the observed head-orientation strategy, which varied widely across participants.