2017
DOI: 10.1080/23273798.2017.1354129

Word segmentation from noise-band vocoded speech

Abstract: Spectral degradation reduces access to the acoustics of spoken language and compromises how learners break into its structure. We hypothesised that spectral degradation disrupts word segmentation, but that listeners can exploit other cues to restore detection of words. Normal-hearing adults were familiarised to artificial speech that was unprocessed or spectrally degraded by noise-band vocoding into 16 or 8 spectral channels. The monotonic speech stream was pause-free (Experiment 1), interspersed with isolated…
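The degradation named in the abstract, noise-band vocoding, divides speech into a small number of frequency bands, discards the fine structure within each band, and keeps only the slow amplitude envelope, which is then used to modulate band-limited noise. The sketch below illustrates the general technique only; the channel edges, filter orders, and envelope cutoff are assumptions chosen for illustration, not the parameters used to create the study's stimuli.

# Minimal noise-band vocoder sketch in Python (NumPy/SciPy).
# All parameter values are illustrative assumptions, not the study's stimulus settings.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, f_lo=200.0, f_hi=7000.0, env_cutoff=160.0):
    """Replace the fine structure in each analysis band with envelope-modulated noise."""
    signal = np.asarray(signal, dtype=float)
    # Log-spaced band edges between f_lo and f_hi (fs must exceed 2 * f_hi).
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    noise = np.random.default_rng(0).standard_normal(signal.shape)
    env_lp = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)        # analysis band of the speech
        env = sosfiltfilt(env_lp, np.abs(band))     # rectify + low-pass = amplitude envelope
        env = np.clip(env, 0.0, None)
        carrier = sosfiltfilt(band_sos, noise)      # noise restricted to the same band
        channel = carrier * env                     # impose the speech envelope on the noise
        # Scale so each channel keeps roughly the level of the original band.
        rms_band = np.sqrt(np.mean(band ** 2))
        rms_chan = np.sqrt(np.mean(channel ** 2)) + 1e-12
        out += channel * (rms_band / rms_chan)
    return out

Under these assumptions, calling noise_vocode(signal, fs, n_channels=16) or n_channels=8 yields degraded signals analogous in spirit to the two vocoded conditions described in the abstract; the unprocessed condition is simply the original signal.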

Cited by 6 publications (3 citation statements) · References 77 publications

“…For example, Barakat, Seitz, and Shams () reported that the presence of statistical regularities enhances the detection of individual visual elements even when they appear outside the context of the learned regularities. In a similar vein, Grieco‐Calub, Simeon, Snyder, and Lew‐Williams (2017) showed that auditory SL performance is impaired when the familiarization stream comprises spectrally degraded speech sounds. From a broader perspective, the interaction between encoding and learning aligns with the well‐documented benefits of context on stimulus encoding, such as the classic effect of schema congruency on subsequent memory of new events (Hintzman, ).…”
Section: Introduction
Mentioning confidence: 89%
“…This interaction suggested that the ability to encode the individual shapes and the sensitivity to their co-occurrences are not independent processes, but rather that the statistical properties of the stream may facilitate encoding, and conversely, that optimal conditions of encoding can serve to enhance sensitivity to the statistical structure of the input. In a similar vein, Grieco-Calub, Simeon, Snyder, and Lew-Williams (2017) showed that auditory SL performance is impaired when the familiarization stream comprises spectrally degraded speech sounds. Participants who showed greater sensitivity to changes in ED tended to show greater sensitivity to changes in TPs, and vice versa.…”
Section: Introduction
Mentioning confidence: 90%
“…They have also been used to outline how interactions between channels in the CI signal impact speech recognition (Grange et al, 2017;Oxenham and Kreft, 2014), and how degraded speech signals may impact language development (e.g. Newman et al 2020) and listening effort (Milvae et al, 2021;Winn et al, 2015), as well as the auditory pattern recognition skills that support word learning (Grieco-Calub et al, 2017).…”
Section: Introduction
Mentioning confidence: 99%