2018
DOI: 10.1121/1.5078589

Effects of intelligibility on within- and cross-modal sentence recognition memory for native and non-native listeners

Abstract: The goal of the study was to examine whether enhancing the clarity of the speech signal through conversational-to-clear speech modifications improves sentence recognition memory for native and non-native listeners, and if so, whether this effect would hold when the stimuli in the test phase are presented in orthographic instead of auditory form (cross-modal presentation). Sixty listeners (30 native and 30 non-native English) participated in a within-modal (i.e., audio-audio) sentence recognition memory task (E…


Cited by 13 publications (11 citation statements); References: 73 publications
“…That is, if the listeners could not hear the sentences, they would not be able to recall the information. Previous work, though, showed that even highly intelligible conversational sentences were recalled less well than clear sentences, suggesting an increased processing cost that is independent of accurate word recognition (Keerstock and Smiljanic, 2018, 2019; Winn and Teece, 2020). Similarly, greater cognitive load, as indicated by greater pupil dilation, was found for an L2 talker compared to an L1 talker even when recognition accuracy was at ceiling for both talkers, as was found here in the quiet condition (McLaughlin and Van Engen, 2020).…”
Section: Discussion (mentioning)
confidence: 91%
“…The clear speech benefit extends to speech processing beyond word recognition; it improves recognition memory and recall for speech in quiet and in noise (Gilbert et al., 2014; Keerstock and Smiljanic, 2018, 2019; Van Engen et al., 2012). The exaggerated acoustic-phonetic clear speech cues seem to enhance memory traces for sentences produced in that style, enabling listeners to retain more information.…”
Section: Introduction (mentioning)
confidence: 99%
“…In a follow-up study, Keerstock and Smiljanić (2019) conducted a cued-recall experiment, which is more difficult than a yes/no recognition memory task. Results showed that more words were recalled in the clear speech condition; as in Keerstock and Smiljanić (2018), this effect occurred for both L1 and L2 listening populations.…”
Section: Introduction (mentioning)
confidence: 66%
“…The corresponding scoring function includes the likelihood score of the signal, the score of the sentence, and the score of the posterior probability of the data. The corresponding scoring formulas are shown in formulas (13), (14), and (15), respectively. Final output voice results…”
Section: Advances in Mathematical Physics (mentioning)
confidence: 99%
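The cited description of the scoring function is high-level. As a rough illustration only, a weighted combination of an acoustic likelihood score, a sentence (language-model) score, and a posterior score for a recognition hypothesis could be sketched as below; the score names, weights, and selection rule are assumptions made for this example, not the cited paper's formulas (13)-(15).

# Hypothetical sketch: combining three component scores into one decoding
# score and picking the best hypothesis. Names and weights are illustrative
# assumptions, not the cited paper's formulas.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str
    acoustic_loglik: float   # log-likelihood of the signal given the hypothesis
    sentence_score: float    # sentence (language-model) log score
    posterior_score: float   # log posterior probability of the data

def combined_score(h: Hypothesis, lm_weight: float = 1.0, post_weight: float = 0.5) -> float:
    # Weighted sum of the three component scores.
    return h.acoustic_loglik + lm_weight * h.sentence_score + post_weight * h.posterior_score

def best_hypothesis(hyps: list[Hypothesis]) -> Hypothesis:
    # The hypothesis with the highest combined score is taken as the final output.
    return max(hyps, key=combined_score)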
“…The wavelet transform denoising algorithm is essentially a multiscale signal analysis method: it maps the speech signal to be processed into the wavelet domain and selectively processes, at each scale, the noisy wavelet coefficients that do not match the characteristics of speech, according to the statistics of the speech and noise coefficients [10]. In practice, the signal is expanded over the function space generated by dilating and translating the wavelet generating function, and the best approximation of the original signal is found according to a threshold criterion, thereby separating the original signal from the noise [11][12][13][14]. However, traditional wavelet denoising is sensitive to the choice of threshold.…”
Section: Introduction (mentioning)
confidence: 99%
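As a minimal sketch of the wavelet-threshold denoising procedure described in this passage (decompose into the wavelet domain, threshold the detail coefficients, reconstruct), the following uses the PyWavelets library; the wavelet family, decomposition level, and the universal threshold rule are illustrative assumptions, not the cited work's settings.

# Sketch of wavelet-threshold denoising for a 1-D speech signal.
# Wavelet family, decomposition level, and threshold rule are illustrative
# assumptions; the cited work may use different choices.
import numpy as np
import pywt

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    # Map the signal into the wavelet domain (multiscale decomposition).
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise level from the finest-scale detail coefficients
    # (median absolute deviation estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal threshold: coefficients below it are treated as noise
    # rather than speech structure.
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    # Soft-threshold the detail coefficients at every scale; keep the
    # coarse approximation coefficients untouched.
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    # Reconstruct the best approximation of the original signal.
    return pywt.waverec(denoised, wavelet)[: len(signal)]

In this sketch the threshold is computed once from the finest scale and applied to all detail levels; the passage's point that performance depends on the threshold choice corresponds to how thresh is set here.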