2023
DOI: 10.1162/nol_a_00084
Auditory Word Comprehension Is Less Incremental in Isolated Words

Abstract: Partial speech input is often understood to trigger rapid and automatic activation of successively higher-level representations of words, from sound to meaning. Here we show evidence from magnetoencephalography that this type of incremental processing is limited when words are heard in isolation as compared to continuous speech. This suggests a less unified and automatic word recognition process than is often assumed. We present evidence from isolated words that neural effects of phoneme probability, quantifie…


Cited by 8 publications (9 citation statements)
References 62 publications
“…Participants pressed the SPACE bar to dismiss the sentence once they had understood it; a true-or-false judgment question was then presented, and participants made their judgment based on the sentence they had just read. As in previous studies (Gaston et al., 2023), good performance on the task requires that lexical-syntactic and conceptual information access occurred for most stimuli, because the probe words were unpredictable. Accordingly, the question in our study could alter any of the following parts of the original sentence: (1) the animate noun, (2) the verb, (3) the quantity word, (4) the target noun, or (5) the rest of the sentence.…”
Section: Methods
confidence: 65%
“…Before starting, we first analyzed how the phonetic features, phoneme onset, phoneme surprisal, and cohort entropy should best be modeled, since previous studies have used different approaches: modeling word-initial phonemes as separate features 1 ; including word-initial phonemes only in phoneme surprisal and cohort entropy 85 ; and including word-initial phonemes only in phoneme onset 2 . We compared models with and without word-initial phoneme onset on a base model with envelope spectrogram, envelope onset, and word onset.…”
Section: Methods
confidence: 99%
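The cohort-based measures named in the statement above, phoneme surprisal and cohort entropy, can be sketched with a toy frequency-weighted cohort model. The lexicon, its frequency counts, and the one-letter-per-phoneme encoding below are invented for illustration and are not taken from any of the cited studies:

```python
import math

# Hypothetical toy lexicon: word -> frequency count (illustrative only).
LEXICON = {
    "cat": 120, "cap": 80, "can": 200, "cab": 30,
    "dog": 150, "dot": 60,
}

def cohort(prefix):
    """Words in the lexicon consistent with the phonemes heard so far."""
    return {w: f for w, f in LEXICON.items() if w.startswith(prefix)}

def surprisal_and_entropy(word):
    """For each phoneme position, compute phoneme surprisal
    (-log2 P(phoneme | preceding cohort)) and cohort entropy
    (Shannon entropy over the frequency-weighted remaining cohort)."""
    results = []
    for i, ph in enumerate(word):
        prev = cohort(word[:i])        # cohort before this phoneme
        curr = cohort(word[:i + 1])    # cohort after hearing it
        p_phoneme = sum(curr.values()) / sum(prev.values())
        surprisal = -math.log2(p_phoneme)
        total = sum(curr.values())
        entropy = -sum((f / total) * math.log2(f / total)
                       for f in curr.values())
        results.append((ph, surprisal, entropy))
    return results

for ph, s, h in surprisal_and_entropy("cat"):
    print(f"{ph}: surprisal={s:.2f} bits, entropy={h:.2f} bits")
```

Note that surprisal at the first phoneme is defined against the whole lexicon, which is exactly why the studies cited above disagree on whether word-initial phonemes should enter the surprisal and entropy predictors at all.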
“…These three TRFs were averaged to generate one average TRF per testing fold, which was then used to compute the prediction accuracy against the testing set. The TRFs and corresponding prediction accuracies from each of the testing folds were further averaged to generate a single TRF and a single prediction accuracy per source dipole. Phonetic feature modelling: Before starting, we first analyzed how the phonetic features, phoneme onset, phoneme surprisal, and cohort entropy should best be modeled, since previous studies have used different approaches: modeling word-initial phonemes as separate features 1 ; including word-initial phonemes only in phoneme surprisal and cohort entropy 85 ; and including word-initial phonemes only in phoneme onset 2 . We compared models with and without word-initial phoneme onset on a base model with envelope spectrogram, envelope onset, and word onset.…”
mentioning
confidence: 99%
“…The stimuli were based on a set of 1000 target words used in a MEG experiment (Gaston et al, 2023), spoken by a human male speaker for the massive auditory lexical decision (MALD) database (Tucker et al, 2019). To simulate realistic lexical neighborhoods for those target words, additional words were added to the lexicon if they 1) differed from a target word by a single phoneme, 2) had a frequency count of at least 1000 in the SUBTLEX-US corpus (Brysbaert and New, 2009), and 3) were included in the MALD database.…”
Section: Methodsmentioning
confidence: 99%
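The neighborhood criterion in the statement above (candidate words that differ from a target by a single phoneme) can be sketched as a membership test. The phoneme tuples below are toy examples, not MALD transcriptions:

```python
# Sketch of the single-phoneme-difference criterion: a candidate joins the
# simulated lexicon if it differs from a target word by exactly one
# substitution, insertion, or deletion. Words are tuples of phoneme labels.

def differs_by_one_phoneme(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, insertion, or deletion."""
    if a == b:
        return False
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        # Same length: exactly one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if len(a) > len(b):
        a, b = b, a                  # make a the shorter sequence
    # One insertion/deletion: removing one phoneme from the longer
    # sequence must recover the shorter one.
    return any(b[:i] + b[i + 1:] == a for i in range(len(b)))
```

In the cited procedure this test would be combined with the frequency filter (SUBTLEX-US count of at least 1000) and MALD membership before a candidate is added to the simulated lexicon.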
“…Auditory baseline model: A model of auditory processing was generated based on gammatone spectrograms, as in previous experiments (Brodbeck et al., 2020; Gaston et al., 2023). First, high-resolution gammatone spectrograms were generated for all word stimuli with 256 frequency bands in equivalent rectangular bandwidth (ERB) space between 20 and 5000 Hz.…”
Section: Preprocessing
confidence: 99%