2021
DOI: 10.3389/fnhum.2021.629517
Neural Representation of the English Vowel Feature [High]: Evidence From /ε/ vs. /ɪ/

Abstract: Many studies have observed modulation of the amplitude of the neural index mismatch negativity (MMN) related to which member of a phoneme contrast [phoneme A, phoneme B] serves as the frequent (standard) and which serves as the infrequent (deviant) stimulus (i.e., AAAB vs. BBBA) in an oddball paradigm. Explanations for this amplitude modulation range from acoustic to linguistic factors. We tested whether exchanging the role of the mid vowel /ε/ vs. high vowel /ɪ/ of English modulated MMN amplitude and whether …
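The abstract above describes an oddball design in which the roles of standard and deviant are swapped between the two vowels. As a rough, hypothetical illustration of that design (not the study's actual stimulus parameters), the Python sketch below generates pseudo-random standard/deviant sequences for both presentation directions, assuming an illustrative 85/15 standard-to-deviant ratio and no back-to-back deviants.

import random

def oddball_sequence(standard, deviant, n_trials=800, p_deviant=0.15, seed=0):
    """Build an oddball stimulus sequence (e.g., AAAB...) with a given
    deviant probability and no two deviants in a row.

    The trial count, deviant probability, and no-repeat constraint are
    illustrative assumptions, not parameters reported in the paper.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == deviant:   # never allow back-to-back deviants
            seq.append(standard)
        else:
            seq.append(deviant if rng.random() < p_deviant else standard)
    return seq

# The two presentation directions whose MMNs are compared:
eh_standard = oddball_sequence("/ε/", "/ɪ/")   # /ε/ frequent, /ɪ/ rare
ih_standard = oddball_sequence("/ɪ/", "/ε/")   # /ɪ/ frequent, /ε/ rare
print(eh_standard[:12])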

Cited by 7 publications (3 citation statements). References 88 publications (146 reference statements).
“…We recorded EEG at a 500-Hz sampling rate. Impedances were below 50 kΩ, per recommendation [91] and existing standard (e.g., [72, 92–95]) for high-input impedance amplifiers. All electrodes were referenced to Cz.…”
Section: Methods
confidence: 99%
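For readers who want to see what those recording parameters look like in a typical analysis pipeline, here is a minimal, hypothetical MNE-Python sketch that builds a 500-Hz EEG recording object and references it to Cz. The channel list and simulated data are placeholders, not the montage or data from the cited study.

import numpy as np
import mne

# Hypothetical channel set; the study's actual montage is not reproduced here.
ch_names = ["Fz", "Cz", "Pz", "F3", "F4", "C3", "C4"]
sfreq = 500.0  # Hz, matching the sampling rate in the Methods excerpt above

info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types="eeg")
data = np.random.randn(len(ch_names), int(sfreq) * 10) * 1e-6  # 10 s of simulated EEG, in volts
raw = mne.io.RawArray(data, info)

# Reference all electrodes to Cz, as in the excerpt.
raw.set_eeg_reference(ref_channels=["Cz"])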
“…In fact, MMN evidence for features largely arises from findings that demonstrate that some speech sounds elicit larger MMNs than others. These asymmetric results have been observed for vowels (Cornell et al., 2011; de Rue et al., 2021; Eulitz & Lahiri, 2004; Scharinger et al., 2012, 2016; Yu & Shafer, 2021), consonants (Cornell et al., 2013; Fu & Monahan, 2021; Hestvik et al., 2020; Hestvik & Durvasula, 2016; Maiste et al., 1995; Schluter et al., 2017), and lexical tone (Politzer-Ahles et al., 2016). These asymmetries are often, although not exclusively (see Maiste et al., 1995), taken to reflect the underlying featural content of the two categories, consistent with underspecified representations (Lahiri & Reetz, 2002, 2010).…”
Section: Introduction
confidence: 96%
“…In the field of neural decoding for direct communication in brain-computer interfaces (BCIs), research is progressing for detecting spoken signals from multi-channel electrocorticograms (ECoGs) at the brain cortex (Knight and Heinze, 2008;Pasley et al, 2012;Bouchard et al, 2013;Flinker et al, 2015;Herff and Schultz, 2016;Martin et al, 2018;Anumanchipalli et al, 2019;Miller et al, 2020). If we could instead detect linguistic information from scalp EEGs, then BCIs could enjoy much wider practical applications, for instance improving the quality of life (QoL) of amyotrophic lateral sclerosis (ALS) patients, but this goal is hampered by many unsolved problems (Wang et al, 2012;Min et al, 2016;Rojas and Ramos, 2016;Yoshimura et al, 2016;Yu and Shafer, 2021;Zhao et al, 2021). While studies on spoken EEGs can leverage motor command information to help identify speech-related signals, imagined speech EEGs (that is, EEGs of silent, unspoken speech) lack that luxury (Levelt, 1993;Indefrey and Levelt, 2004), which necessitates identifying linguistic representations solely from within the EEG.…”
Section: Introduction
confidence: 99%
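As a concrete, if toy, sketch of what "detecting linguistic information from scalp EEGs" can mean computationally, the snippet below decodes two hypothetical imagined-speech classes from epoched EEG with a simple scikit-learn pipeline. The simulated data, class labels, and classifier choice are all illustrative assumptions, not the approach of any study cited in the excerpt.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated epochs: 200 trials x 32 channels x 250 samples (placeholder data).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32, 250))
y = rng.integers(0, 2, size=200)  # two hypothetical imagined-speech classes

# Flatten each epoch into a feature vector and decode with a linear classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X.reshape(len(X), -1), y, cv=5)
print(f"Mean cross-validated accuracy (chance-level on random data): {scores.mean():.2f}")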