2018
DOI: 10.1016/j.wocn.2018.07.009

Oropharyngeal articulation of phonemic and phonetic nasalization in Brazilian Portuguese


Citations: cited by 21 publications (15 citation statements)
References: 45 publications
“…Dynamic MRI is useful for research and clinical studies in speech, especially in capturing structural and functional changes of the vocal tract. Recent applications of dynamic MRI to speech‐related studies include investigating articulatory dynamics, studying phonetic variability, learning language variation, examining physiological defects, monitoring swallow functions, and observing professional singing or voice training. A review of the clinical needs and impact of dynamic speech MRI can be found in .…”
Section: Introduction (mentioning)
confidence: 99%
“…Collecting MRI data from larger groups of speakers, however, is becoming increasingly possible. This has recently been done, for example, by Narayanan et al. (2014; 10 speakers of American English) and Barlaz et al. (2018; 12 speakers of Brazilian Portuguese). Of note are also the exceptionally large‐sample studies by Tilsen et al. (2016; 25 speakers of American English) and Dediu & Moisik (2019; 80 L1 and L2 speakers of English), neither of which was part of the above survey (see Footnote 1).…”
Section: Entire Vocal Tract (mentioning)
confidence: 89%
“…The increasing availability of rt‐MRI data has triggered the development of methods for analyzing continuous speech. Among some notable rt‐MRI analyses are investigations of the relative timing of the tongue tip and larynx raising gestures (based on articulator tracings, Kim et al., 2011) or of the tongue tip and velum lowering gestures (based on area functions for these regions; Byrd, Tobin, Bresch, & Narayanan, 2009), of temporal changes in pixel intensity in particular articulatory regions (e.g., the tongue tip, Parrell & Narayanan, 2018; the velopharyngeal port, Johnson et al., 2019), and of changes in the entire area function over time (Barlaz et al., 2018). These rapid developments in analytical approaches would not have been possible without close collaboration between researchers in the fields of speech science and engineering.…”
Section: Entire Vocal Tract (mentioning)
confidence: 99%
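The region-based pixel-intensity approach mentioned in the excerpt above can be illustrated with a short sketch. This is a minimal example under assumed inputs (a NumPy array of grayscale rt-MRI frames and hand-picked rectangular regions), not code from any of the cited studies; the frame dimensions and ROI coordinates are invented for illustration.

import numpy as np

def roi_intensity_series(frames, rows, cols):
    """Mean pixel intensity per frame inside a fixed rectangular region.

    frames: array of shape (n_frames, height, width), one grayscale image
    per rt-MRI frame; rows and cols are slice objects marking the ROI.
    """
    return frames[:, rows, cols].mean(axis=(1, 2))

# Illustrative use with synthetic frames (stand-ins for real rt-MRI data):
rng = np.random.default_rng(0)
frames = rng.random((200, 84, 84))            # 200 frames of 84 x 84 pixels (assumed)
velum = roi_intensity_series(frames, slice(20, 35), slice(40, 60))
tongue_tip = roi_intensity_series(frames, slice(55, 70), slice(10, 30))
# Relative gesture timing could then be estimated by comparing where each
# (smoothed) intensity series crosses a threshold or reaches its extremum.
print(velum.shape, tongue_tip.shape)          # (200,) (200,)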
“…However, unlike articulometry data, rt-MRI video frames must first be quantified in some manner before analysis can be carried out. A variety of quantification methods has been proposed (see Ramanarayanan et al., 2018 for a detailed overview), including (but not limited to) region-of-interest analysis (Lammert, Ramanarayanan, Proctor, & Narayanan, 2013; Teixeira et al., 2012; Tilsen et al., 2016), grid-based area or distance functions (Barlaz, Shosted, Fu, & Sutton, 2018; Proctor, Bone, Katsamanis, & Narayanan, 2010; Zhang et al., 2016), image cross-correlation (Lammert, Proctor, & Narayanan, 2010), region-based principal components analysis (Carignan et al., 2015, 2019), and automated segmentation of individual speech articulators (Eryildirim & Berger, 2011; Labrunie et al., 2018; Silva & Teixeira, 2015).…”
Section: Real-Time Magnetic Resonance Imaging (rt-MRI) (mentioning)
confidence: 99%
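As a rough sketch of one of the quantification strategies listed in the excerpt above, region-based principal components analysis can be expressed as PCA over the vectorized pixels of a chosen articulatory region across frames. The code below is an assumption-laden illustration (synthetic frames, an arbitrary region, scikit-learn's PCA), not the pipeline of any cited study.

import numpy as np
from sklearn.decomposition import PCA

def region_pca_scores(frames, rows, cols, n_components=3):
    """Project each frame's region pixels onto the first principal components.

    frames: (n_frames, height, width) grayscale image sequence. Returns an
    (n_frames, n_components) score matrix, a low-dimensional articulatory
    signal that can feed subsequent statistical analysis.
    """
    region = frames[:, rows, cols].reshape(frames.shape[0], -1)  # flatten ROI pixels
    return PCA(n_components=n_components).fit_transform(region)

rng = np.random.default_rng(1)
frames = rng.random((150, 84, 84))                               # stand-in rt-MRI frames
scores = region_pca_scores(frames, slice(30, 60), slice(30, 60))
print(scores.shape)                                              # (150, 3)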
“…Generalized additive mixed models (GAMMs; Wood, 2004, 2006a) are an extension of GAMs as mixed models, in which random effects are estimated from a GAM by computing the variances of the so-called 'wiggly' components of the smooth terms (i.e., the degree of smoothness of the terms). GAMMs have previously been used to investigate speech production over time (Baayen, Vasishth, Kliegl, & Bates, 2017; Kirkham, Nance, Littlewood, Lightfoot, & Groarke, 2019; Mielke, Carignan, & Thomas, 2017; Sóskuthy, 2017; Wieling et al., 2016; Winter & Wieling, 2016) and space (Barlaz et al., 2018; Wieling, 2018), to observe the effects of word frequency and lexical proficiency on articulation (Tomaschek, Tucker, Fasiolo, & Baayen, 2018), and to model spatio-temporal relations in flesh-point kinematics (Tomaschek, Arnold, Bröker, & Baayen, 2018). One distinct advantage of employing GAMMs for speech articulation research is that they can capture the interaction effects of two different continuous variables (such as time and space), using tensor product interaction, which allows the smooth coefficients for one variable to vary in a non-linear fashion depending on the value of the other variable (Wieling, 2018, p. 102).…”
Section: Generalized Additive Mixed Models and Functional Linear Mixed Models (mentioning)
confidence: 99%
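The tensor-product idea in the excerpt above can be sketched in code. The example below uses pygam's te() term to let a smooth over (normalized) time vary non-linearly with a (normalized) spatial coordinate. It is a simplified, assumption-based illustration on synthetic data; it omits the random-effects structure that makes a full GAMM a mixed model, and it is not the cited authors' analysis (such models are typically fit with mgcv in R).

import numpy as np
from pygam import LinearGAM, te

rng = np.random.default_rng(2)
n = 1000
time = rng.uniform(0, 1, n)       # normalized time within a token (assumed layout)
space = rng.uniform(0, 1, n)      # normalized position along a measurement grid
# Synthetic articulatory signal whose time course changes with position:
y = np.sin(2 * np.pi * time * (1 + space)) + rng.normal(0, 0.1, n)

X = np.column_stack([time, space])
# te(0, 1): a tensor-product smooth over columns 0 (time) and 1 (space), so the
# shape of the time smooth is allowed to differ non-linearly across positions.
gam = LinearGAM(te(0, 1)).fit(X, y)
gam.summary()                     # prints effective degrees of freedom and fit statistics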