Bodomo (1997) describes Dagaare (Gur; Ghana) as having a single low vowel, [a], which is neutral to ATR harmony. This paper presents acoustic data from a study of Dagaare <a> that are inconsistent with this description. A list of sentences was elicited from five native speakers of Dagaare. Each sentence contained <a> in one of four verbal particles, situated in one of four contexts: ATR _ ATR, ATR _ RTR, RTR _ ATR, and RTR _ RTR. Formants of the low vowel were measured and compared across contexts. Results showed a substantial, significant difference in F1 values, and a smaller but still significant difference in F2 values, between contexts where <a> is followed by an ATR word and those where it is followed by an RTR word. All speakers and all particles showed the same pattern. We conclude that, contrary to previous claims, the Dagaare low vowel is not neutral to harmony, but has acoustically distinct variants in RTR versus ATR contexts. Bodomo, A. (1997). The Structure of Dagaare. Stanford, CA: CSLI Publications. [Funded by SSHRC.]
Background/Aims: In sung music, both the melody and the language place constraints on fundamental frequency (F0). Composers setting words of tone languages to music are known to do so in a way that reflects tone height but fails to include tone contour. This study tests whether choral singers add linguistic tone-contour information to an unfamiliar song by examining whether Cantonese singers produce microtonal variation. Methods: Twelve native Cantonese-speaking non-professional choral singers learned and sang a novel song in Cantonese containing a minimal set of the Cantonese tones, to probe whether everyday singers add in missing contour information. Results: Cantonese singers add a rising F0 contour of less than a semitone when singing syllables with lexical rising tones. This microtonal variation is not observed when singing in a lower register. Conclusion: Cantonese singers use microtonal contours to reflect the rising contours of Cantonese linguistic tones.
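The abstract quantifies the rising contour in semitones. The standard conversion between two F0 values and an equal-tempered semitone interval is 12·log2(f_end/f_start); a minimal sketch (the specific Hz values below are illustrative, not from the study):

```python
import math

def semitones(f0_start: float, f0_end: float) -> float:
    """Interval between two F0 values in equal-tempered semitones."""
    return 12 * math.log2(f0_end / f0_start)

# An F0 rise from 200 Hz to 208 Hz is about 0.68 semitones --
# under one semitone, on the order of the microtonal contours at issue.
print(round(semitones(200.0, 208.0), 2))  # → 0.68
```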
Studies relating dental anomalies to misarticulations have noted that potential correlations appear to be obscured by articulatory compensation. Accommodation of tongue or mandible position can help even individuals with severe malocclusion approximate perceptually typical speech [Johnson and Sandy, Angle Orthod. 69, 306-310 (1999)]. However, associations between malocclusion and articulation could surface under acoustic analysis. The present study investigates acoustic correlates of Cantonese speech in relation to degree of overjet (horizontal overlap of the upper and lower incisors). Production data were collected from native Cantonese-speaking adults, targeting the vowels /i, u, a/ and the fricatives /f, s, ts, tsh/, previously found to be vulnerable phonemes in Cantonese speakers with dentofacial abnormalities [Whitehill et al., J. Med. Speech Lang. Pathol. 9, 177-190 (2001)]. Measures of dental overjet and language background were also included. Preliminary results from trained listeners show that productions were perceptually typical. Acoustic analysis consisted of spectral moments for the fricatives and formant values for the vowels. The results improve our understanding of the relationship between malocclusion, compensation, and speech production in non-clinical populations.
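The fricative analysis above relies on spectral moments. The study's exact analysis settings are not given; the following is a minimal sketch of the standard computation, which treats the power spectrum as a weight distribution over frequency and takes its mean (centre of gravity), standard deviation, skewness, and excess kurtosis:

```python
import numpy as np

def spectral_moments(freqs, power):
    """First four spectral moments of a power spectrum.

    Treats the non-negative power values as weights over frequency,
    following the usual phonetic definition of spectral moments.
    """
    freqs = np.asarray(freqs, dtype=float)
    p = np.asarray(power, dtype=float)
    w = p / p.sum()                       # normalise to a distribution
    cog = float(np.sum(freqs * w))        # centre of gravity (mean)
    sd = float(np.sqrt(np.sum(w * (freqs - cog) ** 2)))
    skew = float(np.sum(w * (freqs - cog) ** 3) / sd ** 3)
    kurt = float(np.sum(w * (freqs - cog) ** 4) / sd ** 4 - 3)  # excess
    return cog, sd, skew, kurt
```

For a spectrum symmetric about its centre, skewness is zero; front fricatives like /s/ typically show a high centre of gravity, which is what makes these measures sensitive to place of articulation.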
This study contrasts different instructional reinforcements in the teaching of phonetics, i.e., learning tasks that supplement a classroom lecture on a phonetic contrast. A total of 152 introductory linguistics students were split into four groups, each of which received the same lecture but a different instructional reinforcement: (1) a baseline textbook-style handout explaining the contrast; (2) classroom production practice, repeating after an instructor in unison; (3) pairwise production practice, in which students practice contrasts and give each other feedback; and (4) watching enhanced ultrasound videos illustrating the contrast [1]. Students were given a quiz evaluating their comprehension of the places of articulation and their perception of the contrast immediately after the activities and again one week later. We found no large differences between the groups. While phonetics learning is argued to be improved through student engagement [2, 3, 4], interactivity [5], and pairwise practice [6], group 4 received none of these but nevertheless performed as well as the other groups. We conclude that reinforcement using non-interactive enhanced ultrasound videos can be as effective as traditional classroom reinforcements at teaching phonetic contrasts.
Smiling during speech places concurrent and often conflicting demands on the articulators. Thus, speaking while smiling may be modeled as a type of coarticulation. This study explores whether a context-invariant or a context-sensitive model of coarticulation better accounts for the variation seen in smiled versus neutral speech. While context-sensitive models assume some mechanism for planning coarticulatory interactions [see Munhall et al., 2000, Lab Phon. V, 9-28], the simplest context-invariant models treat coarticulation as superposition [e.g., Joos, 1948, Language 24, 5-136]. In such a model, the intrinsic biomechanics of the body have been argued to account for many of the complex kinematic interactions associated with coarticulation [Gick et al., 2013, POMA 19, 060207]. Largely following the methods described in Fagel [2010, Dev. Multimod. Interf. 5967, 294-303], we examine articulatory variation in smiled versus neutral speech to test whether the local interactions of smiling and speech can be resolved in a context-invariant superposition model. Production results will be modeled using the ArtiSynth simulation platform (www.artisynth.org). Implications for theories of coarticulation will be discussed. [Research funded by NSERC.]
Smiling is a social signal that can be both seen and heard. Smiling can increase speech amplitude and raise F0 and formant frequencies. However, experimental research on the role of larynx height in smiled speech is limited. Twenty-one English speakers (6 male) repeated words in a carrier phrase with a neutral face or while smiling. Participants were recorded with audio, video, and laryngeal ultrasound. F0, F1, and F2 were extracted for the duration of the target vowels /i/, /u/, and /a/. Ultrasound images of laryngeal position were measured using optical flow. The laryngeal and acoustic data were analyzed in R with linear mixed models, with smiling condition, timepoint-in-vowel, and gender as fixed effects. There was a significant effect of timepoint-in-vowel on larynx height (raising towards the end of the vowel) and a significant smile-by-timepoint interaction (the larynx raised more at the end in the smiling condition). Acoustically, smiling led to significantly higher F0 across vowels, and to significantly higher F1 and F2 for /a/ but not /i/ or /u/. Smile-by-timepoint effects on F2 were significant for all three vowels, indicating that F2 trajectories differed across smile conditions. Results indicate that smiling has a consistent effect on larynx height and a variable effect on specific speech sounds.
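The abstract's models were fit in R; for readers who work in Python, a comparable specification can be written with statsmodels. This is a sketch only: the data below are simulated stand-ins (speaker random intercepts, a +15 Hz smile effect), not the study's recordings, and the model is reduced to a single fixed effect for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in data: 21 speakers x 2 conditions x 30 tokens each.
rows = []
for spk in range(21):
    spk_intercept = rng.normal(0, 5)      # random speaker intercept
    for smile in (0, 1):
        for _ in range(30):
            f0 = 180 + 15 * smile + spk_intercept + rng.normal(0, 5)
            rows.append({"speaker": spk, "smile": smile, "f0": f0})
df = pd.DataFrame(rows)

# Linear mixed model: smile condition as a fixed effect,
# random intercept per speaker (groups=...).
model = smf.mixedlm("f0 ~ smile", df, groups=df["speaker"])
result = model.fit()
print(result.params["smile"])  # estimated smile effect, near the true +15 Hz
```

The full analysis in the abstract would add timepoint-in-vowel and gender as further fixed effects (e.g., `f0 ~ smile * timepoint + gender`).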