Bodomo (1997) describes Dagaare (Gur; Ghana) as having a single low vowel, [a], which is neutral to ATR harmony. This paper presents acoustic data from a study of Dagaare <a> that is inconsistent with this description. A list of sentences was elicited from five native speakers of Dagaare. Each sentence contained <a> in one of four verbal particles situated in one of four contexts: ATR _ ATR, ATR _ RTR, RTR _ ATR, and RTR _ RTR. Formants of the low vowel were measured and compared across contexts. Results showed a substantial, significant difference in F1 values, and a smaller but still significant difference in F2 values, when <a> is followed by an ATR word compared with when it is followed by an RTR word. All speakers and all particles showed the same pattern. We conclude that, contrary to previous claims, the Dagaare low vowel is not neutral to harmony, but rather has acoustically distinct variants in RTR versus ATR contexts. Bodomo, A. (1997). The structure of Dagaare. Stanford, CA: CSLI Publications. [Funded by SSHRC.]
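The core comparison above, F1 of <a> before ATR versus RTR words, can be sketched in a few lines. The values below are hypothetical placeholders (the abstract reports no raw measurements); only the shape of the analysis is intended:

```python
import statistics

# Hypothetical F1 values (Hz) for the low vowel, by following-word
# context -- illustrative numbers, not the study's data.
f1_before_atr = [680, 672, 690, 665, 701]   # <a> followed by an ATR word
f1_before_rtr = [752, 748, 765, 739, 770]   # <a> followed by an RTR word

# Paired differences; a consistent sign across all pairs mirrors the
# "all speakers, same pattern" result reported in the abstract.
diffs = [r - a for a, r in zip(f1_before_atr, f1_before_rtr)]
print(statistics.mean(diffs), all(d > 0 for d in diffs))  # prints: 73.2 True
```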
Background/Aims: In sung music, both the melody and the language impose constraints on fundamental frequency (F0). Composers are known to set words of tone languages to music in a way that reflects tone height but not tone contour. This study tests whether choral singers add linguistic tone-contour information to an unfamiliar song by examining whether Cantonese singers make use of microtonal variation. Methods: Twelve native Cantonese-speaking non-professional choral singers learned and sang a novel song in Cantonese that included a minimal set of the Cantonese tones, to probe whether everyday singers add in missing contour information. Results: Cantonese singers add a rising F0 contour of less than a semitone when singing syllables with lexical rising tones. This microtonal variation is not observed when singing in a lower register. Conclusion: Cantonese singers use microtonal contours to reflect the rising contours of Cantonese linguistic tones.
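The sub-semitone ("microtonal") excursions reported here are naturally expressed with the standard semitone formula, 12 · log2(f_end / f_start). A minimal sketch (the function name and example frequencies are illustrative, not taken from the study):

```python
import math

def semitone_interval(f_start_hz: float, f_end_hz: float) -> float:
    """Return the interval from f_start to f_end in semitones
    (positive = rising), using the standard 12 * log2 ratio formula."""
    return 12.0 * math.log2(f_end_hz / f_start_hz)

# A rising-tone syllable whose F0 climbs from 220 Hz to 228 Hz rises by
# ~0.62 semitones: a microtonal (sub-semitone) excursion.
print(round(semitone_interval(220.0, 228.0), 2))  # prints: 0.62

# A full octave (frequency doubling) is exactly 12 semitones.
print(semitone_interval(220.0, 440.0))  # prints: 12.0
```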
Studies relating dental anomalies to misarticulations have noted that potential correlations appear to be obscured by articulatory compensation. Accommodation of tongue or mandible positions can help even individuals with severe malocclusion approximate perceptually typical speech [Johnson and Sandy, Angle Orthod. 69, 306–310 (1999)]. However, associations between malocclusion and articulation could surface if examined with acoustic analysis. The present study investigates the acoustic correlates of Cantonese speech as they relate to degree of overjet (horizontal overlap of the upper and lower incisors). Production data were collected from native Cantonese-speaking adults, targeting the vowels /i, u, a/ and the fricatives /f, s, ts, tsh/, previously found to be vulnerable phonemes in Cantonese speakers with dentofacial abnormalities [Whitehill et al., J. Med. Speech Lang. Pathol. 9, 177–190 (2001)]. Measures of dental overjet and language background were also collected. Preliminary results from trained listeners show that productions were perceptually typical. Acoustic analysis consisted of spectral moments for the fricatives and formant values for the vowels. The results improve our understanding of the relationship between malocclusion, compensation, and speech production in non-clinical populations.
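For readers unfamiliar with the fricative measure: spectral moments treat the normalized power spectrum as a probability distribution over frequency and take its first four moments. A minimal NumPy sketch (illustrative only, not the study's actual analysis pipeline):

```python
import numpy as np

def spectral_moments(freqs_hz, power):
    """First four spectral moments of a power spectrum, treating the
    normalized spectrum as a probability distribution over frequency."""
    f = np.asarray(freqs_hz, dtype=float)
    p = np.asarray(power, dtype=float)
    p = p / p.sum()                                    # normalize to sum to 1
    cog = np.sum(f * p)                                # 1st: center of gravity
    var = np.sum((f - cog) ** 2 * p)                   # 2nd central moment
    sd = np.sqrt(var)                                  # spectral SD
    skew = np.sum((f - cog) ** 3 * p) / sd ** 3        # 3rd: skewness
    kurt = np.sum((f - cog) ** 4 * p) / sd ** 4 - 3.0  # 4th: excess kurtosis
    return cog, sd, skew, kurt
```

A spectrum that is symmetric about its center of gravity has zero skewness; /s/-like fricatives typically show a high center of gravity, while /f/-like ones show a flatter, lower-frequency spectrum.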
This study contrasts different instructional reinforcements in the teaching of phonetics, i.e., learning tasks that supplement a classroom lecture on a phonetic contrast. A total of 152 introductory linguistics students were split into four groups, each of which received the same lecture but a different instructional reinforcement: (1) a baseline textbook-style handout explaining the contrast, (2) classroom production practice, repeating after an instructor in unison, (3) pairwise production practice, in which students practice contrasts and give each other feedback, and (4) watching enhanced ultrasound videos illustrating the contrast [1]. Students were given a quiz evaluating their comprehension of the places of articulation and their perception of the contrast immediately after the activities and again one week later. We found no large differences between the groups. While phonetics learning is argued to be improved through student engagement [2, 3, 4], interactivity [5], and pairwise practice [6], group 4 received none of these but nevertheless performed as well as the other groups. We conclude that reinforcement using non-interactive enhanced ultrasound videos can be as effective as traditional classroom reinforcements at teaching phonetic contrasts.
Previous research on multimodal speech perception in hearing-impaired individuals has focused on audiovisual integration, with mixed results. Cochlear-implant users integrate audiovisual cues better than perceivers with normal hearing when perceiving congruent cross-modal cues [Rouger et al. 2007, PNAS, 104(17), 7295–7300] but not incongruent ones [Rouger et al. 2008, Brain Research, 1188, 87–99], leading to the suggestion that early auditory exposure is required for typical speech integration processes to develop [Schorr 2005, PNAS, 102(51), 18748–18750]. If a deficit in one modality does indeed lead to a deficit in multimodal processing, then hard-of-hearing perceivers should show different patterns of integration in other modality pairings. The current study builds on research showing that gentle puffs of air on the skin can push individuals with normal hearing to perceive silent bilabial articulations as aspirated. We report on a visual-aerotactile perception task comparing individuals with congenital hearing loss to those with normal hearing. Results indicate that aerotactile information facilitated identification of /pa/ for all participants (p < 0.001), and we found no significant difference between the two groups (normal hearing and congenital hearing loss). This suggests that typical multimodal speech perception does not require access to all modalities from birth. [Funded by NIH.]
Articulatory settings, language-specific default postures of the speech articulators, have been difficult to distinguish from segmental speech content [see Gick et al. 2004, Phonetica 61, 220–233]. The simplest construal of articulatory setting is as a constantly maintained set of tonic muscle activations that coarticulates globally with all segmental content. In his early Overlapping Innervation Wave theory, Joos [1948, Language Monogr. 23] postulated that all coarticulation can be understood as simple overlap, or superposition [Bizzi et al. 1991, Science 253, 287–291], of muscle activation patterns. The present paper describes an implementation of Joos’ proposals within a modular neuromuscular framework [see Gick & Stavness 2013, Front. Psych. 4, 977]. Results of a simulation and perception study will be reported in which muscle activations corresponding to English-like and French-like articulatory settings are simulated and superposed on activations for language-neutral vowels using the ArtiSynth biomechanical modeling toolset (www.artisynth.org). Simulated visible and acoustic outputs presented to perceivers familiar with both languages speak to the question of whether overlapping muscle activations generate outputs that look and sound language-appropriate to perceivers, testing a unified, context-independent model for both coarticulation and articulatory setting. [Research funded by NIH Grant DC-02717 and NSERC.]