Although formulated by Weinreich, Labov, and Herzog in 1968, the actuation problem has remained an unsolved problem in understanding sound change: if sound change is conceived as the accumulation of coarticulation, and coarticulation is widespread, how can some speech communities resist phonetic pressure to change? We present data from American English s-retraction that suggest a partial solution. S-retraction is the phenomenon in which /s/ is realized as an [ʃ]-like sound, especially when it occurs in an /stɹ/ cluster (‘street’ pronounced more like [ʃtɹit] than like [stɹit]). The speech of English speakers judged not to exhibit s-retraction shows a large coarticulatory bias in the direction of retraction. Further, there is also substantial interspeaker variation in the extent of this bias. We propose that this interspeaker variation, coupled with the coarticulatory bias, facilitates the initiation of sound change. In this account, sound change begins when a listener accidentally interprets an extreme case of a phonetic effect as an articulatory target and then adjusts her own speech in response. This adoption of a new target requires phonetic variation that predates the change. Thus, sound change is predicted to be biased toward phonetic effects that exhibit interspeaker variability, and if sound change requires an accident that is rare, then sound change itself is correctly predicted to be rare as well.
Distinctive feature theory is an effort to identify the phonetic dimensions that are important for lexical contrasts and phonological patterns in human languages. The set of features and its explanatory role have both expanded over the years, with features being used to define not only the contrasts but also the groupings of sounds involved in rules and phonotactic restrictions, as well as the changes involved in rules. Distinctive features have been used to account for a wide range of phonological phenomena, and this chapter surveys the incremental steps by which the feature model has changed, along with some of the evidence for these steps. An important point is that many of the steps involve non‐obvious connections, something that is harder to see in hindsight. Recognizing that these steps are not obvious is important in order to appreciate the insights gained in the history of distinctive feature theory, and to see that these claims are associated with differing degrees of evidence, despite often being assumed to be correct.
We compare the complexity of idiosyncratic sound patterns involving American English /ɹ/ with the relative simplicity of clear/dark /l/-allophony patterns found in English and other languages. For /ɹ/, we report an ultrasound-based articulatory study of twenty-seven speakers of American English. Two speakers use only retroflex /ɹ/, sixteen use only bunched /ɹ/, and nine use both /ɹ/ types, with idiosyncratic allophonic distributions. These allophony patterns are covert, because the difference between bunched and retroflex /ɹ/ is not readily perceived by listeners. We compare this typology of /ɹ/-allophony patterns to clear/dark /l/-allophony patterns in seventeen languages. On the basis of the observed patterns, we show that individual-level /ɹ/ allophony and language-level /l/ allophony exhibit similar phonetic grounding, but that /ɹ/-allophony patterns are considerably more complex. The low complexity of language-level /l/-allophony patterns, which are more readily perceived by listeners, is argued to be the result of individual-level contact in the development of sound patterns. More generally, we argue that familiar phonological patterns (which are relatively simple and homogeneous within communities) may arise from individual-level articulatory patterns, which may be complex and speaker-specific, by a process of koineization. We conclude that two classic properties of phonological rules, phonetic naturalness and simplicity, arise from different sources.
Ambivalent segments are speech sounds whose cross-linguistic patterning is especially variable, creating contradictions for theories of universal distinctive features. This paper examines lateral liquids, whose [continuant] specification has been the subject of controversy because of their ability to pattern both with continuants and with non-continuants, and because phonetically they are situated in the contested ground between two different articulatory definitions for the feature [continuant]. Evidence from a survey of sound patterns in 561 languages shows that lateral liquids, like nasals, pattern with continuants about as often as with non-continuants. Ambivalent phonological behaviour is argued to be natural and expected for phonetically ambiguous segments in a theory of emergent distinctive features where features are the result of sound patterns, rather than the other way around.
It is likely that generalization of implicitly learned sound patterns to novel words and sounds is structured by a similarity metric, but how may this metric best be captured? We report on an experiment where participants were exposed to an artificial phonology, and frequency ratings were used to probe implicit abstraction of onset statistics. Non-words bearing an onset that was presented during initial exposure were subsequently rated most frequent, indicating that participants generalized onset statistics to new non-words. Participants also rated non-words with untrained onsets as somewhat frequent, indicating generalization to onsets that had not been used during the exposure phase. While generalization could be accounted for in terms of featural distance, it was insensitive to natural class structure. Generalization to untrained sounds was predicted better by models requiring prior linguistic knowledge (either traditional distinctive features or articulatory phonetic information) than by a model based on a linguistically naïve measure of acoustic similarity.
Introduction: Patients with dentofacial disharmonies (DFDs) seek orthodontic care and orthognathic surgery to address issues with mastication, esthetics, and speech. Speech distortions are seen 18 times more frequently in Class III DFD patients than in the general population, with unclear causality. We hypothesize that there are significant differences in the spectral properties of stop (/t/ or /k/), fricative (/s/ or /ʃ/), and affricate (/tʃ/) consonants, and that the severity of Class III disharmony correlates with the degree of speech abnormality. Methods: To understand how jaw disharmonies influence speech, orthodontic records and audio recordings were collected from Class III surgical candidates and reference subjects (n = 102 Class III, 62 controls). A speech pathologist evaluated the subjects, and the recordings were quantitatively analysed by Spectral Moment Analysis for frequency distortions. Results: A majority of Class III subjects exhibited speech distortions. A significant increase in centroid frequency (M1) and spectral spread (M2) was seen in several consonants of Class III subjects compared to controls. Using regression analysis, correlations between Class III skeletal severity (assessed by cephalometric measures) and spectral distortion were found for the /t/ and /k/ phones. Conclusions: Class III DFD patients have a higher prevalence of articulation errors and significant spectral distortions in consonants relative to controls. This is the first demonstration that severity of malocclusion is quantitatively correlated with the degree of speech distortion for consonants, suggesting causation. These findings offer insight into the complex relationship between craniofacial structures and speech distortions.
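The spectral moments named in this abstract, centroid frequency (M1) and spectral spread (M2), are standard descriptive statistics of a consonant's spectrum. As an illustration only (the function name and parameters below are hypothetical, not from the study), a minimal sketch treats the windowed magnitude spectrum as a probability distribution and takes its mean and standard deviation over frequency:

```python
import numpy as np

def spectral_moments(signal, sr):
    """Compute M1 (centroid, Hz) and M2 (spread, Hz) of a consonant token.

    The Hamming-windowed magnitude spectrum is normalized to sum to 1
    and treated as a probability distribution over frequency.
    """
    spectrum = np.abs(np.fft.rfft(signal * np.hamming(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    p = spectrum / spectrum.sum()                 # normalize to a distribution
    m1 = np.sum(freqs * p)                        # first moment: centroid
    m2 = np.sqrt(np.sum((freqs - m1) ** 2 * p))   # second moment: spread
    return m1, m2
```

A fronted articulation such as a distorted /s/ would typically shift energy, and hence M1, upward; M2 grows as energy becomes more diffusely distributed across frequencies.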
Most dialects of North American English exhibit /æ/-raising in some phonological contexts. Both the conditioning environments and the temporal dynamics of the raising vary from region to region. To explore the articulatory basis of /æ/-raising across North American English dialects, acoustic and articulatory data were collected from a regionally diverse group of 24 English speakers from the United States, Canada, and the United Kingdom. A method for examining the temporal dynamics of speech directly from ultrasound video using EigenTongues decomposition [Hueber, Aversano, Chollet, Denby, Dreyfus, Oussar, Roussel, and Stone (2007). in IEEE International Conference on Acoustics, Speech and Signal Processing (Cascadilla, Honolulu, HI)] was applied to extract principal components of filtered images, and linear regression was used to relate articulatory variation to its acoustic consequences. This technique was used to investigate the tongue movements involved in /æ/ production, in order to compare the tongue gestures involved in the various /æ/-raising patterns, and to relate them to their apparent phonetic motivations (nasalization, voicing, and tongue position).
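The EigenTongues approach is, at its core, principal component analysis over vectorized ultrasound frames, with the resulting component scores then regressed against acoustic measures. A minimal sketch of that pipeline, under the assumption of pre-filtered, flattened frames (function names and the choice of SVD-based PCA are illustrative, not the authors' implementation):

```python
import numpy as np

def eigentongue_scores(frames, k=5):
    """Project flattened ultrasound frames onto their top-k principal
    components (an EigenTongues-style decomposition).

    frames: (n_frames, n_pixels) array of filtered, vectorized images.
    Returns (scores, components), with scores of shape (n_frames, k).
    """
    centered = frames - frames.mean(axis=0)
    # SVD of the centered data; rows of vt are principal components
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]
    scores = centered @ components.T
    return scores, components

def fit_acoustic_model(scores, acoustics):
    """Least-squares regression (with intercept) from articulatory PC
    scores to an acoustic measure, e.g. a formant frequency."""
    X = np.column_stack([np.ones(len(scores)), scores])
    coefs, *_ = np.linalg.lstsq(X, acoustics, rcond=None)
    return coefs
```

Each frame is thus summarized by a handful of scores tracking tongue-shape variation over time, and the regression step estimates how much of the acoustic variation (e.g., in formants) those articulatory components explain.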