In this paper, I present an analysis of the typology of laryngeal co-occurrence restrictions based on contrast markedness. The key ingredient of the analysis, for which I provide experimental support, is that laryngeal co-occurrence phenomena reflect a preference for maximising the perceptual distinctness of contrasts between words (Flemming 1995, 2004). An AX discrimination task finds that the contrast between an ejective and a plain stop is less accurately perceived in the context of another ejective in the word than in the context of another plain stop in the word. Pairs of words like [k'ap'i] and [k'api], which contrast two vs. one ejectives, are less reliably distinguished than pairs of words like [kap'i] and [kapi], which contrast one vs. zero ejectives. The unifying factor of all laryngeal co-occurrence patterns is the neutralisation of the contrast between words with one and two laryngeally marked segments, exactly the contrast that is shown to be relatively perceptually weak.
This paper argues that long-distance assimilations between consonants come in two varieties: total identity, which arises via a non-local relation between the interacting segments; and partial identity, which results from local articulatory spreading through intervening segments (Flemming 1995; Gafos 1999). Our proposal differs from previous analyses (Hansson 2001; Rose and Walker 2004) in that only total identity is a non-local phenomenon. While non-adjacent consonants may interact via a relation we call linking, the only requirement which may be placed on linked consonants is total identity. All single-feature identities are the result of local spreading. The interaction of a total identity requirement on ejectives and stridents with anteriority harmony in Chol (Mayan) highlights the distinction between these two types of long-distance phenomena. We show that theories that allow non-local, single-feature agreement make undesirable predictions, and that the more restrictive typology predicted by our framework is supported by the vast majority of long-distance assimilation cases.
The results of two artificial grammar experiments show that individuals learn a distinction between identical and non-identical consonant pairs better than an arbitrary distinction, and that they generalise the distinction to novel segmental pairs. These results have implications for inductive models of learning, because they necessitate an explicit representation of identity. While identity has previously been represented as root-node sharing in autosegmental representations (Goldsmith 1976; McCarthy 1986) or implicitly assumed to be a property that constraints can reference (MacEachern 1999; Coetzee & Pater 2008), the model of inductive learning proposed by Hayes & Wilson (2008) assumes strictly feature-based representations, and is unable to reference identity directly. This paper explores the predictions of the Hayes & Wilson model and compares it to a modification of the model where identity is represented (Colavin et al. 2010). The results of both experiments support a model incorporating direct reference to identity.
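The representational point above can be illustrated with a minimal Python sketch. This is not the authors' implementation; the feature sets are hypothetical and deliberately tiny. It shows why a model restricted to feature-based descriptions of consonant pairs cannot isolate "identical pairs" as a class: the features shared by [p]…[p] are the same as those shared by [p]…[b], so only an explicit identity predicate separates them.

```python
# Hypothetical feature specifications for a few stops (illustrative only).
FEATURES = {
    "p": {"labial", "stop"},
    "b": {"labial", "stop", "voiced"},
    "t": {"coronal", "stop"},
    "d": {"coronal", "stop", "voiced"},
}

def shared_features(c1: str, c2: str) -> frozenset:
    """All a strictly feature-based model can 'see' of a consonant pair."""
    return frozenset(FEATURES[c1] & FEATURES[c2])

def is_identical(c1: str, c2: str) -> bool:
    """The extra predicate available to a model that represents identity."""
    return c1 == c2

# [p]...[p] and [p]...[b] share exactly the same feature description,
# so a feature-only model cannot treat the identical pair specially:
print(shared_features("p", "p") == shared_features("p", "b"))  # True
# An identity predicate distinguishes them directly:
print(is_identical("p", "p"), is_identical("p", "b"))          # True False
```

The same collapse happens for any voiced/voiceless pair in the toy inventory (e.g. [t]…[t] vs [t]…[d]), which is the sense in which identity must be represented explicitly rather than derived from feature overlap.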
Speakers judge novel strings to be better potential words of their language if those strings consist of sound sequences that are attested in the language. These intuitions are often generalized to new sequences that share some properties with attested ones: Participants exposed to an artificial language where all words start with the voiced stops [b] and [d] will prefer words that start with other voiced stops (e.g., [g]) to words that start with vowels or nasals. The current study tracks the evolution of generalization across sounds during the early stages of artificial language learning. In Experiments 1 and 2, participants received varying amounts of exposure to an artificial language. Learners rapidly generalized to new sounds: In fact, following short exposure to the language, attested patterns were not distinguished from unattested patterns that were similar in their phonological properties to the attested ones. Following additional exposure, participants showed an increasing preference for attested sounds, alongside sustained generalization to unattested ones. Finally, Experiment 3 tested whether participants can rapidly generalize to new sounds based on a single type of sound. We discuss the implications of our results for computational models of phonotactic learning.