Abstract: This article reports on an experiment with miniature artificial languages that provides support for a synthesis of ideas from USAGE-BASED PHONOLOGY (Bybee 1985, 2001, Nesset 2008) and HARMONIC GRAMMAR (Legendre et al. 1990, Smolensky & Legendre 2006). All miniature artificial languages presented to subjects feature velar palatalization (k → tʃ) before a plural suffix, -i. I show that (i) examples of -i simply attaching to a [tʃ]-final stem help palatalization (supporting t → tʃi over t → ti and p → …
“…This is not an uncontroversial choice and we do not wish to claim that logistic regression is necessarily the best framework for formulating grammars across the board, either in terms of E-Language or I-Language. Conditional inference trees in particular remain a serious contender (see Eddington 2010; Kapatsinski 2013a, 2013b; Strobl et al. 2008; Tagliamonte and Baayen 2012 for a discussion of advantages and disadvantages). Multimodel inference (broadly speaking) is available for conditional inference trees in the form of random forests (Breiman 2001; Strobl et al. 2008).…”
A multimodel inference approach to categorical variant choice: construction, priming and frequency effects on the choice between full and contracted forms of am, are and is

Abstract: The present paper presents a multimodel inference approach to linguistic variation, expanding on prior work by Kuperman and Bresnan (2012). We argue that corpus data often present the analyst with high model selection uncertainty. This uncertainty is inevitable given that language is highly redundant: every feature is predictable from multiple other features. However, uncertainty involved in model selection is ignored by the standard method of selecting the single best model and inferring the effects of the predictors under the assumption that the best model is true. Multimodel inference avoids committing to a single model. Rather, we make predictions based on the entire set of plausible models, with contributions of models weighted by the models' predictive value. We argue that multimodel inference is superior to model selection for both the I-Language goal of inferring the mental grammars that generated the corpus, and the E-Language goal of predicting characteristics of future speech samples from the community represented by the corpus. Applying multimodel inference to the classic problem of English auxiliary contraction, we show that the choice between multimodel inference and model selection matters in practice: the best model may contain predictors that are not significant when the full set of plausible models is considered, and may omit predictors that are significant considering the full set of models. We also contribute to the study of English auxiliary contraction. We document the effects of priming, contextual predictability, and specific syntactic constructions and provide evidence against effects of phonological context.
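The abstract's core idea, averaging predictions over a set of plausible models weighted by their predictive value, can be sketched with Akaike weights, a standard way of weighting candidate models by relative evidence. The data, model set, and predictor names below are entirely hypothetical illustrations, not the authors' actual dataset or analysis; the sketch only shows the mechanics of fitting nested logistic regressions, converting AIC differences to weights, and producing a model-averaged prediction.

```python
import numpy as np

# Hypothetical toy data: binary outcome (e.g., contracted vs. full form)
# with two illustrative predictors (names are placeholders).
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)                      # stand-in for, e.g., log frequency
x2 = rng.normal(size=n)                      # stand-in for a second predictor
true_logit = 0.8 * x1 + 0.1 * x2
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

def fit_logistic(X, y, steps=3000, lr=0.1):
    """Fit logistic regression by gradient ascent on the log-likelihood;
    return coefficients and the maximized log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)     # gradient of log-likelihood
    p = 1 / (1 + np.exp(-X @ w))
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return w, ll

ones = np.ones((n, 1))
# Candidate model set: intercept-only, +x1, +x1+x2
designs = [ones,
           np.column_stack([ones, x1]),
           np.column_stack([ones, x1, x2])]
fits = [fit_logistic(X, y) for X in designs]

# AIC = 2k - 2*logL; Akaike weights give each model's share of the evidence.
aic = np.array([2 * X.shape[1] - 2 * ll for X, (_, ll) in zip(designs, fits)])
delta = aic - aic.min()
weights = np.exp(-delta / 2)
weights /= weights.sum()

# Model-averaged prediction for one new observation (x1 = 1.0, x2 = 0.5):
# each model predicts from the predictors it contains, then predictions
# are combined using the Akaike weights.
new_rows = [np.array([1.0]), np.array([1.0, 1.0]), np.array([1.0, 1.0, 0.5])]
preds = [1 / (1 + np.exp(-(v @ w))) for v, (w, _) in zip(new_rows, fits)]
p_avg = float(np.dot(weights, preds))
```

The contrast with model selection is visible in the last step: selection would report only the prediction of the lowest-AIC model, whereas the averaged prediction lets every plausible model contribute in proportion to its weight.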
“…matter of fact, recent psycholinguistic studies support the idea that learners do not exclusively follow source-oriented (IP) rules in the production of morphologically complex words (cf. Kapatsinski 2012, 2013).…”
This article deals with the acquisition of the German plural system. It raises the question of how morphologically complex words are represented in the mental grammar and in the lexicon of children, and how this representation emerges.

There are several theoretical accounts dealing with this question. These accounts are basically of two kinds. One approach models the German number system as rule-based; i.e., source-oriented rules are postulated that operate on the singular form of the noun. The second approach is schema-based. Essential to this approach is the idea that speakers form the plural of a given noun according to prototypical plural shapes. Empirical evidence can be found for both approaches, but neither of them seems to be able to fully explain acquisitional paths on its own.

On the basis of the analysis of acquisitional data, this article argues for an expanded schema account that embraces both source- and product-oriented mechanisms. We propose an acquisition model according to which learners start out by storing plural forms holistically in an associative network; then they abstract product-oriented schemas from these stored forms that focus on the typical gestalts of German plural forms. In a last step, they establish source-oriented schemas that relate singular schemas with plural schemas.

The data for this study were gathered in a nonce word elicitation experiment from children aged 6 to 10 learning German either as their native or second language. In the latter case, the children's L1 was either Russian or Turkish.
“…They may well generalize from their knowledge of English to the miniature artificial language they are exposed to and, perhaps, even impose English patterns on the language (e.g., Finn & Hudson Kam, 2008). Using native English speakers as the study population allows us to compare results to perceptual data from Guion (1998) and to previous results on palatalization learning obtained by Wilson (2006), Kapatsinski (2012, 2013), and Stave et al. (2013). However, it also leaves the observed biases susceptible to explanations based on first language phonological experience rather than differences in change magnitudes (see also Skoruppa et al., 2011, for similar concerns regarding their alternations; though cf.…
Section: Limitations and Future Directions
Smolek, A. and Kapatsinski, V. 2018. What happens to large changes? Saltation produces well-liked outputs that are hard to generate. Laboratory Phonology: Journal of the Association for Laboratory Phonology 9(1): 10, pp. 1–27. DOI: https://doi.org/10.5334/labphon.93

Recent research has argued that saltation is diachronically unstable and documented one possible cause of instability: Learners exposed to saltatory alternations may overgeneralize them to intermediate sounds. However, this research has trained participants to criterion or excluded participants who did not reach criterion accuracy on familiar sounds. In first language acquisition, learners of languages with saltatory patterns cannot hope to receive more exposure to the pattern than those learning non-saltatory patterns. For this reason, we examined learning of saltatory and non-saltatory patterns after a constant amount of training. We compared saltatory labial palatalization to non-saltatory alveolar and velar palatalization. Participants showed overgeneralization of saltatory palatalization in a judgment task. However, saltatory alternations did not result in increased rates of palatalizing similar sounds, compared to non-saltatory alternations. Instead, saltatory alternations were less likely to be produced than non-saltatory alternations. These results suggest that large, saltatory alternations may be diachronically unstable because they are harder to (learn to) produce. Instead of being overgeneralized to intermediate sounds, saltatory alternations may disappear from the language by losing productivity and being replaced with faithful mappings.