Are morphological patterns learned in the form of rules? Some models deny this, attributing all morphology to analogical mechanisms. The dual mechanism model (Pinker, S., & Prince, A. (1988). On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28, 73–193) posits that speakers do internalize rules, but that these rules are few and cover only regular processes; the remaining patterns are attributed to analogy. This article advocates a third approach, which uses multiple stochastic rules and no analogy. We propose a model that employs inductive learning to discover multiple rules, and assigns them confidence scores based on their performance in the lexicon. Our model is supported over the two alternatives by new "wug test" data on English past tenses, which show that participant ratings of novel pasts depend on the phonological shape of the stem, both for irregulars and, surprisingly, also for regulars. The latter observation cannot be explained under the dual mechanism approach, which derives all regulars with a single rule. To evaluate the alternative hypothesis that all morphology is analogical, we implemented a purely analogical model, which evaluates novel pasts based solely on their similarity to existing verbs. Tested against experimental data, this analogical model also failed in key respects: it could not locate patterns that require abstract structural characterizations, and it favored implausible responses based on single, highly similar exemplars. We conclude that speakers extend morphological patterns based on abstract structural properties, of a kind appropriately described with rules.
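The confidence scoring mentioned above can be illustrated with a short sketch. The intuition is that a rule's raw reliability (hits divided by scope in the lexicon) is adjusted downward when its scope is small, so that a rule that succeeds 9 times out of 10 can outrank one that succeeds 2 times out of 2. The smoothing constants and the z value below are illustrative assumptions, not the published parameters.

```python
import math

def confidence(hits, scope, z=0.75):
    """Lower confidence limit on a rule's reliability (hits/scope).

    Raw reliability is first smoothed, then penalized by a fraction
    of its standard error, so rules supported by few lexical items
    receive lower scores than equally reliable rules with wide scope.
    """
    if scope == 0:
        return 0.0
    p = (hits + 0.5) / (scope + 1.0)          # smoothed reliability
    se = math.sqrt(p * (1.0 - p) / scope)     # standard error of p
    return max(0.0, p - z * se)               # penalized lower bound
```

On this scoring, `confidence(9, 10)` exceeds `confidence(2, 2)`: the broader rule wins despite its one exception, which is the behavior the model needs in order to weigh islands of reliability against their breadth of support.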
Phonological judgements are often gradient: blick > ?bwick > *bnick > **bzick. The mechanisms behind gradient generalisation remain controversial, however. This paper tests the role of phonological features in helping speakers evaluate which novel combinations receive greater lexical support. A model is proposed in which the acceptability of a string is based on the most probable combination of natural classes that it instantiates. The model is tested on its ability to predict acceptability ratings of nonce words, and its predictions are compared against those of models that lack features or economise on feature specifications. The proposed model achieves the best balance of performance on attested and unattested sequences, and is a significant predictor of acceptability even after the other models are factored out. The feature-based model's predictions do not completely subsume those of simpler models, however. This may indicate multiple levels of evaluation, involving segment-based phonotactic probability and feature-based gradient phonological grammaticality.
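One way to picture "the most probable combination of natural classes that a string instantiates" is a toy scorer like the following. The classes, the miniature lexicon, and the support measure are invented for illustration and greatly simplify the actual model; the key property preserved is that broad classes gain from matching many lexical items but pay for their breadth, yielding gradient scores for unattested clusters.

```python
from itertools import product

# Illustrative natural classes over a toy segment inventory (assumed data).
CLASSES = {
    "stop":      set("ptkbdg"),
    "nasal":     set("mn"),
    "liquid":    set("lr"),
    "obstruent": set("ptkbdgsz"),
    "fricative": set("szf"),
}

# Toy lexicon of attested word-initial two-segment clusters.
ONSETS = ["bl", "br", "pl", "pr", "kl", "kr", "sn", "sm", "st", "sp"]

def score(onset):
    """Acceptability of a 2-segment onset: the best-supported pair of
    natural classes it instantiates.  Support = fraction of lexical
    onsets matched by the class pair, divided evenly among the segment
    combinations the classes license, so broader classes dilute their
    per-string support."""
    best = 0.0
    for n1, n2 in product(CLASSES, repeat=2):
        c1, c2 = CLASSES[n1], CLASSES[n2]
        if onset[0] not in c1 or onset[1] not in c2:
            continue
        matched = sum(1 for o in ONSETS if o[0] in c1 and o[1] in c2)
        support = matched / len(ONSETS) / (len(c1) * len(c2))
        best = max(best, support)
    return best
```

Even on this toy lexicon the scores come out gradient in the way the abstract describes: `score("bl") > score("bn") > score("bz")`, since "bn" is weakly licensed by the broad obstruent+nasal class pair (supported by "sn", "sm") while "bz" finds support only under the maximally broad obstruent+obstruent pair.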
We describe here a supervised learning model that, given paradigms of related words, learns the morphological and phonological rules needed to derive the paradigm. The model can use its rules to make guesses about how novel forms would be inflected, and has been tested experimentally against the intuitions of human speakers.
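A minimal sketch of how such rules might be induced from paradigms, in the spirit of minimal generalization: each (stem, past) pair is factored into a word-specific change-plus-context rule, and two rules sharing the same change are collapsed by keeping only their shared context. The string-based contexts below (rather than featural ones) are a deliberate simplification for illustration.

```python
import os

def common_suffix(a, b):
    """Longest shared suffix of two strings."""
    return os.path.commonprefix([a[::-1], b[::-1]])[::-1]

def factor(stem, past):
    """Factor a (stem, past) pair into a word-specific rule
    A -> B / C __ D by peeling off the shared left and right edges."""
    i = len(os.path.commonprefix([stem, past]))   # shared left context C
    d = common_suffix(stem[i:], past[i:])         # shared right context D
    j = len(d)
    return (stem[i:len(stem) - j] if j else stem[i:],
            past[i:len(past) - j] if j else past[i:],
            stem[:i], d)

def generalize(r1, r2):
    """Minimally generalize two rules with the same change: keep the
    shared right edge of the left context (the remainder becomes a
    free variable 'X') and the shared left edge of the right context."""
    A1, B1, C1, D1 = r1
    A2, B2, C2, D2 = r2
    if (A1, B1) != (A2, B2):
        return None                               # changes differ: no merge
    return (A1, B1, "X" + common_suffix(C1, C2),
            os.path.commonprefix([D1, D2]))
```

For example, factoring ("walk", "walked") and ("talk", "talked") and generalizing yields the rule ∅ → ed / Xalk __, while ("sing", "sang") and ("ring", "rang") yield i → a / X __ ng; scoring such rules for confidence against the lexicon, as in the model above, then lets them compete to inflect novel forms.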
This chapter shows that a confidence-based model can make correct predictions not only about individual cases, but also about the typology of analogical change. The chapter is organized as follows: first, it provides a brief overview of tendency-based vs. structurally-based approaches to analogical change, summarizing the major generalizations that have been uncovered, and situating the current work in an area that has been approached from radically different perspectives. It then presents an overview of the synchronic model developed by Albright (2002a). It shows how the synchronic confidence-based approach can explain the direction of analogy in individual cases, and then moves on to explore its typological implications. It then considers some apparent counter-examples to the confidence-based approach, showing that in at least some cases where analogy has seemingly favoured an uninformative member of the paradigm, that form is not nearly as uninformative as it might appear. An exploration of the model's parameter space reveals that even without an explicit bias toward more frequent forms, such forms are nonetheless selected as bases under most conditions.