Optimality Theory in Phonology 2004
DOI: 10.1002/9780470756171.ch5
Learnability in Optimality Theory

Abstract: In this article we show how Optimality Theory yields a highly general Constraint Demotion principle for grammar learning. The resulting learning procedure specifically exploits the grammatical structure of Optimality Theory, independent of the content of the substantive constraints defining any given grammatical module. We decompose the learning problem and present formal results for a central subproblem: deducing the constraint ranking particular to a target language, given structural descriptions of positive exa…
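The Constraint Demotion principle named in the abstract can be sketched as Recursive Constraint Demotion over winner-loser pairs, which builds a stratified ranking from comparative data. This is a minimal illustration, not the authors' implementation; the constraint names and the single data pair are invented.

```python
def rcd(constraints, pairs):
    """Recursive Constraint Demotion (sketch).

    Each pair maps a constraint name to 'W' (prefers the winner),
    'L' (prefers the loser), or 'e' (no preference).  Returns a
    stratified hierarchy: a list of strata, highest-ranked first.
    """
    strata, remaining_c, remaining_p = [], set(constraints), list(pairs)
    while remaining_c:
        # Rank next every constraint that never prefers a loser
        # in the data still unaccounted for.
        stratum = {c for c in remaining_c
                   if all(p[c] != 'L' for p in remaining_p)}
        if not stratum:
            raise ValueError("no consistent ranking exists for these data")
        strata.append(sorted(stratum))
        # A pair is accounted for once some ranked constraint prefers its winner.
        remaining_p = [p for p in remaining_p
                       if not any(p[c] == 'W' for c in stratum)]
        remaining_c -= stratum
    return strata

# Invented toy datum: the winner beats the loser; NoCoda prefers the
# winner, Faith prefers the loser, Onset is indifferent.
pairs = [{'Onset': 'e', 'NoCoda': 'W', 'Faith': 'L'}]
hierarchy = rcd(['Onset', 'NoCoda', 'Faith'], pairs)
# -> [['NoCoda', 'Onset'], ['Faith']]: Faith is demoted below NoCoda.
```

Each loop iteration places into the next stratum every constraint consistent with all remaining data, so constraints are demoted only as far as the evidence requires.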

Cited by 123 publications (231 citation statements)
References 13 publications
“…This is in contrast to many models of word segmentation that suggest that tracking of sequential probabilities arises from a completely different set of processes than learning of phonological regularities such as predominant lexical stress, phonology, or phonotactics. For example, in the StaGE model (Adriaans & Kager, 2010), learners identify words via calculation of transitional probabilities and then deduce phonological and phonotactic constraints via a hierarchical ranking of constraints similar to that of Optimality Theory (Tesar & Smolensky, 2000). Similarly, Mersad and Nazzi (2011) proposed that learners use a hierarchical ranking of phonological cues to parse the speech stream, relying on sequential statistics only in cases where these cues are uninformative.…”
Section: Comparison To Other Accounts
confidence: 97%
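The first stage described in this snippet, identifying words via transitional probabilities before any constraint ranking, can be sketched as follows. The syllable stream, the three-word mini-language, and the threshold are all invented for illustration; this is not the StaGE model itself.

```python
from collections import defaultdict

def transitional_probs(stream):
    """Estimate the forward transitional probability P(b | a)
    for adjacent syllables from bigram counts."""
    bigrams, firsts = defaultdict(int), defaultdict(int)
    for a, b in zip(stream, stream[1:]):
        bigrams[(a, b)] += 1
        firsts[a] += 1
    return {(a, b): n / firsts[a] for (a, b), n in bigrams.items()}

def segment(stream, tp, threshold=0.75):
    """Posit a word boundary wherever the forward TP dips below threshold."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tp.get((a, b), 0.0) < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words

# Invented mini-language with three words: babi, gola, tudu.
# Within-word TPs come out at 1.0, across-word TPs at 0.5,
# so boundaries fall exactly between words.
stream = "ba bi go la tu du ba bi tu du go la ba bi".split()
words = segment(stream, transitional_probs(stream))
```

A StaGE-style learner would then induce phonotactic constraints from the words found this way, rather than treating segmentation and phonotactics as unrelated processes.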
“…The constraint ranking mechanism in our model is also fundamentally different from mechanisms employed in earlier OT learners (in particular, the Constraint Demotion Algorithm, Tesar & Smolensky, 2000, and the Gradual Learning Algorithm, Boersma & Hayes, 2001). Rather than providing the learner with feedback about optimal forms, i.e.…”
Section: The OT Segmentation Model
confidence: 97%
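The Gradual Learning Algorithm contrasted with Constraint Demotion in this snippet replaces strict reranking with small numeric adjustments. A minimal sketch of one error-driven update step, with invented constraint names, ranking values, and plasticity; a full implementation would also add evaluation noise at each comparison (Boersma & Hayes, 2001):

```python
def gla_update(values, datum_marks, learner_marks, plasticity=0.1):
    """One Gradual Learning Algorithm step after a mismatch.

    values: constraint name -> current ranking value (higher = stronger).
    datum_marks / learner_marks: violation counts each constraint assigns
    to the adult datum and to the learner's (incorrect) output.
    Constraints favoring the datum are promoted; constraints favoring
    the learner's output are demoted, each by a small plasticity.
    """
    updated = dict(values)
    for c in values:
        if learner_marks[c] > datum_marks[c]:    # prefers the adult datum
            updated[c] += plasticity
        elif datum_marks[c] > learner_marks[c]:  # prefers the learner's error
            updated[c] -= plasticity
    return updated

# Invented example: the adult datum keeps a coda (one NoCoda violation);
# the learner's output deleted it (one Max violation).  The update
# promotes Max and demotes NoCoda.
values = gla_update({'NoCoda': 100.0, 'Max': 100.0},
                    datum_marks={'NoCoda': 1, 'Max': 0},
                    learner_marks={'NoCoda': 0, 'Max': 1})
```

Because each step is tiny, frequent error patterns gradually reshape the hierarchy, which is what makes the GLA gradient where Constraint Demotion is categorical.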
“…If grammars are rankings of universal constraints, as is commonly assumed in OT, then acquiring a grammar must involve learning the adult constraint ranking (Boersma and Hayes, 2001; Tesar and Smolensky, 1998). In children who still entertain a non-adult constraint ranking, with one or more markedness constraints being ranked too high, the inherent asymmetry of the grammar may give rise to errors in production but at the same time result in adult-like performance in comprehension (Smolensky, 1996).…”
Section: An Asymmetrical Grammar
confidence: 98%
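The asymmetry in this snippet can be made concrete with a toy evaluator; the two-constraint hierarchy, violation functions, and candidate sets are invented for illustration. With markedness (NoCoda) ranked over faithfulness (Faith), production of /pat/ wrongly yields [pa], yet comprehension of [pat] is adult-like, because the fixed surface form incurs the same markedness marks under every interpretation, leaving faithfulness to decide (in the spirit of Smolensky, 1996).

```python
def eval_best(candidates, ranking, marks):
    """Pick the candidate whose violation profile is lexicographically
    best down the constraint hierarchy."""
    return min(candidates, key=lambda c: tuple(marks(c)[k] for k in ranking))

CHILD_RANKING = ["NoCoda", "Faith"]  # markedness still outranks faithfulness

def marks_for(underlying, surface):
    """Toy violation counts: NoCoda penalizes a surface-final consonant;
    Faith penalizes a length mismatch between UR and surface form."""
    return {"NoCoda": int(surface[-1] not in "aeiou"),
            "Faith": abs(len(underlying) - len(surface))}

# Production: the UR /pat/ is fixed and surface candidates compete;
# high-ranked NoCoda forces coda deletion, so the child errs.
produced = eval_best(["pat", "pa"], CHILD_RANKING,
                     lambda surf: marks_for("pat", surf))   # -> "pa"

# Comprehension: the surface [pat] is fixed and UR candidates compete;
# NoCoda penalizes every candidate equally, so Faith alone decides.
understood = eval_best(["pat", "pa"], CHILD_RANKING,
                       lambda ur: marks_for(ur, "pat"))     # -> "pat"
```

The same ranking thus produces non-adult outputs while interpreting adult inputs correctly, which is the asymmetry the quoted passage appeals to.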