2011
DOI: 10.1590/s1984-63982011000200003

Corpus linguistics and naive discriminative learning

Abstract: Three classifiers from machine learning (the generalized linear mixed model, memory-based learning, and support vector machines) are compared with a naive discriminative learning classifier, derived from basic principles of error-driven learning characterizing animal and human learning. Tested on the dative alternation in English, using the Switchboard data from (BRESNAN; CUENI; NIKITINA; BAAYEN, 2007), naive discriminative learning emerges with state-of-the-art predictive accuracy. Naive discriminative learning…
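As a concrete illustration of what the naive discriminative learning classifier does, the sketch below estimates cue-to-outcome association weights from the equilibrium equations for the Rescorla-Wagner learning rule (the Danks equations) and lets the summed weights of the active cues decide between two constructions. The toy cues, outcomes, and events are invented for illustration only; they are not the Switchboard dative data analysed in the paper.

```python
# Minimal sketch of naive discriminative learning for a binary choice,
# assuming the Danks equilibrium of the Rescorla-Wagner rule:
# solve C W = O, where C holds cue-cue conditional probabilities and
# O holds cue-outcome conditional probabilities.
import numpy as np

# Toy events: a set of active cues and the observed construction.
events = [
    ({"recipient=pronoun", "theme=long"},  "double_object"),
    ({"recipient=pronoun", "theme=short"}, "double_object"),
    ({"recipient=noun",    "theme=long"},  "prepositional"),
    ({"recipient=noun",    "theme=short"}, "prepositional"),
    ({"recipient=pronoun", "theme=long"},  "double_object"),
    ({"recipient=noun",    "theme=long"},  "prepositional"),
]

cues = sorted({c for cue_set, _ in events for c in cue_set})
outcomes = sorted({o for _, o in events})
C = np.zeros((len(cues), len(cues)))      # cue-cue co-occurrence counts
O = np.zeros((len(cues), len(outcomes)))  # cue-outcome co-occurrence counts

for cue_set, outcome in events:
    for i, ci in enumerate(cues):
        if ci in cue_set:
            O[i, outcomes.index(outcome)] += 1
            for j, cj in enumerate(cues):
                if cj in cue_set:
                    C[i, j] += 1

# Turn counts into conditional probabilities (row i is conditioned on cue i)
# and solve for the equilibrium weights; lstsq copes with collinear cues.
totals = C.diagonal()[:, None]
W, *_ = np.linalg.lstsq(C / totals, O / totals, rcond=None)

def predict(cue_set):
    """Sum the weights of the active cues and pick the best-supported outcome."""
    activation = sum(W[cues.index(c)] for c in cue_set)
    return outcomes[int(np.argmax(activation))]

print(predict({"recipient=pronoun", "theme=short"}))  # -> double_object
```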

Cited by 52 publications (51 citation statements)
References: 27 publications
“…The authors suggest that "it does not really matter what exactly [language] learners track, as long as they track enough features" (Divjak et al 2016: 29). A similar point is made by Baayen (2011) who shows for a set of models that the overall accuracy is hardly affected by permuting the values of a single predictor. It seems to be the case that individual higher-level abstract features are not that important, which is likely due to the correlational structure of the predictor space (Baayen 2011: 306): any given feature or predictor is predictable from other features or predictors.…”
Section: Discussion: Corpus-based Predictions vs. Preferential Choices (supporting)
Confidence: 63%
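The permutation check mentioned here is easy to illustrate: fit a classifier, shuffle one predictor column in held-out data so that its link to the outcome is broken, and see how much accuracy drops. The sketch below uses synthetic, deliberately redundant predictors and scikit-learn's logistic regression as stand-ins; it is not Baayen's model or data, but it shows why a correlated predictor space keeps any single permutation from costing much accuracy.

```python
# Sketch of predictor permutation on synthetic data with a highly
# correlated (redundant) predictor space.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
latent = rng.normal(size=n)
# Five predictors that are all noisy copies of the same latent variable.
X = np.column_stack([latent + rng.normal(scale=0.3, size=n) for _ in range(5)])
y = (latent + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"baseline accuracy: {clf.score(X_test, y_test):.3f}")

for j in range(X.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # sever this predictor's link to y
    print(f"accuracy with predictor {j} permuted: {clf.score(X_perm, y_test):.3f}")
# Because each column's information is still carried by the others,
# no single permutation changes the accuracy much.
```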
“…In other words, the naive discriminative reader is a statistical classifier grounded in basic principles of human learning. Baayen (2011) shows, for a binary classification task, that the naive discriminative reader performs with a classification accuracy comparable to state-of-the-art classifiers such as generalized linear mixed models and support vector machines.…”
Section: Discussion (mentioning)
Confidence: 95%
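The kind of head-to-head comparison described in this statement can be sketched with scikit-learn stand-ins: a plain logistic regression in place of a generalized linear mixed model and a linear support vector machine, both scored by cross-validated accuracy on the same synthetic binary task. The sketch only shows the comparison set-up, not the models, features, or data of Baayen (2011).

```python
# Sketch of comparing two off-the-shelf classifiers on the same binary task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 4))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

for name, clf in [("logistic regression", LogisticRegression()),
                  ("linear SVM", SVC(kernel="linear"))]:
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.3f}")
```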
“…It would be highly desirable to provide a systematic comparison of different ways to model the data; these need not necessarily be regression-based, as in this paper, but may also include less widely used techniques such as memory-based learning (Daelemans & Bosch 2005) or naïve discriminative learning (Baayen 2011). As for designing models of the genitive alternation specifically, with possessor animacy being such a crucial (and in some cases near-categorical) constraint it may be worth considering taking possessor animacy out of the regression models at a later step in the analysis in order to zoom in on the attributes of the variation grammars that are actually divergent and highly variable (Tagliamonte 2014 is a recent study that uses this technique; the idea goes back to Labov 1969: 729, who argues that inclusion of (near-)categorical contexts will obscure the real patterns of variation).…”
Section: Discussion and Directions for Future Research (mentioning)
Confidence: 99%
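The modelling step proposed in this statement, setting a near-categorical constraint aside and modelling only the genuinely variable contexts, can be sketched as follows. The data frame, column names, and effect sizes are hypothetical and exist only to keep the example self-contained; they are not taken from any genitive corpus.

```python
# Sketch: identify a near-categorical constraint, set those contexts aside,
# and model the remaining (variable) contexts. All data are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
animacy = rng.choice(["animate", "inanimate"], size=n)
length = rng.integers(1, 8, size=n)  # hypothetical possessor length in words
p_s = np.where(animacy == "animate", 0.95,                # near-categorical context
               1 / (1 + np.exp(0.6 * (length - 3))))      # variable context
df = pd.DataFrame({"animacy": animacy, "length": length,
                   "s_genitive": rng.random(n) < p_s})

# The animate contexts are near-categorical ...
print(df.groupby("animacy")["s_genitive"].mean())

# ... so zoom in on the variable (inanimate) contexts and model those.
variable = df[df["animacy"] == "inanimate"]
clf = LogisticRegression().fit(variable[["length"]], variable["s_genitive"])
print("length coefficient within the variable contexts:", clf.coef_[0][0])
```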