2019
DOI: 10.1080/15475441.2019.1695620
Patterns Bit by Bit. An Entropy Model for Rule Induction

Abstract: From limited evidence, children track the regularities of their language impressively fast and they infer generalized rules that apply to novel instances. This study investigated what drives the inductive leap from memorizing specific items and statistical regularities to extracting abstract rules. We propose an innovative entropy model that offers one consistent information-theoretic account for both learning the regularities in the input and generalizing to new input. The model predicts that rule induction i…
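The paper's entropy model itself is not reproduced in this report. Purely as a rough illustration of the information-theoretic quantity involved, the sketch below computes Shannon entropy over a syllable sequence; the syllable inventories and counts are invented for illustration and are not taken from the study's materials.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy (in bits) of the symbol distribution in `sequence`."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Low-variability input: one syllable dominates, so entropy is low.
low_variability = ["ba"] * 9 + ["mi"]

# High-variability input: ten equiprobable syllables, entropy = log2(10).
high_variability = ["ba", "mi", "ke", "du", "po", "ta", "li", "no", "ga", "su"]

print(round(shannon_entropy(low_variability), 3))   # 0.469
print(round(shannon_entropy(high_variability), 3))  # 3.322
```

On this kind of measure, higher input variability corresponds to higher entropy, which is the sense in which variability can be treated as a quantifiable amount of statistical variation.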

Cited by 11 publications (216 citation statements); References 49 publications.
“…This study looks into the factors that drive the inductive step from encoding specific items and statistical regularities to inferring abstract rules. While supporting the single-mechanism hypothesis and a gradient of generalization proposed previously (Newport, 2012, 2014), in Radulescu et al. (2019) we took a step further in understanding the two qualitatively different representations discussed in previous research, which we dubbed, in accordance with previous suggestions (Gómez and Gerken, 2000), item-bound generalizations and category-based generalizations. While item-bound generalizations describe relations between specific physical items (e.g., a relation based on physical identity, like "ba always follows ba" or "ke always predicts mi"), category-based generalizations are operations beyond specific items that describe relationships between categories (variables), e.g., "Y always follows X," where Y and X are variables taking different values.…”
Section: Introduction (supporting)
confidence: 64%
“…Both young and adult learners possess a domain-general distributional learning mechanism for finding statistical patterns in the input (Saffran et al., 1996; Thiessen and Saffran, 2007), and a learning mechanism that allows for category (rule) learning (Marcus et al., 1999; Wonnacott and Newport, 2005; Smith and Wonnacott, 2010; Wonnacott, 2011). While cognitive psychology theories previously claimed that there are two qualitatively different mechanisms, with rule learning relying on encoding linguistic items as abstract categories (Marcus et al., 1999), as opposed to learning statistical regularities between specific items (Saffran et al., 1996), recent views converge on the hypothesis that one mechanism, statistical learning, underlies both item-bound learning and rule induction (Newport, 2012, 2014; Frost and Monaghan, 2016; Radulescu et al., 2019). Rule induction (generalization or regularization) has often been explained as resulting from processing input variability (a quantifiable amount of statistical variation), both in young and adult language learners (Gerken, 2006; Hudson Kam and Chang, 2009; Hudson Kam and Newport, 2009; Reeder et al., 2013).…”
Section: Introduction (mentioning)
confidence: 99%
“…Yet, experimental evidence from other artificial grammar learning tasks (Gomez, 2000; Endress & Bonatti, 2007; Radulescu, Wijnen, & Avrutin; Valian & Coulson, 1988) suggests that input variability, complexity and frequency ratios play a role in selectively triggering different processing mechanisms. It is, therefore, important to understand under what input conditions the extraction of word order regularities takes place.…”
Section: The Current Study (mentioning)
confidence: 99%
“…Another important factor that has been shown to impact learning in artificial grammar studies is the overall amount of exposure to the input. Shorter exposure typically leads to the extraction of structural regularities and their generalization (Endress & Bonatti, 2006; Radulescu, Wijnen, & Avrutin, 2020), while longer exposure favors item-based learning and memorization. A possible explanation for these results is that short exposure does not allow sufficient time/opportunity for the memorization of individual items.…”
Section: The Current Study (mentioning)
confidence: 99%