2017
DOI: 10.1111/cogs.12496

Naïve and Robust: Class‐Conditional Independence in Human Classification Learning

Abstract: Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference problem, allows for informed inferences about novel feature combinations, and performs robustly across different statistical environments. We designed a new Bayesian clas…

Cited by 14 publications (20 citation statements)
References 126 publications (151 reference statements)
“…First, we describe the details of an ideal statistical learner which we call OPTIMAL. In our experiments, the stimuli were generated by following the principle of class-conditional independence (e.g., Anderson, 1990; Jarecki, Meder, & Nelson, 2013). As long as one knows the true category label y, the probability of any particular feature value x_i is completely independent of any other feature.…”
Section: An Ideal Observer Model
confidence: 99%
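A minimal sketch of the stimulus-generation principle described in this citation statement, not the authors' actual code: given the true category label, each feature is drawn independently from its own per-class probability. The class labels "A"/"B", the feature probabilities, and the function name are illustrative assumptions, not values from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-class feature probabilities p(x_i = 1 | y); illustrative only.
p_feature_given_class = {
    "A": np.array([0.8, 0.6, 0.3]),
    "B": np.array([0.2, 0.4, 0.7]),
}

def sample_stimulus(label):
    """Sample one binary feature vector; features are independent given the label."""
    theta = p_feature_given_class[label]
    return (rng.random(theta.shape) < theta).astype(int)

# Given the label, each feature is sampled without reference to the others.
stimuli = [(y, sample_stimulus(y)) for y in ["A", "B", "A", "B"]]
print(stimuli)
```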
“…From a mathematical standpoint, the genes are class-conditionally independent (Domingos & Pazzani, 1997; Jarecki, Meder, & Nelson, 2018) and identically distributed across specimens.…”
Section: Stepwise Vs Multistep Strategies Given Binary Hypotheses
confidence: 99%
“…, θ_D) where θ_i = p(x_i | y) describes the probability that feature i will have value x_i. Although class-conditional independence is not always satisfied in real life where feature correlations are possible (Malt & Smith, 1984), it is a reasonable simplification in many situations (Jarecki et al., 2013), and one that is appropriate to our experimental design. Moreover, because the category can be represented in terms of a single idealised vector θ that describes the central tendency of the category, it is broadly similar to standard prototype models (Posner & Keele, 1968).…”
Section: An Ideal Observer Model
confidence: 99%
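A rough sketch of the ideal-observer idea this statement describes, under the same illustrative assumptions as the earlier snippet: because features are class-conditionally independent, p(x | y) factorises into the product of the per-feature probabilities θ_i, so each category is summarised by a single prototype-like vector θ. The function name, class labels, and numerical values are hypothetical, not taken from the paper.

```python
import numpy as np

def posterior_over_classes(x, thetas, priors):
    """Compute p(y | x) for a binary feature vector x under class-conditional independence.

    thetas: dict mapping class label -> array of theta_i = p(x_i = 1 | y)
    priors: dict mapping class label -> p(y)
    """
    log_scores = {}
    for y, theta in thetas.items():
        # Likelihood factorises over features given the class label.
        loglik = np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))
        log_scores[y] = np.log(priors[y]) + loglik
    # Normalise in log space for numerical stability.
    m = max(log_scores.values())
    unnorm = {y: np.exp(s - m) for y, s in log_scores.items()}
    z = sum(unnorm.values())
    return {y: v / z for y, v in unnorm.items()}

# Illustrative parameters only; equal priors over two hypothetical classes.
thetas = {"A": np.array([0.8, 0.6, 0.3]), "B": np.array([0.2, 0.4, 0.7])}
priors = {"A": 0.5, "B": 0.5}
print(posterior_over_classes(np.array([1, 1, 0]), thetas, priors))
```

Because the per-feature probabilities are estimated separately for each class, such a learner can also assign a sensible posterior to feature combinations it has never observed, which is the robustness property the abstract highlights.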