2018 Conference on Cognitive Computational Neuroscience
DOI: 10.32470/ccn.2018.1197-0

Are you sure about that? On the origins of confidence in concept learning

Abstract: Humans possess a rich repertoire of abstract concepts about which they can often judge their confidence. These judgements help guide behaviour, but the mechanisms underlying them are still poorly understood. Here, we examine the evolution of people's sense of confidence as they engage in probabilistic concept learning. Participants learned a continuous function of four continuous features, reporting their predictions and confidence about these predictions. Participants indeed had insight into their uncertainti…
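The abstract is truncated before the modelling details, but a common computational account of this kind of task treats function learning as Gaussian process (GP) regression, where the model's predictive variance offers a natural proxy for reported confidence. The sketch below is purely illustrative (the kernel, length scale, and noise level are assumptions, not the paper's model): a minimal numpy-only GP that returns a predictive mean and variance for a function of four continuous features.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise=0.1, length_scale=1.0):
    """GP regression: predictive mean and variance at X_test."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise ** 2 * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train, length_scale)   # test vs. train
    K_ss = rbf_kernel(X_test, X_test, length_scale)   # test vs. test
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(K_ss) - (v ** 2).sum(axis=0)        # predictive variance
    return mean, var

# Hypothetical data: 20 observations of a function of 4 features.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 4))
y = X.sum(axis=1)

# Variance (inverse confidence) is low near the data, high far from it.
_, var_near = gp_predict(X, y, X[0:1])
_, var_far = gp_predict(X, y, np.full((1, 4), 5.0))
```

On this reading, a confidence judgement could be modelled as a decreasing function of `var`: queries close to previously observed feature combinations yield low predictive variance (high confidence), while queries far from the training data revert toward the prior variance.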

Cited by 3 publications (3 citation statements). References: 13 publications.
“…In future work, we aim to extend our model to assess the neural correlates of generalization and uncertainty-guided exploration, test how people track their uncertainty in other domains that require functional knowledge (cf. Stojic, Eldar, Bassam, Dayan, & Dolan, 2018), as well as extend our model further to real world decision making such as consumer behavior.…”
Section: Discussion
confidence: 98%
“…What if the model has to extrapolate outside the convex hull of its training data? Studies in psychology and cognitive science show that humans and machines are usually less confident in their decisions and predictions when they extrapolate instead of interpolating, and a correlation exists between human confidence and the correctness of predictions [43].…”
Section: Studies On Promoting Accountability and Transparency
confidence: 99%
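The citing work's question of whether a query falls outside the convex hull of the training data (i.e., requires extrapolation rather than interpolation) can be checked directly. A minimal sketch, assuming SciPy is available: `scipy.spatial.ConvexHull` exposes the facet half-plane equations, and a point is inside the hull exactly when it satisfies all of them.

```python
import numpy as np
from scipy.spatial import ConvexHull

def in_hull(points, query, tol=1e-12):
    """True if `query` lies inside (or on) the convex hull of `points`."""
    hull = ConvexHull(points)
    # Each row of hull.equations is [normal, offset]; interior points
    # satisfy A @ x + b <= 0 for every facet.
    A, b = hull.equations[:, :-1], hull.equations[:, -1]
    return bool(np.all(A @ query + b <= tol))

# Hypothetical 2-D training set: the corners of the unit square.
train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

inside = in_hull(train, np.array([0.5, 0.5]))   # interpolation query
outside = in_hull(train, np.array([2.0, 2.0]))  # extrapolation query
```

A model (or experimenter) could use such a test to flag extrapolation queries, where both the cited psychology results and the machine-learning evidence suggest confidence should be lower.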
“…Let us be clear: we do not imply that all samples outside the convex hull of the training set should be considered out-of-distribution. Extrapolation and learning frequently supplement each other, but evidence suggests that both humans and AI models are more susceptible to mistakes when they extrapolate as opposed to when they interpolate [9,39,43,50]. This is sometimes reported as extrapolation tasks being more difficult to perform correctly [15,21,29].…”
Section: Studies On Extrapolation and Geometry Of Datasets
confidence: 99%