2021
DOI: 10.1093/nc/niab039
A robust confidence–accuracy dissociation via criterion attraction

Abstract: Many studies have shown that confidence and accuracy can be dissociated in a variety of tasks. However, most of these dissociations involve small effect sizes, occur only in a subset of participants, and include a reaction time (RT) confound. Here, I develop a new method for inducing confidence–accuracy dissociations that overcomes these limitations. The method uses an external noise manipulation and relies on the phenomenon of criterion attraction where criteria for different tasks become attracted to each ot…

Cited by 8 publications (14 citation statements)
References 73 publications
“…It is often assumed that the sensory measurement is simply compared to static confidence criteria [ 6 , 22 ], which is no issue due to many SDT tasks being of a fixed difficulty level (note that this is not true of tasks that staircase difficulty, e.g., [ 25 , 45 ]). In mixed-difficulty designs, however, it has been proposed that confidence criteria are updated according to the level of sensory uncertainty [ 11 , 27 , 46 ]. The reason an observer would do this is to avoid a confidence paradox of more readily assigning high confidence to stimuli with large sensory noise that are more likely to have the measurement fall far from the perceptual decision criterion.…”
Section: Discussion
confidence: 99%
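The "confidence paradox" described above can be made concrete with a small simulation. This is an illustrative sketch, not code from the cited papers: it assumes a simple SDT setup in which the measurement is the stimulus plus Gaussian noise, the choice is the sign of the measurement, and a single static confidence criterion is applied regardless of noise level. The function name, signal strength, and criterion value are all hypothetical choices for illustration.

```python
import random

random.seed(1)

def simulate(noise_sd, n=20000, signal=1.0, conf_criterion=1.5):
    """Simulate SDT trials with a static confidence criterion.

    Stimulus is +signal or -signal; measurement = stimulus + Gaussian noise.
    Choice: sign of the measurement (decision criterion at 0).
    High confidence: |measurement| exceeds a fixed (static) criterion,
    i.e., the criterion is NOT updated for the level of sensory noise.
    """
    correct = high_conf = 0
    for _ in range(n):
        stim = signal if random.random() < 0.5 else -signal
        m = stim + random.gauss(0.0, noise_sd)
        if (m > 0) == (stim > 0):
            correct += 1
        if abs(m) > conf_criterion:
            high_conf += 1
    return correct / n, high_conf / n

for sd in (0.5, 2.0):
    acc, hc = simulate(sd)
    print(f"noise sd={sd}: accuracy={acc:.2f}, P(high confidence)={hc:.2f}")
```

Running this shows the paradox qualitatively: the high-noise condition yields lower accuracy yet a larger proportion of high-confidence responses, because large sensory noise pushes measurements far from the decision criterion more often.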
“…The reason an observer would do this is to avoid a confidence paradox of more readily assigning high confidence to stimuli with large sensory noise that are more likely to have the measurement fall far from the perceptual decision criterion. Yet, human observers do not shift their criteria appropriately to avoid this paradox [ 27 , 46 ]. However, without an incentive structure for confidence, confidence ratings are essentially meaningless and there is little motivating accurate shifts, which could explain these results.…”
Section: Discussion
confidence: 99%
“…In recent years, an increasing number of studies have reported dissociations between confidence and accuracy (Vaghi et al 2017; Rahnev, 2021). For example, it has been shown that whereas choices are equally informed by choice-relevant and choice-irrelevant information, decision confidence has been found to mostly reflect variation in choice-relevant information (“positive evidence bias”; Maniscalco et al, 2016; Peters et al, 2017; Koizumi et al, 2015; Zylberberg et al, 2012).…”
Section: Discussion
confidence: 99%
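The "positive evidence bias" mentioned above can be sketched in a few lines. This is a hypothetical toy model, not the cited authors' implementation: it assumes the choice is driven by the evidence *difference* between two options, while confidence tracks only the evidence for the chosen option. Boosting both evidence streams equally then leaves accuracy unchanged but inflates confidence. All means and the boost size are arbitrary illustration values.

```python
import random

random.seed(0)

def run_condition(boost, n=20000):
    """Simulate trials where choice uses the evidence difference but
    confidence tracks only the (positive) evidence for the chosen option."""
    n_correct = 0
    conf_sum = 0.0
    for _ in range(n):
        e_a = 1.0 + boost + random.gauss(0, 1)  # evidence for correct option A
        e_b = 0.5 + boost + random.gauss(0, 1)  # evidence for option B
        if e_a > e_b:                           # choice: evidence difference
            n_correct += 1
        conf_sum += max(e_a, e_b)               # confidence: chosen-option evidence
    return n_correct / n, conf_sum / n

acc0, conf0 = run_condition(boost=0.0)
acc1, conf1 = run_condition(boost=1.0)
print(f"baseline: accuracy={acc0:.2f}, mean confidence={conf0:.2f}")
print(f"boosted : accuracy={acc1:.2f}, mean confidence={conf1:.2f}")
```

Because the boost cancels in the difference e_a − e_b, accuracy is the same in both conditions, while mean confidence rises with the boost — a dissociation of the kind the quoted passage describes.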
“…Much less attention has been devoted to the computational mechanisms underlying confidence biases. For example, in signal detection theory, an influential framework often used to quantify the sensitivity of decision confidence, biases in confidence can easily be modelled by changing the criteria that dissociate high from low confidence (Rahnev, 2021). However, this is merely descriptive and does not provide us with fundamental insight into why different people have different confidence criteria.…”
Section: Discussion
confidence: 99%
“…Lastly, we have examined criterion-dependency of metacognitive accuracy by estimating meta-dʹ parameters at different confidence criteria. For this purpose, we have converted multilevel confidence rating data into binary formats (i.e., high vs. low confidence) by making dichotomous cutoffs at different confidence criteria ( Rahnev, 2021 ; Shekhar & Rahnev, 2021 ). To illustrate this, let us consider a response frequency dataset of (2, 3, 5, 7, 11, 13), which is comprised of two type 1 response classes and three levels of confidence rating (i.e., ten S1 responses [sum of the first three] and thirty-one S2 responses [sum of the last three] are bounded by lower and higher confidence criteria, constituting a sequence of response frequencies from highest confidence S1 to highest confidence S2).…”
Section: Analyses On the Confidence Database
confidence: 99%
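The binarization procedure described in the passage above — collapsing a multi-level confidence frequency vector into high vs. low confidence at a chosen cutoff — can be sketched directly. The function name and argument layout are hypothetical; the worked example reuses the response frequency dataset (2, 3, 5, 7, 11, 13) from the quoted text, which runs from highest-confidence S1 responses to highest-confidence S2 responses.

```python
def binarize_confidence(freqs, n_ratings, cutoff):
    """Collapse a multi-level confidence frequency vector to high/low.

    `freqs` runs from highest-confidence S1 response to highest-confidence
    S2 response (length 2 * n_ratings). A rating counts as "high" when it
    is >= `cutoff` (cutoff in 2..n_ratings). Returns
    [S1 high, S1 low, S2 low, S2 high].
    """
    s1 = freqs[:n_ratings]           # S1 responses, highest -> lowest confidence
    s2 = freqs[n_ratings:]           # S2 responses, lowest -> highest confidence
    n_high = n_ratings - cutoff + 1  # number of rating levels counted as "high"
    return [sum(s1[:n_high]), sum(s1[n_high:]),
            sum(s2[:n_ratings - n_high]), sum(s2[n_ratings - n_high:])]

freqs = [2, 3, 5, 7, 11, 13]         # 10 S1 responses, 31 S2 responses
print(binarize_confidence(freqs, 3, 2))  # [5, 5, 7, 24]
print(binarize_confidence(freqs, 3, 3))  # [2, 8, 18, 13]
```

Each cutoff yields a distinct binary dataset, so meta-d′ can then be estimated separately at each confidence criterion, as the quoted analysis describes.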