1998
DOI: 10.1037/0033-295x.105.2.280
Signal detection by human observers: A cutoff reinforcement learning model of categorization decisions under uncertainty.

Abstract: Previous experimental examinations of binary categorization decisions have documented robust behavioral regularities that cannot be predicted by signal detection theory (D.M. Green & J.A. Swets, 1966/1988). The present article reviews the known regularities and demonstrates that they can be accounted for by a minimal modification of signal detection theory: the replacement of the "ideal observer" cutoff placement rule with a cutoff reinforcement learning rule. This modification is derived from a cognitive game…
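The abstract names the paper's core move: swapping the ideal-observer cutoff placement rule for a cutoff reinforcement learning rule. As a rough illustration only, here is a minimal Python sketch of one such rule in the Roth–Erev style (cutoffs chosen with probability proportional to payoff-accumulated propensities over a discretized grid); the grid, the 1/0 payoffs, and the update scheme are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
D_PRIME = 1.0                           # assumed discriminability of the two categories
CUTOFFS = np.linspace(-1.0, 2.0, 13)    # hypothetical discretized set of candidate cutoffs
propensity = np.ones(len(CUTOFFS))      # uniform initial propensities (assumption)

for _ in range(5_000):
    # pick a cutoff with probability proportional to its accumulated propensity
    k = rng.choice(len(CUTOFFS), p=propensity / propensity.sum())
    signal = rng.random() < 0.5                       # equal-base-rate categories
    x = rng.normal(D_PRIME if signal else 0.0, 1.0)   # one sensory observation
    correct = (x > CUTOFFS[k]) == signal
    propensity[k] += 1.0 if correct else 0.0          # reinforce with obtained payoff (1/0 assumed)

print("most-reinforced cutoff:", CUTOFFS[propensity.argmax()])
```

Over trials the probability mass concentrates on cutoffs that have paid off, rather than jumping directly to the ideal-observer placement.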

Cited by 101 publications (123 citation statements: 4 supporting, 119 mentioning, 0 contrasting)
References 70 publications
Citing publications span 2000–2024

“…We use this model not as a replacement for SDT (and do not create new measures of sensitivity and bias based on it), but as an extension of classic SDT that can illustrate how different sources of noise may affect measurable statistics. This model incorporates confidence ratings and encapsulates aspects of decision uncertainty present in numerous previous models (see, e.g., Busemeyer & Myung, 1992; Erev, 1998; Kac, 1969; Schoeffler, 1965; Treisman & Williams, 1984), but does so at a level that does not incorporate learning and other trial-by-trial dynamics present in many of these previous models. This simplification allows us to evaluate the role of decision noise in general, independent of the specific assumptions of these theories (i.e., learning scheme, response mapping, criterion sampling/drift, etc.).…”
Section: The Decision Noise Model (DNM): A Signal Detection Model With…
Citation type: mentioning (confidence: 99%)
“…Some theorists have suggested that the decision criterion drifts along a sensory continuum from trial to trial, perhaps in response to error feedback (see, e.g., Kac, 1969). Others have suggested that decision criteria are sampled from a distribution on each trial (e.g., Erev, 1998), and still others have suggested that the observer learns a probabilistic function mapping sensory evidence onto the response (e.g., Schoeffler, 1965). Exactly how noise enters into the decision process is not important for our argument; thus, we as […] present substantial challenges for SDT and are not just complications caused by degenerate criterion placement, as was suggested by Treisman.…”
Section: Mapping From Percept To Response
Citation type: mentioning (confidence: 99%)
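The three mechanisms this quote enumerates are easy to contrast in simulation. Below is a minimal, illustrative Python sketch, not any author's published implementation; the criterion-noise SD (0.3), step size (0.05), and logistic slope are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D_PRIME = 1.0            # separation of the noise and signal evidence distributions
C_IDEAL = D_PRIME / 2.0  # ideal-observer cutoff for equal priors and payoffs

def simulate(mechanism, n_trials=10_000):
    """Simulate a yes/no task under one of three decision-noise mechanisms."""
    cutoff = C_IDEAL
    hits = misses = fas = crs = 0
    for _ in range(n_trials):
        signal = rng.random() < 0.5
        x = rng.normal(D_PRIME if signal else 0.0, 1.0)   # sensory evidence

        if mechanism == "sampled":      # criterion re-drawn each trial (cf. Erev, 1998)
            resp = x > rng.normal(C_IDEAL, 0.3)
        elif mechanism == "drift":      # criterion steps after error feedback (cf. Kac, 1969)
            resp = x > cutoff
            if resp and not signal:     # false alarm -> raise the cutoff
                cutoff += 0.05
            elif signal and not resp:   # miss -> lower the cutoff
                cutoff -= 0.05
        else:                           # probabilistic response mapping (cf. Schoeffler, 1965)
            p_yes = 1.0 / (1.0 + np.exp(-2.0 * (x - C_IDEAL)))
            resp = rng.random() < p_yes

        if signal:
            hits += resp; misses += not resp
        else:
            fas += resp; crs += not resp
    return hits / (hits + misses), fas / (fas + crs)

for m in ("sampled", "drift", "mapped"):
    print(m, "hit/FA rates:", simulate(m))
```

All three produce hit and false-alarm rates that deviate from a fixed-criterion observer, which is the sense in which the exact entry point of the noise is secondary to the argument.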
“…A second alternative explanation is that optimal classifier feedback reduces overreaction to the objective feedback. Because the objective classifier is, by definition, correct on 100% of the trials, and since no decision criterion exists that can achieve this level of performance, in hill-climbing (e.g., Busemeyer & Myung, 1992) and error correction models (e.g., Erev, 1998; Erev, Gopher, Itkin, & Greenshpan, 1995; Roth & Erev, 1995), the errors for objective classifier feedback might be understood as indicative of a need to continue adjusting the decision criterion. Although it is unclear why this would always lead to greater conservatism in cutoff placement for objective classifier, relative to optimal classifier, feedback, we tested this notion in two ways.…”
Section: Two Alternative Explanations
Citation type: mentioning (confidence: 99%)
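To see why objective feedback keeps an error-correction process moving, one can compare the two feedback sources in simulation. This is a hedged sketch assuming a simple fixed-step error-correction rule (in the spirit of, e.g., Kac, 1969; Erev, 1998), not any specific published model; D_PRIME, C_OPT, and STEP are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
D_PRIME, C_OPT, STEP = 1.0, 0.5, 0.05   # assumed discriminability, optimal cutoff, step size

def learn_cutoff(feedback, n_trials=5_000):
    """Error-correction learner: nudge the cutoff whenever the feedback says 'wrong'.

    feedback='objective': scored against the true category (no cutoff is 100% correct).
    feedback='optimal':   scored against the optimal classifier's own response.
    """
    cutoff, path = 0.0, []
    for _ in range(n_trials):
        signal = rng.random() < 0.5
        x = rng.normal(D_PRIME if signal else 0.0, 1.0)
        resp = x > cutoff
        target = signal if feedback == "objective" else (x > C_OPT)
        if resp != target:                      # an 'error' per this feedback source
            cutoff += STEP if resp else -STEP   # step away from the errant response
        path.append(cutoff)
    return np.array(path)

for fb in ("objective", "optimal"):
    tail = learn_cutoff(fb)[-1000:]
    print(f"{fb:9s} feedback: mean cutoff {tail.mean():+.2f}, sd {tail.std():.3f}")
```

Under optimal-classifier feedback the learner can eventually match its target perfectly and the cutoff settles; under objective feedback residual errors never vanish, so the cutoff keeps jittering, consistent with the quote's point.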
“…As suggested by many researchers, suppose that the observer adjusts the decision criterion based (at least in part) on the change in the rate of reward, with larger changes in rate being associated with faster, more nearly optimal, decision criterion learning (e.g., Busemeyer & Myung, 1992; Dusoir, 1980; Erev, 1998; Erev, Gopher, Itkin, & Greenshpan, 1995; Kubovy & Healy, 1977; Roth & Erev, 1995; Thomas, 1975; Thomas & Legge, 1970). To formalize this hypothesis, one can construct the objective reward function.…”
Section: Flat-maxima Hypothesis
Citation type: mentioning (confidence: 99%)
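The objective reward function the quote mentions maps each candidate cutoff to its expected payoff; near its maximum the function is nearly flat, which is the flat-maxima hypothesis's premise that reward-rate changes there are too small to drive fast criterion learning. A minimal sketch, assuming two unit-variance Gaussian evidence distributions with equal priors and payoffs:

```python
import numpy as np
from scipy.stats import norm

D_PRIME = 1.0   # assumed discriminability; the flat region widens as d' shrinks

def expected_reward(c, base_rate=0.5, v_hit=1.0, v_cr=1.0):
    """Objective reward function: expected payoff for a cutoff placed at c."""
    p_hit = 1.0 - norm.cdf(c, loc=D_PRIME)   # P(respond 'signal' | signal)
    p_cr = norm.cdf(c, loc=0.0)              # P(respond 'noise'  | noise)
    return base_rate * v_hit * p_hit + (1.0 - base_rate) * v_cr * p_cr

cs = np.linspace(-1.0, 2.0, 301)
rewards = expected_reward(cs)
best = cs[rewards.argmax()]
flat = cs[rewards >= 0.99 * rewards.max()]   # cutoffs within 1% of the maximum
print(f"optimum near c = {best:.2f}; within 1% of max for c in [{flat.min():.2f}, {flat.max():.2f}]")
```

A broad band of cutoffs yields nearly maximal reward, so a learner guided by changes in reward rate receives little signal near the optimum.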