2018
DOI: 10.31234/osf.io/p5rj9
Preprint

Testing the Foundations of Signal Detection Theory in Recognition Memory

Abstract: Signal Detection Theory (SDT) plays a central role in the characterization of human judgments in a wide range of domains, most prominently in recognition memory. But despite its success, many of its fundamental assumptions are often misunderstood, especially when it comes to its testability. The present work examines five main assumptions that are characteristic of existing SDT models -- the existence of a random scale representation, independent sampling, monotonic likelihood, Receiver Operating Characteristic…

Cited by 9 publications (9 citation statements) | References 68 publications
“…Otherwise, theoretically, σ²Target/σ²Nontarget needs to be estimated in a separate task using methods other than confidence ratings, such as manipulations of target/nontarget presentation probabilities and different incentives for Yes/No responses (for details, see Maniscalco & Lau, 2012). Realistically, however, stable estimation of σ²Target/σ²Nontarget would be difficult at the individual-subject level, as empirical ROCs constructed with these methods are usually not as smooth as theoretically supposed (e.g., Kellen et al., 2018; Macmillan & Creelman, 2005, pp. 71-72).…”
Section: Practical Notes For Empirical Experiments
confidence: 99%
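The estimation problem the passage describes can be illustrated with a minimal sketch. Under the unequal-variance Gaussian (UVSD) model, the z-transformed ROC is linear with slope σNontarget/σTarget, so the variance ratio follows from a straight-line fit through z-scored (false alarm, hit) pairs. The rates below are hypothetical illustrative values, not data from any cited study:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical cumulative hit/false-alarm rates across confidence criteria
# (illustrative values only, not data from the cited studies).
hits = np.array([0.42, 0.58, 0.73, 0.84])
fas = np.array([0.07, 0.16, 0.31, 0.50])

# Under the UVSD model the z-transformed ROC is linear with slope
# sigma_Nontarget / sigma_Target, so the variance ratio comes from a
# least-squares line through the z-scored points.
z_h, z_f = norm.ppf(hits), norm.ppf(fas)
slope, intercept = np.polyfit(z_f, z_h, 1)
sigma_ratio = 1.0 / slope  # estimated sigma_Target / sigma_Nontarget
```

With real single-subject data the z-points scatter around the fitted line, which is exactly the instability in individual-level estimates that the quoted passage warns about.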
“…Note that the assumption that ROC functions are concave (which includes linearity as a boundary case) implies that the likelihood ratio is monotonic. Any point of the ROC with slope larger/smaller than 1 indicates that the value of the binary-response criterion τ is more likely under the latent distribution associated with Same/Change trials (for details, see Kellen, Winiger, Dunn, & Singmann, 2019). To obtain biased confidence ratings, one need only place the confidence criteria associated with a "change"/"same" response along a range of values in which the latent memory-strength values are more likely under the distribution associated with Same/Change trials.…”
confidence: 99%
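The stated relation between ROC slope and the likelihood ratio can be checked numerically. In a Gaussian SDT model, the slope dH/dF at criterion τ equals the ratio of the signal and noise densities at τ, so the slope crosses 1 exactly where the two distributions cross. A minimal sketch with an equal-variance model and an arbitrary d′ = 1 (both assumptions, chosen only for illustration):

```python
import numpy as np
from scipy.stats import norm

d_prime = 1.0  # assumed equal-variance Gaussian model, illustrative value
tau = np.linspace(-3, 4, 2001)  # sweep of binary-response criteria

# ROC coordinates: hit and false-alarm rates as tau sweeps left to right.
H = norm.sf(tau, loc=d_prime)
F = norm.sf(tau)

# Numerical ROC slope dH/dF versus the analytical likelihood ratio
# f_signal(tau) / f_noise(tau); the two agree, and both exceed 1 exactly
# where tau lies past the crossover point d_prime / 2.
slope = np.gradient(H, F)
lr = norm.pdf(tau, loc=d_prime) / norm.pdf(tau)
```

Here `lr` is strictly increasing in τ (monotonic likelihood), which is what guarantees the concave ROC shape the passage starts from.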
“…Our study adds to previous efforts to test decision models with joint single-item and forced-choice recognition data (Jang et al., 2009; Kellen, Winiger, Dunn, & Singmann, 2019; Smith & Duncan, 2004). These studies explored a variety of different models and theoretical issues, but one common theme is that all of them included the UVSD model, and it consistently provided a good fit to the joint single-item and forced-choice data (or made good predictions about one form of data based on the other).…”
Section: Discussion
confidence: 89%
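The forced-choice prediction mentioned above has a simple closed form under the UVSD model: a two-alternative trial is answered correctly whenever the target's strength sample exceeds the lure's, and the difference of two independent Gaussians is itself Gaussian. A sketch with hypothetical parameter values (not estimates from any cited study):

```python
import math
from scipy.stats import norm

# Hypothetical UVSD parameters, in units of the lure standard deviation.
mu_s, sigma_s = 1.25, 1.25   # target (signal) distribution
sigma_n = 1.0                # lure (noise) distribution

# P(correct) = P(X_target > X_lure); the difference X_target - X_lure is
# Gaussian with mean mu_s and variance sigma_s^2 + sigma_n^2.
p_correct = norm.cdf(mu_s / math.sqrt(sigma_s**2 + sigma_n**2))
```

This is the kind of cross-task prediction the quoted studies evaluate: the same parameters fitted to single-item ratings must reproduce the observed forced-choice accuracy.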