How people set decision criteria in signal detection models is an important research question. Likelihood ratio (LR) theory, one of the most influential theories of criterion setting, typically assumes that (a) decisions are based on the objective LR of the signal and noise distributions, and (b) LR criteria do not change across tasks of varying difficulty. However, it is often questioned whether people can really know the exact shapes of the signal and noise distributions and compute the objective LR accordingly. Here we suggest that whether decision criteria are set on the basis of objective LR can be tested in two-condition experiments in which difficulty differs across conditions. In three empirical experiments, we asked participants to perform two-condition perceptual or memory tasks and to report their answers on a confidence rating scale. The results revealed that the two assumptions of LR theory contradicted each other: if we assumed that decision criteria were based on objective LR, then the estimated LR criteria differed across difficulty levels and fanned out as task difficulty decreased. We suggest that people may estimate the LR inaccurately in signal detection tasks, and we discuss several possible explanations for this distortion.
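For concreteness, a minimal sketch of the testable prediction, assuming the standard equal-variance Gaussian signal detection model (an illustrative assumption; the abstract does not specify the paper's exact model): with noise drawn from $N(0,1)$, signal drawn from $N(d',1)$, and $\phi$ denoting the standard normal density, the LR at a point $x$ on the decision axis, and the location $c_\beta$ of a criterion fixed at likelihood ratio $\beta$, are
\[
\mathrm{LR}(x) \;=\; \frac{f_S(x)}{f_N(x)} \;=\; \frac{\phi(x - d')}{\phi(x)} \;=\; \exp\!\left(d'x - \frac{d'^2}{2}\right),
\qquad
c_\beta \;=\; \frac{\ln\beta}{d'} + \frac{d'}{2}.
\]
Under this sketch, assumption (b) pins the criterion location $c_\beta$ to a specific function of $d'$, so criteria estimated from different difficulty conditions that imply diverging $\beta$ values, the fanning-out pattern reported above, speak against the joint assumptions (a) and (b).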
Public Significance Statement
This study demonstrates that three regularities, namely the mirror effect, the variance effect, and the z-transformed receiver operating characteristic (zROC) length effect, hold even when the objective likelihood ratio corresponding to the decision criteria differs between experimental conditions. It indicates that the estimated decision criteria on the axis of objective likelihood ratio differ across difficulty levels and fan out as task difficulty decreases. It suggests that people may estimate the likelihood ratio inaccurately in signal detection tasks.