Estimating d' from extreme false-alarm or hit proportions (p = 0 or p = 1) requires the use of a correction, because the z score of such proportions takes on infinite values. Two commonly used corrections are compared by using Monte Carlo simulations. The first is the 1/(2N) rule, for which an extreme proportion is corrected by this factor before d' is calculated. The second is the log-linear rule, for which each cell frequency in the contingency table is increased by 0.5, irrespective of the contents of each cell. Results showed that the log-linear rule resulted in less biased estimates of d' that always underestimated population d'. The 1/(2N) rule, apart from being more biased, could either over- or underestimate population d'.
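The two corrections described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' simulation code; the function names and the example counts (25 trials per stimulus class) are assumptions chosen for the demonstration.

```python
# Sketch of the two corrections for extreme hit/false-alarm proportions
# when estimating d'. Not from the article; an illustrative example only.
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def correct_1_over_2n(count, n):
    """1/(2N) rule: only extreme proportions are adjusted.
    p = 0 becomes 1/(2N); p = 1 becomes 1 - 1/(2N)."""
    p = count / n
    if p == 0:
        return 1 / (2 * n)
    if p == 1:
        return 1 - 1 / (2 * n)
    return p

def correct_loglinear(count, n):
    """Log-linear rule: add 0.5 to every cell frequency,
    whether or not the proportion is extreme (so N grows by 1)."""
    return (count + 0.5) / (n + 1)

# Hypothetical data: 25/25 hits and 0/25 false alarms (both extreme).
n = 25
d_2n = dprime(correct_1_over_2n(25, n), correct_1_over_2n(0, n))
d_ll = dprime(correct_loglinear(25, n), correct_loglinear(0, n))
```

Without a correction, `dprime(1.0, 0.0)` would be infinite; both rules yield finite estimates, and the abstract's point is that their bias profiles differ across repeated samples.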
In a recognition memory test, subjects may be asked to decide whether a test item is old or new (item recognition) or to decide among alternative sources from which it might have been drawn for study (source recognition). Confidence-rating-based receiver operating characteristic (ROC) curves for these superficially similar tasks are quite different, leading to the inference of correspondingly different decision processes. A complete account of source and item recognition will require a single model that can be fit to the entire data set. We postulated a detection-theoretic decision space whose dimensions, in the style of Banks (2000), are item strength and the relative strengths of the two sources. A model that assumes decision boundaries of constant likelihood ratios, source guessing for unrecognized items, and nonoptimal allocation of attention can account for data from three canonical data sets without assuming any processes specifically devoted to recollection. Observed and predicted ROCs for one of these data sets are given in the article, and ROCs for the other two may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive.
In recognition memory experiments, the tendency to identify a test item as "old" or "new" can be increased or decreased by instructions given at test. The effect of such response bias on remember-know judgments is to change "remember" as well as "old" responses. Existing models of the remember-know paradigm (based on dual-process and signal detection theories) interpret this effect as a shift in response criteria, but differ on the nature of the dimension along which the changes take place. We extended the models to account simultaneously for remember-know and confidence rating data and tested them using old-new (Experiment 1) and remember-know (Experiment 2) rating designs. Quantitative fits show that the signal detection models provide the best overall description of the data.
The cortical mechanisms of perceptual segregation of concurrent sound sources were examined, based on binaural detection of interaural timing differences. Auditory event-related potentials were measured from 11 healthy subjects. Binaural stimuli were created by introducing a dichotic delay of 500-ms duration to a narrow frequency region within a broadband noise, resulting in the perception of a centrally located noise and a right-lateralized pitch (dichotic pitch). In separate listening conditions, subjects actively discriminated and responded to randomly interleaved binaural and control stimuli, or ignored random stimuli while watching silent cartoons. In a third listening condition, subjects ignored stimuli presented in homogeneous blocks. For all listening conditions, the dichotic pitch stimulus elicited an object-related negativity (ORN) at a latency of about 150-250 ms after stimulus onset. When subjects were required to actively respond to stimuli, the ORN was followed by a P400 wave with a latency of about 320-420 ms. These results support and extend a two-stage model of auditory scene analysis in which acoustic streams are automatically parsed into component sound sources based on source-relevant cues, followed by a controlled process involving identification and generation of a behavioral response.
Individuals with developmental dyslexia show impairments in processing that require precise timing of sensory events. Here, we show that in a test of auditory temporal acuity (a gap-detection task) children ages 6-9 years with dyslexia exhibited a significant deficit relative to age-matched controls. In contrast, this deficit was not observed in groups of older reading-impaired individuals (ages 10-11 years; 12-13 years) or in adults (ages 23-25 years). It appears, therefore, that early temporal resolution deficits in those with reading impairments may significantly ameliorate over time. However, the occurrence of an early deficit in temporal acuity may be antecedent to other language-related perceptual problems (particularly those related to phonological processing) that persist after the primary deficit has resolved. This result suggests that if remedial interventions targeted at temporal resolution deficits are to be effective, the early detection of the deficit and early application of the remedial programme is especially critical.