Published: 2002
DOI: 10.1117/12.462684

Visual discrimination modeling of lesion detectability

Abstract: The Sarnoff JNDmetrix visual discrimination model (VDM) was applied to predict human psychophysical performance in the detection of simulated mammographic lesions. Contrast thresholds for the detection of synthetic Gaussian "masses" on mean backgrounds and simulated mammographic backgrounds were measured in two-alternative forced-choice (2AFC) trials. Experimental thresholds for 2-D Gaussian signal detection decreased with increasing signal size on mean backgrounds and on 1/f³ filtered noise images presented …
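As a rough illustration of the stimuli described in the abstract, the sketch below builds a 2-D Gaussian "mass" on a 1/f³ filtered-noise background, the two image classes compared in the 2AFC trials. The image size, noise contrast, signal amplitude, and Gaussian width are illustrative assumptions rather than values reported in the paper, and the spectral-filtering noise synthesis is a generic approach, not necessarily the authors' exact procedure.

```python
# Sketch only: assumed parameters, generic 1/f^3 noise synthesis.
import numpy as np

def power_law_noise(n, exponent=3.0, rng=None):
    """n x n noise image with an approximately 1/f^exponent power spectrum."""
    rng = np.random.default_rng() if rng is None else rng
    fx = np.fft.fftfreq(n)
    fy = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fy, indexing="ij"))
    f[0, 0] = 1.0                          # avoid division by zero at DC
    amplitude = f ** (-exponent / 2.0)     # amplitude spectrum ~ 1/f^(exponent/2)
    amplitude[0, 0] = 0.0                  # zero-mean background
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
    noise = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
    return noise / noise.std()

def gaussian_signal(n, sigma, amplitude):
    """2-D Gaussian 'mass' centered in an n x n image."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    return amplitude * np.exp(-(x**2 + y**2) / (2.0 * sigma**2))

n = 256
mean_luminance = 0.5                       # assumed display range [0, 1]
background = mean_luminance + 0.1 * power_law_noise(n, exponent=3.0)
signal = gaussian_signal(n, sigma=8.0, amplitude=0.05)
signal_present = background + signal       # one alternative of the 2AFC trial
signal_absent = background                 # the other alternative
```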

Cited by 15 publications (12 citation statements)
References 9 publications (24 reference statements)
“… [22] Another method of using perceptual difference models is by assessing the discriminability between an image containing a signal and the same image without the signal. [19-21, 43] This second method was also investigated to measure signal detectability as a function of compression. Although the detailed results are not presented in this proceedings paper, we found that this method was unsuccessful in predicting the degradation in human performance with compression ratio. …”
Section: Discussion (mentioning)
confidence: 99%
“… Since the threshold data that we had gathered at that time for those backgrounds and signals had exhibited negative C-D slopes, their use in LP channel calibration ensured that the VDM would predict negative C-D slopes for those images. [8] More recent work [9] has shown that the absence of fixation cues scaled to signal size was primarily responsible for the negative C-D characteristics found in our earlier experiments. Fixation cues enhance detection by reducing uncertainty in signal location for the human observer. …”
Section: Introduction (mentioning)
confidence: 88%
“… [7] In an earlier study, we also found experimentally a positive contrast-detail (C-D) slope for Gaussian "masses" in 1/f³ filtered noise images when different backgrounds were used in the 2AFC trials; when the same background was used for both locations, however, we observed smaller detection thresholds and a negative C-D slope, i.e., larger Gaussians were more conspicuous than smaller Gaussians, in qualitative agreement with our threshold data for Gaussian detection on mean-luminance backgrounds. [8] When we first began simulating Gaussian detection thresholds with the JNDmetrix VDM, a lowpass (LP) channel was introduced in the model to provide an appropriate response to signals, such as Gaussians, with an amplitude in the frequency domain that increases as frequency approaches zero. The sensitivity and masking parameters of this new channel required psychophysical data for proper calibration relative to the existing bandpass channels, which respond to higher spatial frequencies and had been calibrated previously to fit psychophysical thresholds for the detection and discrimination of sine gratings. …”
Section: Introduction (mentioning)
confidence: 99%
“… [21] JND channel maps were generated using "single-ended" simulations in which each test image from the signal (signal-present) and noise (signal-absent) sets was paired with a uniform, mean-luminance reference image. Model input in this case is a measure of the contrast visibility in a given test image. …”
Section: Methods (mentioning)
confidence: 99%
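The "single-ended" pairing described in the Methods excerpt above can be sketched as follows. The visibility measure here is a simple RMS-contrast stand-in for the model output; the study itself used the Sarnoff JNDmetrix VDM, which is not reproduced here, so this only illustrates the pairing scheme. The set names in the usage comment are hypothetical.

```python
# Sketch: pair each test image with a uniform reference at its own mean
# luminance; score it with a placeholder RMS-contrast measure (not the VDM).
import numpy as np

def uniform_reference(test_image):
    """Uniform reference image at the mean luminance of the test image."""
    return np.full_like(test_image, test_image.mean())

def contrast_visibility(test_image):
    """Placeholder contrast-visibility measure for a single-ended comparison."""
    reference = uniform_reference(test_image)
    rms_difference = np.sqrt(np.mean((test_image - reference) ** 2))
    return rms_difference / test_image.mean()

# Usage (hypothetical image sets):
# signal_scores = [contrast_visibility(img) for img in signal_present_set]
# noise_scores  = [contrast_visibility(img) for img in signal_absent_set]
```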