One hundred forty-four observers, divided into eight groups of 18 each, were run in a two-alternative, temporal, forced-choice auditory signal-detection task. At each of two signal intensities, four levels of information feedback were used: no feedback (NF), and correct feedback on every trial (F100), on three-fourths of the trials (F75), or on half of the trials (F50), with incorrect feedback on the remaining trials. The results were that (a) NF and F100 led to a higher probability of correct responding, P(C), than either F75 or F50 at both signal intensities; (b) P(C) for NF was higher than for F100 under the higher intensity but lower under the lower intensity; (c) on trials immediately following trials on which the observer's response and the feedback agreed, detection rates were higher and false-alarm rates were lower than following disagreement trials, whereas these differences were close to zero for F50. It is argued that feedback leads the observer to change his criterion following disagreements. The effect of this variability is to depress the mean detectability index d′ of signal-detectability theory.
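The abstract invokes the detectability index d′ without defining it. A minimal sketch of the standard signal-detection computation, d′ = z(hit rate) − z(false-alarm rate), is shown below; the function name and the example rates are illustrative assumptions, not values from the study.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Standard detectability index: z-transform of the hit rate
    minus z-transform of the false-alarm rate."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for illustration only (not data from the experiment)
print(round(d_prime(0.80, 0.20), 3))
```

Criterion shifts following disagreement trials change hit and false-alarm rates jointly; averaging d′ over sessions with a wandering criterion yields a lower mean d′ than a fixed criterion would, which is the depression the abstract describes.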
A fixed-base simulation experiment was performed to gather visual air-to-ground target-recognition performance data for comparison with predictions from the Autonetics Detection Model. Color motion-picture imagery obtained during a low-altitude flight was used to simulate the observer's forward view. Observer performance was measured in terms of probability and range of correct target recognition. The Autonetics Detection Model incorporates parameters related to the target, the environment, and the observer. In generating theoretical predictions from the model, values of all parameters were specified independently of the data obtained in the experiment. No curve-fitting techniques were used to improve the fit between the empirical and theoretical curves. Results indicated a close relationship between the obtained performance data and the model predictions. A product-moment correlation of +0.53, significant at the 0.001 level, was obtained between the empirical and theoretical 50% recognition ranges.

INTRODUCTION

Several attempts to develop mathematical models of air-to-ground target-recognition performance have recently been made (Gilmour and Emerson, 1965; Franklin and Whittenburg, 1965; and Ornstein, Brainard, and Bishop, 1961). Two major problems exist in attempting to validate existing models. First, the lack of empirical performance data obtained under controlled conditions precludes comparisons of actual performance with theoretical model predictions over a substantial range of parameter values. Second, the large number of model parameters requires quantitative specification of many variables that are difficult to measure accurately. As a result, much model-validation work has been forced into one of two paths: either the model is simplified to the extent that it is no longer applicable to operational problems, or the data used to test the model arise from simplified experimental conditions.
This paper presents a preliminary attempt to evaluate the Autonetics Detection Model. It was felt that the simulation method used for obtaining empirical data in this study would preserve sufficient "real-world" detail in target and background parameters, while retaining enough experimental control that all model parameters could be quantitatively specified.

THE AUTONETICS DETECTION MODEL

Statement of the Model

The objective of the visual detection and recognition model utilized by Autonetics is to furnish an estimate of the cumulative probability of having recognized a target by a certain point in the target approach. This estimate is to be made on the basis of measurable characteristics of the target and its background, the environment, and the observer. The model is largely descriptive rather than rational, in that its formulation is fitted to the form of existing experimental data on several aspects of the search process without serious consideration of the underlying neurological mechanisms. Basically, the cumulative probability is obtained as a product of a series of single-glimpse probabilities according ...
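The exact Autonetics formulation is truncated above, but glimpse models of this family typically combine independent single-glimpse probabilities so that the cumulative probability of recognition is one minus the product of the per-glimpse miss probabilities. A minimal sketch under that assumption (function name and glimpse values are hypothetical):

```python
from math import prod

def cumulative_recognition(glimpse_probs):
    """Cumulative probability of at least one successful glimpse,
    assuming independent single-glimpse probabilities g_i:
    P = 1 - prod(1 - g_i)."""
    return 1.0 - prod(1.0 - g for g in glimpse_probs)

# Hypothetical single-glimpse probabilities as the range to target closes
print(round(cumulative_recognition([0.05, 0.10, 0.20]), 3))
```

Each single-glimpse probability would itself be computed from the target, environment, and observer parameters the model specifies; this sketch only shows how such probabilities accumulate over the approach.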
In a two-alternative, temporal, forced-choice signal-detection task, observers received varying degrees of correct information as to the interval in which the signal had occurred. Groups π(100), π(75), and π(50) received correct feedback in the proportions 1.00, 0.75, and 0.50 of trials, respectively. Group π(0) received no information on any trial. Eighteen observers were run for 400 trials under each combination of two E/N0's and the four information conditions. Results were: (a) detection rate was greatest for π(100) and π(0) within both E/N0's; but (b) the rate for Group π(100) exceeded that for π(0) at the low E/N0, while π(0) exceeded π(100) at the high E/N0, in agreement with an earlier result obtained in a Yes-No experiment. Are SiEjAk states on trial n independent of SiEjAk states on trial n−1 (where i, j, k = 1, 2 and Si = stimulus interval, Ej = experimenter feedback to observer, Ak = response of observer)? That is, are transition probabilities stationary? For both E/N0's: (1) all states for π(0) were stationary; (2) a single state (S2E2Ak) was nonstationary for the π(100) groups; (3) half the states (all S2) were nonstationary for π(75); while (4) more than half of the states (S1 and S2 randomly) were nonstationary for the π(50) groups. (5) First-order sequential effects clearly were strongest under feedback. (6) Detection rates were higher and false-alarm rates lower on trials following EjAk agreements (i.e., j = k) than on disagreement trials (j ≠ k).
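The stationarity question above amounts to estimating first-order transition probabilities between trial states and asking whether they are constant across the session. A minimal sketch of the estimation step, using a hypothetical agree/disagree state sequence rather than the study's SiEjAk states:

```python
from collections import Counter

def transition_probs(states):
    """Estimate first-order transition probabilities P(next | current)
    from a sequence of discrete trial states."""
    pairs = Counter(zip(states, states[1:]))   # counts of (current, next)
    totals = Counter(states[:-1])              # counts of each current state
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

# Hypothetical sequence of feedback-response agreement states
seq = ["agree", "agree", "disagree", "agree", "disagree", "disagree", "agree"]
probs = transition_probs(seq)
```

A stationarity test would compare such estimates computed over successive blocks of trials; transition probabilities that drift across blocks mark the state as nonstationary.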