If face images are degraded by block averaging, there is a nonlinear decline in recognition accuracy as block size increases, suggesting that identification requires a critical minimum range of object spatial frequencies. The identification of faces was measured with equivalent Fourier low-pass filtering and block averaging preserving the same information, and with high-pass transformations. In Experiment 1, accuracy declined and response time increased in a significant nonlinear manner in all cases as the spatial-frequency range was reduced. However, it did so at a faster rate for the quantized and high-passed images. A second experiment controlled for the differences in the contrast of the high-pass faces and found a reduced but significant and nonlinear decline in performance as the spatial-frequency range was reduced. These data suggest that face identification is preferentially supported by a band of spatial frequencies of approximately 8-16 cycles per face; contrast- or line-based explanations were found to be inadequate. The data are discussed in terms of current models of face identification.

The questions of whether the information concerning the identity of faces is carried by a limited range of spatial scales, and whether the potential information from different regions of the spatial spectrum is given equal weight in the determination of identity, have been approached in a number of different ways. One method of considering these issues has been to make use of spatial-frequency filtering techniques (Harmon, 1973). However, variations in this method have produced contradictory results, with notably different conclusions about the relative importance of different spatial-frequency bands specified in terms of cycles per face. The term cycles per face is defined as the number of sinusoidal repetitions of a given width that can be placed within the eye-level width of the face.
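The cycles-per-face metric ties directly to image resolution: by the Nyquist sampling limit, an image sampled at N pixels across the face can carry at most N/2 cycles per face. A minimal sketch of this conversion (the function name is ours, not from the original papers):

```python
def max_cycles_per_face(pixels_per_face: int) -> float:
    """Highest spatial frequency representable across the face width.

    By the Nyquist sampling limit, an image sampled at N pixels per
    face can carry at most N/2 cycles per face.
    """
    return pixels_per_face / 2

# Harmon's 16 x 16 minimum corresponds to at most 8 cycles per face,
# the lower edge of the 8-16 cycles-per-face band reported above.
print(max_cycles_per_face(16))  # 8.0
```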
The use of this metric to describe the information present in stimuli allows discussion of the degree of detail necessary for recognition, perhaps by defining the scale of facial configuration. A class of objects has a configuration if there is a consistent set of features all arranged in the same order. Thus, if a set of examples is superimposed, normalizing for scale and viewpoint, another example of the class is produced that is closer to the prototype. Clearly, faces have this property, since all have two eyes, a nose, and a mouth, and these are consistently arranged. Harmon (1973) used block-averaged images of the kind that can be seen in Figure 2. The images are formed by placing a regular square grid across the image and setting the pixel value at each grid square to the average gray level within it. This work suggested that the minimum image quality that allows effective identification corresponds to a 16 × 16 pixel image; however, since the images did not take up the whole of the screen, the number of pixels per face was slightly lower. Harmon also used a smooth low-pass filtering technique. This type of filtering operation does not introduce additional spatial frequencies (noise), as the pix...
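The grid-averaging transform described above is easy to sketch in NumPy. This is a minimal illustration under the simplifying assumption that the image is cropped so the block size divides it evenly; it is not the exact procedure of the original study:

```python
import numpy as np

def block_average(image: np.ndarray, block: int) -> np.ndarray:
    """Quantise a grayscale image by averaging within a square grid,
    as in Harmon's block-averaging (pixelisation) technique."""
    h, w = image.shape
    # Crop so the grid divides the image evenly (a simplification).
    h, w = h - h % block, w - w % block
    img = image[:h, :w].astype(float)
    # Mean gray level within each block x block grid cell...
    cells = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # ...expanded back so each cell becomes a uniform square.
    return np.repeat(np.repeat(cells, block, axis=0), block, axis=1)

# A 16 x 16 image reduced to 4 x 4 effective pixels.
face = np.arange(256, dtype=float).reshape(16, 16)
coarse = block_average(face, 4)
```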
It has recently become apparent that if face images are degraded by spatial quantisation, or block averaging, there is a nonlinear acceleration of the decline in accuracy of recognition as block size increases. This suggests that recognition requires a critical minimum range of object spatial frequencies. Two experiments were performed to clarify the phenomenon. In experiment 1, the speed and accuracy of recognition for six frontoparallel photographs of faces were measured. After familiarisation training sessions, the images were shown for 100 ms with 11, 21, and 42 pixels per face, measured horizontally. Transformations calculated to remove the same range of spatial frequencies were performed by means of quantisation, a Fourier low-pass filter, and Gaussian blurring. Although accuracy declined and response time increased in a significant, nonlinear manner in all cases as the image quality was reduced, it did so at a faster rate for the quantised images. In experiment 2, faces rated as being typical were shown at 9, 12, 23, and 45 pixels per face and with appropriate Fourier low-pass versions. The nonlinear decline was confirmed, and it was shown that it could not be attributed to a ceiling effect. A further condition allowed the quantised and Fourier low-pass conditions to be compared with an unstructured-noise condition of equal strength to that of the quantised images. This gave comparable, but slightly less impaired, recognition than the quantised images. It can be inferred from these results that the removal of a critical range of at least 8-16 cycles per face of information explains the steep decline in recognition seen with quantised images. However, the decline found with quantised images is reinforced by internal masking from pixelisation.
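The Fourier low-pass comparison condition can be sketched as an ideal (hard-edged) filter that zeroes all components above a radial cutoff. This is a minimal illustration, assuming a square image whose face fills the frame so that cycles per image stand in for cycles per face; the filters actually used in the experiments may have differed in detail:

```python
import numpy as np

def fourier_lowpass(image: np.ndarray, cutoff_cycles: float) -> np.ndarray:
    """Ideal low-pass filter: remove all spatial frequencies above cutoff.

    cutoff_cycles is in cycles per image width, standing in for cycles
    per face when the face fills the frame (an assumption).
    """
    n = image.shape[0]  # assumes a square image
    f = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    yy, xx = np.mgrid[:n, :n] - n // 2
    radius = np.hypot(yy, xx)  # frequency of each component, cycles/image
    f[radius > cutoff_cycles] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

A grating above the cutoff is removed entirely, while one below it passes through unchanged; a hard-edged cutoff like this also introduces ringing on natural images, which smooth (e.g. Gaussian) filters avoid.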
Factors which govern the temporal integration of spatial information were examined in a group of five experiments. A series of high-pass and low-pass spatially filtered versions of a visual scene was generated. Observers' ratings of these filtered versions of the scene for perceived image quality indicated that quality was determined both by the bandwidth of spatial information and by the presence of high-spatial-frequency edge information. When sequences of three different versions of the scene were presented over an interval of 120 ms, the perceived quality of the resulting composite image was determined both by the ratings of the individual components of that sequence and by the order in which these components were presented. When the order of spatial information in a sequence moved from coarse to fine detail, the perceived quality of the composite image was significantly better than when the order moved from fine to coarse. This evidence of a coarse-to-fine bias in pattern integration was further investigated with a detection paradigm. The pattern of errors once again indicated that temporal integration of spatial information was superior when a coarse-to-fine mode of information delivery was employed. Taken together, the data indicate that the pattern-integration mechanism has an inherent order bias and does not accumulate spatial information as efficiently when the 'natural' coarse-to-fine order is violated.
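The coarse-to-fine presentation order can be sketched by generating progressively less blurred versions of a scene and ordering the frames; the Gaussian-blur stand-in and the sigma values here are our assumptions, not the filters or timings used in the original experiments:

```python
import numpy as np

def gaussian_blur(image: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian blur (a simple stand-in for low-pass filtering)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Convolve rows, then columns ('same' keeps the image size).
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, image.astype(float))
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def coarse_to_fine(image: np.ndarray, sigmas=(4.0, 2.0, 1.0)) -> list:
    """Three-frame sequence with the most blurred (coarsest) frame first.

    Reversing the returned list gives the fine-to-coarse control order;
    frame timing (e.g. within a 120 ms interval) is left to the display loop.
    """
    return [gaussian_blur(image, s) for s in sorted(sigmas, reverse=True)]
```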
Nelson (1982) presented regression equations for the prediction of Wechsler Adult Intelligence Scale (WAIS) IQ from performance on the National Adult Reading Test (NART). In a cross-validation sample (n = 151) these equations predicted 66, 72 and 33 per cent of the variance in WAIS Full Scale, Verbal and Performance IQ respectively. There were no ceiling or floor effects in the relationship between NART performance and WAIS IQ despite the wide IQ range of the sample. The standardization and cross-validation samples were combined (n = 271) to generate new regression equations. These equations should be used in preference to the original equations as they are based on a larger sample with a wider IQ and age range. Combining NART and Schonell Graded Word Reading Test errors did not improve IQ prediction in poor readers. A detailed examination of the NART's test-retest and inter-rater reliability was also conducted.
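The regression-equation approach amounts to an ordinary least-squares fit of measured IQ on NART error counts. The numbers below are invented purely for illustration; they are not Nelson's (1982) coefficients or the cross-validation data:

```python
import numpy as np

# Hypothetical calibration sample: NART error counts and measured
# WAIS Full Scale IQs (invented values, for illustration only).
nart_errors = np.array([5, 12, 20, 28, 35, 44], dtype=float)
wais_fsiq = np.array([124, 118, 110, 104, 96, 89], dtype=float)

# Fit predicted IQ = slope * errors + intercept by ordinary least squares.
slope, intercept = np.polyfit(nart_errors, wais_fsiq, deg=1)

def predict_iq(errors: float) -> float:
    """Predict WAIS Full Scale IQ from NART errors (illustrative fit only)."""
    return slope * errors + intercept
```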
The language and tools of risk and uncertainty estimation in flood risk management (FRM) are rarely optimized for the extant communication challenge. This paper develops the rationale for a pragmatic semiotics of risk communication between scientists developing flood models and forecasts and those professional groups who are the receptors for flood risk estimates and warnings in the UK. The current barriers to effective communication and the constraints involved in the formation of a communication language are explored, focusing on the role of the professional's agenda or "mission" in creating or reducing those constraints. The tools available for the development of this discourse, for both flood warnings in real time and generalized FRM communications, are outlined. It is argued that the contested ownership of the articulation of uncertainties embedded in flood risk communications could be reduced by the development of a formally structured translational discourse between science and professionals in FRM, through which process "codes of practice" for uncertainty estimation in different application areas can be developed. Ways in which this might take place in an institutional context are considered.