The information about the blur and noise of an original image is lost when a standard image thumbnail is generated by filtering and subsampling. Image browsing becomes difficult because standard thumbnails do not distinguish between high-quality and low-quality originals. In this paper, an efficient algorithm with a blur-generating component and a noise-generating component is presented that preserves the local blur and the noise of the originals. The local blur is rapidly estimated using a scale-space expansion of the standard thumbnail and is then used to apply a space-varying blur to the thumbnail. The noise is estimated and rendered using multirate signal transformations that allow most of the processing to occur at the lower spatial sampling rate of the thumbnail. The new thumbnails provide a quick, natural way for users to identify images of good quality. A subjective evaluation shows that the new thumbnails are more representative of their originals for blurry images. The noise-generating component improves the results for noisy images but degrades the results for textured images. The blur-generating component of the new thumbnails may always be used to advantage. The decision to use the noise-generating component should be based on testing with the particular image mix expected for the application.
An all-digital ring-wedge detector system is presented that simulates the analog multielement array commonly used in coherent optoelectronic processors. The system is applicable to either hard-copy or digital imagery. Using neural-network software, we demonstrate high accuracy for the recognition of fingerprints, including both orientation sorting and wide-range size-independent sorting, by using ring-only and wedge-only input neurons, respectively. The system is also applied to windowed subregions of fingerprint imagery, providing a feature set that summarizes localized information about spatial-frequency content and edge-angle correlations. Examples are presented in which this localized spatial-frequency information is used to produce local ridge-orientation maps and to detect regions of poor print quality. In summary, both direct-image data and spatial-transform data are found to be important.
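A digital ring-wedge detector of the kind described above can be sketched as sums of the centered FFT magnitude over concentric rings and angular wedges. The bin counts and the linear radial binning below are illustrative assumptions; rings capture rotation-invariant spatial-frequency content, while wedges capture edge-orientation content.

```python
import numpy as np

def ring_wedge_features(img, n_rings=8, n_wedges=8):
    """Sum the centered 2-D FFT magnitude over concentric rings and
    angular wedges, mimicking the multielement ring-wedge array."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.indices((h, w))
    cy, cx = h // 2, w // 2
    r = np.hypot(y - cy, x - cx)
    # Opposite spectrum halves are redundant, so wedges span 180 degrees.
    theta = np.mod(np.arctan2(y - cy, x - cx), np.pi)
    r_bins = np.minimum((r / (r.max() + 1e-9) * n_rings).astype(int), n_rings - 1)
    t_bins = np.minimum((theta / np.pi * n_wedges).astype(int), n_wedges - 1)
    rings = np.bincount(r_bins.ravel(), weights=f.ravel(), minlength=n_rings)
    wedges = np.bincount(t_bins.ravel(), weights=f.ravel(), minlength=n_wedges)
    return rings, wedges
```

For a fingerprint subregion, the dominant wedge bin indicates the local ridge orientation, which is how the localized features can be turned into a ridge-orientation map.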
In the automatic assessment of image quality, we obtained high accuracy in the classification of image degradations in a manner that is largely independent of scene content. Using an all-digital ring-wedge detector system combined with neural-network software, we conducted several experiments whose end goal was to classify images according to numerical quality scales. Experiments are presented that stress the importance of both local and global image quality assessment. Two databases of degraded images were prepared. One uses five levels of Gaussian blur to simulate depth-of-field effects. The other was prepared by lossy compression and recovery, with artifacts generated by a JPEG (Joint Photographic Experts Group) compression algorithm. In quantitative terms, our best sorting of Gaussian blur without knowledge of the original scene achieved an accuracy of 96%. For JPEG degradation we obtained an accuracy of 95% without knowledge of the original and 98% when the original scene was available as a reference.
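The intuition behind sorting Gaussian-blur levels from spectral features can be shown with a small sketch. The single high-frequency-energy statistic below is an illustrative stand-in for the full ring-feature vector fed to the neural network, and the cutoff value is an assumption; it simply demonstrates that the feature responds monotonically to the degradation level, which is what makes content-independent classification possible.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur used to synthesize degradation levels."""
    radius = max(1, int(3 * sigma))
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)

def high_freq_energy(img, cutoff=0.25):
    """Fraction of spectral energy beyond a normalized radial cutoff:
    a one-number proxy for the outer rings of a ring-wedge detector."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img)))**2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2) / (min(h, w) / 2)
    return f[r > cutoff].sum() / f.sum()
```

Heavier blur attenuates high frequencies more, so this feature decreases with degradation level regardless of the underlying scene.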
A full color visibility model has been developed that uses separate contrast sensitivity functions (CSFs) for contrast variations in the luminance and chrominance (red-green and blue-yellow) channels. The width of the CSF in each channel is varied spatially depending on the luminance of the local image content. The CSF is adjusted so that more blurring occurs as the luminance of the local region decreases. The difference between the contrast of the blurred original and marked images is measured using a color difference metric. This spatially varying CSF performed better than a fixed CSF in the visibility model, more closely approximating subjective measurements of a set of test color patches ranked by human observers for watermark visibility. The effect of using the CIEDE2000 color difference metric was also compared with that of CIEDE1976 (i.e., a Euclidean distance in CIELAB).
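Two pieces of the model above lend themselves to a brief sketch: a luminance-dependent blur width (darker regions get more blurring, mimicking the narrower CSF there) and the CIEDE1976 color difference. The linear luminance-to-sigma mapping and its range below are illustrative assumptions, not the paper's fitted parameters; the ΔE*ab formula is the standard Euclidean distance in CIELAB.

```python
import numpy as np

def luminance_adaptive_sigma(L, sigma_min=0.5, sigma_max=3.0, L_ref=100.0):
    """Map local luminance L (CIELAB L*, 0..100) to a blur width:
    lower luminance yields a wider blur. Linear mapping is a simplifying
    assumption for illustration."""
    L = np.clip(np.asarray(L, dtype=float), 0.0, L_ref)
    return sigma_max - (sigma_max - sigma_min) * (L / L_ref)

def delta_e76(lab1, lab2):
    """CIEDE1976 color difference: Euclidean distance between two
    CIELAB triples (L*, a*, b*)."""
    d = np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)
    return np.sqrt((d**2).sum(axis=-1))
```

In the model, the per-pixel sigma would drive the CSF blurring of each channel before the original and watermarked images are compared with the color difference metric.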