This paper presents an algorithm designed to measure the local perceived sharpness in an image. Our method utilizes both spectral and spatial properties of the image: for each block, we measure the slope of the magnitude spectrum and the total spatial variation. These measures are then adjusted to account for visual perception, and the adjusted measures are combined via a weighted geometric mean. The resulting measure, S3 (Spectral and Spatial Sharpness), yields a perceived sharpness map in which greater values denote perceptually sharper regions. This map can be collapsed into a single index that quantifies the overall perceived sharpness of the whole image. We demonstrate the utility of the S3 measure for within-image and across-image sharpness prediction, no-reference image quality assessment of blurred images, and monotonic estimation of the standard deviation of the impulse response used in Gaussian blurring. We further evaluate the accuracy of S3 in local sharpness estimation by comparing S3 maps to sharpness maps generated by human subjects, and we show that the S3 maps are highly correlated with the human-subject maps.
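The spectral half of the measure can be sketched as follows. This is an illustrative reimplementation, not the published code: the function name `spectral_slope` and the log-log line fit over a radially averaged magnitude spectrum are assumptions, and the paper's perceptual adjustments are omitted.

```python
import numpy as np

def spectral_slope(block):
    """Estimate the slope of a block's magnitude spectrum.

    Fits a line to log(magnitude) vs. log(frequency) after radially
    averaging the 2-D spectrum. Natural images follow a roughly 1/f
    falloff; blurring attenuates high frequencies, making the fitted
    slope more negative. (Sketch only; S3's perceptual adjustment of
    this measure is not reproduced here.)
    """
    mag = np.abs(np.fft.fftshift(np.fft.fft2(block)))
    h, w = block.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx).round().astype(int)
    # Radially average the magnitude spectrum, skipping the DC bin.
    max_r = min(cy, cx)
    radial = np.array([mag[radius == r].mean() for r in range(1, max_r)])
    freqs = np.arange(1, max_r)
    slope, _intercept = np.polyfit(np.log(freqs), np.log(radial + 1e-12), 1)
    return slope
```

Blurring a block should drive the estimated slope downward, which is the behavior the sharpness map keys on.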
This paper presents an algorithm for video quality assessment, spatiotemporal MAD (ST-MAD), which extends our previous image-based algorithm (MAD [1]) to take into account visual perception of motion artifacts. ST-MAD employs spatiotemporal "images" (STS images [2]) created by taking time-based slices of the original and distorted videos. Motion artifacts manifest in the STS images as spatial artifacts, which allows one to quantify motion-based distortion by using classical image-quality assessment techniques. ST-MAD estimates motion-based distortion by applying MAD's appearance-based model to compare the distorted video's STS images to the original video's STS images. This comparison is further adjusted by using optical-flow-derived weights designed to give greater precedence to fast-moving regions located toward the center of the video. Testing on the LIVE video database demonstrates that ST-MAD performs well in predicting video quality.
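The STS construction itself is simple to illustrate. The sketch below is an assumption about the slicing step only (the function name `sts_slices` is hypothetical); ST-MAD's appearance-based comparison and optical-flow weighting are not shown.

```python
import numpy as np

def sts_slices(video):
    """Extract spatiotemporal slice (STS) images from a video volume.

    `video` is a (T, H, W) grayscale array. A horizontal STS image
    fixes a row and stacks it over time; a vertical STS image fixes a
    column. Object motion then appears as oriented spatial structure
    in these slices, so spatial quality models can score it.
    """
    t, h, w = video.shape
    horiz = [video[:, r, :] for r in range(h)]  # each slice is (T, W)
    vert = [video[:, :, c] for c in range(w)]   # each slice is (T, H)
    return horiz, vert
```

The same slicing is applied to the reference and distorted videos, and corresponding slices are compared pairwise.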
Network inference (or tomography) problems, such as traffic matrix estimation or completion and link loss inference, have been studied rigorously in different networking applications. These problems are often posed as under-determined linear inverse (UDLI) problems and solved in a centralized manner, where all the measurements are collected at a central node, which then applies a variety of inference techniques to estimate the attributes of interest. This paper proposes a novel framework for decentralizing these large-scale under-determined network inference problems by intelligently partitioning each problem into smaller sub-problems and solving them independently and in parallel. The resulting estimates, referred to as multiple descriptions, can then be fused together to compute the global estimate. We apply this Multiple Description and Fusion Estimation (MDFE) framework to three classical problems: traffic matrix estimation, traffic matrix completion, and loss inference. Using real topologies and traces, we demonstrate how MDFE can speed up computation while maintaining (even improving) estimation accuracy, and how it enhances robustness against noise and failures. We also show that the MDFE framework is compatible with a variety of existing inference techniques used to solve UDLI problems.
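The partition-then-fuse idea can be sketched on a generic UDLI system y = Ax. This is a toy version under stated assumptions: the sub-problems here are random row subsets solved by minimum-norm least squares, and fusion is a plain mean, whereas the paper's partitioning is topology-aware and its fusion is more sophisticated.

```python
import numpy as np

def mdfe_estimate(A, y, parts=2, seed=0):
    """Decentralized estimate of x in y = A @ x (under-determined).

    Each sub-problem sees only a subset of the measurements and
    computes a minimum-norm least-squares estimate (one
    "description"); the descriptions are then fused by averaging.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    descriptions = []
    for rows in np.array_split(idx, parts):
        # Minimum-norm least-squares solution of the sub-problem.
        descriptions.append(np.linalg.pinv(A[rows]) @ y[rows])
    return np.mean(descriptions, axis=0)
```

With `parts=1` this degenerates to the centralized minimum-norm solution, since a row permutation does not change the pseudoinverse solution; larger `parts` trades per-node work for fusion error.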
This paper presents a block-based algorithm designed to measure the local perceived sharpness in an image. Our method utilizes both spectral and spatial properties of the image: for each block, we measure the slope of the magnitude spectrum and the total spatial variation. These measures are then adjusted to account for visual perception, and the adjusted measures are combined via a weighted geometric mean. The resulting measure, S3 (Spectral and Spatial Sharpness), yields a perceived sharpness map in which greater values denote perceptually sharper regions. This map can be collapsed into a single index that quantifies the overall perceived sharpness of the whole image. We demonstrate the utility of the S3 measure for within-image and across-image sharpness prediction, for global blur estimation, and for no-reference image quality assessment of blurred images.
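The spatial measure and the combination step can be sketched as follows. The function names and the equal weighting `alpha = 0.5` are assumptions for illustration; the published measure fixes its own weighting and applies perceptual adjustments to both terms before combining.

```python
import numpy as np

def block_total_variation(block):
    """Total spatial variation of a block: mean absolute difference
    between horizontally and vertically adjacent pixels."""
    dx = np.abs(np.diff(block, axis=1)).mean()
    dy = np.abs(np.diff(block, axis=0)).mean()
    return (dx + dy) / 2.0

def combine(s_spectral, s_spatial, alpha=0.5):
    """Weighted geometric mean fusing the two per-block measures.

    Both inputs are assumed non-negative (as they would be after
    perceptual adjustment); larger outputs mean sharper blocks.
    """
    return (s_spectral ** alpha) * (s_spatial ** (1.0 - alpha))
```

The geometric mean has the useful property that a block must score well on both measures to score well overall: if either term is near zero, so is the product.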
This paper presents the results of a computational experiment designed to investigate the extent to which metrics of image fidelity can be improved through knowledge of where humans tend to fixate in images. Five common metrics of image fidelity were augmented using two sets of fixation data: one set obtained under task-free viewing conditions, and another set obtained when viewers were asked to judge image quality. The augmented metrics were then compared to subjective ratings of the images. The results show that most metrics can be improved using eye-fixation data, with a greater improvement found using fixations obtained under the task-free viewing condition.
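A generic form of this augmentation is to weight a metric's local quality map by fixation density before pooling. The sketch below is an assumption about the pooling step only (the function name `fixation_weighted_score` is hypothetical); the study augmented five existing metrics, each with its own pooling details.

```python
import numpy as np

def fixation_weighted_score(quality_map, fixation_map, eps=1e-8):
    """Pool a local quality map with eye-fixation weighting.

    `quality_map` holds per-location quality scores from some base
    metric; `fixation_map` holds fixation density (e.g., a smoothed
    histogram of fixation points). Locations people actually look at
    contribute more to the pooled score.
    """
    weights = fixation_map / (fixation_map.sum() + eps)
    return float((quality_map * weights).sum())
```

With a uniform fixation map this reduces to ordinary mean pooling, so the base metric is recovered as a special case.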
Most methods of image quality assessment (QA) have been designed for QA of degraded images. This paper presents the results of a study designed to investigate whether existing QA methods can be adapted to succeed on enhanced images. We developed a database containing digitally enhanced images and associated subjective quality ratings. Next, we analyzed the efficacy of select QA methods and their reverse-mode versions in predicting the ratings. Because an enhanced image makes the original image appear degraded by comparison, we tested both normal and reverse-mode versions, where the latter were implemented by specifying the enhanced image as the reference and the original image as the "degraded" image. Our results demonstrate that this reverse-mode approach improves QA of enhanced images. We also present a strategy for further improving the QA methods by using measures of contrast, sharpness, and color saturation.
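The reverse-mode idea is just a role swap of the two inputs to a full-reference metric. The sketch below uses PSNR as a stand-in base metric (an assumption; the study evaluated several QA methods, and note that PSNR itself happens to be symmetric, while the methods that benefit from reversal are not).

```python
import numpy as np

def psnr(ref, test_img):
    """Peak signal-to-noise ratio for 8-bit images; a simple
    stand-in for the full-reference QA methods in the study."""
    mse = np.mean((ref.astype(float) - test_img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def reverse_mode(metric, original, enhanced):
    """Reverse-mode QA: the enhanced image becomes the reference and
    the original is scored as the "degraded" image."""
    return metric(enhanced, original)
```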
In this paper, we present an algorithm which uses adaptive selection of low-level features for main subject detection. The algorithm first computes low-level features such as contrast and sharpness, each computed in a block-based fashion. Next, the algorithm quantifies the usefulness of each feature by using both statistical and geometric information measured across blocks. Finally, the saliency of each block is determined via a weighted linear combination of the features, where the weights are chosen based on each feature's estimated usefulness. Our results demonstrate that the adaptive nature of this algorithm allows it to perform competitively with other techniques, while maintaining very low computational complexity.

Index Terms: main subject detection, low-level feature, adaptive feature selection, block-based.
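The usefulness-weighted combination can be sketched as follows. This is a loose interpretation: usefulness is approximated here purely by each feature map's spread across blocks (the statistical part), and the paper's geometric information is omitted; the function name `saliency_map` is hypothetical.

```python
import numpy as np

def saliency_map(feature_maps):
    """Fuse block-based feature maps with usefulness-derived weights.

    A feature whose values vary widely across blocks discriminates the
    subject from the background better than a near-constant one, so
    each map's standard deviation across blocks serves as its
    usefulness score. Saliency is the weighted linear combination.
    """
    maps = np.stack(feature_maps)  # (F, H, W) block-score maps
    usefulness = maps.reshape(len(maps), -1).std(axis=1)
    weights = usefulness / (usefulness.sum() + 1e-12)
    return np.tensordot(weights, maps, axes=1)  # (H, W) saliency
```

A constant feature map gets zero weight under this scheme, which is the intended adaptive behavior: uninformative features drop out automatically.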
Main subject detection (MSD) refers to the task of determining which spatial regions in an image correspond to the most visually relevant or scene-defining object(s) for general viewing purposes. This task, while trivial for a human, remains extremely challenging for a computer. Here, we present an algorithm for MSD which operates by adaptively refining low-level features. The algorithm computes, in a block-based fashion, five feature maps corresponding to lightness distance, color distance, contrast, local sharpness, and edge strength. These feature maps are adaptively combined and gradually refined via three stages. The final combination of the refined feature maps produces an estimate of the main subject's location. We tested the proposed algorithm on two extensive image databases. Our results show that relatively simple, low-level features, when used in an adaptive and iterative fashion, can be very effective at MSD. © 2011 SPIE and IS&T.
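The staged refinement can be sketched as an iterated re-weighting of the feature maps. This is an assumption-heavy illustration, not the paper's rules: maps are re-weighted by their correlation with the current combined estimate, so features consistent with the consensus gain influence over the stages.

```python
import numpy as np

def refine_and_combine(feature_maps, stages=3):
    """Adaptively combine feature maps, refining over several stages.

    Each stage re-weights every map by its (non-negative) correlation
    with the current combined estimate, then recombines. Maps that
    disagree with the consensus lose influence.
    """
    maps = np.stack([m / (m.max() + 1e-12) for m in feature_maps])
    estimate = maps.mean(axis=0)
    for _ in range(stages):
        flat_e = estimate.ravel() - estimate.mean()
        weights = []
        for m in maps:
            flat_m = m.ravel() - m.mean()
            denom = np.linalg.norm(flat_m) * np.linalg.norm(flat_e) + 1e-12
            weights.append(max(flat_m @ flat_e / denom, 0.0))
        weights = np.array(weights)
        weights = weights / (weights.sum() + 1e-12)
        estimate = np.tensordot(weights, maps, axes=1)
    return estimate
```

When most feature maps agree on the subject's location, an outlier map's influence shrinks with each stage, and the final estimate peaks where the majority agrees.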