The human eye cannot perceive small pixel changes in images or videos below a certain distortion threshold. In the context of video compression, the Just Noticeable Difference (JND) is the smallest distortion level at which the human eye can perceive a difference between the reference video and the distorted/compressed one. The Satisfied User Ratio (SUR) curve is the complementary cumulative distribution function of the individual JNDs of a viewer group. However, most previous works predict each point of the SUR curve using features from both the source video and the compressed videos, under the assumption that group-based JND annotations follow a Gaussian distribution, which is neither practical nor accurate. In this work, we first compare various common functions for SUR curve modeling. We then propose a novel parameter-driven method to predict the video-wise SUR from video features. In addition, we compare the prediction results of models based on source-only features (SRC-based) and models based on source plus compressed-video features (SRC+PVS-based).
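The definition above (SUR as the complementary cumulative distribution of a group's individual JNDs) can be sketched directly, without any Gaussian assumption. The viewer JND values and QP grid below are hypothetical illustration data, not from the paper:

```python
import numpy as np

def empirical_sur(jnd_samples, qp_levels):
    """Empirical SUR curve: at each distortion level q (here a QP value),
    SUR(q) = P(JND > q), i.e., the fraction of viewers whose JND has not
    yet been reached and who are therefore still satisfied."""
    jnd = np.asarray(jnd_samples, dtype=float)
    return np.array([(jnd > q).mean() for q in qp_levels])

# Hypothetical group of 10 viewers; each JND expressed as a QP value
jnds = [30, 32, 33, 33, 35, 36, 36, 38, 40, 41]
qps = range(28, 44)
sur = empirical_sur(jnds, qps)
```

The curve is monotonically non-increasing by construction, starting at 1.0 (everyone satisfied at low distortion) and reaching 0.0 once the largest individual JND is exceeded.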
Outlier analysis and spammer detection have recently gained momentum as means to reduce the uncertainty of subjective ratings in image and video quality assessment tasks. This trend is driven by the large proportion of unreliable ratings from online crowdsourcing experiments and by the need for qualitative and quantitative large-scale studies in the deep-learning ecosystem. We study the effect that data cleaning has on trainable models predicting the visual quality of videos, and present results demonstrating when cleaning is necessary to reach higher performance. To this end, we present and analyze a benchmark on clean and noisy large-scale User Generated Content (UGC) datasets on which we re-trained models, followed by an empirical exploration of the constraints of data removal. Our results show that a dataset containing between 7% and 30% outliers benefits from cleaning before training.
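One simple form such cleaning can take is screening out raters whose scores disagree with the group consensus. The sketch below is an illustrative correlation-based filter, not the specific procedure used in the study; the `min_corr` threshold is an assumed parameter:

```python
import numpy as np

def clean_raters(ratings, min_corr=0.5):
    """Drop raters whose per-video scores correlate poorly with the
    provisional mean opinion score (MOS) computed over all raters.
    A minimal screening sketch, assuming monotone rater reliability."""
    R = np.asarray(ratings, dtype=float)   # shape: (raters, videos)
    mos = R.mean(axis=0)
    keep = [i for i in range(R.shape[0])
            if np.corrcoef(R[i], mos)[0, 1] >= min_corr]
    return R[keep], keep
```

A rater who scores videos in roughly the consensus order is kept; a spammer scoring at random or in reverse is removed before the quality model is retrained.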
User Generated Content (UGC) refers to media created by users for end-consumers and represents most of the media exchanged on social media. UGC is subject to acquisition and transmission limitations that preclude access to the pristine, i.e., perfect, source content. Evaluating its quality, especially with current pre- and post-processing algorithms or filters, is a major issue for most off-the-shelf full-reference quality metrics. We conduct a benchmark of existing full-reference, no-reference, and aesthetic quality metrics for UGC with special effects, aiming to identify the challenges posed by both UGC and filtering. We then propose a new combination of metrics tailored to enhanced and filtered UGC, which reaches a trade-off between complexity and accuracy.
Just Noticeable Difference (JND) and Satisfied User Ratio (SUR) have been widely investigated for compressed images and videos in order to use the least resources (e.g., storage and bandwidth) without damaging the Quality of Experience (QoE) for end users. However, current JND subjective test methodologies are extremely time consuming due to the large range of encoding parameters. Moreover, state-of-the-art SUR/JND prediction models suffer non-negligible prediction errors due to their limited masking-effect features. To this end, we first propose a preprocessing method that reduces JND subjective test time by using a dynamic range of encoding parameters, and we collect a new Video-Wise JND (VW-JND) dataset for HD videos: HD-VJND. Based on this dataset, we then propose a SUR prediction framework that extracts three types of features: 1) masking-effect features; 2) bitstream features; 3) content features. Feature selection is applied to the extracted features before regression. We also compare direct and indirect SUR value prediction methods. Experimental results show that our proposed optimization reduces subjective experiment time by 7.14% compared to the widely used Robust Binary Search (RBS). Furthermore, the proposed SUR and JND prediction frameworks outperform state-of-the-art models on the HD-VJND dataset.
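The binary-search baseline mentioned above can be sketched as follows. This is a generic binary search over an encoding-parameter range, not the paper's exact RBS protocol; `is_noticeable` stands in for one subjective pairwise comparison and is assumed to be monotone in QP:

```python
def binary_search_jnd(is_noticeable, qp_min=0, qp_max=51):
    """Find the smallest QP at which distortion becomes noticeable
    (the JND point) using O(log N) subjective comparisons over the
    QP range, instead of testing every encoding parameter."""
    lo, hi = qp_min, qp_max
    while lo < hi:
        mid = (lo + hi) // 2
        if is_noticeable(mid):
            hi = mid          # JND is at mid or below
        else:
            lo = mid + 1      # still transparent; search higher QPs
    return lo
```

Each call to `is_noticeable` costs one subjective trial, which is why narrowing the initial search range (as the proposed preprocessing does) directly reduces total experiment time.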