The field of perceptual quality assessment has undergone a wide range of developments and is still growing. In particular, the area of no-reference (NR) image and video quality assessment has progressed rapidly during the last decade. In this article, we present a classification and review of recently published research in the area of NR image and video quality assessment. The reviewed NR methods of visual quality assessment are structured into categories and subcategories based on the types of methodologies used for the underlying processing employed for quality estimation. Overall, the methods are classified into three categories: pixel-based methods, bitstream-based methods, and hybrid methods combining the two. We believe that the review presented in this article will help practitioners as well as researchers keep abreast of recent developments in the area of NR image and video quality assessment. The article can be used for various purposes, such as gaining a structured overview of the field and carrying out performance comparisons of state-of-the-art methods.

Keywords: No-reference; Image quality assessment; Video quality assessment; Perceptual quality

1 Review

Introduction

There has been tremendous progress recently in the usage of digital images and videos for an increasing number of applications. Multimedia services that have gained wide interest include digital television broadcasts, video streaming applications, and real-time audio and video services over the Internet. According to predictions made by Cisco, global mobile data traffic grew by 81% in 2013, and during 2014, the number of mobile-connected devices will exceed the number of people on earth. The video portion of mobile data traffic was 53% in 2013 and is expected to exceed 67% by 2018 [1].
With this huge increase in the exposure of images and video to the human eye, interest in delivering quality of experience (QoE) naturally increases. The quality of visual media can degrade during capture, compression, transmission, reproduction, and display due to distortions that may occur at any of these stages. The legitimate judges of visual quality are humans as end users, whose opinions can be obtained through subjective experiments. Subjective experiments involve a panel of participants, usually non-experts (also referred to as test subjects), who assess the perceptual quality of given test material such as a sequence of images or videos. Subjective experiments are typically conducted in a controlled laboratory environment. Careful planning is required, and several factors, including the assessment method, selection of test material, viewing conditions, grading scale, and timing of presentation, have to be considered before a subjective experiment. For example, Recommendation ITU-R BT.500 [2] provides detailed guidelines for conducting subjective experiments for the assessment of the quality of television pictures. The outcomes of a subjectiv...
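The ratings collected in such a subjective experiment are typically aggregated into a mean opinion score (MOS) per test sequence. A minimal sketch, using hypothetical ratings on the common 1-5 scale:

```python
import math

# Aggregate subjective ratings into a mean opinion score (MOS) and the
# sample standard deviation, as commonly reported in subjective studies.
# The ratings below are hypothetical illustration values, not real data.
def mean_opinion_score(ratings):
    return sum(ratings) / len(ratings)

def rating_stddev(ratings):
    mos = mean_opinion_score(ratings)
    return math.sqrt(sum((r - mos) ** 2 for r in ratings) / (len(ratings) - 1))

ratings = [4, 5, 3, 4, 4]          # five test subjects rating one video
mos = mean_opinion_score(ratings)  # 4.0 on a 1-5 scale
```

In practice, guidelines such as ITU-R BT.500 also prescribe subject screening and confidence intervals on the MOS, which are omitted here for brevity.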
Abstract-There is a growing need for robust methods for reference-free perceptual quality measurement due to the increasing use of video on hand-held multimedia devices. These methods must account for the pertinent artifacts introduced by the compression algorithm selected for source coding. This paper proposes a model that uses readily available encoder parameters as input to an artificial neural network to predict objective quality metrics for compressed video, without using any reference and without the need for decoding. The results verify its robustness for the prediction of objective quality metrics in general, and for PEVQ and PSNR in particular. The paper also focuses on reducing the complexity of the neural network.
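The general shape of such a predictor can be illustrated with a one-hidden-layer network mapping a small feature vector of encoder parameters to a quality estimate. This is a hypothetical sketch, not the paper's model: the feature choice (bitrate, frame rate, average QP), the layer sizes, and the weights (random placeholders here, standing in for trained values) are all illustrative assumptions.

```python
import numpy as np

# Sketch of a feedforward quality predictor: encoder-side features in,
# estimated objective metric (e.g., PSNR or PEVQ) out. A real predictor
# would be trained on (features, metric) pairs; weights here are random
# placeholders, and features would normally be normalized before use.
rng = np.random.default_rng(0)

def predict_quality(features, w1, b1, w2, b2):
    hidden = np.tanh(features @ w1 + b1)  # hidden layer, tanh activation
    return hidden @ w2 + b2               # linear output: predicted metric

n_features, n_hidden = 3, 4
w1 = rng.normal(size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(size=(n_hidden, 1))
b2 = np.zeros(1)

x = np.array([2000.0, 30.0, 28.0])  # bitrate (kbps), frame rate, average QP
score = predict_quality(x, w1, b1, w2, b2)
```

Reducing the hidden-layer width `n_hidden` is one straightforward way to trade prediction accuracy for lower complexity, in the spirit of the complexity reduction the paper discusses.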
The growing need for quick, online estimation of video quality necessitates the study of new frontiers in the area of no-reference visual quality assessment. Bitstream-layer-model-based video quality predictors use certain visual-quality-relevant features from the encoded video bitstream to estimate quality. Contemporary techniques vary in the number and nature of the features employed and in the prediction model used. This paper proposes a prediction model with a concise set of bitstream-based features and a machine-learning-based quality predictor. Several full-reference quality metrics are predicted using the proposed model with reasonably good levels of accuracy, monotonicity, and consistency.
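Accuracy and monotonicity of such predictors are conventionally quantified by the Pearson linear correlation (PLCC) and the Spearman rank-order correlation (SROCC) between predicted and reference scores. A minimal self-contained sketch, using hypothetical score pairs:

```python
import math

# PLCC measures linear agreement (prediction accuracy); SROCC measures
# monotonic agreement (rank preservation). Scores below are hypothetical.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    rank = lambda v: [sorted(v).index(e) for e in v]  # simple no-tie ranking
    return pearson(rank(x), rank(y))

pred = [30.1, 35.2, 38.0, 41.5]  # predicted metric values (e.g., PSNR, dB)
ref  = [31.0, 34.8, 37.5, 42.0]  # full-reference metric values (dB)
plcc = pearson(pred, ref)
srocc = spearman(pred, ref)
```

Consistency is often reported separately, e.g., as RMSE or an outlier ratio after a fitting step, which is omitted here.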
Abstract-In many applications and environments for mobile communication, there is a need for reference-free perceptual quality measurement. In this paper, a method for predicting a number of quality metrics is proposed, where the input to the prediction consists of parameters readily available at the receiver side of a communications channel. Since the parameters are extracted from the coded video bitstream, the model can be used in scenarios where it is normally difficult to estimate quality because the reference is not available, as in streaming video and mobile TV applications. The predictor gives good results for both the PSNR and PEVQ metrics.
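For context on what such a predictor is targeting: PSNR itself is a full-reference metric, computed from the mean squared error between reference and degraded frames. A minimal sketch with hypothetical 8-bit pixel data:

```python
import math

# PSNR between a reference and a degraded frame, here flattened 8-bit
# grayscale samples as plain lists; the pixel values are illustrative.
def psnr(reference, degraded, peak=255):
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames: infinite PSNR
    return 10 * math.log10(peak ** 2 / mse)

ref = [52, 55, 61, 59, 79, 61, 76, 61]  # reference pixel samples
deg = [54, 55, 60, 59, 80, 60, 76, 62]  # degraded pixel samples
```

The point of a no-reference predictor is to approximate this value from receiver-side bitstream parameters alone, i.e., without access to `ref`.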
In this paper, an adaptive filter for reducing blocking and ringing artifacts is presented. The solution is designed with mobile equipment with limited computational power and memory in mind. It is also computationally scalable for use cases where CPU resources are limited.
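The core idea behind boundary-adaptive deblocking can be sketched in one dimension: at each coding-block boundary, smooth only when the step across the boundary is small (a likely blocking artifact) and leave large steps (likely true edges) untouched. This is a generic illustration, not the paper's filter; the block size and threshold are hypothetical.

```python
# 1-D boundary-adaptive deblocking sketch: redistribute a quarter of a
# weak boundary step between the two pixels adjacent to the boundary,
# leaving strong discontinuities (true edges) unfiltered.
def deblock_row(row, block=8, threshold=10):
    out = list(row)
    for b in range(block, len(row), block):
        step = row[b] - row[b - 1]
        if abs(step) < threshold:        # weak discontinuity: artifact, filter
            out[b - 1] = row[b - 1] + step // 4
            out[b] = row[b] - step // 4
    return out
```

Scalability in this spirit could come from raising the threshold or skipping boundaries under load, since each boundary is processed independently.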
Abstract-Advancements in the video processing area have been driven by services that require low delay. Such services involve applications offered at various temporal and spatial resolutions, which necessitates studying the impact of the related video coding conditions on perceptual quality. However, most studies concerned with the quality assessment of videos affected by coding distortions lack variety in spatio-temporal resolutions. This paper presents work on the quality assessment of videos encoded with the state-of-the-art H.264/AVC standard at different bitrates and frame rates. Overall, 120 test scenarios for video sequences with different spatial and temporal spectral information were studied. The coded bitstreams used in this work and the corresponding subjective assessment scores have been made public to facilitate further studies by the research community.
A common problem in the world's most widespread cellular telephone system, GSM, is the interfering signal generated by TDMA cellular telephony. The infamous "bumblebee" noise arises from the switching nature of TDMA: the radio circuits are switched on and off at a rate of approximately 217 Hz in GSM. This paper describes a study of two solutions for eliminating the humming noise with IIR notch filters. The simpler one is suitable for any exterior equipment, but still suffers from a small residual of the noise resulting from the IDLE slots of the sending mobile. The more advanced IIR structure, for use within the mobile, also eliminates this residual.
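A standard second-order IIR notch places a pair of zeros on the unit circle at the interference frequency and poles just inside it at the same angle. The sketch below uses this textbook design tuned to the ~217 Hz GSM switching rate; the sampling rate and pole radius are illustrative assumptions, and this is not the specific structure studied in the paper.

```python
import math

# Second-order IIR notch at the TDMA switching frequency (~217 Hz in GSM).
# Zeros on the unit circle null the tone; pole radius r < 1 sets bandwidth.
def notch_coefficients(f0, fs, r=0.95):
    w0 = 2 * math.pi * f0 / fs
    b = [1.0, -2.0 * math.cos(w0), 1.0]        # zeros at e^{+-j*w0}
    a = [1.0, -2.0 * r * math.cos(w0), r * r]  # poles at r*e^{+-j*w0}
    return b, a

def iir_filter(b, a, x):
    # Direct-form difference equation: y[n] = sum(b*x) - sum(a[1:]*y).
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(3) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, 3) if n - k >= 0)
        y.append(acc)
    return y

b, a = notch_coefficients(217.0, 8000.0)  # 8 kHz telephony sampling rate
hum = [math.sin(2 * math.pi * 217.0 * n / 8000.0) for n in range(2000)]
filtered = iir_filter(b, a, hum)  # 217 Hz tone decays toward zero
```

Because the zeros sit exactly on the unit circle, the steady-state gain at 217 Hz is zero; the pole radius trades notch width against transient length.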