A key problem in blind image quality assessment (BIQA) is how to effectively model the properties of the human visual system in a data-driven manner. In this paper, we propose a simple and efficient BIQA model based on a novel framework that consists of a fully convolutional neural network (FCNN) and a pooling network. In principle, the FCNN can predict a pixel-by-pixel similar quality map from a distorted image alone, using the intermediate similarity maps derived from conventional full-reference image quality assessment methods as supervision. The predicted pixel-by-pixel quality maps are highly consistent with the distortion correlations between the reference and distorted images. Finally, a deep pooling network regresses the quality map into a score. Experiments demonstrate that our predictions outperform many state-of-the-art BIQA methods.
Index Terms: No-reference image quality assessment, convolutional neural networks, pooling network, pixel distortion.
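The two-stage pipeline this abstract describes (a per-pixel quality map, then pooling into a scalar score) can be sketched with a toy NumPy example. The local-gradient "quality" proxy and mean pooling below are illustrative stand-ins for the paper's trained FCNN and deep pooling network, not the actual method:

```python
import numpy as np

def toy_quality_map(img):
    """Toy per-pixel quality proxy (NOT the paper's FCNN): local
    gradient energy mapped to (0, 1], so smooth regions score high."""
    gy, gx = np.gradient(img.astype(float))
    energy = gx ** 2 + gy ** 2
    return 1.0 / (1.0 + energy)

def pool_to_score(qmap):
    """Stand-in for the deep pooling network: plain mean pooling
    collapses the H x W quality map into one scalar score."""
    return float(qmap.mean())

rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)                          # flat image
noisy = clean + 0.2 * rng.standard_normal((32, 32))     # noisy image

print(pool_to_score(toy_quality_map(clean)))  # close to 1.0
print(pool_to_score(toy_quality_map(noisy)))  # noticeably lower
```

In the actual framework, both stages are learned: the FCNN is trained to reproduce full-reference similarity maps, and the pooling network learns a weighted spatial aggregation rather than a uniform mean.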
Most existing image quality assessment (IQA) methods focus on synthetically distorted images. Although these methods perform well on synthetic-distortion IQA databases, their performance degrades severely once they are applied to authentically distorted databases. In this work, we propose a blind image quality assessment method based on a generative adversarial network (BIQA-GAN), exploiting its ability to self-generate samples and train with self-feedback to improve network performance. Three different BIQA-GAN models are designed according to the target domain of the generator. Comprehensive experiments on popular benchmarks show that our proposed method significantly outperforms previous state-of-the-art methods on authentically distorted images, while also performing well on synthetic-distortion benchmarks.
INDEX TERMS: Generative adversarial networks, deep learning, image quality assessment, no-reference/blind image quality assessment, natural distorted image.
Image quality assessment (IQA) is of increasing importance for image-based applications. Its purpose is to establish a model that can replace humans in accurately evaluating image quality. According to whether the reference image is fully available, image quality assessment can be divided into three categories: full-reference (FR), reduced-reference (RR), and no-reference (NR) image quality assessment. Driven by the rapid development of deep learning and widespread attention from researchers, several deep-learning-based no-reference image quality assessment methods have been proposed in recent years, some of which have exceeded the performance of reduced-reference or even full-reference models. This article reviews the concepts and metrics of image quality assessment, and of video quality assessment as well, briefly introduces some full-reference and reduced-reference methods, and focuses on deep-learning-based no-reference image quality assessment methods. We then introduce the commonly used synthetic and real-world databases. Finally, we summarize the field and present open challenges.
Preprint. Under review.