2020
DOI: 10.1109/tip.2020.2967829
KonIQ-10k: An Ecologically Valid Database for Deep Learning of Blind Image Quality Assessment

Cited by 406 publications (347 citation statements)
References 49 publications
“…This sub-task aims to exploit the distortion category labels available in legacy IQA datasets, which contain a few common synthetic distortions [11,161,162]. However, the massive number of Internet images captured by real cameras is usually afflicted by complex mixtures of authentic distortions [7,163], which cannot be well simulated by the limited algorithm-generated distortions in these legacy datasets. As a result, such a distortion-identification sub-task cannot accurately identify the complex mixtures of distortions in authentically distorted images, and may degrade performance when applied to real-world images with diverse authentic distortions.…”
Section: Discussion
confidence: 99%
“…This inspires us to incorporate visual saliency prediction as an auxiliary sub-task to learn a powerful multi-task deep IQA model for quality evaluation on authentically distorted images [7]; a larger MOS (mean opinion score), shown at the bottom, indicates better subjective perceptual quality. Their saliency maps in the second row are generated by our DINet [8] and fused with the original images, where brighter pixel intensity indicates a higher probability of attracting human visual attention.…”
Section: Discussion
confidence: 99%
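The saliency-fusion visualization described above can be sketched as a simple weighted overlay: salient regions keep their original intensity while non-salient regions are dimmed. This is a minimal illustration, not the cited authors' method; the `fuse_saliency` helper and the toy arrays are hypothetical, and a real saliency map would come from a model such as the DINet mentioned in the quote.

```python
import numpy as np

def fuse_saliency(image, saliency, alpha=0.6):
    """Overlay a saliency map on an image: regions with high saliency
    retain full brightness, low-saliency regions are darkened.

    image:    H x W x 3 float array in [0, 1]
    saliency: H x W float array in [0, 1] (hypothetical prediction)
    alpha:    minimum brightness factor kept in non-salient regions
    """
    # Per-pixel weight in [alpha, 1], broadcast over the color channels.
    weight = alpha + (1.0 - alpha) * saliency[..., None]
    return np.clip(image * weight, 0.0, 1.0)

# Toy example: a uniform gray image with one salient corner pixel.
img = np.ones((4, 4, 3)) * 0.8
sal = np.zeros((4, 4))
sal[0, 0] = 1.0
fused = fuse_saliency(img, sal)
```

In `fused`, the salient pixel keeps its original value (0.8) while the rest drops to 0.8 × 0.6 = 0.48, producing exactly the "brighter means more attention" effect the quoted passage describes.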
“…Benchmark datasets are fundamental in computer vision and image processing research for tracking the performance, accuracy, and efficiency of new methods and algorithms. Image quality assessment was also evaluated against literature methods using the well-known Computational and Subjective Image Quality (CSIQ) database (LARSON; CHANDLER, 2010) and the KonIQ database, the largest image quality assessment database to date (Hosu et al, 2020).…”
Section: Benchmark Datasets
confidence: 99%
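Evaluation against IQA databases such as CSIQ and KonIQ-10k is conventionally done by correlating predicted quality scores with the databases' MOS values, typically via Spearman (SROCC) and Pearson (PLCC) correlation. The sketch below, a generic illustration rather than the cited work's protocol, computes both with NumPy only; the score arrays are hypothetical, and the rank computation ignores tied values for brevity.

```python
import numpy as np

def pearson_lcc(pred, mos):
    """Pearson linear correlation between predictions and MOS."""
    return float(np.corrcoef(pred, mos)[0, 1])

def spearman_srocc(pred, mos):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (Tied-rank averaging is omitted in this sketch.)"""
    def ranks(x):
        r = np.empty(len(x))
        r[np.argsort(x)] = np.arange(len(x))
        return r
    return pearson_lcc(ranks(np.asarray(pred)), ranks(np.asarray(mos)))

# Hypothetical predicted quality scores vs. ground-truth MOS values.
pred = [3.1, 2.4, 4.0, 1.2, 3.6]
mos  = [3.0, 2.5, 4.2, 1.0, 3.8]
srocc = spearman_srocc(pred, mos)  # 1.0: identical rank ordering
plcc = pearson_lcc(pred, mos)
```

SROCC rewards getting the quality ordering right regardless of scale, while PLCC measures linear agreement; benchmark comparisons usually report both.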