2015
DOI: 10.1109/tip.2015.2426416

A Feature-Enriched Completely Blind Image Quality Evaluator

Abstract: Existing blind image quality assessment (BIQA) methods are mostly opinion-aware. They learn regression models from training images with associated human subjective scores to predict the perceptual quality of test images. Such opinion-aware methods, however, require a large number of training samples with associated human subjective scores, covering a variety of distortion types. The BIQA models learned by opinion-aware methods often have weak generalization capability, thereby limiting their usability in …
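For orientation, here is a minimal sketch of the opinion-aware pipeline the abstract contrasts against: natural-scene-statistics features are extracted from training images and regressed onto human subjective scores (MOS). The MSCN-moment features and the SVR regressor are illustrative assumptions (in the spirit of BRISQUE), not the opinion-unaware method this paper proposes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import kurtosis, skew
from sklearn.svm import SVR

def nss_features(gray):
    """Moments of MSCN coefficients (assumed, BRISQUE-like NSS features)."""
    gray = gray.astype(np.float64)
    mu = gaussian_filter(gray, sigma=7 / 6)               # local mean
    var = gaussian_filter(gray ** 2, sigma=7 / 6) - mu ** 2
    sigma = np.sqrt(np.clip(var, 0.0, None))              # local std
    mscn = (gray - mu) / (sigma + 1.0)                    # normalized coefficients
    v = mscn.ravel()
    return np.array([v.mean(), v.std(), skew(v), kurtosis(v)])

def train_opinion_aware(train_images, train_mos):
    """Opinion-aware BIQA: regress image features onto subjective scores."""
    X = np.stack([nss_features(im) for im in train_images])
    return SVR(kernel="rbf", C=10.0).fit(X, train_mos)

def predict_quality(model, image):
    return float(model.predict(nss_features(image)[None, :])[0])
```

The dependence of `train_opinion_aware` on per-image MOS labels is exactly the requirement the abstract identifies as the weakness of opinion-aware methods.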

Cited by 933 publications (454 citation statements)
References 45 publications
“…In Section 3.6, we study the influence of each kind of feature on the final BIQA. Note that the results of the blind/referenceless image spatial quality evaluator (BRISQUE), blind image integrity notator using DCT statistics (BLIINDS2), codebook representation for no-reference image assessment (CORNIA), natural image quality evaluator (NIQE) and integrated local natural image quality evaluator (IL-NIQE) are obtained using the implementations from [19].…”
Section: Methods
confidence: 99%
“…(a) We utilize a set of high-quality pristine images to train a PMVG model [19]. We first use the CSM from the 4th convolutional layer of the VGG-19 network to select K high-contrast patches from the 90 pristine images, using the high-contrast selection method described in Section 2.2.…”
Section: Deep Activation Pooling
confidence: 99%
“…With regard to 2D algorithms, we apply them to both views of each frame and take the mean value as an approximation of the VQA score. In the experimental part of the paper, we randomly select 80% of the videos for training and 20% for testing, with no overlap between the two sets [53]. The experiment is run 1200 times, and we take the median value as the final result.…”
Section: Test and Comparison on Video Databases
confidence: 99%
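A minimal sketch of the evaluation protocol this quote describes: repeated random 80/20 train/test splits over disjoint sets, reporting the median result across runs. `evaluate_split` is a hypothetical callback standing in for training and testing a VQA model on one split.

```python
import numpy as np

def median_over_splits(n_videos, evaluate_split, runs=1200, ratio=0.8, seed=0):
    """Repeat random train/test splits and return the median result.
    `evaluate_split(train_idx, test_idx)` is a hypothetical callback that
    trains a model on the training videos and returns one number
    (e.g., SROCC on the test videos)."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(runs):
        perm = rng.permutation(n_videos)          # shuffle video indices
        cut = int(ratio * n_videos)               # 80/20 boundary
        results.append(evaluate_split(perm[:cut], perm[cut:]))  # disjoint sets
    return float(np.median(results))
```

Taking the median over many random splits, rather than a single split, reduces the sensitivity of the reported number to any one lucky or unlucky partition.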