2019
DOI: 10.1186/s13640-019-0479-7

No-reference color image quality assessment: from entropy to perceptual quality

Abstract: This paper presents a high-performance general-purpose no-reference (NR) image quality assessment (IQA) method based on image entropy. The image features are extracted from two domains. In the spatial domain, the mutual information between the color channels and the two-dimensional entropy are calculated. In the frequency domain, the two-dimensional entropy and the mutual information of the filtered sub-band images are computed as the feature set of the input color image. Then, with all the extracted features,…
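The spatial-domain features the abstract describes (mutual information between color channels, and two-dimensional entropy) can be illustrated with a small sketch. This is an illustrative approximation, not the paper's implementation: the 64-level binning, the 3x3 local-mean neighborhood used for the 2-D entropy, and the function names are assumptions.

```python
# Sketch of the spatial-domain features named in the abstract.
# Bin counts and the 3x3 local-mean neighborhood are assumptions,
# not taken from the paper.
import numpy as np

def mutual_information(x, y, bins=64):
    """Plug-in mutual information (bits) between two image channels."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def two_dim_entropy(channel, bins=64):
    """2-D entropy: joint entropy of each pixel value and its 3x3
    local mean (one common formulation; the paper may differ)."""
    c = channel.astype(float)
    p = np.pad(c, 1, mode="edge")
    local = sum(p[i:i + c.shape[0], j:j + c.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    joint, _, _ = np.histogram2d(c.ravel(), local.ravel(), bins=bins)
    pij = joint / joint.sum()
    nz = pij > 0
    return float(-(pij[nz] * np.log2(pij[nz])).sum())

# Toy usage on a random RGB image
rgb = np.random.default_rng(0).integers(0, 256, (64, 64, 3))
feats = [mutual_information(rgb[..., 0], rgb[..., 1]),
         mutual_information(rgb[..., 1], rgb[..., 2]),
         two_dim_entropy(rgb[..., 0])]
```

In the actual method these spatial features are concatenated with frequency-domain counterparts computed on filtered sub-band images before regression to a quality score.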

Cited by 74 publications (51 citation statements); references 63 publications.
“…The main characteristics of the Flickr1024 dataset and four existing stereo datasets [4,5,10,12-15] are listed in Table 1. Following [1], we use entropy to measure the amount of information included in each dataset, and use three no-reference image quality assessment (NRIQA) metrics to assess the perceptual image quality: the blind/referenceless image spatial quality evaluator (BRISQUE) [11], SR-metric [9], and entropy-based image quality assessment (ENIQA) [3]. For image quality assessment, these NRIQA metrics are superior to many full-reference measures (e.g., PSNR, RMSE, and SSIM) and highly correlated with human perception [9].…”
Section: Comparisons To Existing Datasets (mentioning; confidence: 99%)
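The entropy comparison mentioned in the quotation above (entropy as a proxy for the amount of information in a dataset) can be sketched minimally. Assumptions not in the source: 8-bit grayscale inputs and the averaging of per-image entropies; the function names are illustrative.

```python
# Sketch of using Shannon entropy to compare dataset information
# content, as described in the quotation above. Assumes 8-bit
# grayscale images; averaging per-image entropies is an assumption.
import numpy as np

def shannon_entropy(img, levels=256):
    """First-order Shannon entropy (bits/pixel) of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    nz = p > 0
    return float(-(p[nz] * np.log2(p[nz])).sum())

def dataset_entropy(images):
    """Average per-image entropy over a dataset."""
    return sum(shannon_entropy(im) for im in images) / len(images)

# Toy usage: random images are near the 8-bit maximum of 8 bits/pixel
rng = np.random.default_rng(1)
imgs = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(4)]
avg = dataset_entropy(imgs)
```

A higher average entropy suggests richer intensity statistics, which is why the citing work reports it alongside perceptual NRIQA scores rather than instead of them.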
“…For all the NRIQA metrics presented in this paper, we run the code provided by their authors with the original models and default settings. Note that small values of BRISQUE [11] and ENIQA [3], and large values of SR-metric [9], represent high image quality. As shown in Table 1, the Flickr1024 dataset is larger than the other datasets by at least 2.5 times.…”
Section: Comparisons To Existing Datasets (mentioning; confidence: 99%)
“…In this section, we compare our NUDT dataset to several popular LF datasets [16], [69]-[72]. Following [92], we use four no-reference image quality assessment (NRIQA) metrics (i.e., BRISQUE [64], NIQE [65], CEIQ [66], ENIQA [67]) to evaluate the perceptual quality of the center-view images of these datasets. Besides, we also use a no-reference LF quality assessment metric (i.e., NRLFQA [68]) to evaluate the spatial quality and angular consistency of LFs. [Table I: Main characteristics of several popular LF datasets.]…”
Section: B Comparison To Existing Datasets (mentioning; confidence: 99%)
“…Besides, we also use a no-reference LF quality assessment metric (i.e., NRLFQA [68]) to evaluate the spatial quality and angular consistency of LFs. [Table I: Main characteristics of several popular LF datasets. Note that average scores are reported for spatial resolution (SpaRes), single-image perceptual quality metrics (i.e., BRISQUE [64], NIQE [65], CEIQ [66], ENIQA [67]) and LF quality assessment metrics (i.e., NRLFQA [68]).]…”
Section: B Comparison To Existing Datasets (mentioning; confidence: 99%)
“…NR-IQA estimates image quality without any reference image or its features. NR-IQA methods can be divided into two classes [5]. The first class includes algorithms developed for specific types of distortion, such as blur [6-9], JPEG compression [10,11], JPEG2000 compression [12,13], noise [14], and others [15-18]. The second class includes non-distortion-specific algorithms; for example, Moorthy and Bovik [19] presented a method based on a two-step framework, called the blind image quality index, for NR-IQA using natural scene statistics.…”
Section: Introduction (mentioning; confidence: 99%)