2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00363
From Patches to Pictures (PaQ-2-PiQ): Mapping the Perceptual Space of Picture Quality

Cited by 166 publications (109 citation statements)
References 51 publications
“…Several assessment indices are used to compare SOTA dehazing models, namely the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) for evaluations based on reference haze-free images, the average gradient (AG) [15], image entropy (IE) [35], the fog-aware density evaluator (FADE) [33] provided with the LIVE dataset, blind image quality measure of enhanced images (BIQME) [36], and patches-to-pictures quality predictor (PaQ2PiQ) [37] for non-reference image quality evaluation. AG is related to the edges and variance in an image and is defined by (7).…”
Section: Results
confidence: 99%
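The full-reference and no-reference indices named in the excerpt above are straightforward to compute. A minimal sketch of two of them — PSNR, and the average gradient (AG) under one common definition that averages the magnitude of local intensity differences (the excerpt's exact equation (7) is not reproduced here, so this is an assumed formulation):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

def average_gradient(img):
    """Average gradient (AG): mean local gradient magnitude.
    Assumed definition: mean of sqrt((dx^2 + dy^2) / 2) over the image."""
    img = img.astype(np.float64)
    dx = np.diff(img, axis=1)[:-1, :]   # horizontal forward differences
    dy = np.diff(img, axis=0)[:, :-1]   # vertical forward differences
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(clean + rng.normal(0, 5, size=clean.shape), 0, 255)
print(psnr(clean, noisy))        # finite PSNR for a degraded image
print(average_gradient(clean))   # higher AG -> more edge/texture energy
```

PSNR needs the haze-free reference, while AG is computed from the test image alone — which is why the excerpt groups it with the no-reference measures.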
See 1 more Smart Citation
“…Several assessment indices are used to compare SOTA dehazing models, namely the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) for evaluations based on reference haze-free images, the average gradient (AG) [ 15 ], image entropy (IE) [ 35 ], the fog-aware density evaluator (FADE) [ 33 ] provided with the LIVE dataset, blind image quality measure of enhanced images (BIQME) [ 36 ], and patches-to-pictures quality predictor (PaQ2PiQ) [ 37 ] for non-reference image quality evaluation. AG is related to the edges and variance in an image and is defined by (7).…”
Section: Resultsmentioning
confidence: 99%
“…BIQME [36] extracts 17 features by analyzing contrast, sharpness, brightness etc., and it predicts evaluations on image quality by training a regression module. Meanwhile, PaQ2PiQ [37] evaluates both global and local image quality by training a modified ResNet-18 [28] on a new dataset. This training dataset comprises pictures and patches associated with human perceptual quality judgements, i.e., mean opinion score (MOS).…”
Section: Results
confidence: 99%
“…Deep convolutional neural networks (CNNs) have been shown to deliver standout performance in a wide variety of low-level computer vision applications [17], [23], [25], [58]. Recently, the release of several large-scale psychometric visual quality databases [29]–[32], [51] has sped the application of deep CNNs to perceptual video and image quality modeling.…”
Section: B. Deep Learning-Based BVQA Models
confidence: 99%
“…CNN-based solutions have been observed to generally perform well on UGC picture quality problems [17], [27], [51] thanks to several recently released large-scale picture quality datasets [17], [51], [82]. Still, none of them have proven effective on UGC video quality databases [30]–[32].…”
Section: Deep Learning Features
confidence: 99%
“…Apart from MOSs, EXIF data, image attributes, and scene category labels were also recorded to facilitate the development of BIQA models for real-world applications. Concurrently, Ying et al. [23] built a large dataset that contains patch quality annotations. As discussed previously, different datasets may use different subjective procedures, leading to different perceptual scales of the collected MOSs.…”
Section: IQA Datasets
confidence: 99%
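The scale-mismatch problem the last excerpt raises — different subjective procedures yielding MOSs on different perceptual scales — can be illustrated with a naive min-max alignment. This is only a sketch of the mismatch; real cross-dataset calibration typically requires a psychometric mapping rather than a linear rescale:

```python
import numpy as np

def rescale_mos(mos, lo=0.0, hi=100.0):
    """Linearly map one dataset's MOS values onto a common [lo, hi] scale.
    A naive min-max alignment, for illustration only."""
    mos = np.asarray(mos, dtype=np.float64)
    return lo + (hi - lo) * (mos - mos.min()) / (mos.max() - mos.min())

print(rescale_mos([1, 3, 5]))       # a 5-point ACR-style scale
print(rescale_mos([20, 60, 100]))   # a 0-100 slider-style scale
```

Both inputs map to the same common scale here, but only because the underlying ratings were chosen to be affinely related; in practice the mapping between subjective procedures is nonlinear, which is the point the excerpt makes.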