2022
DOI: 10.1109/tip.2022.3205770

No-Reference Image Quality Assessment by Hallucinating Pristine Features

Abstract: Mapping images to deep feature space for comparisons has been widely adopted in recent learning-based full-reference image quality assessment (FR-IQA) models. Analogous to the classical classification task, the ideal mapping space for quality regression should possess both inter-class separability and intra-class compactness. The inter-class separability that focuses on the discrimination of images with different quality levels has been highly emphasized in existing models. However, the intra-class compactness…
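The abstract is truncated here, so the paper's exact objective is not shown. Purely as an illustration of the two properties it names, the following is a minimal, hypothetical PyTorch sketch that treats quality bins as pseudo-classes and adds one term for intra-class compactness and one for inter-class separability; the bin scheme, margin, and function names are assumptions, not the paper's loss.

```python
# Minimal sketch (not the paper's implementation): encourage intra-class
# compactness and inter-class separability in a quality-aware feature space,
# using quality bins as pseudo-classes. Margin and binning are assumptions.
import torch
import torch.nn.functional as F

def compactness_separability_loss(feats, quality_bins, margin=1.0):
    """feats: (N, D) embeddings; quality_bins: (N,) integer quality levels."""
    loss_compact = feats.new_zeros(())
    centers = []
    for b in quality_bins.unique():
        mask = quality_bins == b
        center = feats[mask].mean(dim=0)          # per-bin center
        # compactness: pull samples toward their bin center
        loss_compact = loss_compact + ((feats[mask] - center) ** 2).sum(dim=1).mean()
        centers.append(center)
    centers = torch.stack(centers)                # (K, D)
    K = centers.size(0)
    # separability: push different-bin centers at least `margin` apart
    if K > 1:
        dists = torch.cdist(centers, centers)     # (K, K) pairwise distances
        off_diag = ~torch.eye(K, dtype=torch.bool, device=feats.device)
        loss_sep = F.relu(margin - dists[off_diag]).mean()
    else:
        loss_sep = feats.new_zeros(())
    return loss_compact / K + loss_sep
```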

Cited by 11 publications (4 citation statements) | References 117 publications (73 reference statements)

Citation statements (ordered by relevance):
“…TReS (Golestaneh, Dadsetan, and Kitani 2022) proposes computing local features with a CNN and non-local features with self-attention, and introduces a per-batch loss for correct ranking together with a self-supervision loss between reference and flipped images. FPR (Chen et al. 2022) hallucinates pseudo-reference features from the distorted image, using mutual learning on reference and distorted images with a triplet loss. Attention maps are predicted to aggregate scores over patches.…”
Section: Benchmark List of Metrics (mentioning)
confidence: 99%
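The statement above summarizes the FPR idea (hallucinating pseudo-reference features, a triplet loss against real reference features, and attention-weighted patch scores). The PyTorch sketch below only illustrates that description; the module names, shapes, and shallow encoder are assumptions and do not reproduce FPR's actual architecture.

```python
# Hedged sketch of pseudo-reference feature hallucination with a triplet loss
# and attention-weighted patch aggregation. Not FPR's code; shapes assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PseudoReferenceIQA(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared feature encoder
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.hallucinator = nn.Conv2d(dim, dim, 3, padding=1)  # predicts "pristine" features
        self.score_head = nn.Conv2d(2 * dim, 1, 1)             # per-patch quality
        self.attn_head = nn.Conv2d(2 * dim, 1, 1)              # per-patch attention

    def forward(self, distorted, reference=None):
        f_dist = self.encoder(distorted)
        f_pseudo = self.hallucinator(f_dist)                    # hallucinated pseudo-reference
        fused = torch.cat([f_dist, f_pseudo], dim=1)
        patch_scores = self.score_head(fused)                   # (N, 1, H, W)
        attn = torch.softmax(self.attn_head(fused).flatten(2), dim=-1)
        score = (patch_scores.flatten(2) * attn).sum(dim=-1).squeeze(1)

        loss_triplet = None
        if reference is not None and self.training:
            f_ref = self.encoder(reference)                     # real pristine features (anchor)
            # pull hallucinated features toward the reference, away from the distorted ones
            loss_triplet = F.triplet_margin_loss(
                f_ref.flatten(1), f_pseudo.flatten(1), f_dist.flatten(1), margin=1.0)
        return score, loss_triplet
```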
“…Typical NR-IQA models rely on natural scene statistics (NSS) constructions [4–6], under the assumption that image quality can be estimated by measuring how severely the NSS are destroyed. In recent years, many works have turned to deep learning for both FR-IQA [7–11] and NR-IQA [8, 12–15], endowing IQA models with a strong capability for quality-aware feature extraction. In particular, a shallow ConvNet was adopted for patch-based NR-IQA learning in [12].…”
Section: Introduction (mentioning)
confidence: 99%
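For context on the NSS construction mentioned above, a common quality-aware statistic is the mean-subtracted contrast-normalized (MSCN) coefficient map, whose near-Gaussian distribution is disturbed by distortion. The sketch below shows this generic BRISQUE-style computation; it is not the specific models cited as [4–6].

```python
# Minimal sketch of an NSS-style feature: MSCN coefficients. Distortions
# flatten or skew the near-Gaussian MSCN histogram, so its moments (variance,
# kurtosis) carry quality information. Sigma and eps are conventional choices.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(gray, sigma=7/6, eps=1e-3):
    """gray: 2D float array in [0, 1]."""
    mu = gaussian_filter(gray, sigma)                     # local mean
    var = gaussian_filter(gray * gray, sigma) - mu * mu   # local variance
    std = np.sqrt(np.maximum(var, 0.0))
    return (gray - mu) / (std + eps)
```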
“…It contains two main components: a dataset-shared quality regressor and a dataset-specific quality transformer. In the testing phase, only the quality regressor is used, no matter which dataset the testing image is from. In recent years, many works have turned to deep learning for both FR-IQA [7–11] and NR-IQA [8, 12–15], endowing IQA models with a strong capability for quality-aware feature extraction. In particular, a shallow ConvNet was adopted for patch-based NR-IQA learning in [12].…”
mentioning
confidence: 99%
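The patch-based shallow ConvNet referred to as [12] above can be pictured roughly as below. This is only an illustrative sketch along the lines the text describes; the layer sizes, patch size, and pooling choice are assumptions.

```python
# Hedged sketch of a shallow, patch-based quality regressor: one conv layer,
# spatial min/max pooling, and a small MLP that outputs a per-patch score.
import torch
import torch.nn as nn

class ShallowPatchIQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 50, kernel_size=7)   # single conv layer, 50 kernels
        self.fc = nn.Sequential(
            nn.Linear(100, 800), nn.ReLU(),
            nn.Linear(800, 1),                         # patch quality score
        )

    def forward(self, patches):                        # (N, 1, 32, 32) gray patches
        f = self.conv(patches).flatten(2)              # (N, 50, H*W)
        # min- and max-pool over space, then concatenate
        feat = torch.cat([f.min(dim=-1).values, f.max(dim=-1).values], dim=1)  # (N, 100)
        return self.fc(feat).squeeze(1)

# Image-level score: average the predicted scores over sampled patches.
```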
“…Pan et al. 22 introduced a distortion-aware module in a CNN to perform BIQA on different distortions. Chen et al. 23 proposed an NR-IQA method via feature-level pseudo-reference hallucination. Pan et al. 24 proposed a multi-branch convolutional neural network to perform NR-IQA.…”
Section: Introduction (mentioning)
confidence: 99%
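The multi-branch design mentioned last can be summarized generically as several parallel feature branches whose outputs are fused before a quality regressor. The sketch below is illustrative only; the branch structure and dimensions are assumptions and do not reproduce the networks cited as 22–24.

```python
# Illustrative sketch of a generic multi-branch CNN for NR-IQA: each branch
# extracts its own features, the features are concatenated, and a linear
# regressor predicts the quality score.
import torch
import torch.nn as nn

def make_branch(dim=32):
    return nn.Sequential(
        nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class MultiBranchIQA(nn.Module):
    def __init__(self, n_branches=3, dim=32):
        super().__init__()
        self.branches = nn.ModuleList(make_branch(dim) for _ in range(n_branches))
        self.regressor = nn.Linear(n_branches * dim, 1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.regressor(feats).squeeze(1)
```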