2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00054
Perceptual Image Quality Assessment with Transformers

Cited by 100 publications (49 citation statements). References 37 publications.
“…To verify the effectiveness of our method, we utilize LPIPS [30], FID [17], KID [5], PieAPP [27], DISTS [11] and IQT [10] to guide the evaluation of reconstructions. The combination of these scores is consistent with MOS to some degree.…”
Section: Quantitative Results (mentioning)
confidence: 99%
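The passage above lists several learned perceptual metrics used side by side to score reconstructions. As a minimal sketch, one of them (LPIPS) can be computed with the `lpips` PyPI package as below; the image tensors and shapes are illustrative placeholders, not data from the cited work, and the other scores (FID, KID, PieAPP, DISTS, IQT) would be computed analogously with their own implementations and weights.

```python
import torch
import lpips

# LPIPS with the AlexNet backbone, as in Zhang et al. [30].
loss_fn = lpips.LPIPS(net='alex')

# LPIPS expects NCHW tensors scaled to [-1, 1].
reference = torch.rand(1, 3, 256, 256) * 2 - 1  # placeholder HQ image
restored  = torch.rand(1, 3, 256, 256) * 2 - 1  # placeholder reconstruction

distance = loss_fn(reference, restored)  # lower = perceptually closer
print(distance.item())
```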
“…Cheon et al [3] adapt Vision Transformers [7] for perceptual IQA. They use a pretrained Inception-ResNet-v2 network [25] as a feature-extraction backbone and a transformer encoder-decoder architecture to obtain the quality score prediction.…”
Section: Related Work (mentioning)
confidence: 99%
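For intuition, here is a much-simplified sketch of the pipeline this statement describes: CNN backbone features feed a transformer encoder-decoder whose decoded query token is mapped to a scalar quality score. It is not the authors' exact model; the real IQT is full-reference (it also ingests the reference image and uses difference features from multiple backbone stages, plus extra quality and position embeddings), and the `timm` model name, layer sizes, and single-image input here are assumptions for illustration.

```python
import torch
import torch.nn as nn
import timm

class TransformerIQASketch(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        # Pretrained Inception-ResNet-v2 as a frozen feature extractor.
        self.backbone = timm.create_model(
            'inception_resnet_v2', pretrained=True, features_only=True)
        for p in self.backbone.parameters():
            p.requires_grad = False
        # 1536 is the channel count of the last inception_resnet_v2 stage.
        self.proj = nn.Conv2d(1536, d_model, kernel_size=1)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        # Learned query token whose decoded state yields the score.
        self.quality_query = nn.Parameter(torch.randn(1, 1, d_model))
        self.head = nn.Linear(d_model, 1)

    def forward(self, img):
        feat = self.backbone(img)[-1]                        # (B, 1536, H', W')
        tokens = self.proj(feat).flatten(2).transpose(1, 2)  # (B, H'*W', d)
        query = self.quality_query.expand(img.size(0), -1, -1)
        decoded = self.transformer(tokens, query)            # (B, 1, d)
        return self.head(decoded.squeeze(1))                 # (B, 1) score
```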
“…Recently, learning-based FR-IQA methods (Prashnani et al 2018; Ding et al 2021) have achieved significant improvement. The most recent IQT model (Cheon et al 2021) adds extra quality and position embeddings to a vision transformer to achieve the best performance on the FR-IQA task. Different from FR-IQA, the RR-IQA method (Rehman and Wang 2012) utilizes only part of the full-reference image information.…”
Section: Related Work (mentioning)
confidence: 99%
“…1(a)) only use LQ images as input to directly measure image quality. FR/RR-IQA methods (Rehman and Wang 2012; Cheon et al 2021) (Fig. 1(b)) utilize the complete or partial information of the pixel-aligned HQ reference images.…”
Section: Introduction (mentioning)
confidence: 99%
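The last two statements contrast the three IQA settings by what each model may observe. The hypothetical interface stubs below (not any library's API) make the distinction concrete: NR-IQA sees only the low-quality image, FR-IQA also receives the pixel-aligned high-quality reference, and RR-IQA only a reduced descriptor of it.

```python
from torch import Tensor

def nr_iqa(lq: Tensor) -> float:
    """No-reference: predict quality from the LQ image alone."""
    ...

def fr_iqa(lq: Tensor, hq_ref: Tensor) -> float:
    """Full-reference: compare the LQ image against the aligned HQ reference."""
    ...

def rr_iqa(lq: Tensor, ref_features: Tensor) -> float:
    """Reduced-reference: use only partial statistics extracted from the HQ image."""
    ...
```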