ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2022
DOI: 10.1109/icassp43922.2022.9746997
No-Reference Quality Assessment of Variable Frame-Rate Videos Using Temporal Bandpass Statistics

Cited by 4 publications (2 citation statements) | References 34 publications
“…In this section, we evaluate the performances of our model and various other BVQA models on two UHD-VQA databases. The main models being compared have all been designed for video quality assessment in recent years, and include VSFA [38], TLVQM [39], VIDEVAL [40], GSTVQA [18], RAPIQUE [10], ChipQA [17], NOFU [6], HEKE [9] and HFR-BVQA [11]. Among them, VIDEVAL, ChipQA, TLVQM, RAPIQUE, and VSFA are five VQA models for UGC video.…”
Section: Experimental Results
Confidence: 99%
“…First, the VQA task is closely correlated with the perception of distortion information. However, the current BVQA methods [9, 10, 11] usually use backbone convolutional neural networks to extract global spatial distortion features. Such networks are designed for computer vision tasks, such as image classification, focusing more on objects rather than distortions.…”
Section: Introduction
Confidence: 99%