2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX)
DOI: 10.1109/qomex.2017.7965644
Empirical evaluation of no-reference VQA methods on a natural video quality database

Abstract: No-Reference (NR) Video Quality Assessment (VQA) is a challenging task since it predicts the visual quality of a video sequence without comparison to some original reference video. Several NR-VQA methods have been proposed. However, all of them were designed and tested on databases with artificially distorted videos. Therefore, it remained an open question how well these NR-VQA methods perform for natural videos. We evaluated two popular VQA methods on our newly built natural VQA database KoNViD-1k. I…

Cited by 15 publications (19 citation statements) | References 8 publications
“…These deficiencies of current video databases would be difficult to assess quantitatively. However, an indirect confirmation is given by the fact that the performance of two established objective VQA algorithms on our KoNViD-1k database was significantly worse than on the traditional databases that were used for their development, even when the techniques were trained on our natural video dataset [23].…”
Section: Related Work
confidence: 90%
“…In contrast to previous work, Men et al [17] introduced an NR-VQA method that was trained using a natural video quality database, KoNViD-1k [9], which consists of 1200 unique video sequences with authentic distortions. In particular, a video-level feature vector was compiled by combining multiple features, such as blurriness, colorfulness, contrast, and spatial and temporal information.…”
Section: Related and Previous Work
confidence: 99%
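The statement above describes compiling a video-level feature vector by combining per-frame features such as blurriness, colorfulness, contrast, and spatial/temporal information. A minimal sketch of that idea follows; the specific feature formulas (RMS contrast and a Hasler/Süsstrunk-style colorfulness proxy) are illustrative assumptions, not the cited paper's actual implementation.

```python
import numpy as np

def frame_features(frame):
    """frame: H x W x 3 RGB array in [0, 255]. Returns a small feature vector.
    The two features here are stand-ins for the richer set used in the FC Model."""
    frame = frame.astype(np.float64)
    gray = frame.mean(axis=2)
    contrast = gray.std()                        # RMS contrast
    # Opponent-color colorfulness proxy (Hasler/Suesstrunk-style, assumption)
    rg = frame[..., 0] - frame[..., 1]
    yb = 0.5 * (frame[..., 0] + frame[..., 1]) - frame[..., 2]
    colorfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
    return np.array([contrast, colorfulness])

def video_feature_vector(frames):
    """Aggregate frame-level features over time into one video-level vector."""
    return np.mean([frame_features(f) for f in frames], axis=0)
```

In practice, the video-level vector would then be fed to a regressor trained against subjective quality scores.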
“…All methods were evaluated using fivefold cross-validation with 10 random train-validation-test splits, and median PLCC and SROCC values are reported, as proposed in [17] and [18]. The median PLCC and SROCC values of five baseline methods (Video BLIINDS [23], VIIDEO [20], Video CORNIA [37], FC Model [17], and STFC Model [18]) were measured by Men et al. in [17] and [18]. On the other hand, the results of STS-MLP [38] and STS-SVR [38] were taken from their original publication, because their authors also report median PLCC and SROCC values using fivefold cross-validation with 10 random train-validation-test splits.…”
Section: Comparison With the State of the Art
confidence: 99%
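The evaluation protocol quoted above (repeated random splits, median PLCC and SROCC over the test folds) can be sketched as follows. The synthetic data and the simple least-squares regressor are placeholders, not the cited experimental setup.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def median_plcc_srocc(features, mos, n_splits=10, test_frac=0.2, seed=0):
    """Repeat random train/test splits; return median PLCC and SROCC on test sets.
    features: 1D array of a single quality feature; mos: subjective scores."""
    rng = np.random.default_rng(seed)
    n = len(mos)
    plccs, sroccs = [], []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        n_test = int(test_frac * n)
        test, train = idx[:n_test], idx[n_test:]
        # Simple linear least-squares predictor as a stand-in regressor
        X = np.column_stack([features[train], np.ones(len(train))])
        w, *_ = np.linalg.lstsq(X, mos[train], rcond=None)
        pred = np.column_stack([features[test], np.ones(n_test)]) @ w
        plccs.append(pearsonr(pred, mos[test])[0])
        sroccs.append(spearmanr(pred, mos[test])[0])
    return np.median(plccs), np.median(sroccs)
```

Reporting the median rather than the mean makes the summary statistic robust to an occasional degenerate split.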
“…In the feature aggregation aspect, most methods aggregate frame-level features to video-level features by averaging them over the temporal axis [8, 18, 22-24, 35]. Li et al. [19] adopt a 1D convolutional neural network to aggregate the primary features for a time interval.…”
Section: Temporal Modeling
confidence: 99%
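The two aggregation strategies contrasted above can be illustrated side by side: plain temporal average pooling versus a 1D convolution over the time axis followed by pooling. The kernel values here are illustrative, not trained weights from any of the cited methods.

```python
import numpy as np

def average_pool(frame_feats):
    """frame_feats: T x D array of frame-level features -> D-dim video-level vector."""
    return frame_feats.mean(axis=0)

def temporal_conv_pool(frame_feats, kernel):
    """Convolve each feature dimension along time (a fixed-kernel stand-in for a
    learned 1D CNN), then mean-pool the result into a video-level vector."""
    T, D = frame_feats.shape
    out = np.stack(
        [np.convolve(frame_feats[:, d], kernel, mode="valid") for d in range(D)],
        axis=1,
    )
    return out.mean(axis=0)
```

Unlike average pooling, the convolutional variant can weight nearby frames differently and so capture short-range temporal structure before pooling.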