2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX)
DOI: 10.1109/qomex.2018.8463426
Spatiotemporal Feature Combination Model for No-Reference Video Quality Assessment

Cited by 25 publications (24 citation statements) · References 8 publications
“…The video-level feature vectors were mapped to subjective quality scores with an SVR. Later, this model was developed significantly [18] by combining spatial and temporal information more intensively.…”
Section: Related and Previous Work
confidence: 99%
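The excerpt above describes mapping video-level feature vectors to subjective quality scores with an SVR. The following is a minimal, illustrative sketch of that idea, not the cited authors' exact pipeline: the feature dimensionality, kernel, and hyperparameters are assumptions, and the data here is random.

```python
# Illustrative sketch: regressing subjective quality scores (e.g., MOS)
# from video-level feature vectors with an epsilon-SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 36))        # hypothetical video-level feature vectors
y = rng.uniform(1.0, 5.0, size=200)   # hypothetical MOS values on a 1-5 scale

# Scale features, then fit an SVR with an RBF kernel
# (kernel and hyperparameters are assumptions, not taken from the paper).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
model.fit(X, y)

predicted_scores = model.predict(X[:5])
print(predicted_scores)
```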
“…First, we evaluated the design choices of our proposed method on KoNViD-1k [9], before comparing it with other state-of-the-art NR-VQA techniques. We evaluated our algorithm using fivefold cross-validation and report on median PLCC and SROCC values like Men et al [18] and Yan et al [38]. First of all, the effects of the applied pretrained CNNs and transfer learning were evaluated.…”
Section: Parameter Study
confidence: 99%
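The evaluation protocol quoted above (fivefold cross-validation with median PLCC and SROCC) can be sketched as follows. This is an assumption-laden illustration: the regressor (SVR), feature dimensions, and random data stand in for the actual KoNViD-1k features and model.

```python
# Sketch of the evaluation protocol: fivefold cross-validation, reporting
# median PLCC (Pearson) and SROCC (Spearman) over the folds.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.model_selection import KFold
from sklearn.svm import SVR

rng = np.random.default_rng(42)
X = rng.normal(size=(240, 36))        # hypothetical video-level features
y = rng.uniform(1.0, 5.0, size=240)   # hypothetical MOS values

plcc_vals, srocc_vals = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    reg = SVR(kernel="rbf").fit(X[train_idx], y[train_idx])
    pred = reg.predict(X[test_idx])
    plcc_vals.append(pearsonr(pred, y[test_idx])[0])    # linear correlation
    srocc_vals.append(spearmanr(pred, y[test_idx])[0])  # rank-order correlation

print("median PLCC:", np.median(plcc_vals))
print("median SROCC:", np.median(srocc_vals))
```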
“…In the feature aggregation aspect, most methods aggregate frame-level features to video-level features by averaging them over the temporal axis [8, 18, 22-24, 35]. Li et al [19] adopt a 1D convolutional neural network to aggregate the primary features for a time interval.…”
Section: Temporal Modeling
confidence: 99%
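The two aggregation strategies contrasted in the excerpt above (temporal average pooling versus a learned 1D convolution over time) are sketched below. Shapes, layer sizes, and the final pooling step are illustrative assumptions, not details of any cited method.

```python
# Sketch: aggregating frame-level features into a video-level representation.
import torch
import torch.nn as nn

frames, feat_dim = 120, 512
frame_features = torch.randn(frames, feat_dim)   # hypothetical per-frame deep features

# (a) Average pooling over the temporal axis -> one video-level vector
video_feature_avg = frame_features.mean(dim=0)   # shape: (feat_dim,)

# (b) A 1D convolution over time, as in learned temporal aggregation
conv = nn.Conv1d(in_channels=feat_dim, out_channels=128, kernel_size=5, padding=2)
x = frame_features.t().unsqueeze(0)              # shape: (1, feat_dim, frames)
temporal_out = conv(x)                           # shape: (1, 128, frames)
video_feature_conv = temporal_out.mean(dim=2).squeeze(0)  # pooled to (128,)

print(video_feature_avg.shape, video_feature_conv.shape)
```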