2020
DOI: 10.3390/jimaging6080074
No-Reference Quality Assessment of In-Capture Distorted Videos

Abstract: We introduce a no-reference method for the assessment of the quality of videos affected by in-capture distortions due to camera hardware and processing software. The proposed method encodes both quality attributes and semantic content of each video frame by using two Convolutional Neural Networks (CNNs) and then estimates the quality score of the whole video by using a Recurrent Neural Network (RNN), which models the temporal information. The extensive experiments conducted on four benchmark databases (CVD2014…
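The pipeline the abstract describes — per-frame CNN feature extraction followed by recurrent temporal aggregation into a single quality score — can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: `extract_frame_features` is a hypothetical stand-in for the two CNNs (quality attributes + semantic content), the GRU cell uses random untrained weights, and all dimensions are made up for compactness.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 8   # stand-in for the concatenated CNN feature size
HIDDEN = 4     # RNN hidden-state size

def extract_frame_features(frame):
    """Hypothetical stand-in for the two CNNs applied to one frame.
    Here: a fixed uniform projection of the flattened frame."""
    W = np.ones((frame.size, FEAT_DIM)) / frame.size
    return frame.reshape(-1) @ W

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a standard GRU cell (models temporal information)."""
    z = 1.0 / (1.0 + np.exp(-(x @ Wz + h @ Uz)))  # update gate
    r = 1.0 / (1.0 + np.exp(-(x @ Wr + h @ Ur)))  # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)      # candidate state
    return (1.0 - z) * h + z * h_tilde

def video_quality_score(frames):
    """Map a sequence of frames to one scalar quality score."""
    # Random untrained weights, alternating input/recurrent shapes.
    params = [rng.standard_normal((FEAT_DIM, HIDDEN)) * 0.1 if i % 2 == 0
              else rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
              for i in range(6)]
    w_out = rng.standard_normal(HIDDEN) * 0.1  # linear read-out head
    h = np.zeros(HIDDEN)
    for frame in frames:                       # recurrence over time
        h = gru_step(extract_frame_features(frame), h, *params)
    return float(h @ w_out)
```

In the actual method the per-frame features come from trained CNNs and the recurrent weights are learned by regressing subjective quality scores; the sketch only shows how the frame-level and temporal stages compose.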

Cited by 14 publications (22 citation statements) · References 44 publications
“…This paper extends our previous work [5] in three aspects: (1) ResNet-50 [10] architectures exploited in the Multi-level feature extraction module are replaced by the more efficient and lightweight MobileNet-v2 [11], which improves efficiency without sacrificing performance. (2) Frame-level features are mapped to a video quality score by the Video quality estimation module, which is much simpler than the Temporal modeling module.…”
Section: Introduction
confidence: 62%
“…The automatic estimation of the quality of a UGC as perceived by human observers is fundamental for a wide range of applications. For example, to discriminate professional and amateur video content on user-generated video distribution platforms [1], to choose the best sequence among many sequences for sharing in social media [2], to guide a video enhancement process [3], and to rank/choose user-generated videos [4, 5].…”
Section: Introduction
confidence: 99%