2020
DOI: 10.1007/s41233-020-00037-y
Subjective annotation for a frame interpolation benchmark using artefact amplification

Abstract: Current benchmarks for optical flow algorithms evaluate the estimation either directly by comparing the predicted flow fields with the ground truth or indirectly by using the predicted flow fields for frame interpolation and then comparing the interpolated frames with the actual frames. In the latter case, objective quality measures such as the mean squared error are typically employed. However, it is well known that for image quality assessment, the actual quality experienced by the user cannot be fully deduc…

Cited by 4 publications (2 citation statements) | References 37 publications
“…The quality ratings, which were obtained from human observers, are averaged into one number, the mean opinion score (MOS), to characterize the perceptual quality of each considered video sequence. In addition, subjective VQA deals with many aspects of video quality measurement, such as the selection of test video sequences, the grading scale, the time interval of video presentation to human subjects, the viewing conditions, and the selection of human participants [4, 5]. As a result, subjective VQA provides benchmark databases [5, 6, 7] which contain video sequences with their corresponding MOS values.…”
Section: Introduction
Confidence: 99%
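The MOS computation described above is a simple average over observer ratings. A minimal sketch, assuming ratings on a 1-5 absolute category rating scale; the function and sequence names are illustrative, not taken from the paper:

```python
def mean_opinion_score(ratings):
    """Average the subjective ratings for one video sequence into its MOS."""
    return sum(ratings) / len(ratings)

# Hypothetical ratings from five observers for two example sequences.
ratings_by_sequence = {
    "seq_a": [4, 5, 4, 3, 4],
    "seq_b": [2, 3, 2, 2, 3],
}

mos = {seq: mean_opinion_score(r) for seq, r in ratings_by_sequence.items()}
print(mos)  # {'seq_a': 4.0, 'seq_b': 2.4}
```

In practice, benchmark databases also report confidence intervals and apply outlier screening before averaging, but the MOS itself is this per-sequence mean.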
“…In a slight variation of PC, the reference stimulus is placed in the middle of the pair of stimuli to be compared. The task of the observers is to select the one that looks more (or less) similar to the reference [11]–[14]. As with DCR, assessing the (dis)similarity of the distorted stimuli to the reference should lead to a more informed choice by the subject than a PC without a reference.…”
Section: Introduction
Confidence: 99%
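The raw outcome of such a paired comparison test is a set of per-trial choices, which are typically tallied into preference fractions per stimulus before any further scaling. A minimal sketch, with hypothetical stimulus names; the trial format is an assumption for illustration:

```python
from collections import Counter

# Each trial records the pair shown and which stimulus the observer
# judged more similar to the reference placed between them.
trials = [
    ("interp_x", "interp_y", "interp_x"),
    ("interp_x", "interp_y", "interp_x"),
    ("interp_x", "interp_y", "interp_y"),
    ("interp_x", "interp_y", "interp_x"),
]

wins = Counter(chosen for _, _, chosen in trials)
total = len(trials)

# Fraction of trials in which each stimulus was preferred.
preference = {s: wins[s] / total for s in ["interp_x", "interp_y"]}
print(preference)  # {'interp_x': 0.75, 'interp_y': 0.25}
```

Such preference fractions can then be converted to scale values, e.g. with a Bradley-Terry or Thurstone model, though that step is outside this excerpt.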