The media quality assessment research community has traditionally focused on developing objective algorithms to predict the outcome of a typical subjective experiment in terms of a Mean Opinion Score (MOS) value. However, the MOS, being a single value, is insufficient to model the complexity and diversity of human opinions encountered in an actual subjective experiment. In this work we propose a complementary approach to objective media quality assessment that more closely models what happens in a subjective experiment at the level of single observers, and we perform a qualitative analysis of the proposed approach while highlighting its suitability. More precisely, we propose to model, using neural networks (NNs), the way single observers perceive media quality. Once trained, these NNs, one per observer, are expected to mimic the corresponding observer in terms of quality perception. Then, as in a subjective experiment, such NNs can be used to simulate the users' individual opinions, which can later be aggregated by means of different statistical indicators such as the average, standard deviation, and quantiles. Unlike previous approaches that treat subjective experiments as a black box providing reliable ground-truth data for training, the proposed approach can account for human factors by analyzing and weighting individual observers. Such a model may therefore implicitly account for users' expectations and tendencies, which many studies have shown to correlate significantly with visual quality perception. Furthermore, our proposal also introduces and investigates an index measuring how inconsistent an observer would be if asked to rate the same stimulus many times. Simulation experiments conducted on several datasets demonstrate that the proposed approach can be effectively implemented in practice, yielding a more complete objective assessment of end users' quality of experience.
Subjective experiments are considered the most reliable way to assess perceived visual quality. However, observers' opinions are characterized by large diversity: indeed, even the same observer is often unable to exactly repeat their first opinion when rating a given stimulus again. This makes the Mean Opinion Score (MOS) alone, in many cases, insufficient to convey accurate information about the perceived visual quality. To this end, it is important to have a measure characterizing to what extent the observed or predicted MOS value is reliable and stable. For instance, the Standard deviation of the Opinions of the Subjects (SOS) can be considered a measure of reliability when evaluating quality subjectively. However, we are not aware of any models or algorithms that objectively predict how much diversity would be observed in subjects' opinions in terms of SOS. In this work we observe, on the basis of a statistical analysis of several subjective experiments, that the disagreement between the quality values measured by different objective video quality metrics (VQMs) can provide information on the diversity of the observers' ratings for a given processed video sequence (PVS). In light of this observation, we: i) propose and validate a model for the SOS observed in a subjective experiment; ii) design and train neural networks (NNs) that predict the average diversity that would be observed among the subjects' ratings for a PVS, starting from a set of VQM values computed on that PVS; iii) give insights into how the same NN-based approach can be used to identify potential anomalies in the data collected in subjective experiments.
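The two quantities at the heart of this abstract, the SOS of human ratings and the disagreement among VQMs for the same PVS, can be made concrete with a small sketch. The metric names, their mapped scores, and the ratings below are purely illustrative assumptions; the abstract's observation is that the former spread carries information about the latter.

```python
import statistics

# Hypothetical VQM scores for one PVS, each assumed to be already
# mapped to a common 1..5 quality range (names are illustrative).
vqm_scores = {"psnr_mapped": 3.9, "ssim_mapped": 3.4, "vmaf_mapped": 4.2}

# Disagreement among the VQMs: the candidate predictor of opinion diversity.
vqm_disagreement = statistics.stdev(vqm_scores.values())

# SOS computed directly from (hypothetical) observer ratings of the same PVS.
ratings = [4, 3, 4, 5, 3, 4, 2, 4]
sos = statistics.stdev(ratings)

print(f"VQM disagreement: {vqm_disagreement:.3f}")
print(f"Observed SOS:     {sos:.3f}")
```

In the paper's setting, a trained NN would map a vector of such VQM values to a predicted SOS; the sketch only shows the two quantities being related.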