Subjective evaluation experiments are conducted to quantify multimedia quality. In these experiments, quantitative assessment is the dominant tradition, but it disregards participants' interpretations, descriptions, and evaluation criteria of quality. The goal of this paper is to present a new multimedia quality evaluation method, Open Profiling of Quality (OPQ), as a tool for building a deeper understanding of subjective quality. OPQ is a mixed method that combines a conventional quantitative psychoperceptual evaluation with a qualitative descriptive quality evaluation based on each individual's own vocabulary. OPQ is targeted at naïve participants and is applicable to experiments with heterogeneous and multimodal stimulus material. The paper presents the theoretical basis for the development of OPQ and reviews methods for audiovisual quality research. We present three extensive quality evaluation studies in which OPQ was used with a total of 120 participants. Finally, we conclude with recommendations for the use of the method in quality evaluation research.
Subjective evaluation is used to identify impairment factors of multimedia quality. The final quality is often assessed through quantitative experiments, but this approach has its constraints, as participants' quality interpretations, experiences, and quality evaluation criteria are disregarded. To identify these factors, this study examined qualitatively the criteria participants used to evaluate audiovisual video quality. A semi-structured interview was conducted with 60 participants after a subjective audiovisual quality evaluation experiment. The assessment compared several relatively low audio-video bitrate ratios across five different television contents on a mobile device. In the analysis, methodological triangulation (grounded theory, Bayesian networks, and correspondence analysis) was applied to the qualitative data. The results showed that the most important evaluation criteria were factors of visual quality, content, factors of audio quality, usefulness and followability, and audiovisual interaction. Several relations between the quality factors, as well as similarities between the contents, were identified. As a methodological recommendation, content- and usage-related factors need to be examined further to improve quality evaluation experiments.
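To make the triangulation step above concrete, the sketch below builds the kind of content-by-criterion contingency table that methods such as correspondence analysis operate on: rows are television contents, columns are quality criteria, and cells count how often participants mentioned a criterion for a content. All content names, criteria, and counts are hypothetical, not data from the study.

```python
# Hypothetical interview codings: (content, quality criterion mentioned).
mentions = [
    ("news", "visual quality"),
    ("news", "followability"),
    ("sports", "visual quality"),
    ("sports", "visual quality"),
    ("music", "audio quality"),
]

# Distinct row and column labels, in a stable (sorted) order.
contents = sorted({c for c, _ in mentions})
criteria = sorted({k for _, k in mentions})

# Contingency table: one row per content, one column per criterion,
# each cell counting co-occurrences in the coded interview data.
table = [[sum(1 for c, k in mentions if c == row and k == col)
          for col in criteria] for row in contents]
```

A table like this is the standard input to correspondence analysis, which then maps contents and criteria into a shared low-dimensional space to reveal the content-criterion associations the abstract refers to.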
Subjective quality evaluation is used to optimize produced audiovisual quality, from fundamental signal processing algorithms to consumer services. These studies typically follow the basic principles of controlled psychoperceptual experiments. However, when compression and transmission parameters are compromised for consumer services, the ecological validity of conventional quality evaluation methods can be questioned. To tackle this, we first present a novel user-oriented quality evaluation method for mobile television in its usage contexts. Second, we present the results of an experiment conducted with 30 participants comparing acceptability and satisfaction of quality, as well as goals of viewing, in three mobile contexts and under four different residual transmission error rates, while the participants also performed simultaneous assessment tasks. Finally, we compare the results with a previous laboratory experiment. The studied error rates negatively affected all measured tasks, with some contextual differences. Moreover, the evaluations were more favorable and less discriminating in the mobile contexts than in the laboratory.
As in many digital telecommunication systems, data streams received over Digital Video Broadcasting for Handhelds (DVB-H) may contain bursty transmission errors. These bursty error characteristics affect the end users' perceived audiovisual quality. This study examined the perceived unacceptability of instantaneous but noticeable audio, visual, and audiovisual errors. The erroneous streams were generated from four popular television contents by applying three simulated error patterns with different error rates (1.7%, 6.9%, 13.8%) and error burst durations. The instantaneous unacceptability of errors was evaluated by 30 participants using simplified continuous assessment while watching the program content. The results show that at the two lowest error rates the audio errors were more unacceptable than the video errors, whereas at the highest error rate the visual and audiovisual errors became the most unacceptable.
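In a simplified continuous assessment of this kind, each participant flags error bursts they find unacceptable while watching, and the analysis reduces those flags to an unacceptability rate per error type and error rate. The sketch below shows one plausible way to do that aggregation; the participant IDs, judgments, and resulting proportions are invented for illustration and do not reproduce the study's data.

```python
from collections import defaultdict

# Hypothetical judgments: (participant, error type, error rate,
# whether the presented burst was marked unacceptable).
records = [
    ("p1", "audio", 0.017, True),
    ("p1", "video", 0.017, False),
    ("p2", "audio", 0.017, True),
    ("p2", "video", 0.017, True),
    ("p1", "audiovisual", 0.138, True),
    ("p2", "audiovisual", 0.138, True),
]

# Per (error type, error rate) condition: [unacceptable count, total count].
counts = defaultdict(lambda: [0, 0])
for _, etype, erate, marked in records:
    counts[(etype, erate)][1] += 1
    if marked:
        counts[(etype, erate)][0] += 1

# Unacceptability rate = fraction of bursts in a condition marked unacceptable.
rates = {cond: unacc / total for cond, (unacc, total) in counts.items()}
```

Comparing these proportions across conditions is what supports statements such as "audio errors were more unacceptable than video errors at the lowest error rates."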
The need to better understand the role of context has emerged with the revolution of mobile computing, as such devices are used in heterogeneous circumstances. However, it is difficult to say what context of use in mobile human-computer interaction actually means. This study summarises past research on mobile contexts of use and not only provides a deeper understanding of the characteristics associated with them, but also indicates a path for future research. The article presents an extensive and systematic literature review of more than 100 papers published in five high-quality journals and one main conference in the field of HCI during the years 2000-2007. The results show that context of use is still explored as a relatively static phenomenon in mobile HCI. Its most commonly mentioned characteristics are linked to social, physical, and technical components, while transitions between contexts are rarely considered. Based on this review, a descriptive model of context of use for mobile HCI (CoU-HMCI), summarising five components, their subcomponents, and descriptive properties, is presented. The model can help both practitioners and academics to identify broadly relevant contextual factors when designing, experimenting with, and evaluating mobile contexts of use.
Subjective quality evaluation is widely used to optimize system performance as part of end products. It is often desirable to know whether a certain level of system performance is acceptable, that is, whether the system reaches the minimum level that satisfies user expectations and needs. The goal of this paper is to examine research methods for assessing overall acceptance of quality in subjective quality evaluation. We conducted three experiments to develop our methodology and test its validity with heterogeneous stimuli in the context of mobile television. The first experiment examined the use of a simplified continuous assessment method for assessing overall acceptability. The second experiment explored the boundary between acceptable and unacceptable quality when the stimuli had clearly detectable differences. The third experiment compared the perceived quality impacts of small differences between stimuli close to the threshold of acceptability. On the basis of our results, we recommend using a bidimensional retrospective measure combining acceptance and satisfaction in consumer- and user-oriented quality evaluation experiments.
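The recommended bidimensional retrospective measure pairs a binary acceptance verdict with a graded satisfaction rating for each stimulus. The sketch below shows one way such paired responses could be summarised per stimulus, as an acceptance proportion plus a mean satisfaction score; the bitrate labels, rating scale, and responses are hypothetical and not the authors' exact procedure.

```python
from statistics import mean

# Hypothetical responses: (stimulus, accepted?, satisfaction rating).
responses = [
    ("128kbps", True, 7),
    ("128kbps", True, 8),
    ("128kbps", False, 4),
    ("64kbps", False, 3),
    ("64kbps", True, 6),
]

def summarise(responses):
    """Per stimulus: acceptance proportion and mean satisfaction."""
    by_stim = {}
    for stim, accepted, sat in responses:
        by_stim.setdefault(stim, []).append((accepted, sat))
    return {
        stim: {
            "acceptance": sum(ok for ok, _ in rows) / len(rows),
            "mean_satisfaction": mean(sat for _, sat in rows),
        }
        for stim, rows in by_stim.items()
    }

summary = summarise(responses)
```

Keeping the two dimensions separate is the point of the recommendation: a stimulus can be rated moderately satisfying yet still fall below the acceptance threshold, which a single averaged score would hide.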