Abstract-The ever-growing bandwidth in access networks, in combination with IPTV and Video on Demand (VoD) offerings, opens up new possibilities for users. Operators can no longer compete solely on the number of channels or the available content, and increasingly make High Definition channels and Quality of Experience (QoE) a service differentiator. Currently, the most reliable way of assessing and measuring QoE is to conduct subjective experiments, in which human observers evaluate a series of short video sequences using one of the internationally standardized subjective quality assessment methodologies. Unfortunately, since these subjective experiments must be conducted in controlled environments and impose limitations on the sequences and the overall experiment duration, they cannot be used for real-life QoE assessment of IPTV and VoD services. In this article, we propose a novel subjective quality assessment methodology based on full-length movies. Our methodology enables audiovisual quality assessment in the same environments and under the same conditions in which users typically watch television. Using our new methodology, we conducted subjective experiments and compared the outcome with the results of a subjective test conducted using a standardized method. Our findings indicate significant differences in terms of impairment visibility and tolerance and highlight the importance of real-life QoE assessment.
HTTP adaptive streaming technology has become widespread in multimedia services because of its ability to adapt to the characteristics of various viewing devices and to dynamic network conditions. Various studies target the optimization of the adaptation strategy. However, in order to provide an optimal viewing experience to the end-user, it is crucial to understand the Quality of Experience (QoE) of different adaptation schemes. This paper reviews the state of the art concerning subjective evaluation of adaptive streaming QoE and highlights the challenges and open research questions related to its assessment.
Abstract-In order to ensure an optimal quality of experience for end users during video streaming, automatic video quality assessment has become an important field of interest to video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. Traditional approaches model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield competitive results. In this paper, we present a novel no-reference bitstream-based objective video quality metric constructed by genetic programming-based symbolic regression. A key benefit of this approach is that it yields reliable white-box models that allow us to determine the importance of the individual parameters. Additionally, these models can provide human insight into the underlying principles of subjective video quality assessment. Numerical results show that perceived quality can be modeled with high accuracy using only parameters extracted from the received video bitstream.
Index Terms-H.264/AVC, high definition, no-reference, objective video quality metric, quality of experience (QoE).
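The appeal of the white-box models mentioned above is that symbolic regression produces an explicit formula rather than an opaque predictor. The sketch below illustrates the general shape of such a model; the feature names, coefficients, and formula are invented for illustration and are not the model from the paper.

```python
# Hypothetical white-box quality model of the kind symbolic regression
# can produce: a closed-form expression over bitstream-level features.
# All names and coefficients here are illustrative assumptions.

def estimate_mos(avg_qp: float, skipped_mb_fraction: float,
                 motion_activity: float) -> float:
    """Map H.264/AVC bitstream features to an estimated MOS (1-5)."""
    # Coarser quantization (higher QP) and more motion lower the score;
    # a high fraction of skipped macroblocks suggests easy-to-encode
    # content and nudges the estimate upward.
    raw = 5.8 - 0.065 * avg_qp + 1.2 * skipped_mb_fraction \
          - 0.4 * motion_activity
    return max(1.0, min(5.0, raw))  # clamp to the MOS scale
```

Because the model is an inspectable expression, the relative weight of each parameter can be read directly from the formula, which is the interpretability benefit the abstract refers to.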
Abstract-Ensuring and maintaining adequate Quality of Experience towards end-users are key objectives for video service providers, not only for increasing customer satisfaction but also as a service differentiator. However, in the case of High Definition video streaming over IP-based networks, network impairments such as packet loss can severely degrade the perceived visual quality. Several standardization organizations have established a minimum set of performance objectives which should be met to obtain satisfactory quality. Video service providers should therefore continuously monitor the network and the quality of the received video streams in order to detect visual degradations. Objective video quality metrics enable automatic measurement of perceived quality. Unfortunately, the most reliable metrics require access to both the original and the received video streams, which makes them inappropriate for real-time monitoring. In this article, we present a novel no-reference bitstream-based visual quality impairment detector which enables real-time detection of visual degradations caused by network impairments. Using only information extracted from the encoded bitstream, network impairments are classified as visible or invisible to the end-user. Our results show that impairment visibility can be classified with high accuracy, which enables real-time validation of the existing performance objectives.
Abstract-HTTP Adaptive Streaming facilitates video streaming to mobile devices connected through heterogeneous networks without the need for a dedicated streaming infrastructure. By splitting different encoded versions of the same video into small segments, clients can continuously decide which segments to download based on the available network resources and device characteristics. These encoded versions can, for example, differ in bitrate and in spatial or temporal resolution. However, as a result of dynamically selecting video segments, perceived video quality can fluctuate during playback, which impacts end-users' Quality of Experience. Subjective studies have already been conducted to assess the influence of delivering video to mobile devices using HTTP Adaptive Streaming. Nevertheless, existing studies are limited to the evaluation of short video sequences in controlled environments, while research has shown that video duration and assessment environment influence quality perception. Therefore, in this article, we go beyond the traditional approaches to subjective quality evaluation by conducting novel experiments on tablet devices in more ecologically valid testing environments using longer video sequences. As such, we mimic realistic viewing behaviour as closely as possible. Our results show that both the video content and the range of quality switches significantly influence end-users' rating behaviour. In general, quality level switches are only perceived in high-motion sequences or when switching occurs between high- and low-quality video segments. Moreover, we also found that video stalling during playback should be avoided at all times.
Index Terms-Quality of Experience (QoE), subjective video quality assessment, HTTP adaptive streaming, mobile video, tablet devices.
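The client-side segment selection described above can be sketched as a simple throughput-based heuristic. This is a minimal illustration, not the adaptation logic evaluated in the study: real HAS clients additionally weigh buffer occupancy and the amplitude of quality switches, precisely because, as the results above indicate, large switches and stalling hurt QoE.

```python
# Minimal throughput-based rate adaptation over an assumed bitrate
# ladder (kbps). The safety margin leaves headroom so throughput
# estimation errors are less likely to cause stalling.

def select_bitrate(ladder: list[int], throughput_kbps: float,
                   safety: float = 0.8) -> int:
    """Pick the highest rendition whose bitrate fits within a safety
    margin of the measured throughput; fall back to the lowest one."""
    budget = throughput_kbps * safety
    candidates = [b for b in sorted(ladder) if b <= budget]
    return candidates[-1] if candidates else min(ladder)

# Example: with 2500 kbps measured throughput and an 80% margin,
# the budget is 2000 kbps, so the 1500 kbps rendition is chosen.
```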
In order to ensure adequate quality towards end users at all times, video service providers are increasingly interested in monitoring their video streams. Objective video quality metrics provide a means of measuring (audio)visual quality in an automated manner. Unfortunately, most existing metrics cannot be used for real-time monitoring due to their dependence on the original video sequence. In this paper, we present a new objective video quality metric which classifies packet loss as visible or invisible based on information extracted solely from the captured encoded H.264/AVC video bitstream. Our results show that the visibility of packet loss can be predicted with high accuracy, without the need for deep packet inspection. This enables service providers to monitor quality in real time.
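To make the visible/invisible classification concrete, the sketch below shows the kind of decision rule that can be built from bitstream features alone. The features and thresholds are invented for illustration; the paper's classifier is trained on subjective data rather than hand-tuned like this.

```python
# Heuristic stand-in for a learned packet loss visibility classifier.
# It uses only information available from the encoded bitstream:
# the frame type hit by the loss, the fraction of macroblocks lost,
# and a motion measure. All thresholds are illustrative assumptions.

def loss_is_visible(frame_type: str, lost_mb_fraction: float,
                    motion_activity: float) -> bool:
    """Classify a packet loss event as visible (True) or not."""
    if frame_type == "I":
        # Losses in reference frames propagate until the next intra
        # refresh, so even small losses tend to be visible.
        return lost_mb_fraction > 0.02
    if frame_type == "P":
        # Concealment works well for static content; visibility rises
        # with the extent of the loss and the amount of motion.
        return lost_mb_fraction > 0.05 and motion_activity > 0.3
    # Non-reference (B) frame losses affect a single frame and are
    # assumed invisible in this simplified sketch.
    return False
```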
Lip synchronization is considered a key parameter during interactive communication. In the case of video conferencing and television broadcasting, the differential delay between audio and video should remain below certain thresholds, as recommended by several standardization bodies. However, further research has shown that these thresholds can be relaxed, depending on the targeted application and use case. In this article, we investigate the influence of lip sync on the ability to perform real-time language interpretation during video conferencing. Furthermore, we are also interested in determining proper lip sync visibility thresholds applicable to this use case. Therefore, we conducted a subjective experiment using expert interpreters, who were required to perform a simultaneous translation, and non-experts. Our results show that significant differences are obtained when conducting subjective experiments with expert interpreters, as they are primarily focused on performing the simultaneous translation.
N. Staelens · N. Vercammen · B. Vermeulen · P. Demeester
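The threshold check implied by the abstract can be expressed as a simple asymmetric window on the audio/video offset. The numbers below are the detectability thresholds commonly cited from ITU-R BT.1359 (roughly 45 ms when audio leads video, 125 ms when audio lags), not the thresholds determined in this study, which are precisely what the experiment sets out to measure for the interpretation use case.

```python
# Check an audio/video differential delay against an asymmetric
# detectability window. Positive offsets mean audio leads video.
# Threshold values are quoted from ITU-R BT.1359 as an assumption;
# the study's own use-case-specific thresholds may differ.

AUDIO_LEAD_MS = 45    # tolerance when audio is ahead of video
AUDIO_LAG_MS = 125    # tolerance when audio trails video

def lip_sync_ok(av_offset_ms: float) -> bool:
    """True if the offset stays within the detectability window."""
    return -AUDIO_LAG_MS <= av_offset_ms <= AUDIO_LEAD_MS
```

The asymmetry reflects that viewers tolerate audio arriving late (as it does naturally, since sound travels slower than light) far better than audio arriving early.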