The design of algorithms able to estimate video quality as perceived by human observers is of interest for a number of applications. Depending on the video content, the artifacts introduced by the coding process can be more or less pronounced and can affect the quality of videos, as judged by humans, in different ways. While it is well understood that motion affects both human attention and coding quality, this relationship has only recently begun to receive attention in the video quality assessment (VQA) research community. In this paper, we examine the effect of computing several objective features, related to video coding artifacts, separately for salient-motion regions and the remaining regions of each frame. In addition, we propose a new scheme for quality assessment of coded video streams that takes salient motion into account. A standardized procedure was used to calculate the Mean Opinion Score (MOS), based on experiments conducted with a group of non-expert observers viewing standard definition (SD) sequences. MOS measurements were taken for nine different SD sequences, coded using MPEG-2 at five different bit-rates. Eighteen published approaches to objectively measuring the amount of coding artifacts on a single-frame basis were implemented. Additional features describing the intensity of salient motion in the frames, as well as the intensity of coding artifacts in the salient-motion regions, were proposed. Automatic feature selection was performed to determine the subset of features most correlated with video quality. The results show that salient-motion-related features enhance prediction, and indicate that perceived video quality is influenced by blocking and blurring artifacts in the salient regions and by the variance and intensity of temporal changes in the non-salient regions.
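The feature-selection step described in the abstract can be illustrated with a minimal sketch: rank candidate objective features by the absolute Pearson correlation of each with the MOS values and keep the top k. This is not the paper's implementation; all feature names and numbers below are invented for the demonstration.

```python
# Illustrative sketch of correlation-based feature selection.
# Feature names and values are hypothetical, not from the paper.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def select_features(features, mos, k=2):
    """Rank features by |correlation| with MOS and keep the top k."""
    ranked = sorted(features, key=lambda name: -abs(pearson(features[name], mos)))
    return ranked[:k]

# Toy data: MOS for five coded versions of a sequence and three candidate features.
mos = [4.5, 3.8, 3.1, 2.4, 1.6]
features = {
    "blocking_salient": [0.1, 0.3, 0.5, 0.7, 0.9],   # rises steadily as quality drops
    "blur_salient":     [0.2, 0.35, 0.5, 0.6, 0.8],  # also tracks the quality drop
    "noise_nonsalient": [0.4, 0.1, 0.5, 0.2, 0.3],   # only weakly related to MOS
}
print(select_features(features, mos))
```

On this toy data, the two features that track the quality drop are selected and the weakly correlated one is discarded; real systems would use cross-validated selection over many sequences.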
Video quality as perceived by human observers is the ground truth for Video Quality Assessment (VQA). It depends on many variables, one of them being the content of the video under evaluation. Despite the evidence that content has an impact on the quality score a sequence receives from human evaluators, currently available VQA databases mostly consist of sequences that fail to take this into account. In this paper, we aim to identify and analyze differences between human cognitive, affective, and conative responses to a set of videos commonly used for VQA and a set of videos specifically chosen to include content that might affect evaluators' judgments of perceived video quality. Our findings indicate that considerable differences exist between the two sets on selected factors, which leads us to conclude that videos featuring a different type of content than the currently employed ones might be more appropriate for VQA.
Minimizing last-mile delivery costs is of paramount importance for all shipping companies that strive to remain competitive in the market. A potential solution to the problem is the use of crowdsourcing: a model in which individuals voluntarily take on a task proposed by another entity (e.g., a company). In this paper, we present the results of a performance comparison of three types of crowdsourced delivery fleets likely to be used in an urban setting. The fleets differ in the mode of transport the couriers use: bicycles, cars, or both. Performance is quantified by the total number of deliveries made and the on-time delivery rates. Experimental results were obtained through a simulation that closely resembles real-world traffic conditions in a city with developed cycling infrastructure and takes into account the variations in the speed of couriers. The research shows that bicycle-based crowdsourced fleets outperform the other kinds of fleets under the simulated conditions, making them a faster, more environmentally friendly, and potentially cheaper alternative to traditional fleets that rely on cars.
A fairly large number of databases exist that provide impaired video sequences designed for the development and evaluation of Quality of Experience (QoE) approaches. However, the impairments due to transmission errors resulting in packet loss in these databases are based on simulations of a small number of scenarios and are not representative of the real transmission scenarios that arise in end-to-end delivery of multimedia to mobile devices. This paper proposes a solution to this problem in the form of a framework for recording real packet drops as they occur in different situations of wireless multimedia transmission to mobile devices. The logs can be used to generate realistic impairments, as well as to design new, highly efficient quality assessment approaches based on monitoring network performance in real time. An Android-based mobile device is used to receive streamed H.264 videos, and real Real-time Transport Protocol (RTP) packet reception sequences (traces) are recorded. To evaluate the approach and the impact of different transmission scenarios on perceived quality, a study of the quality experienced by users in three different real wireless transmission scenarios was conducted. The results, presented in the paper, show that these traces diverge significantly from the packet-loss sequences usually considered in quality of experience studies.
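Deriving a loss trace from recorded RTP reception data amounts to walking the sequence numbers and marking the gaps. The sketch below is illustrative, not the paper's tool; it assumes in-order delivery and only accounts for the 16-bit wrap-around of RTP sequence numbers (RFC 3550).

```python
# Illustrative sketch: deriving a packet-loss trace (1 = received, 0 = lost)
# from recorded RTP sequence numbers. Assumes packets arrive in order.

def loss_trace(seq_numbers):
    """Return one 1/0 entry per expected packet, given received sequence numbers."""
    trace = [1]                         # first recorded packet was received
    prev = seq_numbers[0]
    for seq in seq_numbers[1:]:
        gap = (seq - prev) % 65536      # RTP sequence numbers are 16-bit and wrap
        trace.extend([0] * (gap - 1))   # packets missing between prev and seq
        trace.append(1)
        prev = seq
    return trace

# Example: packet 102 was dropped; a second trace crosses the wrap-around point.
print(loss_trace([100, 101, 103, 104]))   # -> [1, 1, 0, 1, 1]
print(loss_trace([65534, 65535, 0]))      # -> [1, 1, 1]
```

A real recorder would also handle reordered and duplicated packets, which the modulo-gap logic above deliberately ignores for clarity.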
Customer churn is a problem virtually all companies face, and the ability to predict it reliably can be a cornerstone for successful retention campaigns. In this study, we propose an approach to customer churn prediction in non-contractual B2B settings that relies exclusively on invoice-level data for feature engineering and uses multi-slicing to maximally utilize the available data. We cast churn as a binary classification problem and assess the ability of three established classifiers to predict it under different churn definitions. We also compare classifier performance when different amounts of historical data are used for feature engineering. The results indicate that robust models for different churn definitions can be derived from invoice-level data alone, and that using more historical data for creating some of the features tends to lead to better-performing models for some classifiers. We also confirm that the multi-slicing approach to dataset creation yields better-performing models than the traditionally used single-slicing approach.
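The multi-slicing idea from the abstract can be sketched as follows: instead of building features at a single reference date, several reference dates (slices) are used, and each customer contributes one labelled example per slice in which they were active. This is an assumed reading of the abstract; the function, feature names, and windows below are illustrative, not the paper's definitions.

```python
# Hypothetical sketch of multi-slicing dataset creation from invoice-level data.
from datetime import date

def make_examples(invoices, slice_dates, history_days, churn_days):
    """invoices: {customer: [invoice dates]}. One example per customer per slice."""
    examples = []
    for ref in slice_dates:
        for cust, dates in invoices.items():
            past = [d for d in dates if 0 <= (ref - d).days < history_days]
            if not past:
                continue                       # customer not active before this slice
            future = [d for d in dates if 0 < (d - ref).days <= churn_days]
            features = {"n_invoices": len(past),
                        "days_since_last": (ref - max(past)).days}
            label = 0 if future else 1         # churned = no invoice in churn window
            examples.append((cust, ref, features, label))
    return examples

invoices = {
    "A": [date(2023, 1, 5), date(2023, 3, 2), date(2023, 6, 1)],
    "B": [date(2023, 1, 20)],
}
ex = make_examples(invoices, [date(2023, 4, 1), date(2023, 7, 1)],
                   history_days=120, churn_days=90)
```

With two slices, customer A yields two examples with different labels (active in spring, churned by summer), showing how multi-slicing multiplies the training data a single customer history can provide.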
For decades, computed tomography (CT) images have been widely used to reveal valuable anatomical information. Metallic implants such as dental fillings cause severe streaking artifacts that significantly degrade the quality of CT images. In this paper, we propose a new method for metal-artifact reduction using complementary magnetic resonance (MR) images. The method exploits the possibilities that arise from the use of emerging trimodality systems. The proposed algorithm corrects reconstructed CT images: the projection data affected by dental fillings are detected, and the missing projections are replaced with data obtained from a corresponding MR image. A simulation study was conducted to compare the reconstructed images with images reconstructed using linear interpolation, a common metal-artifact reduction technique. The results show that the proposed method successfully reduces severe metal artifacts without introducing a significant amount of secondary artifacts.
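The linear-interpolation baseline mentioned above replaces metal-affected projection values with values interpolated from the nearest unaffected samples on either side. The sketch below operates on a toy 1-D detector row rather than real sinogram data, and is only a simplified illustration of that baseline, not the paper's method.

```python
# Simplified sketch of sinogram in-painting by linear interpolation,
# applied to a single toy detector row. Mask: 1 = metal-affected sample.

def interpolate_row(row, metal_mask):
    """Replace masked samples with values linearly interpolated
    from the nearest unmasked neighbours on each side."""
    out = list(row)
    i = 0
    while i < len(row):
        if metal_mask[i]:
            start = i
            while i < len(row) and metal_mask[i]:
                i += 1                          # find end of the masked run
            left, right = out[start - 1], out[i]  # assumes mask not at row edges
            span = i - start + 1
            for k in range(start, i):
                t = (k - start + 1) / span       # fractional position in the gap
                out[k] = left + t * (right - left)
        else:
            i += 1
    return out

row  = [1.0, 2.0, 9.0, 9.0, 5.0]   # samples 2-3 corrupted by metal
mask = [0, 0, 1, 1, 0]
print(interpolate_row(row, mask))
```

On this row, the two corrupted samples are replaced by values close to 3.0 and 4.0, linearly bridging the unaffected neighbours; the paper's method instead fills such gaps with data derived from a registered MR image.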