An important and still unsolved problem is the automatic quantification of the quality of video flows transmitted over packet networks. In particular, the ability to perform this task in real time (typically for streams that are themselves sent in real time) is especially interesting. The problem remains open because many parameters affect video quality, and their combined effect is not well identified and understood. Among these parameters are the source bit rate, the encoded frame type, the frame rate at the source, the packet loss rate in the network, etc. Only subjective evaluations give good results but, by definition, they are not automatic. We have previously explored the possibility of using artificial neural networks (NNs) to automatically quantify the quality of video flows, and we showed that they can give results well correlated with human perception. In this paper, our goal is twofold. First, we report on a significant enhancement of our method by means of a new neural approach, the random NN model, and its learning algorithm, both of which offer better performance for our application. Second, we follow our approach to study and analyze the behavior of video quality under wide-range variations of a set of selected parameters. This may help in developing control mechanisms to deliver the best possible video quality given the current network situation, and in better understanding QoS aspects in multimedia engineering.
Index Terms: Packet video, random neural networks, real-time video transmission, video quality assessment, video signal characterization.
This paper addresses the problem of quantitatively evaluating the quality of a speech stream transported over the Internet as perceived by the end user. We propose an approach able to perform this task automatically and, if necessary, in real time. Our method is based on using G-networks (open networks of queues with positive and negative customers) as neural networks (in this case, they are called random neural networks) to learn, in some sense, how humans react to a speech signal that has been distorted by encoding and transmission impairments. This can be used for control purposes, for pricing applications, etc. Our method allows us to study the impact of several source and network parameters on quality, which appears to be new (previous work analyzes the effect of only one or two selected parameters). In this paper we use our technique to study the impact of several basic source and network parameters on the performance of a non-interactive speech flow, namely loss rate, loss distribution, codec, forward error correction, and packetization interval, all at the same time. This is important because speech/audio quality is affected by several parameters whose combined effect is neither well identified nor understood.
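The random neural network mentioned above can be evaluated in feedforward mode from Gelenbe's steady-state equations: each neuron's activation is the ratio of its incoming excitatory rate to its firing rate plus its incoming inhibitory rate. The sketch below shows this computation with purely illustrative weights and parameter names; it is not the authors' trained network, only a minimal illustration of the model's mechanics.

```python
# Minimal feedforward Random Neural Network (Gelenbe model) evaluation.
# All weights, rates, and input features here are illustrative stand-ins,
# NOT values from the paper's trained network.

def rnn_layer(q_prev, w_plus, w_minus, r):
    """Steady-state activations of one feedforward RNN layer.

    q_prev  : activations (excitation probabilities) of the previous layer
    w_plus  : w_plus[i][j]  = excitatory weight from neuron i to neuron j
    w_minus : w_minus[i][j] = inhibitory weight from neuron i to neuron j
    r       : firing rate of each neuron in this layer
    """
    q = []
    for j in range(len(r)):
        lam_plus = sum(q_prev[i] * w_plus[i][j] for i in range(len(q_prev)))
        lam_minus = sum(q_prev[i] * w_minus[i][j] for i in range(len(q_prev)))
        q.append(min(1.0, lam_plus / (r[j] + lam_minus)))
    return q

# Hypothetical inputs: normalized loss rate, bit rate, packetization interval.
inputs = [0.1, 0.6, 0.5]
w_plus_h = [[0.2, 0.1], [0.5, 0.3], [0.4, 0.2]]
w_minus_h = [[0.3, 0.2], [0.1, 0.1], [0.1, 0.2]]
hidden = rnn_layer(inputs, w_plus_h, w_minus_h, r=[1.0, 1.0])

w_plus_o = [[0.8], [0.6]]
w_minus_o = [[0.1], [0.2]]
out = rnn_layer(hidden, w_plus_o, w_minus_o, r=[1.0])
mos = 1.0 + 4.0 * out[0]  # map the activation in [0,1] onto a 1-5 MOS scale
```

Training, in this setting, amounts to adjusting the excitatory and inhibitory weights so the output activation tracks the subjective scores in the database.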
Traditionally, QoS has been addressed by using network measurements (e.g., loss rates and delays), and little attention has been paid to the quality perceived by end users of the applications running over the network. Here, we address the issue of integrating subjective speech quality scores with network parameter measurements in order to design control algorithms that yield the best QoS deliverable under a given network situation. First, we build a neural-network-based automaton that measures speech quality in real time, in the style of a group of human subjects participating in an MOS test. We consider the effects of changes in network parameters (e.g., packetization interval, packet loss rate and its distribution pattern) and encoding on speech signals transmitted over the network. Our database includes transmitted speech signals in different languages. Then, we outline a control mechanism which, based on the application's performance within a session (i.e., the MOS speech quality scores generated by the neural networks), dynamically adjusts parameters (codec and packetization interval). Finally, we analyze preliminary results to show two main benefits: first, better use of bandwidth, and second, delivery of the best possible speech quality given the current network situation.
Index Terms: Voice over IP, Packet-Switched Networks, Speech Quality Assessment, Neural Networks, End-to-End Control Mechanisms.
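The control mechanism described above can be pictured as a small search over candidate configurations: given the currently observed loss rate, pick the (codec, packetization interval) pair with the highest predicted MOS. The sketch below uses a hypothetical quality estimator (`predicted_mos`), illustrative codec rates, and invented interval values as a stand-in for the paper's trained neural network.

```python
# Hypothetical session controller: choose the (codec, packetization
# interval) pair maximizing predicted MOS under the observed loss rate.
# Codec rates and the estimator's shape are illustrative assumptions.

CODECS = {"PCM": 64.0, "GSM": 13.2}  # name -> bit rate in kb/s

def predicted_mos(codec_rate, packet_interval_ms, loss_rate):
    # Stand-in for the neural-network estimator: quality grows with
    # bit rate and shrinks with loss and long packets under loss.
    base = 2.0 + 2.5 * (codec_rate / 64.0)
    penalty = 3.0 * loss_rate + 0.005 * packet_interval_ms * loss_rate
    return max(1.0, min(5.0, base - penalty))

def choose_configuration(loss_rate):
    """Return (mos, codec, interval_ms) with the best predicted quality."""
    best = None
    for codec, rate in CODECS.items():
        for interval in (20, 40, 80):  # candidate packetization intervals, ms
            mos = predicted_mos(rate, interval, loss_rate)
            if best is None or mos > best[0]:
                best = (mos, codec, interval)
    return best

best = choose_configuration(loss_rate=0.05)
```

A real deployment would re-run this selection periodically within a session, feeding the estimator the measured loss rate and distribution rather than a single scalar.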
A huge share of many countries' electrical power is consumed in lighting streets. However, during certain periods vehicles pass at a very low rate, and parts of the streets are unoccupied for long stretches of time. In this paper, we propose a system that automatically switches off the lights for the parts of the streets with no vehicles and turns them back on once vehicles are about to arrive. Such a system may save a large amount of electrical power; in addition, it may increase the lifetime of the lamps and reduce pollution. The system automatically controls and monitors street lighting: it lights only the parts that carry vehicles and helps with the maintenance of the lighting equipment. Vehicular Ad-Hoc Networks (VANET) make such a system possible: VANET provides, in real time, the presence of vehicles, their locations, their directions, and their speeds, which are exactly the quantities this system needs. An advantage of using VANET is that no dedicated network or equipment is required; the existing VANET infrastructure is reused, which decreases the cost and speeds up deployment. This paper focuses on proposing different possible architectures for this system. Results show that the energy saved may reach up to 65%, with an increase in lamp lifetime of 53%.
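The scale of the reported saving follows from a simple occupancy argument: if lamps are on only while their road segment is (or is about to be) occupied, the saved fraction equals the unoccupied fraction of time. The sketch below is a back-of-the-envelope model with invented numbers, not the paper's simulation.

```python
# Back-of-the-envelope energy model: lamps burn only while their segment
# carries (or is about to receive) vehicles. All numbers are illustrative.

def energy_saving(occupied_fraction, lamp_power_w, n_lamps, hours):
    """Fraction of lighting energy saved by switching lamps off
    whenever their road segment has no vehicles."""
    always_on = lamp_power_w * n_lamps * hours  # baseline consumption, Wh
    adaptive = always_on * occupied_fraction    # lamps on only when needed
    return (always_on - adaptive) / always_on

# Example: segments occupied 35% of a 12-hour night.
saving = energy_saving(occupied_fraction=0.35,
                       lamp_power_w=150, n_lamps=100, hours=12)
# A 35% occupancy yields a 65% saving, the same order of magnitude as
# the figure the abstract reports.
```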
The present era is marked by rapid improvement and advances in technology. One of the areas most in need of improvement is the traffic signal, as it constitutes the core of the traffic system, and this need becomes stringent with the development of Smart Cities. Unfortunately, road traffic is still controlled by very old tri-color traffic signals, despite the relentless effort devoted to developing and improving traffic flow. These traditional signals have many problems: inefficient time management at road intersections, sensitivity to environmental conditions such as rain, and no means of giving priority to emergency vehicles. New technologies such as Vehicular Ad-hoc Networks (VANET) and the Internet of Vehicles (IoV) enable vehicles to communicate wirelessly with those nearby and with a dedicated infrastructure. In this paper, we propose a new traffic management system based on existing VANET and IoV technology that is suitable for future traffic systems and Smart Cities. We present the architecture of our proposed Intelligent Traffic Management System (ITMS) and Smart Traffic Signal (STS) controller, along with local traffic management of an intersection that meets the demands of future Smart Cities: fairness, reduced commute time, reasonable traffic flow, reduced congestion, and priority for emergency vehicles. Simulation results showed that the proposed system outperforms the traditional management system and could be a candidate for traffic management in future Smart Cities. Our adaptive algorithm not only significantly reduces the average waiting time (delay) but also increases the number of serviced vehicles. We also present an implemented hardware prototype of the STS.
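One simple form an adaptive intersection controller like the STS could take is proportional allocation: split a fixed signal cycle among the approaches in proportion to their VANET-reported queue lengths, with a minimum green per approach and full preemption for an emergency vehicle. The sketch below is a hypothetical illustration of that idea, not the paper's actual algorithm.

```python
# Hypothetical adaptive phase timing for one intersection: green time is
# proportional to queue length, with a guaranteed minimum per approach
# and preemption for emergency vehicles. Parameters are illustrative.

def green_times(queues, cycle_s=120, min_green_s=10, emergency=None):
    """Map approach -> green seconds; queues maps approach -> queue length."""
    if emergency is not None:
        # Preemption: the whole cycle goes to the emergency approach.
        return {a: (cycle_s if a == emergency else 0) for a in queues}
    total = sum(queues.values())
    spare = cycle_s - min_green_s * len(queues)  # time left after minimums
    return {a: min_green_s + (spare * q / total if total else spare / len(queues))
            for a, q in queues.items()}

# Example: heavily loaded northbound approach, empty westbound approach.
g = green_times({"N": 12, "S": 4, "E": 8, "W": 0})
e = green_times({"N": 12, "S": 4, "E": 8, "W": 0}, emergency="E")
```

Longer queues receive proportionally more green time, while the minimum green keeps the empty approach from starving, which is one way to read the abstract's fairness requirement.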
There is no established method for accurately predicting how much blood loss has occurred during hemorrhage. In the present study, we examine whether a genetic algorithm neural network (GANN) can predict the volume of hemorrhage in an experimental model in rats, and we compare its accuracy to stepwise linear regression (SLR). Serial measurements of heart period; diastolic, systolic, and mean blood pressures; hemoglobin; pH; arterial PO2; arterial PCO2; bicarbonate; base deficit; and blood loss as a percentage of total estimated blood volume were made in 33 male Wistar rats during a stepwise hemorrhage. The GANN and SLR used a randomly assigned training set to predict the actual volume of hemorrhage in a test set. Diastolic blood pressure, arterial PO2, and base deficit were selected by the GANN as the optimal predictor set. The root mean square error in predicting estimated blood volume was significantly lower for the GANN than for SLR (2.63%, SD 1.44, versus 4.22%, SD 3.48; P < 0.001). A GANN can predict the volume of hemorrhage highly accurately, and significantly better than SLR, without knowledge of prehemorrhage status, rate of blood loss, or trends in physiological variables.
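The variable-selection half of a GANN can be sketched as a genetic search over bit masks of candidate predictors, where each mask's fitness would normally be the cross-validated accuracy of a neural network trained on those inputs. The toy below replaces that expensive fitness with a synthetic stand-in (a pretend set of informative features); it illustrates the search mechanics only and uses none of the paper's rat data.

```python
import random

# Toy genetic search over feature subsets, standing in for GANN variable
# selection. The fitness function is a synthetic stand-in: in the real
# method it would be a trained neural network's predictive accuracy.

FEATURES = ["HP", "DBP", "SBP", "MBP", "Hb", "pH", "PO2", "PCO2", "HCO3", "BD"]
USEFUL = {"DBP", "PO2", "BD"}  # pretend these truly predict blood loss

def fitness(mask):
    chosen = {f for f, m in zip(FEATURES, mask) if m}
    # Reward covering the informative features, penalize extra inputs.
    return len(chosen & USEFUL) - 0.1 * len(chosen - USEFUL)

def evolve(pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in FEATURES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)   # single-point crossover
            cut = rng.randrange(1, len(FEATURES))
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:            # bit-flip mutation
                i = rng.randrange(len(FEATURES))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return {f for f, m in zip(FEATURES, best) if m}

selected = evolve()
```

With a real data-driven fitness, the same loop would tend toward compact predictor sets such as the DBP/PO2/base-deficit triple the study reports.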