QUIC, fostered by Google and under standardization in the IETF, integrates some of the functionality of HTTP/2, TLS, and TCP over UDP. One of its main goals is to facilitate transport protocol design, enabling fast evolution and innovation. However, congestion control in QUIC is still severely affected by packet losses, despite its loss recovery mechanisms, whose behavior strongly depends on the round-trip time. In this paper, we design and implement rQUIC, a framework that enables forward error correction (FEC) within the QUIC protocol to improve its performance over wireless networks. The main idea behind rQUIC is to reduce QUIC's loss recovery time by making it robust to erasures over wireless networks, as compared to traditional transport protocol loss detection and recovery mechanisms. We evaluate the performance of our solution by means of extensive simulations over different types of wireless networks and for different applications. For LTE and Wi-Fi networks, our results show significant savings in completion time of up to 60% for bulk transfer and 25% for web browsing.
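The core idea of FEC at the transport layer can be illustrated with the simplest possible scheme: one XOR repair packet per group of source packets, which lets the receiver rebuild a single lost packet without waiting a round trip for retransmission. This is a minimal sketch, not the coding scheme actually used by rQUIC; the function names and the group structure are illustrative assumptions.

```python
# Hypothetical sketch: XOR-based FEC over a group of equal-length packets,
# producing one repair packet per group. Not the rQUIC implementation.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_group(packets: list[bytes]) -> bytes:
    """Build one repair packet as the XOR of all source packets in the group."""
    repair = bytes(len(packets[0]))
    for p in packets:
        repair = xor_bytes(repair, p)
    return repair

def recover_single_loss(received: list[bytes], repair: bytes) -> bytes:
    """Recover exactly one lost packet from the surviving packets and the repair."""
    missing = repair
    for p in received:
        missing = xor_bytes(missing, p)
    return missing
```

With this scheme the receiver recovers a loss immediately on receipt of the repair packet, which is precisely how FEC shortens loss recovery time relative to RTT-dependent retransmission.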
Random Linear Network Coding (RLNC) has been shown to offer an efficient communication scheme, with notable robustness against packet losses. However, it suffers from high computational complexity, and novel approaches following the same idea have recently been proposed. One such solution is Tunable Sparse Network Coding (TSNC), where only a few packets are combined in each transmission. The number of data packets combined in each transmission is set by a density parameter/distribution, which can be adapted over time. In this work we present an analytical model that accurately captures the performance of sparse network coding (SNC). We exploit an absorbing Markov process whose states are defined by the number of useful packets received by the decoder, i.e., the rank of the decoding matrix, and the number of non-zero columns in that matrix. The model is validated by means of a thorough simulation campaign; the difference between model and simulation is negligible, with a mean square error below 4·10⁻⁴ in the worst case. We also compare against some more general bounds that have recently been used, showing that their accuracy is rather poor. The proposed model enables a more precise assessment of the behavior of sparse network coding techniques. Our final results show that it can be exploited by TSNC techniques so that the encoder selects the best density as the transmission evolves.
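The Markov state described above, the pair (rank of the decoding matrix, number of non-zero columns), can be made concrete with a small sketch. For simplicity this works over GF(2) rather than a larger field, and the density mechanism is a per-packet inclusion probability; both are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch (not the paper's model): sparse coding vectors over
# GF(2), plus the two quantities that define the Markov state in the text.
import random

def sparse_coefficients(n: int, density: float, rng: random.Random) -> list[int]:
    """Coding vector over GF(2): each of the n source packets is
    included with probability `density`."""
    v = [1 if rng.random() < density else 0 for _ in range(n)]
    if not any(v):                      # avoid the useless all-zero vector
        v[rng.randrange(n)] = 1
    return v

def gf2_rank(rows: list[list[int]]) -> int:
    """Rank of a binary matrix via Gaussian elimination over GF(2)."""
    rows = [r[:] for r in rows]
    rank, n = 0, len(rows[0]) if rows else 0
    for col in range(n):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def markov_state(rows: list[list[int]]) -> tuple[int, int]:
    """(rank, number of non-zero columns) of the decoding matrix."""
    nz_cols = sum(any(r[c] for r in rows) for c in range(len(rows[0])))
    return gf2_rank(rows), nz_cols
```

Tracking both coordinates matters: with sparse vectors a received packet can increase the number of non-zero columns without increasing the rank, which is exactly the distinction the two-dimensional state captures.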
We study the performance of the Message Queuing Telemetry Transport (MQTT) protocol over QUIC. QUIC has recently been proposed as a new transport protocol and is gaining relevance at a very fast pace, favored by the support of key players such as Google. It overcomes some of the limitations of the more widespread alternative, TCP, especially regarding the overhead of connection establishment. However, its use in Internet of Things (IoT) scenarios is still under consideration. In this paper we integrate a Go-based implementation of the QUIC protocol with MQTT, and we compare the performance of this combination with that exhibited by the more traditional MQTT/TLS/TCP approach. We use Linux Containers and emulate various wireless network technologies by means of the ns-3 simulator. The results of an extensive measurement campaign show that QUIC can indeed yield good performance for typical IoT use cases.
In recent years, wireless sensor networks have been proposed for deployment in underwater environments, where many applications, such as aquaculture, pollution monitoring, and offshore exploration, would benefit from this technology. Despite having very similar functionality, Underwater Wireless Sensor Networks (UWSNs) exhibit several architectural differences with respect to terrestrial ones, mainly due to the characteristics of the transmission medium (sea water) and the signal employed to transmit data (acoustic ultrasound signals). Hence, the design of appropriate network architectures for UWSNs is severely constrained by the specific characteristics of the communication system. In this work we analyze several acoustic channel models for their use in underwater wireless sensor network architectures. For that purpose, we have implemented them using the OPNET Modeler tool in order to evaluate their behavior under different network scenarios. Finally, we draw conclusions showing the impact of different channel-model elements and specific environmental conditions on UWSN performance.
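A common building block of underwater acoustic channel models is Thorp's empirical formula for frequency-dependent absorption, typically combined with a spreading-loss term. The sketch below shows this standard combination; the spreading factor and the way the two terms are combined are conventional modelling choices, not details taken from this work's OPNET implementation.

```python
# Sketch of a standard underwater acoustic attenuation model:
# Thorp absorption (dB/km, frequency in kHz) plus geometric spreading.
import math

def thorp_absorption_db_per_km(f_khz: float) -> float:
    """Thorp's empirical absorption coefficient in dB/km (f in kHz)."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

def path_loss_db(distance_m: float, f_khz: float, k: float = 1.5) -> float:
    """Total attenuation A(d, f) in dB: 10*k*log10(d) spreading loss plus
    absorption over the path. k is the spreading factor (1 = cylindrical,
    2 = spherical, 1.5 = commonly used practical value)."""
    d = max(distance_m, 1.0)
    return 10 * k * math.log10(d) + thorp_absorption_db_per_km(f_khz) * (distance_m / 1000.0)
```

The strong growth of absorption with frequency is what forces UWSNs toward low carrier frequencies and, consequently, low bandwidths, one of the key differences from terrestrial sensor networks noted above.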
The adoption of both Cyber-Physical Systems (CPSs) and the Internet of Things (IoT) has enabled the evolution towards the so-called Industry 4.0. These technologies, together with cloud computing and artificial intelligence, foster new business opportunities. Besides, several industrial applications need immediate decision making, and fog computing is emerging as a promising solution to address this requirement. In order to achieve a cost-efficient system, we propose taking advantage of spot instances, a service offered by cloud providers that provides resources at lower prices. The main downside of these instances is that they do not ensure service continuity and may suffer interruptions. An architecture that combines fog and multi-cloud deployments with Network Coding (NC) techniques guarantees the needed fault tolerance for the cloud environment and also reduces the amount of redundant data required to provide reliable services. In this paper we analyze how NC can actually help reduce storage cost and improve resource efficiency for industrial applications based on a multi-cloud infrastructure. The cost analysis has been carried out using both real AWS EC2 spot instance prices and, to complement them, prices obtained from a model based on a finite Markov chain derived from real measurements. We have analyzed the overall system cost as a function of different parameters, showing that configurations that seek to minimize storage yield a higher cost reduction, due to the strong impact of storage cost.
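The storage saving that coding brings over plain replication can be seen with a simple count: to survive the interruption of up to t instances, replication must keep t+1 full copies, while an (n, k) coded scheme stores n = k + t fragments of size S/k, any k of which rebuild the object. This is an illustrative first-order comparison, not the paper's cost model, and it assumes an MDS-like code where any k fragments suffice.

```python
# Illustrative comparison (not the paper's cost model): bytes stored to
# tolerate the loss of up to t spot instances, replication vs. coding.

def replication_storage(object_size: float, t: int) -> float:
    """t + 1 full copies survive the loss of any t instances."""
    return object_size * (t + 1)

def coded_storage(object_size: float, k: int, t: int) -> float:
    """Split into k fragments, store n = k + t coded fragments of size
    object_size / k; any k of them suffice to rebuild the object."""
    n = k + t
    return object_size * n / k
```

For example, tolerating two interruptions costs 3x the object size with replication but only 1.5x with k = 4 coded fragments, which is consistent with the observation above that storage-minimizing configurations drive the cost reduction.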
We consider the transmission of packets across a lossy end-to-end network path so as to achieve low in-order delivery delay. This can be formulated as a decision problem, namely deciding whether the next packet to send should be an information packet or a coded packet. Importantly, this decision is made based on delayed feedback from the receiver. While an exact solution to this decision problem is challenging, we exploit ideas from queueing theory to derive scheduling policies, based on prediction of the receiver queue length, that, while suboptimal, can be efficiently implemented and offer substantially better performance than state-of-the-art approaches. We obtain a number of useful analytic bounds that help characterise design trade-offs, and our analysis highlights that the use of prediction plays a key role in achieving good performance in the presence of significant feedback delay. Our approach readily generalises to networks of paths, and we illustrate this by applying it to multipath transport scheduler design.
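The shape of such a prediction-based decision rule can be sketched as follows: combine the last reported receiver state with an estimate of what has happened to the packets sent since that feedback was generated, and send a coded packet when the predicted backlog crosses a threshold. The predictor and the threshold below are illustrative assumptions, not the paper's exact policy.

```python
# Hypothetical sketch of a prediction-based scheduling rule: decide
# whether the next packet should be coded, using delayed feedback plus a
# simple predictor for the packets still in flight.

def predict_backlog(last_acked_backlog: int,
                    info_sent_since_feedback: int,
                    est_loss_rate: float) -> float:
    """Predicted receiver queue of packets waiting for in-order delivery.
    Packets sent after the feedback was emitted are unconfirmed; a
    fraction est_loss_rate of them is expected to be lost."""
    return last_acked_backlog + info_sent_since_feedback * est_loss_rate

def next_packet_is_coded(last_acked_backlog: int,
                         info_sent_since_feedback: int,
                         est_loss_rate: float,
                         threshold: float = 1.0) -> bool:
    """True -> send a coded (repair) packet, False -> send information."""
    return predict_backlog(last_acked_backlog,
                           info_sent_since_feedback,
                           est_loss_rate) >= threshold
```

The key point the sketch makes concrete is that the sender cannot act on the receiver's true queue length, only on a prediction built from stale feedback, which is why prediction quality dominates performance when feedback delay is large.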
We introduce rQUIC, an integration of the QUIC protocol and a coding module. rQUIC has been designed to support different coding/decoding schemes and is implemented in the Go language. We conducted an extensive measurement campaign to provide a thorough characterization of the proposed solution. We compared the performance of rQUIC with that of the original QUIC protocol for different underlying network conditions as well as different traffic patterns. Our results show that rQUIC not only yields a relevant performance gain (shorter delays), especially when network conditions worsen, but also ensures a more predictable behavior. For bulk transfer (long flows), the delay reduction almost reached 70% when the frame error rate was 5%, while under similar conditions, the gain for short flows (web navigation) was ≈ 55%. For video streaming, the QoE gain (P.1203 metric) was approximately 50%.
The presence of IoT in current networking scenarios grows more relevant every day. IoT covers a wide range of applications, from wearable devices to vehicular communications. With the consolidation of Industry 4.0, Industrial IoT (IIoT) environments are becoming more common. Communications in these scenarios are mostly wireless, and due to the lossy nature of wireless links, the loss of information becomes an intrinsic problem. However, loss recovery schemes increase the delay that characterizes any communication. At the same time, both reliability (robustness) and low delay are crucial requirements for some IIoT applications. An interesting strategy to improve both is the use of Network Coding techniques, which have shown promising results in terms of reliability and performance. This work focuses on a possible new coding approach, based on a systematic network coding scheme with overlapping generations, and performs a thorough analysis of its behavior. Based on the results, we draw a number of conclusions for practical implementations in wireless networks, focusing on IIoT environments.
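The two ingredients of such a scheme can be sketched briefly: a systematic phase (source packets sent uncoded, so the common no-loss case adds no decoding delay) and repair packets drawn from generations that share packets with their neighbours, so a repair can help across generation boundaries. The generation size, overlap, and GF(2) coding below are illustrative parameters, not the configuration analyzed in this work.

```python
# Illustrative sketch of systematic network coding with overlapping
# generations: consecutive generations share `overlap` source packets,
# and each repair packet is a random GF(2) combination of one generation.
import random

def build_generations(packets: list, gen_size: int, overlap: int) -> list:
    """Split the packet stream into generations that share `overlap`
    packets with the previous generation."""
    step = gen_size - overlap
    return [packets[i:i + gen_size] for i in range(0, len(packets) - overlap, step)]

def repair_packet(generation: list, rng: random.Random):
    """Repair over GF(2): random XOR of a subset of the generation.
    Returns the coding coefficients and the coded payload."""
    coeffs = [rng.randint(0, 1) for _ in generation]
    if not any(coeffs):                 # avoid the useless all-zero combination
        coeffs[0] = 1
    payload = bytes(len(generation[0]))
    for c, p in zip(coeffs, generation):
        if c:
            payload = bytes(x ^ y for x, y in zip(payload, p))
    return coeffs, payload
```

Because the shared packets appear in two generations, a repair packet from either generation can resolve a loss in the overlap region, which is the mechanism that trades a little extra coding scope for better recovery without full-stream coding delay.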